[citation][nom]alidan[/nom]with sega, there is also the factor that they did make better systems like the saturn and such, but never thought of the people who developed for it, it was like the early cell days where everyone complained, but because of how sucessfull they were, people just got use to it. and also with the saturn, if im correct, they went and made it for arcade ports, something that was next to impossible on a nintendo system, and was sluggish even on a ps1,[/citation]
The Saturn wasn't truly the superior console of its generation. It WAS more potent than the PS1, provided one was able and willing to make all of its hardware work together. However, as I understand it, that required writing primarily in assembly, which for most developers is vastly more of a PITA than using a high-level language such as C or C++. Those languages were becoming far more practical on the PS1, and that ease of development was one of the main factors behind its massive game library.
You do highlight a chief design issue that plagued ALL of Sega's consoles (the Genesis was the least affected, but still not immune): Sega approached consoles as simply being cheaper arcade machines. That was a common tactic among Japanese console makers, and it sank other machines too, like the Neo Geo. SNK took the route of putting full-fledged arcade hardware in a home machine, which resulted in a price so prohibitively high that few bought it; Sega, by contrast, often wound up with a machine too gimped to compete. Until Sony took a few iterations to get the PlayStation right, Nintendo was really the only maker that got the idea down correctly: engineering the console as its own distinct class of machine.
The Saturn managed what WOULD'VE been a decent balance, but at the cost of being too unwieldy to develop for: it sported eight different processors when three (typically a CPU, a video chip, and a combined audio/I-O chip) was the usual count for a console. The other thing that killed both it and the Dreamcast was Sega's notorious skill for shooting itself in the foot with fickle announcements, leaving anyone wary of buying or developing for a Sega machine lest Sega announce something else shortly thereafter. That alienated a LOT of people and left the Saturn with an embarrassingly small games library, mostly ports from otherwise-compatible SuperH arcade boards. It also didn't help that a number of next-gen features (such as audio compression) were left out because of an intent focus on "Beat the SNES."
The Dreamcast was the worst offender for failing to look forward, though. As I mentioned, its specs all but screamed that Sega wasn't paying attention to the 6th generation and focused solely on looking more powerful than the Nintendo 64. As a result, it wound up closer to the N64 in power than to any of the actual 6th-gen consoles. It was the only 6th-gen console, for instance, that lacked dedicated hardware T&L; it had a vastly weaker CPU (the DC's ran at 200 MHz, vs. 486 MHz for the GCN and 733 MHz for the Xbox, while the PS2 had multiple SIMD units to compensate for its mere 295 MHz clock); and it had way, WAY less main memory bandwidth: its 0.8 GB/sec was closer to the N64's 0.56 GB/sec than to the 2.6, 3.2, and 6.4 GB/sec of the GameCube, PS2, and Xbox, respectively. The results speak for themselves: once you add in the N64 and Dreamcast, instead of a distinct jump between the 5th and 6th generations, you get a smooth ramp from the PS1 and Saturn up to the PS2/GCN/Xbox.
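For the curious, the back-of-the-envelope math behind those bandwidth figures looks like this. Bus widths and transfer rates are the commonly cited ones for each machine, so treat the outputs as approximations rather than gospel:

```cpp
#include <cstdio>

// Peak bandwidth = bytes per transfer * transfers per second.
// Figures are the commonly cited bus widths/rates for each console.
int main() {
    struct Spec { const char* name; double bus_bytes; double mtps; };
    const Spec consoles[] = {
        { "N64 (9-bit RDRAM @ 500 MT/s)",         9.0 / 8.0, 500.0 },
        { "Dreamcast (64-bit SDRAM @ 100 MT/s)",  8.0,       100.0 },
        { "GameCube (64-bit 1T-SRAM @ 324 MT/s)", 8.0,       324.0 },
        { "PS2 (2x 16-bit RDRAM @ 800 MT/s)",     4.0,       800.0 },
        { "Xbox (128-bit DDR @ 400 MT/s)",        16.0,      400.0 },
    };
    for (const Spec& s : consoles)
        std::printf("%-40s %4.2f GB/s\n", s.name, s.bus_bytes * s.mtps / 1000.0);
    return 0;
}
```

Run it and you get 0.56, 0.80, 2.59, 3.20, and 6.40 GB/s: the Dreamcast sits right next to the N64, a full tier below the other three.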
[citation][nom]alidan[/nom]but as for console graphics, they are able to pull more out, and i have always wondered just by how much. yea, ini tweaks can help, but base code, on a console exclusive, will be more efficient than lets say on a pc exclusive, because that pc has to take into account x cpus and x gpus where consoles only need 1. i believe that with the wiiu, they will have enough of a performance boost to being console only, that when they use tessellation, it will have a very minimal effect on the overall gameplay, weather that is at 30 or 60fps doesn't matter to much.[/citation]
When it comes to graphics performance, DirectX takes care of all those considerations on the PC. Because GPU designs are comparatively unified now, targeting them through the API is quite efficient. The only "multiple GPUs" most devs need to consider are differences in DirectX version, if they opt to support more than one: typically a DirectX 9.0b base path these days, with a DX10 or DX11 mode for extra effects. SOME devs go a bit further and maintain separate render paths for ATi and nVidia GPUs to account for their quirks, but it's not critical and doesn't make a huge difference: maybe +/- 10-20% at most, or in other words, about the same as the GPU power gap between, say, the Xbox 360 and PS3.
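To illustrate what that "base version plus extra modes" approach looks like in practice, here's a minimal sketch using Direct3D 11's feature levels. `ChooseRenderPath` and the `RenderPath` enum are names I made up for illustration, not any shipping engine's API:

```cpp
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Hypothetical names for illustration; not a real engine API.
enum class RenderPath { Dx9Base, Dx10Extras, Dx11Extras };

RenderPath ChooseRenderPath() {
    ID3D11Device* device = nullptr;
    D3D_FEATURE_LEVEL level = D3D_FEATURE_LEVEL_9_1;
    // Let the runtime report the best feature level the installed GPU supports.
    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                   nullptr, 0, D3D11_SDK_VERSION,
                                   &device, &level, nullptr);
    if (FAILED(hr))
        return RenderPath::Dx9Base;      // no usable device: lowest common path
    device->Release();

    if (level >= D3D_FEATURE_LEVEL_11_0) return RenderPath::Dx11Extras;
    if (level >= D3D_FEATURE_LEVEL_10_0) return RenderPath::Dx10Extras;
    return RenderPath::Dx9Base;          // 9_x hardware runs the base path
}
```

One API query, three render paths: that's the entire extent of the "multiple GPUs" problem for most PC devs.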
As for the Wii U, it will still be much more potent than the Xbox 360 or PS3. Given Nintendo's stance of packing a bit more power (while still not going high-end), I'm predicting 1 GB of main memory (when 2-4 GB would really be preferable for an 8th-gen console) of GDDR5 at 4-5 GHz effective, which would be several times the capability of the PS3's or 360's memory subsystems, even before counting the separate framebuffer and texture cache the Wii U will almost certainly have. Of course, Nintendo may surprise everyone and overshoot that mark: springing for 2+ GB of XDR2 @ 8 GHz isn't entirely unreasonable for a console, and would be an even more massive leap. I'd give Nintendo 30/70 odds of picking XDR2 vs. GDDR5; I'm that bullish on the XDR2 option because Nintendo has consistently favored the fastest memory chips available (SRAM, RDRAM, then EDRAM, and most recently GDDR3) for the highest bandwidth, even when they rarely have enough total capacity.
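To put rough numbers on that "several times" claim, here's the same peak-bandwidth arithmetic. The Wii U bus widths are pure assumption on my part (Nintendo hasn't announced anything), while the PS3/360 figures are the commonly cited peaks:

```cpp
#include <cstdio>

// Peak bandwidth in GB/s from bus width (bits) and effective rate (GT/s).
double peak_gbps(double bus_bits, double gtps) {
    return (bus_bits / 8.0) * gtps;
}

int main() {
    // Known, commonly cited figures for the current consoles:
    std::printf("Xbox 360 GDDR3 (128-bit @ 1.4 GT/s):  %5.1f GB/s\n", peak_gbps(128, 1.4));
    std::printf("PS3 XDR (64-bit @ 3.2 GT/s):          %5.1f GB/s\n", peak_gbps(64, 3.2));
    // My guesses for the Wii U; the 128-bit bus is an assumption, not a spec:
    std::printf("Wii U guess, GDDR5 (128-bit @ 4.5):   %5.1f GB/s\n", peak_gbps(128, 4.5));
    std::printf("Wii U guess, XDR2 (128-bit @ 8.0):    %5.1f GB/s\n", peak_gbps(128, 8.0));
    return 0;
}
```

That works out to 22.4 and 25.6 GB/s for the current machines vs. a hypothetical 72 or 128 GB/s, which is where the "several times" comes from, if my bus-width guess holds.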
[citation][nom]aqi[/nom]This poster is correct, however console developers don't have to deal with the direct-x abstraction layer that they do on pc's which inherently introduces a performance trade off (due to not being able to directly address the hardware) but provides a level of standardisation and compatibility that is needed due to the sheer plethora of hardware configurations available on pcs.[/citation]
Actually, you're a bit mistaken: BOTH the PS3 and Xbox 360 use a hardware abstraction layer for addressing the hardware. On the Xbox 360 it is, in fact, DirectX: a mish-mash that ROUGHLY equates to DirectX 9.0c, with some spots that slightly exceed spec (landing partway between DX9 and DX10) and others that only meet plain DX9 spec. (Case in point: as a general rule, most DX9 shaders written for PC games can be used without modification on the 360, and vice versa.) The PS3 (and the Wii, I believe) use an OpenGL-based API instead.
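As a small illustration of that shader portability point, here's a trivial Shader Model 2 pixel shader compiled against the baseline ps_2_0 profile with the desktop D3DCompile; whether the 360 toolchain accepts the identical source is the claim above, not something this snippet proves:

```cpp
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// Plain Shader Model 2 HLSL: a texture fetch modulated by a constant tint.
static const char kShaderSrc[] =
    "float4 tint;\n"
    "sampler2D tex0 : register(s0);\n"
    "float4 main(float2 uv : TEXCOORD0) : COLOR\n"
    "{\n"
    "    return tex2D(tex0, uv) * tint;\n"
    "}\n";

ID3DBlob* CompileTintShader() {
    ID3DBlob* code = nullptr;
    ID3DBlob* errors = nullptr;
    // "ps_2_0" is the baseline DX9 pixel shader profile.
    HRESULT hr = D3DCompile(kShaderSrc, sizeof(kShaderSrc) - 1, nullptr,
                            nullptr, nullptr, "main", "ps_2_0",
                            0, 0, &code, &errors);
    if (errors) errors->Release();
    return FAILED(hr) ? nullptr : code;
}
```

Nothing in that source is PC-specific: it's written against the API's shader model, not against any particular GPU, which is the whole point of the abstraction layer.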
Directly addressing the hardware means writing in assembly, which started falling out of favor in the 5th (aka "32-bit") generation. In the 4th generation and earlier, ASM was essentially the only option, especially since neither the SNES nor the Sega Genesis supported virtual memory: every program had to lay out its addresses precisely to ensure no routine would write over another's RAM.
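To make the "one flat address map" point concrete, here's a sketch using the Genesis VDP's commonly documented port addresses. In period practice this would be 68k assembly, so read the C++ purely as illustration of the idea:

```cpp
#include <cstdint>

// The Genesis's commonly documented VDP ports: one flat physical address
// space shared by everything, with no virtual memory in sight.
volatile uint16_t* const VDP_DATA = reinterpret_cast<volatile uint16_t*>(0xC00000);
volatile uint16_t* const VDP_CTRL = reinterpret_cast<volatile uint16_t*>(0xC00004);

// Write one word into video RAM, using the documented two-word
// "set VRAM write address" command format on the control port.
void write_vram_word(uint16_t vram_addr, uint16_t value) {
    *VDP_CTRL = 0x4000 | (vram_addr & 0x3FFF);  // command + low 14 address bits
    *VDP_CTRL = (vram_addr >> 14) & 0x0003;     // high 2 address bits
    *VDP_DATA = value;                          // word lands directly in VRAM
}
```

Every routine in the game pokes those same fixed addresses, which is exactly why programs had to be so disciplined about who owned which chunk of RAM.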