Why is everyone buying IGN's BS about an "HD 4000 series card"? Those of us familiar with IGN know that "talking out their rear end" is about their only talent. What GPU the Wii2 uses is entirely up in the air; we can only be relatively certain of a few facts about the hardware:
- The GPU will likely be from AMD/ATi. Nintendo's apparently been quite happy with them, and for higher-powered applications, the GPU market is currently a duopoly of AMD and nVidia.
- The GPU will almost certainly use the modern "stream processor" design. It's an open question whether it will take scalar cores (like nVidia's modern cards) or the superscalar/VLIW cores of AMD's modern PC GPUs; the 360's Xenos, for its part, used older vec4-style units. Then again, as plenty of benchmarks have shown us here, both designs can be very competitive; AMD and nVidia have been trading blows on this for almost 5 years now.
- The VRAM will PROBABLY be GDDR5. Nintendo has always shown a fondness for higher-end RAM, favoring speed even when they went with low quantity: the N64 was an early adopter of RDRAM; the Game Cube, DSi, and 3DS used proprietary, high-speed pseudo-static RAM; and the Wii, in spite of its weaknesses overall, used GDDR3 at the same effective speed (1.4 GHz) as the PS3 and 360. Given this long-running trend, Nintendo will very likely go with GDDR5.
- The Wii2 is pretty much guaranteed to handle all games natively at 1080p: a 125% increase in pixel count over the 720p minimum of the PS3, and roughly 252% over the 1024x576 seen in some 360 games. (quick math below this list) This is largely thanks to Moore's law, and this alone would leave the Wii2 on solid footing, well above the other two consoles.
- The CPU will almost certainly be PowerPC-based. In the modern age (and thanks to the prevalence of DLC games), backwards compatibility is important. Nintendo will need to ensure that, at the very least, old WiiWare and VC games run on the Wii2 out-of-the-box; the latter's easy, since they're simply emulated versions of the original ROMs, but the former would all require ports unless they keep a BC-friendly architecture. Moving up to a modern PowerPC design (regardless of the number and type of cores), as the 360 and PS3 did, would let them skip that work. (and also ensure hardware BC with Wii and GC games)
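To show where those percentages come from, here's the quick math as a trivial C snippet (the 1024x576 figure is my assumption for the sub-HD 360 titles in question):

#include <stdio.h>

int main(void)
{
    /* Total pixels per frame at each resolution */
    int px_1080p = 1920 * 1080; /* 2,073,600 */
    int px_720p  = 1280 * 720;  /*   921,600 */
    int px_576p  = 1024 * 576;  /*   589,824 (assumed width) */

    /* Percentage increase in pixel count */
    printf("1080p over 720p: +%.0f%%\n", 100.0 * px_1080p / px_720p - 100.0); /* +125% */
    printf("1080p over 576p: +%.0f%%\n", 100.0 * px_1080p / px_576p - 100.0); /* +252% */
    return 0;
}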
Beyond that, we know some basic "duh" things (it'll have online support in some way, use some form of DVD-related optical media, and have motion-sensing controllers), as well as a few things we know are currently BS rumors:
- The controllers will likely NOT have touchscreens: people falsely predicted that for the Wii too, even though almost EVERYONE claimed it was "straight from a source at Nintendo." All IGN has to back up the claim of a 6.2-inch screen is a badly photoshopped image that would make the controller far bigger than the original Xbox controller.
- We DON'T know the name yet. No one knows the name of a Nintendo console until Nintendo announces it, and when they do, it'll likely be something NO ONE predicted. "Ultra Nintendo"? Nintendo 64. "Nintendo 128"? Game Cube. All the various names for Project Revolution? Wii. (remember what a joke that was among the peanut gallery at first?) So for now, let's just call it "Wii2" or "Café" so people know what we're talking about, comfortable in the assumption that neither will be the real name anyway.
- The console will almost certainly NOT use an AMD CPU. Why's that? PC CPUs spend a lot of silicon on multitasking and general-purpose computing, which is largely wasted on a console: a console limits multitasking to passive online services that need only a tiny, weak co-processor, plus one potent, math-heavy CPU to handle gaming and media playback.
[citation][nom]Th-z[/nom]It's not just the overheads from OS, API and graphic driver, on consoles, performance-critical areas can be coded at low level if devs want to, something PC doesn't have the luxury.[/citation]
This clearly illustrates that you don't know a thing about programming: you're 100% incorrect here, and you're even using the wrong terms.
You can ALWAYS write code for PCs in assembly language. Plenty of performance-critical software still relies on hand-written assembly (or compiler intrinsics, which map directly onto it) to get the best performance. You hear stuff about SSE and the like? SSE is an extension to the x86 instruction set. A program rarely gets the full benefit of it by simply compiling high-level code; to truly see the gains, one has to target specific SSE instructions directly.
It's a core reason why many hasty "we'll port this 360 game to the PC" jobs run so poorly: on the 360, they make proper use of the vector units in the CPU, and a standard 4-wide vector unit provides up to 400% of the FLOPS of a traditional floating-point unit. In the PowerPC CPUs used in the consoles, the vector unit is discrete, separate from the normal, higher-precision FPU. On a PC, one really HAS to drop to that level, because x86 chips split floating-point work between a high-precision (80-bit) x87 FPU that handles only one set of operands at a time and SSE hardware that acts as a 4x32-bit vector unit.
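To make this concrete, here's a minimal sketch (my own example, not code from any actual port) of the same loop written for the plain FPU and then with SSE intrinsics, which map one-to-one onto SSE assembly instructions. To keep it short I assume n is a multiple of 4 and the array is 16-byte aligned:

#include <xmmintrin.h>

/* Plain C: the compiler handles one float per iteration. */
void scale_scalar(float *data, int n, float k)
{
    for (int i = 0; i < n; i++)
        data[i] *= k;
}

/* SSE: four floats per iteration through the 128-bit vector unit. */
void scale_sse(float *data, int n, float k)
{
    __m128 vk = _mm_set1_ps(k);           /* broadcast k into all 4 lanes */
    for (int i = 0; i < n; i += 4) {
        __m128 v = _mm_load_ps(data + i); /* load 4 packed floats (aligned) */
        v = _mm_mul_ps(v, vk);            /* 4 multiplies in one instruction */
        _mm_store_ps(data + i, v);        /* write 4 results back */
    }
}

The second version retires four multiplies per instruction, which is exactly where that "up to 400%" figure comes from.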
So yes, your thinking is purely wrong here: one can absolutely write low-level code for PCs and extract maximum efficiency. After all, assembly has been a staple of PC programming from the start, and before optimizing compilers matured, it was the only way to get peak performance.
[citation][nom]Twist3d1080[/nom]The PS3 has the equivalent to a Geforce 7900. Although still ancient, its not epic failure that the Geforce FX 5k series was.[/citation]
Not QUITE a 7900. It's hacked down a little: half the ROPs, half the memory interface. The former follows from the latter: with memory bandwidth halved, there wasn't enough to go around to feed more than 8 pipelines.
As for the original Xbox's GPU, it again was only slightly modified: in that case, a GeForce 3 altered to have 2 vertex (T&L) units instead of 1. Some take that to mean it was a GeForce 4 Ti, but the Xbox's GPU lacked some of the 4 Ti's subtler changes, as well as support for newer DirectX versions and their accompanying features.
[citation][nom]silverblue[/nom]The Dreamcast also used a tile-based deferred renderer which reduced a lot of the load as well as the system cost; if overdraw is sufficiently high, this yields a tangible benefit, and you can utilise slower RAM and a lower GPU clock. Compared to the PS2, it also had twice the available on-screen texture memory, and more available sound channels (if that matters to some). Yes, it was the weakest console (it lacked T&L along with features such as 5.1 surround sound, a DVD drive, etc.), but it was the cheapest to produce and the first to become available, and out of that particular generation, the first to properly bring online gaming to the masses. Let's also not forget that it was easily the most innovative of the sixth generation machines.Sega's past mistakes came to haunt them, and the Dreamcast was killed off well before its time.[/citation]
No, you're slightly mistaken on the benefits of tile-based rendering over brute-force rendering. To go over a few things:
- Yes, it DOES reduce bandwidth penalties for overdraw. That is its main benefit. (there's a rough bandwidth sketch after this list)
- It does NOT let you simply clock the GPU lower: it won't make up for an outright lack of texel throughput (or even pixel throughput, in most situations), and this is where the Dreamcast having 1/4 the pipelines of any of its competitors comes back to hurt it.
- In the bulk of Dreamcast and other 6th-generation games, overdraw wasn't particularly high: heavy overdraw requires more geometric detail or more objects, and the CPUs/T&L units of 6th-generation consoles weren't consistently capable of pushing enough of either to produce massive overdraw. (the Xbox came close, though)
- As an aside, it's worth noting that the Xbox 360 uses a related tiling trick of its own, rendering into a hefty 10MB eDRAM tile buffer (though it remains an immediate-mode renderer, not a true deferred one). In spite of this, it doesn't seem to grant the 360 particular benefits over the PS3: indeed, the 360 often drops resolution to afford HDR and AA simultaneously, while PS3 games (due to G70/RSX design limitations) could typically only use one or the other.
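To put rough numbers on that first point, here's a back-of-the-envelope sketch; every figure in it is my own illustrative assumption, not a measured spec:

#include <stdio.h>

int main(void)
{
    double pixels   = 640.0 * 480.0; /* typical 6th-gen resolution */
    double overdraw = 3.0;           /* assumed avg. layers per pixel */

    /* Immediate mode: every covered fragment touches external VRAM,
       simplified here to one 16-bit color + 16-bit Z access each. */
    double imr_bytes = pixels * overdraw * (2.0 + 2.0);

    /* TBDR: hidden-surface removal happens in on-chip tile memory;
       only the final 16-bit color is written out, and Z never
       leaves the chip. */
    double tbdr_bytes = pixels * 2.0;

    printf("immediate: %.1f MB/frame\n", imr_bytes  / (1024.0 * 1024.0));
    printf("TBDR:      %.1f MB/frame\n", tbdr_bytes / (1024.0 * 1024.0));
    return 0;
}

Note the saving scales with the overdraw figure, which is exactly why the benefit only materializes when overdraw is actually high.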
Remember, again, that the Dreamcast has a rather feeble CPU: clocked at 200 MHz, with a single 128-bit, 4-wide vector unit (which is where Sega got the silly "128-bit" claim) that also has to be used for T&L. All competing CPUs clocked higher, and excepting the PS2 (which had a PAIR of dedicated vector units to make up for it), those systems also had hardware T&L. (GPU-based T&L is what let the GC get away with only a 2x32-bit vector FP unit) Geometric detail, along with complex handling of large numbers of objects, was thus a critical weakness of the Dreamcast next to the other three consoles; in overdraw-heavy scenes it would be bottlenecked by the CPU well before the tile renderer's benefits could show.
I will agree that the Dreamcast never got its full chance... but that was largely due to Sega, which had a history of botching launches and handling things. It DID have an impressive lineup shortly after launch (Shenmue, anyone?), even if, as before, it was thin outside of the STG and fighting genres. Mostly, it was Sega's decision to give up once it was clear the PS2 would pass it that doomed the system entirely... Otherwise it likely would've taken a solid 2nd place, leaving the Xbox and GC in a more distant 3rd and 4th. (and the resulting 3rd-party sappage could in fact have doomed the GC) The Dreamcast was definitely the weakest console, but that's not what doomed it in the slightest. (after all, even over the next few years, there's no way the 360 or PS3 will catch up to the Wii)
The real problem rested in the minds at the head of Sega, and the design reflected part of this: Sega never DID quite understand how to engineer a console; Nintendo did. Sega was a maker of arcade booths, and their strategy had been to take an arcade machine and scale it down to an affordable home console. This resulted in the market misses that were the Master System, Saturn, and Dreamcast. (the Genesis was the one exception, and it showed, giving the SNES a very cutthroat run for its money)
But even with the Genesis, some of their marketing showed Sega's main psychological weakness: they were too intent on combating the OUTGOING generation. Remember "Genesis does what Nintendon't"? It was advertising the advantages the 4th-generation Genesis had over the 3rd-generation NES. That mindset persisted in later systems, especially the Dreamcast: its design choices scream of a system built primarily to trump the N64 rather than to look ahead. Nintendo (and later Sony) avoided this mistake by recognizing the cyclical, generation-based nature of the console wars. This sort of short-sightedness in design, coupled with Sega's habit of botching launches, is what really killed the Dreamcast.