It's Real: Nintendo Confirms New Console 2012


dark_lord69

Distinguished
Jun 6, 2006
740
0
19,010
[citation][nom]mister g[/nom]R700? Honestly i thought they would use something newer than that since really that's almost two generations old.[/citation]
True, but I still play the newest games in HD with that chipset. I've got an HD 4870 and I don't plan on getting a new card for at least a couple of years (unless Diablo 3 is unplayable on my 4870, but I doubt that).

I think what a lot of you are forgetting is that you can't make a direct comparison of a PC vs. a console. A PC is used to do many things and has many things that make it inefficient, like running Windows, for example. Consoles don't need as much memory and can actually do more with less hardware. (This has always been true.)
 

kinggraves

Distinguished
May 14, 2010
445
0
18,940
It really baffles me that so many people seem to think that consoles are "holding things back," as if they have the developers locked in a warehouse somewhere. Some of you young pups might not realize it, but back in the day developers would make two versions of a game, one for consoles and one for PC. That way, both could be fully utilized to their purpose. The only thing that prevents them from doing that now is that developers are lazy and don't want to put in the effort. Console manufacturers bear NO responsibility whatsoever for the decisions developers are making to release console ports on PCs. Place the blame where it belongs, because the only thing holding developers back is themselves.
 

twist3d1080

Distinguished
Mar 20, 2010
12
0
18,560
[citation][nom]Th-z[/nom]Don't underestimate console hardware, most people don't realize how much performance they can extract from a console. It's not just the overheads from OS, API and graphic driver, on consoles, performance-critical areas can be coded at low level if devs want to, something PC doesn't have the luxury. PC always has to be more powerful than consoles in order to produce something equal. Expect it to outperform PC's R700 equivalent.[/citation]

+1 You will always get better performance by programming directly to the metal. I'm amazed how many tech enthusiasts have little to no knowledge of how their PC or console really works. This product will blow a PS3 or an Xbox 360 away in terms of graphical performance. Plus, AMD's VLIW5 architecture is mature and well understood by now. Also, you can expect the CPU in this device to be built on a smaller process, so it will likely have a higher clock than the Xbox 360's CPU.
 

twist3d1080

Distinguished
Mar 20, 2010
12
0
18,560
[citation][nom]kartu[/nom]Get a clue, please. Xbox has a card inferior to nvidia 5700. Ok? Get it? Inferior to nvidia 5700. Now for PC to have crappier image it has to be a lolport of console game to PC, or you have to spend most of the budget on CPU, instead of graphic card.In other words: BOTH PS3 AND XBOX360 ARE FAR FAR FAR BEHIND of what is rumored to be in Wii2. They are FARbehind of 3 year old "ok" gaming PC as well.[/citation]

The PS3 has the equivalent of a GeForce 7900. Although still ancient, it's not the epic failure that the GeForce FX 5k series was.
 

silverblue

Distinguished
Jul 22, 2009
232
0
18,830
[citation][nom]nottheking[/nom]Naw, hardware-wise, the Dreamcast was exemplary of some of Sega's problems that drove them out of the industry: they tried an arcade-type architecture, but scaled it down to make it affordable. I'm sure a serious fan like you would remember that, adjusted for inflation, the Dreamcast was possible the cheapest console ever at release?Overall, this approach, the opposite of what SNK took with their ill-fated Neo Geo, showed its flaws: the Dreamcast WAS the weakest: it had the least RAM, (26MB total, compared to 40+ on all the competitors) the weakest processor (the SuperH-4 didn't have more than a single 4-wide vector processor and only ran at a pitiful 200MHz) the least RAM bandwidth, and a GPU that was clocked far slower, and more resembled the N64's Reality Co-Processor in that it had but a single pixel pipeline with two TMUs, versus the four pipelines on its competitors. (the N64 ALSO had a single pipeline with two TMUs... And ran at almost 2/3 the clock rate)[/citation]

The Dreamcast also used a tile-based deferred renderer which reduced a lot of the load as well as the system cost; if overdraw is sufficiently high, this yields a tangible benefit, and you can utilise slower RAM and a lower GPU clock. Compared to the PS2, it also had twice the available on-screen texture memory, and more available sound channels (if that matters to some). Yes, it was the weakest console (it lacked T&L along with features such as 5.1 surround sound, a DVD drive, etc.), but it was the cheapest to produce and the first to become available, and out of that particular generation, the first to properly bring online gaming to the masses. Let's also not forget that it was easily the most innovative of the sixth generation machines.
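
To illustrate the overdraw point in code: below is a minimal, purely hypothetical C++ sketch (not actual PowerVR/Dreamcast code) that counts shading work for a single pixel covered by several opaque layers. An immediate-mode renderer shades every fragment that survives the depth test as triangles arrive, while a tile-based deferred renderer resolves visibility for the tile first and shades each covered pixel exactly once.

[code]
// Minimal, hypothetical C++ sketch (not PowerVR/Dreamcast code): compares how
// much shading work an immediate-mode renderer does versus a tile-based
// deferred renderer when several opaque layers cover the same pixel.
#include <cstdio>
#include <limits>
#include <vector>

struct Fragment { float depth; };  // one opaque fragment covering the pixel

int main() {
    // Five opaque layers submitted back-to-front (worst case for overdraw).
    std::vector<Fragment> layers = { {5.f}, {4.f}, {3.f}, {2.f}, {1.f} };

    // Immediate mode: depth-test and shade each fragment as it arrives.
    int immediate_shades = 0;
    float zbuf = std::numeric_limits<float>::max();
    for (const Fragment& f : layers)
        if (f.depth < zbuf) { zbuf = f.depth; ++immediate_shades; }

    // Tile-based deferred: resolve visibility first, then shade the one
    // surviving fragment -- exactly one shade per covered pixel.
    float nearest = std::numeric_limits<float>::max();
    for (const Fragment& f : layers)
        if (f.depth < nearest) nearest = f.depth;
    int deferred_shades = 1;

    std::printf("immediate-mode shades: %d, deferred shades: %d\n",
                immediate_shades, deferred_shades);  // 5 vs. 1
    return 0;
}
[/code]

Back-to-front submission is the worst case for the immediate-mode path; the deferred path does the same single shade regardless of draw order, which is where the bandwidth and cost savings come from.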

Sega's past mistakes came to haunt them, and the Dreamcast was killed off well before its time.
 

bildo123

Distinguished
Feb 6, 2007
205
0
18,830
[citation][nom]jednx01[/nom]I wouldn't count sony and microsoft out for the count just yet. They could still come out with new systems. It's really about time for them to do so. They could make a system that is a ton better than the current ones. Can you imagine how awesome it would be if they had a xbox 720 (or whatever they are going to call it) with a gtx 580 or 570 or a amd 6950 or 6970 type card. In reality, I doubt that they would spend that much on a card for the new systems, but if they did, it would be incredible.[/citation]

I'd absolutely love to see the power adapter of a console running a GTX 580... that would be the true meaning of a power brick... not to mention the F-16-decibel-level cooling system.
 

nebun

Distinguished
Oct 20, 2008
1,160
0
19,240
[citation][nom]bildo123[/nom]I'd absolutely love to see the power adapter of a console running a GTX 580...that would be the true meaning of a power brick...not to mention the F16 decibel like cooling system.[/citation]
many cheap people would not pay for new tech...it's very expensive :(
 

zak_mckraken

Distinguished
Jan 16, 2004
868
0
18,930
[citation][nom]NapoleonDK[/nom]Sweet. Lets get the rumor mill churning! Gimme HD, digital distribution, a social environment like Xbox Live/Kinect, some comfy and not gimmicky controllers, and the Firefox Babe.[/citation]
Seconded. Although, I would settle for only the Firefox Babe.
 

thegreathuntingdolphin

Distinguished
Nov 13, 2009
133
0
18,630
I'm going to be very disappointed if they come out with it and have a graphics card that is similar to the one on the 360. They could do SOOOOO much better now. Why would they try to match the 360? That is really really really old tech now. Video card tech is insanely better than when they made the 360. Take one of my AMD 6950s and you have more graphics horsepower than an entire stack of xbox 360s...

Well, I think it is pretty obvious that the Wii2 will be much more powerful than the 360. You are talking about something probably on the higher end of the ATI 4xxx series versus something in the ATI 19xx series. Also, it is likely that Nintendo will be using a newer PowerPC CPU which is substantially more powerful than the Xbox's PowerPC, so even if the speed is similar, the architecture will be much newer.

I am surprised Nintendo is only using the 4xxx series, but they did mention they have revamped it, so it might be pretty powerful.
 

nottheking

Distinguished
Jan 5, 2006
311
0
18,930
Why is everyone buying IGN's BS about an "HD 4000 series card"? Those of us familiar with IGN know that "talking out their rear end" is about their only talent. What GPU the Wii2 uses is entirely up in the air; we can only be relatively certain of a few facts about the hardware:

- The GPU will likely be by AMD/ATi. Nintendo has apparently been quite happy with them, and for higher-powered applications, GPUs are currently a duopoly of AMD and nVidia.
- The GPU will almost certainly use the modern "stream processor" design. It's an open question whether it will use vector cores (like nVidia's modern cards and the 360's Xenos) or superscalar cores like AMD's modern PC GPUs. Then again, as plenty of benchmarks have shown us here, both designs can be very competitive; AMD and nVidia have been trading blows on this for almost 5 years now.
- The VRAM will PROBABLY be GDDR5. Nintendo has always shown a fondness for higher-end RAM, favoring speed even when they went with low quantity; the N64 was an early adopter of RDRAM, the GameCube, DSi, and 3DS used proprietary, high-speed pseudo-static RAM, and the Wii, in spite of its weaknesses overall, went with the SAME speed of GDDR3 (1.4 GHz) used in the PS3 and 360. Given this long-running trend, Nintendo will very likely use GDDR5.
- The Wii2 is pretty much guaranteed to handle all games natively at 1080p, a 125% improvement over the 720p minimum of the PS3 and a 251% improvement over the 576p seen in some 360 games (quick pixel-count check after this list). This is largely thanks to Moore's law. This alone would clearly leave the Wii2 on solid footing, well above the other two consoles.
- The CPU will almost certainly be PowerPC-based. In the modern age (and thanks to the prevalence of DLC games), backwards compatibility is important. Nintendo will need to ensure that, at the very least, old WiiWare and VC games run on the Wii2 out of the box; the latter is easy, since those are simply emulated versions of the original ROMs, but the former would all require ports unless they keep a BC architecture. Moving up to a modern PPC 970FX-based design like the 360 and PS3 (regardless of the number and type of cores) would allow this to be skipped (and also ensure hardware BC with Wii and GC games).
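
Quick pixel-count check for the resolution bullet above; a back-of-the-envelope sketch, assuming the "576p" figure refers to a 1024x576 frame buffer as used by some 360 titles.

[code]
// Back-of-the-envelope check of the resolution percentages above
// (assumes "576p" means a 1024x576 frame buffer, as in some 360 titles).
#include <cstdio>

int main() {
    const double p1080 = 1920.0 * 1080.0;  // 2,073,600 pixels
    const double p720  = 1280.0 *  720.0;  //   921,600 pixels
    const double p576  = 1024.0 *  576.0;  //   589,824 pixels

    std::printf("1080p vs 720p: +%.1f%%\n", (p1080 / p720 - 1.0) * 100.0);  // +125.0%
    std::printf("1080p vs 576p: +%.1f%%\n", (p1080 / p576 - 1.0) * 100.0);  // +251.6%
    return 0;
}
[/code]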

Beyond that, we know some basic "duh" things (like it'll have online support in some way, use some form of DVD-related optical media, have motion-sensing controllers) as well as a few things we know are currently BS rumors:

- The controllers will likely NOT have touchscreens: people falsely predicted that for the Wii, even though almost EVERYONE was claiming it came "right from a source at Nintendo." All IGN has to back up the claim of a 6.2-inch screen is a badly photoshopped image that would make the controller far bigger than the original Xbox controller.
- We DON'T know a name yet. No one knows the name of a Nintendo console until they announce it. As usual, when they do announce it, it'll likely be something NO ONE really thought of. "Ultra Nintendo?" Nintendo 64. "Nintendo 128?" GameCube. All the various names for Project Revolution? Wii. (Remember what a joke that was among the peanut gallery at first?) So for now, let's just call it "Wii2" or "Café" so people know what we're talking about, and we can feel comfortable assuming that won't be the real name anyway.
- The console will almost certainly NOT use an AMD CPU. Why's that? PC CPUs spend lots of silicon on being better at multitasking and general-purpose computing, things that are basically wasted on a console. A console instead limits multitasking to passive online services that need only a tiny, weak co-processor, plus one potent, math-heavy CPU to handle gaming and media playback.

[citation][nom]Th-z[/nom]It's not just the overheads from OS, API and graphic driver, on consoles, performance-critical areas can be coded at low level if devs want to, something PC doesn't have the luxury.[/citation]
This clearly illustrates that you don't know a thing about programming, and don't know what you're talking about. You're 100% incorrect here, and are even using the wrong terms.

You can ALWAYS write code for PCs in assembly language. Most decent-performing software relies heavily on direct assembly in order to get the best performance. You hear stuff about SSE and the like? SSE is a subset of assembly instructions. A program can't get much benefit out of them by simply compiling high-level code; to truly see their benefit, one has to write the code in assembly, targeting specific SSE instructions.

It's a core reason why many hasty "we'll port this 360 game to the PC" games run so poorly: on the 360, they make proper use of the vector units in the CPU; standard 4-wide vector units provide up to 400% of the FLOPS of a traditional floating-point unit. In PowerPC CPUs (used in the consoles), the vector unit is discrete, separate from the normal, higher-precision FPU. On a PC, one really HAS to use assembly, because PC CPUs use floating-point units that can act EITHER as a super-high-precision (160-bit) FPU capable of handling only one set of operands at a time, or as a 4x32-bit vector unit.

So yes, your thinking is purely wrong here: one can write low-level code for PCs and extract maximum efficiency. After all, assembly programming started on PCs first, and was originally the easiest option before high-level languages were invented.
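
For what it's worth, here is a minimal sketch of the 4-wide SIMD point, using SSE intrinsics rather than raw assembly (the intrinsics map one-to-one onto SSE instructions such as mulps/addps); the array names and sizes are just illustrative.

[code]
// Minimal sketch of the 4-wide SIMD point above, using SSE intrinsics
// (which map directly onto SSE instructions: mulps/addps).
#include <cstdio>
#include <xmmintrin.h>   // SSE

// Scalar multiply-add: one float per iteration.
void madd_scalar(const float* a, const float* b, const float* c,
                 float* out, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = a[i] * b[i] + c[i];
}

// SSE multiply-add: four floats per iteration (n assumed to be a multiple of 4).
void madd_sse(const float* a, const float* b, const float* c,
              float* out, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        __m128 vc = _mm_loadu_ps(c + i);
        _mm_storeu_ps(out + i, _mm_add_ps(_mm_mul_ps(va, vb), vc));
    }
}

int main() {
    float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, c[4] = {1, 1, 1, 1};
    float s[4], v[4];
    madd_scalar(a, b, c, s, 4);
    madd_sse(a, b, c, v, 4);
    for (int i = 0; i < 4; ++i)
        std::printf("%g %g\n", s[i], v[i]);   // both paths give 6 13 22 33
    return 0;
}
[/code]

Each pass through the SSE loop does four multiply-adds where the scalar loop does one, which is where the "up to 400% of the FLOPS" figure comes from.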

[citation][nom]Twist3d1080[/nom]The PS3 has the equivalent to a Geforce 7900. Although still ancient, its not epic failure that the Geforce FX 5k series was.[/citation]
Not QUITE a 7900. It's hacked down a little: half the ROPs, half the memory interface. The former was a response to the latter: with the restricted memory bandwidth, there wasn't enough to go around for more than 8 pipelines.

As for the original Xbox, it again used a GPU that was only slightly modified: in that case, a GeForce 3, altered to have 2 T&L units instead of 1. Some might take that to mean it was a GeForce 4 Ti, though the Xbox's GPU lacked some subtle changes the 4 Ti had, as well as support for newer versions of DirectX and their accompanying features.

[citation][nom]silverblue[/nom]The Dreamcast also used a tile-based deferred renderer which reduced a lot of the load as well as the system cost; if overdraw is sufficiently high, this yields a tangible benefit, and you can utilise slower RAM and a lower GPU clock. Compared to the PS2, it also had twice the available on-screen texture memory, and more available sound channels (if that matters to some). Yes, it was the weakest console (it lacked T&L along with features such as 5.1 surround sound, a DVD drive, etc.), but it was the cheapest to produce and the first to become available, and out of that particular generation, the first to properly bring online gaming to the masses. Let's also not forget that it was easily the most innovative of the sixth generation machines.Sega's past mistakes came to haunt them, and the Dreamcast was killed off well before its time.[/citation]
No, you're slightly mistaken on the benefits of tile-based rendering over brute-force rendering. To go over a few things:

- Yes, it DOES reduce bandwidth penalties for overdraw. That is the main benefit it has.
- It does NOT let you simply clock the GPU lower: it won't make up for an outright lack of texel throughput (or even pixel throughput in most situations), and this is where the Dreamcast having 1/4 the number of pipelines of any of its competitors comes back to hurt it.
- In the bulk of cases in Dreamcast and other 6th-generation games, overdraw wasn't particularly high: higher overdraw required more geometric detail or objects, and the CPU/T&L units of 6th-generation consoles weren't consistently capable of handling all this to bring us consistent massive overdraw. (the Xbox came close, though)
- As an aside, it's worth noting that the Xbox 360 has a very similar tile-based deferred renderer, making use of a hefty 10MB EDRAM tile buffer. However, in spite of this, it doesn't seem to grant the 360 particular benefits over the PS3: indeed, the 360 typically runs games at lower resolution than the PS3 to compensate for using both HDR and AA simultaneously, while PS3 games (due to G70/RSX design limitations) could only use one or the other.

Remember that the Dreamcast has a rather feeble CPU, again: clocked to 200 MHz, and with a single 128-bit 4-wide vector unit, (which is where Sega got the silly "128-bit" claim) that has to be used for T&L. All competing CPUs clocked higher, and excepting the PS2 (which had a PAIR of vector units to make up for it) also had hardware T&L. (the GPU-based T&L is what allowed the GC to get away with only a 2x32-bit vector FP unit) Geometric detail, along with complex handling of large numbers of objects, would thus be a critical weakness of the Dreamcast compared to the other three consoles. And without this, overdraw situations would find it bottlenecked by the CPU well before any benefits of the tile renderer would typically show.

I will agree that the Dreamcast didn't get its full chance... But that was largely due to Sega, which had a history of botching launches and mishandling things. It DID have an impressive lineup shortly after launch (Shenmue, anyone?), even if, as before, it was weak outside of the STG and fighting genres. Mostly, it was Sega's decision to give up once it was clear the PS2 would pass it that doomed it entirely... Otherwise it likely would've taken a solid 2nd place, leaving the Xbox and GC in a more distant 3rd and 4th. (And 3rd-party sappage could've in fact doomed the GC.) The Dreamcast was definitely the weakest console, but that's not what doomed it in the slightest. (After all, even over the next few years, there's no way the 360 or PS3 will catch up to the Wii.)

The real problem rested within the minds at the head of Sega. The design reflected part of this: Sega never DID quite understand how to engineer a console; Nintendo did. Sega is a maker of arcade booths, and their strategy had been to take an arcade machine and scale it down to an affordable home console: this resulted in the market misses that the Master System, Saturn, and Dreamcast were. (The Genesis was the one exception, and it showed, giving the SNES a very cutthroat run for its money.)

But even with the Genesis, some of their marketing showed Sega's main psychological weaknesses: they were too intent on their release combating the OUTGOING generation. Remember "Genesis does what Nintendon't"? It was advertising the advantages the 4th-generation Genesis had over the 3rd-generation NES. This kept up and was reflected in later systems, especially the Dreamcast: the design choices scream of a system designed primarily to trump the N64, rather than looking ahead. Nintendo (and later Sony) avoided this mistake by recognizing the cyclical, generation-based nature of the console wars. This sort of short-sightedness in design, coupled with
 

silverblue

Distinguished
Jul 22, 2009
232
0
18,830
[citation][nom]nottheking[/nom]No, you're slightly mistaken on the benefits of tile-based rendering over brute-force rendering. To go over a few things:- Yes, it DOES reduce bandwidth penalties for overdraw. That is the main benefit it has.- It does NOT let you simply clock the CPU lower: it won't make up for outright lack of texel throughput, (or even pixel throughput in most situations) and this is where the Dreamcast having 1/4 the number of pipelines of any of its competitors comes to hurt it.- In the bulk of cases in Dreamcast and other 6th-generation games, overdraw wasn't particularly high: higher overdraw required more geometric detail or objects, and the CPU/T&L units of 6th-generation consoles weren't consistently capable of handling all this to bring us consistent massive overdraw. (the Xbox came close, though)- As an aside, it's worth noting that the Xbox 360 has a very similar tile-based deferred renderer, making use of a heft 10MB EDRAM tile buffer. However, in spite of this, it doesn't seem to grant the 360 particular benefits over the PS3: indeed, the 360 typically runs games at lower resolution than the PS3 to compensate for using both HDR and AA simultaneously, while PS3 games (due to G70/RSX design limitations) could only use one or the other.Remember that the Dreamcast has a rather feeble CPU, again: clocked to 200 MHz, and with a single 128-bit 4-wide vector unit, (which is where Sega got the silly "128-bit" claim) that has to be used for T&L. All competing CPUs clocked higher, and excepting the PS2 (which had a PAIR of vector units to make up for it) also had hardware T&L. (the GPU-based T&L is what allowed the GC to get away with only a 2x32-bit vector FP unit) Geometric detail, along with complex handling of large numbers of objects, would thus be a critical weakness of the Dreamcast compared to the other three consoles. And without this, overdraw situations would find it bottlenecked by the CPU well before any benefits of the tile renderer would typically show.[/citation]

Okay, maybe not a slower GPU clock, but you could certainly use slower RAM. That's one of the things the Kyro II was known for - 175MHz SDRAM when everyone else was moving to DDR. TBDR reduces bandwidth usage, hence it doesn't need such fast RAM.
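
Rough numbers for the bandwidth point; this is a back-of-the-envelope sketch with assumed figures (SD frame, 16-bit color, 60 fps, average overdraw of 3), not measured Dreamcast data.

[code]
// Back-of-the-envelope sketch of why a deferred tiler can get away with
// slower RAM. All figures here are assumed for illustration, not measured.
#include <cstdio>

int main() {
    const double pixels   = 640.0 * 480.0;  // standard-definition frame
    const double bytes    = 2.0;            // 16-bit color write per pixel
    const double fps      = 60.0;
    const double overdraw = 3.0;            // assumed average overdraw factor

    // Immediate mode: every overdrawn fragment costs an external frame-buffer write.
    const double immediate = pixels * bytes * fps * overdraw;
    // Tile-based deferred: visibility is resolved on-chip, so roughly one
    // external write per visible pixel.
    const double deferred = pixels * bytes * fps;

    std::printf("immediate: %.1f MB/s, deferred: %.1f MB/s\n",
                immediate / 1e6, deferred / 1e6);  // ~110.6 vs ~36.9
    return 0;
}
[/code]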

The Dreamcast was rated at 3 million lit and textured polygons/second; however, Sega was rather conservative with that figure (quite unlike the PS2 hype flying around at the time about 75 million polygons or some other rubbish), and the machine did exceed that rated figure (Le Mans 24 Hours, for example). It may not seem a huge step up over the N64, but the large improvement in geometry calculation plus the constant use of standard definition should really put to bed any N64 comparisons. For a while, developers were struggling to get to grips with the PS2, so Dreamcast graphics weren't exactly behind the curve at the time, even if the PS2 was several orders of magnitude more powerful. For what it's worth, in situations where overdraw would really stack up, such as Unreal Tournament, the Dreamcast version fared much better.
 

swamprat

Distinguished
Apr 20, 2009
108
0
18,630
[citation][nom]zak_mckraken[/nom]Seconded. Although, I would settle for only the Firefox Babe.[/citation]
I miss being able to give the thumbs up.

More important than all of the hardware-based chit-chat - what a blunder in the press release grammar!
between it’s launch
"Its" launch, tisk tisk. Whoever proof read that should be relieved of they're job...
 

Th-z

Distinguished
May 13, 2008
9
0
18,510
@nottheking

Perhaps I wasn't clear enough. I was talking about low-level access to the graphics hardware (programming for the GPU, not the CPU).
 

theroguex

Distinguished
Dec 9, 2009
70
0
18,580
[citation][nom]kinggraves[/nom]Some of you young pups might not realize, but back in the days developers would make two versions of a game, one for consoles and one for PC. That way, both could be fully utilized to their purpose. The only thing that prevents them from doing that now is that developers are lazy and don't want to put in the effort.[/citation]

Totally incorrect. The developers would love to do it old-school like that, but their publishers don't allow it. Consoles are much more popular than gaming PCs, and thus publishers budget primarily for them. PC versions these days tend to be afterthoughts, ported from the console version because it's cheaper. It all comes down to money.
 

jasonpwns

Distinguished
Jun 19, 2010
70
0
18,580
[citation][nom]nottheking[/nom]As I mentioned on the PREVIOUS rumor-spreading article here on Tom's, there was just the console in history to prove this point: the Neo Geo. Adjusted for inflation, it had an MSRP of $1,070US. In 1990, it presented 2D graphics vastly exceeding the SNES or Genesis, allowing for hundreds of giant, on-screen sprites with approximately 16 times as many colors on-screen at once. It also supported massive cartridge sizes of over a gigabit.Of course, given its high price tag (and the high price tag of such massive carts) the thing never took off. It wasn't even "ahead of its time," since it was limited to 2D graphics. It was just a vastly more expensive console, and people decided the extra power wasn't worth the cost.Actually, it's pretty much available to anyone who bothers to do a clean boot. And in the graphics department, there's no magical "overhead" the PC gets slowing it down: both the Xbox 360 and PC use the EXACT same abstraction layers, and close to the exact same DirectX specifications. It's why, say, an Xbox 360 gets 30fps on Mass Effect 2 at 1280x720 on medium-ish settings without AF, while on the PC, a 4870 can get >90fps at 1920x1200, with maximum details, DX 10, AFx8, etc.Naw, hardware-wise, the Dreamcast was exemplary of some of Sega's problems that drove them out of the industry: they tried an arcade-type architecture, but scaled it down to make it affordable. I'm sure a serious fan like you would remember that, adjusted for inflation, the Dreamcast was possible the cheapest console ever at release?Overall, this approach, the opposite of what SNK took with their ill-fated Neo Geo, showed its flaws: the Dreamcast WAS the weakest: it had the least RAM, (26MB total, compared to 40+ on all the competitors) the weakest processor (the SuperH-4 didn't have more than a single 4-wide vector processor and only ran at a pitiful 200MHz) the least RAM bandwidth, and a GPU that was clocked far slower, and more resembled the N64's Reality Co-Processor in that it had but a single pixel pipeline with two TMUs, versus the four pipelines on its competitors. (the N64 ALSO had a single pipeline with two TMUs... And ran at almost 2/3 the clock rate)No, I'm not ignorant, I just happen to be someone without love for any particular console, and a level of hardware understanding rare outside the likes of Intel, AMD, or nVidia. Resident Evil 4 was an anomaly here: most recognize that Capcom spent a lot of time to ensure that the GC version was optimized as much as possible.Also, I'm convinced that the PS2's CPU was more potent: it had a lower clock rate, but unlike the Game Cube's, it had far better floating-point capabilities, thanks to the fact that it had a pair of 4-wide vector units PLUS a conventional single-issue floating-point unit... The Game Cube, on the other hand, had a single unit that could do either a single 64-bit FP operation or two 32-bit ones. That meant that in 32-bit single precision (the kind used for performance measures) the PS2 could handle 4.5 times as many per clock cycle: 18 vs. 4. (FP ops are counted in double since the most common instruction, "multiply-add" handles two operations per clock cycle. It's universal for all processors)[/citation]

Sorry, but the GameCube's processor was more powerful, and that was noticeable in games on both consoles; the PS2 also had weaker graphics capabilities. The PS2 pulled fewer frames, stuttered in action scenes in some games, and had issues where the GameCube could cope. The GameCube's processor was not only faster in clock speed, it also had more cache, and the architecture used to interact between the memory, processor, and GPU was superior to the PS2's. On top of that, the GameCube used a smaller die...
 

nottheking

Distinguished
Jan 5, 2006
311
0
18,930
[citation][nom]silverblue[/nom]Okay, maybe not a slower GPU clock, but you could certainly use slower RAM. That's one of the things that the Kyro II was known for - 175MHz SDRAM when everyone else was moving to DDR. TBDR reduces bandwidth usage hence not needing such fast RAM.The Dreamcast was rated at 3m lit and textured polygons/second, however Sega were rather conservative with that figure (quite unlike the PS2 hype that flew around at the time about 75 million polygons or some other rubbish) and the machine did exceed that rated figure (Le Mans 24 Hours, for example). It may not seem a huge step up over the N64, for example, but the large improvement in geometry calculation plus the constant use of standard definition should really put to bed any N64 comparisons. For a while, developers were struggling to get to grips with the PS2, so Dreamcast graphics weren't exactly behind the curve at the time even if the PS2 was several orders of magnitude more powerful. For what it's worth, in situations where overdraw would really stack up, such as Unreal Tournament, the Dreamcast version fared much better.[/citation]
It's worth noting that for the PS2 and Xbox, the "polys/sec" figure was a theoretical peak, so yeah, you can't compare them to the Dreamcast's. Specifically, it counted un-lit, un-textured polygons: 75 million/sec is what you'd get with a single 4-wide vector unit @300MHz. (a 4-wide vector unit can process a triangle in four clock cycles; traditionally one cycle for each vertex, plus a fourth cycle for the normal)
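
As a quick check, that theoretical peak falls straight out of those two numbers:

[code]
// Quick check of the theoretical-peak figure described above: a 4-wide
// vector unit at 300 MHz, four clock cycles per triangle.
#include <cstdio>

int main() {
    const double clock_hz       = 300e6;  // 300 MHz
    const double cycles_per_tri = 4.0;    // 3 vertices + 1 normal
    std::printf("%.0f million polygons/sec\n",
                clock_hz / cycles_per_tri / 1e6);  // 75
    return 0;
}
[/code]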

Nintendo's figures, though, WERE more real-world... They estimated approximately 20 million (assuming trilinear-filtered textures and an average of two light sources per polygon), by using a dedicated hardware T&L unit in the GPU. Real-world situations (read: in games) put the PS2 close to that performance level: typically 10-19.5 million/sec, though some games, of course, could surpass that. I would note that geometry is the one area where the Dreamcast DID manage to hold closer to the PS2 and GC than to the N64; the N64's hardware T&L was *very* prototypical, and its real-world performance was only stated by Nintendo as 120k trilinear/multi-lit polys/sec... Though obviously many games show far higher levels (typically 240-450k), still all well below 1 million, leaving the DC's 3 million closer (in exponential terms) to the PS2 and GC than to the N64.

Later N64 games did make heavy use of standard-definition resolutions: in fact, it was standard (or a standard option) for any game that used the Expansion Pak, including even higher-end games that essentially required it, like Perfect Dark. If one wanted to compare the total detail the consoles were capable of, it'd only make sense to take the most stunning games of each. I think you might agree that Shenmue would be a good candidate for the Dreamcast, while many forget that the N64 got a VERY solid port of Indiana Jones and the Infernal Machine.

[citation][nom]Th-z[/nom]@notthekingPerhaps I wasn't clear enough. I was talking about low level access to the graphic hardware (programming for the GPU, not CPU).[/citation]
Then you're even more wrong: low-level programming for the GPU is the ORIGINAL STANDARD. In fact, programming GPUs at a high level wasn't even truly possible until HLSL was invented a few years AFTER shaders became standard. And since critical shaders are comparatively small pieces of code that get executed heavily, most development *IS* done at a low level. In fact, it is the only way to use features not part of a specified, required Direct3D set: for instance, it was the only way to achieve tessellation on any card pre-DX11. There is also irony here: you act as if high-level GPU programming is not found much in console games. Again, remember that most PC games now are just ports of Xbox 360 games anyway... And no, since both use DirectX, they typically don't bother to rewrite the shaders at all. Since both use DirectX to abstract the GPU, one can execute the same GPU code on both.

Take a look at Tom's benchmarks: you can see the similarities. Consoles don't somehow magically extract more power from their GPUs: the PS3's performance doesn't even match an ordinary 7900GT (on which the RSX is based, albeit cut down), let alone surpass it. To think or argue otherwise is purely fallacious.

[citation][nom]jasonpwns[/nom]Sorry, but the gamecubes processor was more powerful and that was noticed in games for both consoles, the PS2 also had less graphics capabilities. PS2 pulled less frames, stuttered at action scenes in some games, and had issues whereas the gamecube could handle that. The gamecube processor was not only faster in clock speed it also had a more cache and the architecture used to interact between the memory, processor, and gpu was more superior than the PS2. On top of that the gamecube used a smaller dye...[/citation]
The "GameCube processor was more powerful" is an oft-repeated statement that's never backed up. To properly compare it, it's a modified PowerPC 750CXe; the only improvements it really supports are a few new instructions, primarily centered around SIMD support, allowing it to treat its single 64-bit FPU as a 2x32-bit SIMD/vector unit.

Those of us a bit more familiar with PowerPC processors would know that the GameCube's CPU is what Mac fans called the "G3"; its 486 MHz speed is a bit faster than typically used in G3 Macs (albeit lower than the ones actually using the CXe rather than earlier 750s), though the GameCube's 256KB of cache is actually lower (G3 iMacs and Power Macs typically used 512KB). And how did those machines stack up against Intel- and AMD-based ones? They got utterly slaughtered. Yep, this was around the same time Intel released their 733 MHz Pentium III, very similar to the CPU that would feature in the Xbox.

So no, the larger amount of cache the GameCube's CPU had (256KB vs. 128KB, 40KB, and 24KB for the Xbox, PS2, and DC respectively) didn't really help it out. And most of us enthusiasts know that clock speed on its own matters little; things like the Pentium 4 proved this quite aptly. GPU-wise, it's close to a toss-up, admittedly: the PS2 does sport a 4:8 pipeline/TMU array, compared to only 4:4 for the GameCube. Geometric performance is, in real-world situations, pretty close; the GC may have a dedicated T&L unit, but the PS2 has an almost entirely dedicated vector unit in the CPU for the exact same purpose. Overall, the PS2 is more potent, with a superior CPU (capable of handling 4-wide SIMD rather than 2-wide) and better memory (one large pool of 32MB, while technically 16MB of the GC's 43MB is super-slow ARAM only truly useful for buffering media from the disc).

Worse yet for the GameCube, it doesn't have full-on DMA support, a feature that lets a computer transfer data from one memory section to another without tying up the CPU. The GameCube appears to support DMA only between the ARAM and main DRAM; needing the CPU to manually transfer textures into that 2MB texture cache reduces its usefulness. Without full DMA, it fails to be the silver bullet in graphics performance that its boosters claim it is... And in real-world applications, it often means the GC is typically left to run on only 2MB of textures at a time.

And overall, if a chip uses a smaller die on the same fabrication process... that's because it has fewer transistors, and hence is simpler, and almost certainly less capable. The PS2's Emotion Engine is larger because it has a pair of 4x32-bit SIMD units PLUS a separate FPU, while the GameCube sports only a single 64-bit FPU that can adjust to act like a 2x32-bit SIMD unit.
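
For completeness, here is the per-clock arithmetic behind the "18 vs. 4" figure quoted earlier: a rough sketch using that post's counting convention (a multiply-add counts as two FP operations per lane) and assuming every unit issues a madd each cycle.

[code]
// Rough per-clock FP-op comparison behind the "18 vs. 4" figure, using the
// quoted post's convention (a multiply-add = two operations per lane) and
// assuming every unit issues a madd each cycle.
#include <cstdio>

int main() {
    const int ops_per_madd = 2;
    const int ps2_lanes = 2 * 4 + 1;  // two 4-wide vector units + scalar FPU
    const int gc_lanes  = 2;          // one FPU usable as a 2-wide SIMD unit

    std::printf("PS2: %d FP ops/clock, GameCube: %d FP ops/clock\n",
                ps2_lanes * ops_per_madd, gc_lanes * ops_per_madd);  // 18 vs. 4
    return 0;
}
[/code]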
 

Guest

Guest
Nintendo needs better games, not more system components. The Wii's design by itself is a great concept, but the company failed to keep consumers interested due to the lack of game options, and the gameplay tends to be more children-oriented. They need more mature games for adults.
 