[citation][nom]AlexTheBlue[/nom]1) SGX can do anything you can do with DX9. So it's up to the devs. Performance is the limiting factor for SGX, SGX XT, and 3DS[/citation]
True for the most part; simply listing feasible features is much easier to follow than a detailed analysis of how it'd handle each situation.
The other caveat there is that you can't just say "DX9 support." As both ATi/AMD and nVidia have demonstrated in the news time and again, you most certainly can "cheat" there and not actually deliver everything that's claimed.
[citation][nom]AlexTheBlue[/nom]2) 360 can do 3D.[/citation]
Ah, my bad, then; I had missed where they'd enabled it.
[citation][nom]AlexTheBlue[/nom]3) The Genesis had a faster CPU, period. Clock per clock, the SNES processor was faster, but in real usage scenarios it is just not as fast. Regarding its sound, I wasn't trying to say its sound system was superior, I was trying to say it wasn't at as big a disadvantage as people think. It's just harder to get good sound because it doesn't support ADPCM, etc. The Yamaha YM2612 FM chip, Texas Instruments SN76489 PSG, and the Z80 was a decent combo.[/citation]
The Genesis' CPU was certainly weaker in the end; all of its instructions took at least twice as many cycles, often far more, to execute. None of the base 65c816 instructions took longer than 7 cycles, while ALL of the 68000's took a minimum of 2, with most needing 4-10 and some as many as 26+; for virtually every instruction that's "the same" on each, the 68000's equivalent tends to take twice as many cycles to complete. Hence, for a proper comparison, at a MINIMUM you'd have to halve the given frequency of the Genesis' 68000 (as it can only begin an instruction on every other clock cycle); in practice, that puts it not far from the input clock frequency of the SNES. A further problem is that the 65c816 family is pipelined, allowing it to potentially start a new instruction every cycle, running multiple simultaneously. Meanwhile, the 68000 didn't have proper pipeline capability; while it could fetch and begin decoding a second instruction while the first operated, it largely had to sit waiting for the first one to finish before doing much of anything.
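To put rough numbers on that "halve the clock" argument, here's a quick back-of-the-envelope sketch; the 7.67 MHz and 3.58 MHz figures are the commonly quoted clocks, assumed here, and this is a crude first-order correction, not a real benchmark:

```python
# Back-of-the-envelope effective-clock comparison.
# Assumed clocks: 7.67 MHz (Genesis 68000), 3.58 MHz (SNES 65c816).
genesis_mhz = 7.67
snes_mhz = 3.58

# The 68000 needs a minimum of 2 cycles per instruction (most take 4-10),
# so as a crude first-order correction, halve its clock:
genesis_effective_mhz = genesis_mhz / 2
print(f"{genesis_effective_mhz:.1f}")  # 3.8 -- right in the SNES' ballpark
```

Obviously real-world throughput depends on the instruction mix, but it shows why the raw "7.67 vs 3.58" comparison is misleading.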
As for the audio, while of course you can make decent music with the Yamaha/TI pair the Genesis has, the same could really be said even of the NES, which has something only marginally better than the TI synth alone. Overall, though, the Genesis is far more comparable to the NES than to the SNES in terms of audio capabilities. The lack of decent compressible audio (such as ADPCM) is PART of it, yes, but far from all; even if the Genesis could reliably make use of its single potential PCM channel for percussion, that'd allow for just one track of it, and deny its use to anything else. (such as voice) And from my memory, the TI synthesizer was so terrible that its white noise channel wasn't used for percussion, developers instead favoring heavily distorted, stacked sine waves; hence the distinctive, odd sound of Genesis music.
But overall, in audio capabilities, it's kinda hands-down: the Genesis gets 8 KB to work with to program its synthesizers, vs. the SNES's 64 KB with decent compression, (making it equivalent to up to 256 KB) letting it pack a decent equivalent to what we'd call a "sound font" today.
[citation][nom]AlexTheBlue[/nom]4) You've obviously never imported any 2D fighting games or shooters for Saturn. Look at Radiant Silvergun, Darius Gaiden, Street Fighter Zero 2/3, X-men vs Street Fighter (better port than PSX), Thunder Force V, Soukyugurentai. The Saturn's video chips were BOTH excellent at 2D.[/citation]
Yes, the '3D' chip had to handle sprites... Nonetheless, it was still not a very good setup; together, the 2D system was marginally better than, say, the Game Boy Advance. (which itself was just an incremental upgrade over the SNES) I've seen several of those games, and actually played a bit of Radiant Silvergun. Visuals-wise, though, I stand by my statement; RS seems a little lackluster compared to, say, Star Soldier: Vanishing Earth for the N64, and Killer Instinct Gold pretty readily displaces what I've seen in Saturn games, including those you've mentioned.
[citation][nom]silverblue[/nom]Woah. It's a far cry from desktop integrated to mobile graphics, and let's remind ourselves just how poor Intel's drivers have been. According to specs I've found floating about (hey, it seems to be the thing to do nowadays), the 5-series SGX chips have a feature set equivalent to DirectX 10.1 and OpenGL 2.0. The PICA200 is reported to do 40 million triangles and 400 million pixels per second at a clock speed of 100MHz, however the SGX has deferred rendering and thus doesn't waste its time on invisible geometry, thus automatically being more efficient. PICA200 is supposed to be a very developer friendly GPU, which could explain its adoption, but until we see two identical demos side-by-side on both SGX and PICA, we're not really going to know. Saying it's nearly as powerful as the PS3 is a little far-fetched, though.[/citation]
Key words being "minus the screen size difference." Indeed, dropping from 1280x720 to 800x240 means you only need ~21% of the pixel throughput for drawing, (or ~33% of what's needed for the 360's common "576p" mode) though other demands, such as geometry, don't necessarily drop in proportion.
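For anyone who wants to check those percentages, a quick sketch; the one assumption is that the 360's "576p" scaled mode means 1024x576:

```python
# Pixel-count ratios behind the ~21% / ~33% figures.
# (Assumption: the 360's "576p" scaled mode is 1024x576.)
def pixel_ratio(w1, h1, w2, h2):
    """Fraction of pixels the first mode needs relative to the second."""
    return (w1 * h1) / (w2 * h2)

print(f"{pixel_ratio(800, 240, 1280, 720):.0%}")  # 21%
print(f"{pixel_ratio(800, 240, 1024, 576):.0%}")  # 33%
```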
As for the SGX, it appears to be more-or-less fully OpenGL ES 2.0 compliant, though I don't think it's compliant with any recent version of DirectX, and most certainly not 10.1. Although good information seems hard to find, I'd say its shader capacity is considerably more limited; I don't think it could run most standard HDR algorithms, such as those built around OpenEXR, which is considered the baseline for "true" HDR and is/was the standard for HDR on the 360 and PS3. (at the very least, I certainly don't see any iOS apps using it!)
Similarly, deferred shading is something that pretty much any modern GPU (including the 360's and PS3's) is quite capable of; it's just a matter of the developers choosing to use it. The tile-based part isn't as serious an advantage as it's made out to be: the Xbox 360 operates on a similar system, but it doesn't seem to grant it a particularly potent advantage over the PS3. Thus, "automatically more efficient" seems more a technicality than anything. (And IIRC, Nintendo seems to prefer portal-based rendering anyway, which makes tile-based drawing largely superfluous)
[citation][nom]silverblue[/nom]Having programmable shaders is a big plus for the 3DS, however remember the resolution and screen size are going to make it look far better than it would if shown at, say, proper PS3 or Xbox 360 resolution, or on a large television as seems to be the norm. Regardless, any shaders are going to be better than no shaders at all.Not quite. Perhaps the engine and the amount of overdraw it generates just doesn't suit the PICA200 architecture? And again, the 3DS can't hope to rival the PS3, a machine whose GPU runs at 550MHz.[/citation]
Yes, as I mentioned above, I'd qualified those capabilities on the resolution difference. Also, GPU clock isn't everything, but even there, the PICA200 can run at up to 400 MHz, which seems to be the agreed-upon clock speed for the 3DS, not the 133 MHz figure IGN gave, which is guaranteed to be incorrect.
As for overdraw, that primarily drains two resources: pixel throughput (handled by the ROPs) and memory bandwidth. The former isn't really limited on the PICA200; like most contemporary embedded solutions (high-end mobile, 6th-gen console) it handles four pixels per clock, giving it up to 1.6 gigapixels/sec. That's enough to fill its 800x240 screen up to 8,333 times per second, in contrast to the PS3 at 4,774 fills of 1280x720. Verily, it has no shortage there. (even if it WAS 133 MHz, it'd still be ~2,771)
Rather, the limit comes from memory bandwidth; both the ARM-11 and PICA200 are designed more or less exclusively for mobile DDR2 on a 64-bit interface. Give or take on the speed chosen, this'll get them ~4.26 GB/sec of bandwidth. Given that the bandwidth usage is chiefly dependent on the resolution, this almost cleanly evens out: about 19% the VRAM bandwidth of the PS3 for at most 21% the resolution. (less resolution if the PS3 is running higher than 720p)
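A rough sketch of the fill-rate and bandwidth figures above; the ROP counts, the 550 MHz/22.4 GB/s RSX figures, and the effective 533 MHz DDR2 clock are all assumed from commonly quoted specs, not anything official:

```python
# How many times per second each GPU could repaint its whole screen,
# plus the bandwidth ratio. Assumed: 4 pixels/clock on the PICA200,
# 8 ROPs at 550 MHz on the PS3's RSX, 22.4 GB/s RSX VRAM bandwidth,
# and 64-bit mobile DDR2 at an effective 533 MHz on the 3DS.
def fills_per_sec(pixels_per_clock, clock_hz, width, height):
    """Full-screen repaints per second at the given fill rate."""
    return pixels_per_clock * clock_hz / (width * height)

print(round(fills_per_sec(4, 400e6, 800, 240)))    # 8333
print(round(fills_per_sec(4, 133e6, 800, 240)))    # 2771
print(round(fills_per_sec(8, 550e6, 1280, 720)))   # 4774

bw_3ds_gbs = 533e6 * 8 / 1e9       # 64-bit bus -> ~4.26 GB/s
print(round(bw_3ds_gbs / 22.4, 2)) # 0.19 -> ~19% of the PS3's VRAM bandwidth
```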
[citation][nom]silverblue[/nom]Pretty graphics do not a console make. It didn't have an advantage over the Saturn and PlayStation in terms of resolution nor colour depth, only in potential polygon output and more modern effects such as anti-aliasing and mipmapping. The next console up was the Dreamcast which was massively superior in terms of polygon output and specs, and ran at a constant 640x480. Considering that the Dreamcast was generally inferior to the Playstation 2, and the N64 wasn't a large step up from its 32-bit contemporaries, I simply can't agree that the N64 was nearer to PS2 performance levels. What's more, the decision to make it a cartridge-based system was a bad choice and helped to curtail the system's true power.[/citation]
Actually, it had PLENTY of an advantage over the Saturn in particular. The Sega Saturn's resolution and color depth were hard-capped by its fixed 256KB framebuffer. Do the math: if you want 24-bit color depth, that limits you to ~87 kpixels... or, in other words, approximately 320x240. The much-vaunted Virtua Fighter 2 showed that if you wanted "high resolution" (be it 512x512, 704x372, or any of the other such EDTV resolutions it supported) you had to drop color depth to 8-bit palettes.
On the flip side, the Nintendo 64 technically had nothing outright PREVENTING it from running HD resolutions; both of the reasons it doesn't involve practical rather than hard limits. One, it'd require obscene amounts of RAM: for 24-bit color, you'd need 2.64 MB for a 720p framebuffer and 5.93 MB for 1080i/p; given that even with the expansion the machine only had 8 MB, that's a lot to use. Secondly, once you pushed the resolution that high, the N64's single-pipeline GPU couldn't afford any overdraw lest it drop below 30fps, which made such resolutions impractical for gaming, though not impossible. As a result, the N64 typically capped at 720x480 @ 24-bit, approximately four times the maximum the Saturn could do.
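The framebuffer math here is easy to verify; this assumes a single buffer with packed pixels:

```python
# Framebuffer size math (assumes a single buffer, packed pixels).
def fb_bytes(width, height, bits_per_pixel):
    """Size of one framebuffer in bytes."""
    return width * height * bits_per_pixel // 8

# Saturn: fixed 256 KB framebuffer -> max pixel count at 24-bit color
saturn_fb = 256 * 1024
print(saturn_fb // 3)                              # 87381 pixels, ~320x240 territory

# N64 at HD resolutions with 24-bit color:
print(round(fb_bytes(1280, 720, 24) / 2**20, 2))   # 2.64 (MB for 720p)
print(round(fb_bytes(1920, 1080, 24) / 2**20, 2))  # 5.93 (MB for 1080)
```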
As for the Dreamcast, it actually wasn't ALL that much more potent than the N64, particularly graphics-wise. Sega seemed to never understand the "generational" part of the console wars. Hence, they made the Dreamcast to beat the N64... rather than prepping it to fight the PS2. (This is the main reason it failed) The GPU, in particular, was weak: while the PS2, Game Cube, and Xbox would all use 4-pipeline GPUs capable of handling 8 textures per clock, the Dreamcast was stuck with a 1-pipeline GPU handling only 2 textures, the SAME as the N64. Its only advantage over the 64 was that it clocked at 100 MHz instead of 62.5... but of course, the PS2, GC, and Xbox clocked at 147, 162, and 233 MHz respectively, and all incorporated more dedicated hardware for polygons, while the DC had to rely on its LONE 128-bit SIMD unit for geometry...
I do actually retract my statement comparing the N64 to the PS2; taking another look shows I'd grossly under-estimated the latter. However, the DC lags badly behind the other three 6th-gen consoles, looking FAR closer to the N64 than to any of the others. I mean, the games show this; take what consensus declares the most impressive title on each console: Shenmue for the Dreamcast, and Indiana Jones and the Infernal Machine for the N64. Both games ran at 640x480 and look genuinely comparable; put a shot of Jones (note the N64 interface showing it's not the PC version) against a similar shot from Shenmue, and the latter shows slightly better texture resolution and polygon count, but overall they're rather close. And of course, the specs bear it out; GPU-wise the DC was by far closer to the N64 than even the Saturn, really; it boasted 60% higher throughput on both pixels and textures, and in total memory bandwidth it was 46% higher than the N64 with the expansion pack. (177% higher than without the expansion) Conversely, it had little more than a quarter the bandwidth of any of the other 6th-gen consoles.
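The 60% figure is just the clock ratio, since (per the assumed specs above) both GPUs push 1 pipeline with 2 textures per clock:

```python
# Clock-ratio sketch for the DC-vs-N64 throughput claim; both GPUs
# handle 1 pixel / 2 textures per clock, so throughput scales with clock.
# (Assumed clocks: 100 MHz for the DC's CLX2, 62.5 MHz for the N64's RCP.)
dc_clock_mhz = 100.0
n64_clock_mhz = 62.5
advantage = dc_clock_mhz / n64_clock_mhz - 1
print(f"{advantage:.0%} higher pixel/texel throughput")  # 60% higher
```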
[citation][nom]silverblue[/nom]Whoa whoa whoa... GDDR3? What are you talking about? That didn't come to consoles until the 360. Also, the GC was not as far ahead of the DC as you make out. I should also point out that the DC was dead by the time the GC came out and as such there's little point in comparing with hardware that's three years newer.[/citation]
You mis-read: the Wii was the one that used GDDR3; I was comparing/contrasting the Wii with the GameCube. Also, regardless of being "dead already," it was still part of the 6th generation, just as much as the Saturn was part of the 5th and the TurboGrafx-16 the 4th. Sega's mistake of not understanding generations, and trying to compete against an outgoing console, doesn't change the fact that the hardware they picked was far more comparable to the N64 than to the other 6th-generation consoles that would crush it and drive Sega out of consoles entirely. (46-60% better graphics capabilities than the N64, vs. 1/4-1/10th those of the PS2, and worse still against the GC and Xbox) To say nothing of the fact that the DC was the only 6th-generation console to use CD-based tech instead of faster, higher-capacity DVD-type discs...
So yeah, there's a point in comparing/contrasting the hardware; it helps explain why Sega and its Dreamcast failed.
[citation][nom]silverblue[/nom]Also, memory bandwidth isn't the be-all-and-end-all but it certainly helps. Maybe so, but the other two consoles had unified RAM. Consoles traditionally never have enough of the stuff.[/citation]
Actually, in terms of graphical capabilities, the memory bandwidth available to the GPU for texture sampling (read) and buffer writing (write) tends to have a stronger correlation with graphical performance than anything else.
A unified memory architecture isn't inherently superior to a split one; both have their advantages and disadvantages. I'd liken the advantages of a split arrangement for graphics to the multi-level cache hierarchy CPUs have nowadays. How much difference it makes lies in how it's used... which in turn relies on how well-thought-out the arrangement is.
At one end is a system like the Sega Saturn: with 8 different banks of memory outright segregated by WHAT could use them, it was clearly a major liability. Most importantly, only 1.5MB of its memory was dedicated to video, and of that, a whopping 512KB was dedicated exclusively to 2D backgrounds, which is downright excessive for anything that's not a pure-2D fighter. (or potentially a 2D shooter) Similarly, framebuffer size was capped at 256KB, (704x372, ~400x300, and ~320x240 for 8, 16, and 24-bit, respectively) and a paltry 512KB was left for texture data, hardly befitting a 3D machine. In this case, the Saturn would've been MUCH better served by at least unified graphics memory.
On the flip side, MOST consoles can gain from separate interfaces, which can, say, provide the CPU with lower-latency memory (CPUs usually prefer low latency to high bandwidth) while greatly increasing the bandwidth to the GPU. This is how the SNES/Genesis handled it, and how the Dreamcast would've, had its CPU/GPU memory banks not been identical in performance. For the PS2 and Game Cube, the banks functioned as "tiers" that made programming a little trickier, but could yield higher bandwidth than the Xbox, slightly offsetting the latter's vast advantage in CPU and GPU strength. Lastly, the Xbox itself had purely unified memory, which allowed it greater flexibility in apportioning capacity/bandwidth between CPU and GPU, making it by far the easiest console to program.
[citation][nom]silverblue[/nom]I think the Mega Drive/Genesis could output much higher than 8KHz, however I can't prove anything. However, the SNES used 32KHz sound and not 44 as far as I remember. Additionally, the Sega machine could utilise the 4-channel PSG at the same time as evidenced on so many of their games. I did read once that you could achieve a 96KHz rate from the DAC but it wasn't recommended. That bus was a killer, though. Not sure I agree about the Mega Drive being slower, however the 5A22 certainly had the higher IPC.One thing I should note is that the Saturn didn't have hardware ADPCM, something I still don't understand today. However, it was perfectly capable at handling it in software as evidenced by NiGHTS and Burning Rangers.[/citation]
The Genesis' output was higher than 8 KHz. However, its PCM support was limited to 8 KHz, 8-bit. (and I THINK mono only, but I'm not gonna claim it) The SNES had 32 or 44.1 KHz output, and supported ADPCM samples up to that sampling frequency, at a depth of 16 bits, in stereo; so if not CD quality, then very close to it. As for 96 KHz support, the frequency of the Genesis' DAC was fixed, though I've known plenty of people to rip the FM/PSG code from games and run it on newer, stand-alone emulation that could output at whatever sampling rate was desired; since Genesis music doesn't contain "instrument/voice/waveform" data, merely note data, it's the same as taking an old MIDI file and running it through a modern, high-end, multi-gigabyte SoundFont: in either case, instant higher quality.
As for the Saturn's design, that in particular I chalk up to Sega simply not knowing how to make consoles; their forte was (and still is) arcade machines. In those, running straight PCM and having lots of RAM along with massive data backbones was perfectly fine. I feel the Saturn was an apt demonstration of why one can't approach console design the way one approaches arcade design.