Nvidia Declares The CPU Dead

Guest
Nvidia has sent around an email to tech journalists declaring the CPU essentially dead, we learned today from a
 

hcforde
They say the best defense is a good offense. Nvidia's CEO has an interesting way of looking at things. Another way to put it would be that Intel's CPUs are so far ahead of GPU offerings that Intel is dead in the water in terms of HAVING to push out faster and faster CPUs at this point. If the GPU market were on par with the CPU market, then there would be no issue with games like Crysis. In high-end games there seems to be a GPU bottleneck, Mr. Nvidia, not a CPU bottleneck. In my opinion, when GPUs are on par with CPUs so that there is no bottleneck in demanding games, then we can look at the issue of the CPU being dead.

If you are talking about the convergence between the two (GPU & CPU), then by all means the statement is closer to being correct. BUT, that is what progress does. Transistors are no longer discrete units (in most applications) but are incorporated into a larger processing unit. SOOO... why NOT CPUs and GPUs? If they can work more efficiently via integration of the two most intensive components in a computer system, then let's do it as quickly as possible. However, it does not mean the CPU is dead. While I am not affiliated with the industry in any way other than being a user of the technology, I tire of these captains of industry attempting to pull the wool over my individual and our collective eyes in an attempt to deflect criticism away from their own inability to keep up. If the CPU is dead, Mr. Nvidia, then please run one of your precious video cards without one, TODAY!!! Otherwise, if you are adept at engineering, I suggest you get back to it. You seem to have too much time on your hands.
 
Guest
"I my opinion when the GPU's are on par with the CPU's so there is no bottle neck in demanding games, then we can look at the issue of the CPU being dead."

Games are for GPUs, and GPUs are for games. Don't pretend a CPU does the same work in a game that a GPU does. Using your definition of "on par", you should look at it this way: "When CPUs do all of their work in real time so there is no wait in the work they do, just as GPUs do their work in real time without waiting, then we can discuss blah blah blah." The CPU is the bottleneck in video rendering far more than the GPU is the bottleneck in current games, if you want to compare apples and oranges.
We have no idea what context that message from Nvidia was written in, but I'm sure you took it far from it. I think we have to wait and see if and why Nvidia would compare their franchise with the CPU realm.
 

hcforde
Sorry, averagejoe, but a number of benchmarks show the bottleneck as being the GPU. We call it scaling, and we find that the GPU runs out of muscle. What we need is a single programmable GPU/CPU/physics chip that can adapt to a game in a multi-threaded fashion.

That takes it out of just the realm of the chipmakers and into the realm of the coders, who find it difficult to write multi-threaded applications.
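Just to sketch what that difficulty looks like (a toy example of my own, not from the article; the names "Entity", "update_entity" and "update_all" are invented): splitting independent per-entity work across threads is the easy part, and the trouble starts once game systems share data.

#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Toy example: spread independent per-entity updates across a few CPU threads.
struct Entity { float x, y, vx, vy; };

void update_entity(Entity& e, float dt) {
    e.x += e.vx * dt;   // independent per entity: the easy part to parallelise
    e.y += e.vy * dt;
}

// Assumes num_threads >= 1.
void update_all(std::vector<Entity>& entities, float dt, unsigned num_threads) {
    std::vector<std::thread> pool;
    const std::size_t chunk = (entities.size() + num_threads - 1) / num_threads;
    for (unsigned t = 0; t < num_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(entities.size(), begin + chunk);
        pool.emplace_back([&entities, begin, end, dt] {
            for (std::size_t i = begin; i < end; ++i)
                update_entity(entities[i], dt);
        });
    }
    for (auto& th : pool) th.join();
}

// The hard part in a real game: physics, AI and rendering all touch the same
// data, so a naive split like this needs locks or a proper job system, which
// is exactly where multi-threaded game code gets difficult.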

Why do you think AMD bought ATI? Just for fun, or for the possibility of convergence? Intel is looking to do the same thing. Why do you think Nvidia bought AGEIA? When was the last time you had an Nvidia SLI setup on an Intel-chipset-based motherboard?

Apparently you have not been keeping up with the banter and barbs being thrown back and forth by the Intel and Nvidia chiefs. Do a Google search first, then tell me how far I have taken it. Anything that I know of that is dead is pretty much useless. Again, let's see that great Nvidia card run without a CPU.
 
Guest
You've got it backwards there, squanto (hcforde). Let's see how well Intel CPUs run Crysis with an Nvidia graphics card. And somehow you seem to have too much vested interest in defending Intel there..., hmmm. Yet you say you don't work for them.
 
Guest
Nvidia makes good stuff, but I can't wait to see how Intel ONCE AGAIN crushes the competition (NVY). Say all you want, but remember AMD's processors being the fastest two years ago, and then all it took was for Intel to come out with the Core 2 Duos to crush them; now AMD can only dream about catching up. Well, I predict the same thing will happen to NVY. It was good while it lasted. ONCE AGAIN, REPEAT WITH ME: INTEL... WILL... CRUSH... NVIDIA... period!!!!!
 
Guest
In my opinion, there is nothing really wrong with hcforde's take on the topic of Nvidia's comments; after all, aren't we all, in our own ways, interpreting information based on our own frame of reference? Who's to say who's right and wrong?

Both hcforde and averageJoe present valid arguments, just from different perspectives. I would also like to share my views by commenting on some of your comments.

"Games are for GPUs, and GPUs are for games. Don't be pretending like a CPU does the same work in a game that a GPU does"

I agree with this, in that it isn't really fair to compare CPUs with GPUs. They are basically designed for different purposes (though the gap is admittedly closing). With the increasing use of advanced media formats (visual especially) in everyday computing, the role of video processors will no doubt become more and more important.

"The CPU is the bottleneck in video rendering far more than the GPU is the bottleneck in current games, if you want to compare apples and oranges."

Sorry, bub, but I can't agree with you on this. People can claim to have seen benchmarks, or read about them, but based on my personal experience, an average CPU with a powerful GPU will always beat an average GPU paired with a powerful CPU, hands down. The exception is playing at really low resolutions, but since we're talking about graphics capabilities, higher-resolution performance (meaning higher than 1024 x 768, preferably 1680 x 1050 and up) should be used as the baseline for comparisons.

"You got it backwards there squanto (hcforde). Let's see how good Intel cpus run Crysis with a Nvidia graphics card."

cozomel, correct me if I'm wrong, but I think you meant "Let's see how well Intel CPUs run Crysis WITHOUT an Nvidia graphics card." In any case, at the moment, neither the CPU nor the GPU can perform well without the other, and I'm sure everyone here can agree on that.

Basically, I think I can understand hcforde's reaction because I, too, tend to think that the letter by Roy Taylor sounds somewhat conceited. With a statement like "The CPU is dead", it's kind of hard to take it any other way than to think that they (Nvidia) are confident that the CPU is, or soon will be, no longer relevant. In my opinion, though, it isn't really a matter of whether the CPU will win over the GPU or vice versa, because they will, if what they claim is true, become one eventually. But is this really the case?

Perhaps, but I also believe that because there will always be a difference between the requirements of the "high-end" segments and those of "average" users, neither CPU nor GPU makers will really dominate the other. Even this was stated in the article mentioned above.

I'm not with either Nvidia or Intel, but I do work in the industry they both operate in. And though I don't claim to be an expert, part of what I do involves research into the technologies as well as the trends developing in that area.

One can cite many reasons, or use research results as the basis for arguments, but for me it really comes down to this: unless either Nvidia or Intel can come up with a CHEAP and EFFICIENT solution that integrates both functions well (graphics processing and general processing), the market will still need both of them, separately.

As technology develops, this will undoubtedly become possible. But then again, as technology progresses, our expectations about what is possible also increase, and we manage to find ways to put even MORE demands on what those chips can do.

I doubt any one chip can always be ready to meet those demands. Which is why, in my opinion, there will always be companies ready to capitalize on individuals seeking "more" out of their computing experience (whether for games, media or whatever) by developing additional hardware to meet those needs.

So there's my 2 cents. I'm not trying to convince anyone of anything here, so feel free to disagree. I'm just interested in sharing thoughts.


 
Guest
"The CPU is the bottleneck in video rendering far more than the GPU is the bottleneck in current games, if you want to compare apples and oranges."
That's true. Video editing and rendering use CPU power, like when you use Adobe Premiere. Also, games don't need to use the GPU: you can use software rendering to run games. Most old games (Doom, Duke Nukem, Descent, and Quake) used software rendering, which is limited by the CPU. In the end, both the GPU and CPU are similar: they are both processors, and they will likely converge in the near future.
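As a rough illustration of what software rendering means (a toy snippet of my own; "Framebuffer" and "fill_gradient" are invented names, not code from any of those games), the CPU computes every pixel itself and writes it into a plain memory buffer, with no GPU involved:

#include <cstdint>
#include <vector>

// Toy "software renderer": every pixel is produced by CPU instructions and
// stored in ordinary memory, which is why the speed of Doom- or Quake-era
// software rendering was bounded by the CPU.
struct Framebuffer {
    int width, height;
    std::vector<std::uint32_t> pixels;   // 0xAARRGGBB
    Framebuffer(int w, int h) : width(w), height(h), pixels(w * h) {}
};

void fill_gradient(Framebuffer& fb) {
    for (int y = 0; y < fb.height; ++y) {
        for (int x = 0; x < fb.width; ++x) {
            const std::uint32_t r = std::uint32_t(255 * x / fb.width);
            const std::uint32_t g = std::uint32_t(255 * y / fb.height);
            fb.pixels[y * fb.width + x] = 0xFF000000u | (r << 16) | (g << 8);
        }
    }
}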
 

goofy me
Terminology or Technology.

CPUs and GPUs have their own realms and also need to co-exist.
Currently they are dependent upon each other, and that is what makes a computer functional.

The world of computers as we know it will, regrettably, change in the future. Take a look at our advances in both audio and visual content. Then take a hard look at human development and the way we live from century to century.

We have many acronyms and ways of expressing the items of today, so chances are they will be obsolete tomorrow. Just as animals and plant life become extinct, nothing is here forever.

What will the computer be called in the future?

Likely the CPU and GPU will unite as ONE.
Will it be called a GCPU or a CGPU, or go by some other name?
I just don't know.

At the current time we still have CPUs, so the CPU is not dead.

Maybe, for Nvidia's purposes, its reliance on the CPU is dead.
That is purely COMPANY talk.

Please, let's close the subject of inner-company emails/memos, which are no one's business and are relevant only to Nvidia.

Now let's have a good day, people, and help someone rather than debating some unrelated garbage that has no facts to support it.
 
Guest
I think what Nvidia was saying is that GPUs are the bottleneck, and because of that they will always need faster and better GPUs, but that the CPU is dead because it no longer needs to get faster at this moment. As earlier posters said, a good GPU is much more important for a computer than a CPU if you plan on playing any type of game at all ("People can claim to have seen benchmarks, or read about them, but based on my personal experience, an average CPU with a powerful GPU will always beat an average GPU paired with a powerful CPU, hands down."). A better GPU will boost your scores A LOT more than a better CPU.
 
Guest
Just wait and see. It's the next cliffhanger in computer history. =D
 

ol_beardy_hacker
This is just some simple FUD on Nvidia's part to get some attention.
Sure, there are several nice performance points in favour of GPUs.
Normal CPUs run at around 2.5 GHz with four cores, for a theoretical upper performance limit equivalent to a 10 GHz CPU. GPUs, on the other hand, run at around 700 MHz but utilise around 128 stream processors; multiply those and you get close to 90 GHz.
OK, so far it might seem a GPU has a great deal more potential than a CPU (Nvidia itself claims an 8800GT is around 5 times as fast as a modern Intel quad core).
And yes, the GPU's instruction set is Turing complete, so at least in theory it is a general-purpose processing unit.

But let's not get too excited; let's have a look at the practice.
Multiplying the number of cores by the frequency gives an ideal upper limit that only holds if the program can be sufficiently parallelised. As the saying goes, a woman can bear a child in nine months, but nine women can't bear a child in one month. The problem is a lot more prevalent with stream units, since ALL stream units perform the SAME task, just at different memory locations (I'm not even stepping into the fact that GPUs have a highly specialised, reduced instruction set aimed at graphics math, e.g. an inverse square root function but no normal square root).

Or, to be even more specific, if you have a construct like

for (int i = 0; i < LARGE_NUMBER; i++) {
    doSomething(i);
}
 

ol_beardy_hacker
for (int i = 0; i < LARGE_NUMBER; i++) {
    doSomething(i);
}

What happens depends on the doSomething function.
Is doSomething(i) dependent on a previous calculation?
If so, it is a single-core application: a quad core would lose to a higher-frequency single core, and a graphics card would lose to a P3.
Is doSomething(i) independent of previous calculations but nested (e.g. several if statements)?
A multicore application can be written to utilise all cores, but a GPU would still lose big time here, although it might gain some benefit from highly optimised code.
Is doSomething(i) independent of previous calculations, with an unconditional flow of instructions?
Here, and only here, can a GPU utilise all of its stream units to gain a large performance boost.
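To make those three cases concrete, here is a quick sketch of my own (the function names are invented, just to show the three shapes):

// Case 1: each iteration needs the previous result, so the loop is
// inherently serial. No number of cores or stream units helps.
double running_mix(const double* data, int n) {
    double acc = 0.0;
    for (int i = 0; i < n; ++i)
        acc = acc * 0.5 + data[i];   // uses acc from the last iteration
    return acc;
}

// Case 2: iterations are independent but full of branches. A multicore CPU
// handles this fine; lockstep stream units suffer because they all have to
// walk the same instruction path.
void classify(const double* data, int* out, int n) {
    for (int i = 0; i < n; ++i) {
        if (data[i] < 0.0)       out[i] = -1;
        else if (data[i] > 0.0)  out[i] = 1;
        else                     out[i] = 0;
    }
}

// Case 3: iterations are independent and the instruction flow is
// unconditional: the ideal case for 128 stream units working in parallel.
void scale(const double* data, double* out, int n, double k) {
    for (int i = 0; i < n; ++i)
        out[i] = data[i] * k;        // same operation, different memory location
}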

Maybe it's easier to understand if you think about what it takes to transform (e.g. rotate and move) all the coordinates in a wireframe consisting of 1 point, 8 points (a cube), 128 points (a more complex wireframe) and 10000 points (something complex).
A single core requires calling the same function 1, 8, 128 and 10000 times.
A quad core would require calling it 1, 2, 32 and 2500 times per core.
A 128-stream-unit processor would only need 1, 1, 1 and 79 passes respectively,
but would lose to an old P3 at chess.
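Those counts are just ceiling division; a tiny sketch (the function name is mine):

// Passes needed when `work_items` identical, independent transforms are
// spread over `lanes` processing units all running the same function.
int passes(int work_items, int lanes) {
    return (work_items + lanes - 1) / lanes;   // ceil(work_items / lanes)
}

// passes(10000, 1)   == 10000  (single core)
// passes(10000, 4)   == 2500   (quad core)
// passes(10000, 128) == 79     (128 stream units)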

As for those people saying a good graphics card has more impact on gaming performance: sure. Most games, in their 5th or 10th iteration by now, have only improved in graphics and not in gameplay, so what do you expect?
 

Guest
Integrating a CPU/GPU would be fine and dandy for someone who will never upgrade, but I doubt it would ever catch on with the gaming crowd... you usually cycle through more than one graphics card on the same CPU.

And the second, and perhaps biggest, obstacle to overcome: what are you going to do with all that heat?
 