DoDidDont

Distinguished
May 27, 2009
16
0
18,560
I was thinking about upgrading my 4x GTX Titans (2688 cores) to 4x GTX Titan Xs for rendering with Iray, until I saw these benchmarks on Tom's Hardware Germany.

http://www.tomshardware.de/geforce-quadro-workstation-grafikkarte-gpu,testberichte-241759-3.html

The benchmark results for Iray do not make sense.

The Quadro M6000 and Titan X are essentially the same GPU, with the same estimated single-precision performance of around 7 TFLOPS, so nothing in the hardware explains why the M6000 is 3x faster than the Titan X in the Iray benchmark.
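
For context, here is a quick back-of-the-envelope check of where that ~7 TFLOPS figure comes from (a rough sketch in Python; the boost clock below is an approximate published figure, and real GPU Boost behaviour varies):

# Rough peak single-precision throughput: CUDA cores x 2 FLOPs per clock (FMA) x clock speed.
cuda_cores = 3072          # both the GTX Titan X and Quadro M6000 (GM200)
boost_clock_ghz = 1.075    # approximate published boost clock; actual boost bins run a little higher
peak_tflops = cuda_cores * 2 * boost_clock_ghz / 1000.0
print(f"~{peak_tflops:.1f} TFLOPS single precision")  # ~6.6 TFLOPS; the 7 TFLOPS marketing figure assumes a higher boost clock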

It cannot be a driver issue either, otherwise the Blender and Octane results would show the same pattern, with the M6000 3x faster. Instead, the Blender and Octane results are exactly as expected, with the Titan X a little faster than the M6000 because of its higher clock rates.

So why is Iray different?

The Maxwell patch must have been applied to 3ds Max, otherwise the Quadro M6000 and Titan X would not have worked in 3ds Max at all.

This leaves only two possibilities:

One, the test is a rotten egg and needs to be performed again in 3ds Max 2016.

or

Two, Nvidia are purposefully crippling desktop Maxwell GPUs in Iray to promote their Quadro cards! Nvidia develop Iray; they cannot manipulate the results in Blender or Octane, but they can in Iray, because they make both the GPUs and the software.

The Iray test should have very similar results to the Blender / Octane / RatGPU and Luxmark tests, with the Titan X slightly faster than the M6000 because of the higher clock rates.

If this is the case, and Nvidia have decided to cripple Maxwell desktop cards in Iray to force people to buy their extortionate Quadro cards, then I for one will not be upgrading.

Do Nvidia think that freelancers like me are going to spend around $24,500 (£16,400) on Quadro cards instead of approx. $5,200 (£3,500) on Titan Xs? The majority of freelancers and small studios can't afford it, so if Nvidia are crippling the Titan X in Iray, I am sure they will end up losing tens of millions in potential new sales and upgrades from people like me.

I am not going to spend an extra $19,300 (£12,900) on what are essentially the same GPUs just because Nvidia want to force people to buy Quadros by crippling performance in Iray. So if the benchmark is not a rotten egg, and Nvidia have crippled the desktop Maxwell cards in Iray, they have lost my money, and, once the word gets around, I'm sure tens of millions more in potential sales. Freelancers and studios will just stick with their older Titans.

I am not sure what the exact statistics are, but I am sure that the millions of freelancers and small studios out there outweigh the larger studios that can afford M6000s in their workstations and render nodes, so this would be a very bad business move by Nvidia.

Does anyone else know of any Iray benchmarks comparing the Titan X to the M6000? Hopefully this benchmark is wrong, but I would not put it past Nvidia to do this.

Has anyone upgraded their old Titan to a new Titan X and run benchmarks on both in Iray to see what the time improvements are?

If you are unhappy about Nvidia doing this, spread the word!


Update

Because the post went unsolved, I am guessing a moderator selected Random Stalker's answer as the solution, thinking that Quadros are better optimised for 3ds Max performance. But we are talking about CUDA rendering, so Random Stalker's answer is not the solution and is completely wrong.

The Titan X is the fastest card currently available for Iray rendering with 3ds Max. This problem was solved with a software patch and driver updates, and the original benchmark on Tom's Hardware DE was incorrect and misleading.

There is a big difference between viewport driver optimisation, which Quadro cards are usually best at because of their optimised drivers, and CUDA compute performance, and unfortunately Random Stalker wrongly conflates the two. The M6000 and Titan X are identical in hardware specs, and the Titan X has higher clocks, so it renders fastest in Iray.

Check the benchmarks here:

http://www.migenius.com/products/nvidia-iray/iray-benchmarks-2015

The Titan X has also been proven to be slightly faster in 3ds Max for viewport performance this time around, according to SpecPerf benchmarks, so the solution that I didn't pick is wrong on both counts!

If Tom's is letting people choose solutions for threads they have not solved, at least make sure those people know the facts!
 

random stalker

Honorable
Feb 3, 2013
97
0
10,610
No, you are mistaken, my friend.

The problem is optimization and the intended use of the two cards:
- the Titan X is for gamers - built for speed and cheap compute performance - any software that falls in this category will perform about the same on both cards

- the Quadro is a professional card - built for stability and performance, with optimized drivers for professional use - if you use Creo, SolidWorks, NX, CATIA, oil rig renders, medical renders, physical simulations, casting simulations, and many other professional solutions, it will leave the Titan X crying behind.

so that's all the magic - different build, different drivers, different use, different results.

Think of it this way - you have a Porsche and a pickup truck.
Both are cars and can take you down an 80 mph road from point A to point B in about the same time.
But, used correctly, the truck can carry way more crap, so for professionals it would be the better choice :D
 

DoDidDont

Distinguished
May 27, 2009
16
0
18,560
Hi Random Stalker,

I have 4x GTX Titans for rendering in Iray; they perform better than the Quadro equivalent of the same generation in Iray tests.

You can see this in the link below.

http://www.tomshardware.co.uk/best-workstation-graphics-card,review-32728-18.html

The Blender, Octane, Lux, and Rat benchmarks all perform as expected, with the old Titan faster than the Quadro because of its higher clock rates.

In the new test with the Titan X vs the M6000, the results are exactly as expected, with Blender, Octane, Lux etc. all showing the Titan X faster than the M6000, with the exception of Iray, which should show similar results. So this is clearly a software issue. The Titan X and M6000 have exactly the same single-precision performance, and if drivers were causing the problem, the Blender and Octane results would also show the M6000 as the faster card, but they do not.

Iray uses single precision, not double, so there is definitely something wrong with the results, or Nvidia are crippling the Maxwell GPUs in Iray.

I know the difference between desktop and Quadro cards and their benefits and disadvantages. We have used them at our studio in the past, and I've been using 3ds Max for over 12 years, and Iray for the last 3 years.
 

DoDidDont

Distinguished
May 27, 2009
16
0
18,560
By the way, I think it's wrong to think of Quadro cards as the Porsche of GPUs. They are exactly the same as their desktop equivalents; the only difference is that the chips with the best yields out of the factory are cherry-picked for Quadro and Tesla cards, as they will have better stability over prolonged use. They sometimes have different output specs or color depth, but they are essentially the same card. The only thing you really pay for is the drivers, which supposedly give up to a 30% improvement over the desktop cards in viewport performance (the figure quoted for whatever the flagship is at the time), but you end up paying over 700% for a 30% gain in viewport performance; that's bad business sense.

The drivers only affect viewport performance. When it comes to pure number crunching on the CUDA cores, the Titan X and M6000 should have identical performance, and as the Blender and Octane results show, they do. But Iray does not, even though it uses the same single-precision compute, so there's definitely something wrong there.
 

DoDidDont

Distinguished
May 27, 2009
16
0
18,560
Hi MrChuck72,

As far as I know, the Titan X should work in 3ds Max 2016.

For it to work in 2015 you need to apply a patch, and be running SP2 or later.

The instructions for this are in the link below.

http://blog.mentalray.com/2014/10/10/mental-ray-iray-maxwell-support-in-3ds-max-2015-sp2/

Out of curiosity, what card(s) did the Titan X replace?
 

3D-Spider-Monkey

Estimable
Apr 27, 2015
1
0
4,510
Hi,

I'm a newbie here.

I have been using Max for 5 years now. I also switched to Iray for interior scenes. I am currently using 2x Titan Blacks with Max 2015.

I was planning to upgrade to the Titan X, maybe 3x.

I have to agree with DoDidDont's 'logic' on this one.

I was looking at benchmarks of the alternative renderers like Blender and Octane this morning, and it seems that Iray is the only one where the Titan X performs badly. On the OctaneBench link on their website the Titan X is clearly faster, and if you do the maths, the single-precision figures work out about right compared to the other cards. The Titan X has exactly the same SP compute as the M6000, but scores comfortably higher than the M6000, maybe because of the turbo boost?


Maybe it's about software optimisation? Maybe the Quadro has tailored drivers for Iray in 3ds Max?

Anyway, I'm holding off on upgrading.

1. because there isn't any stock in the UK.
2. I'm hoping prices will drop when AMD launch their new cards.
3. I am not paying lots of money for 3x Titan X if Nvidia are going to cut their performance to a third in the main rendering application I use, so I will wait for this mess to be cleared up.

As DoDidDont said, I can't afford 3x M6000 to get the speed improvements I need.

Hope Tom's gets some benchmarks up soon, hopefully showing the Titan X faster than the M6000 in Iray (which it should be).
 

MrChuck72

Estimable
Apr 27, 2015
7
0
4,510


 

DoDidDont

Distinguished
May 27, 2009
16
0
18,560
Hi MrChuck72,

Did you try the patch? It should work.

If it does work, and if it's not too cheeky to ask:

Would you be able to render a simple scene with Iray at, say, 500 iterations on the Titan Black, then the same scene again at 500 iterations with the Titan X, to see what the final render times are? If the Titan X has not been crippled to a third of its performance, it should be approximately 45% faster than the Titan Black.
 

MrChuck72

Estimable
Apr 27, 2015
7
0
4,510
Hi DoDidDont,

I have 3ds Max 2015 SP3 and 3ds Max 2016 (with a subscription), which support the Maxwell GTX 970, but neither supports the Titan X (the DLL in the Iray support blog doesn't help): at the moment my beautiful card is still unused. I think I have to wait until the new GPU is supported.
 

DoDidDont

Distinguished
May 27, 2009
16
0
18,560
3D-Spider-Monkey (cool name!),

If you look at this logically, you will understand that:

1. It has nothing to do with drivers, or the Blender, Octane, Rat and Lux results would show the same pattern, with the M6000 3x faster than the Titan X; those renderers do not have any optimised drivers for the Titan X either, the same as Iray.

2. It has nothing to do with hardware, as the M6000 and Titan X have exactly the same SP compute at around 7 TFLOPS.

3. It has nothing to do with software/driver optimisation; CUDA rendering really doesn't work the same way as viewport optimisation, ask a CUDA developer. The Titan X runs faster than the M6000 in the other benchmarks straight out of the box, without any optimisation applied.

If Nvidia were able to optimise the M6000 so that it is 3x faster than the Titan X in Iray, the specs would have to show the M6000 with an estimated 21 TFLOPS of SP compute, not 7 TFLOPS, but they do not.

If you look at the results you found on the Octane website, the SP compute is around 6.6 TFLOPS for both the Titan X and the M6000, so there is no way to make the M6000 3x faster in Iray. Logically, that means the Titan X has been crippled in Iray to a third of its performance, not that the M6000 has been made 3x faster; it cannot work the other way around because of the hardware limits.

I hope, as you do, that this mess is cleared up, and that Nvidia stop stepping on freelancers and small studios by trying to sell them exactly the same hardware for almost 500% more with Quadro stamped on it.

If you look at the GPU conference this year, Nvidia openly admit that the Titan X and M6000 are exactly the same card this time around, with absolutely no difference in hardware, and that the only difference this time is in the software. That means that, as usual, you will get optimised viewport drivers for 3ds Max, Maya etc., but following the logical steps above, it also means they have purposefully crippled the Titan X in the Iray software to promote their new flagship M6000 card.

These snobbish ethics do not belong in the economic climate of 2015 and should be buried with the dinosaurs. It's like Apple products: I could give a huge list of recently launched Apple laptops and computers where you can buy the same specification for a third of the price, or buy something 3x more powerful for the same money. The idea that branding like 'Quadro' or 'Apple' makes something 3x more expensive, a luxury item, doesn't belong in 2015; are we still living in the 50s?


Of course, if the Iray benchmark turns out to be a dud, then you didn't read anything here, right? :-) Look into my eyes <O> <O>

MrChuck72,

I didn’t realise that, sorry. I thought it was a patch for all Maxwell cards.

I thought that might explain the poor Iray result on the Tom's Hardware DE site, but then it would not make sense, as they are clearly getting the Titan X to run somehow in 3ds Max 2013. Unless they didn't realise the Titan X was not being recognised and the Titan X result is actually the CPU result, but then the M6000 should be a lot more than 3x faster than a Core i7-4930K @ 4 GHz for Iray rendering.

If you haven't, maybe you could try the patch anyway; I don't think it will do any harm, and it will eventually have to be applied.

 

DoDidDont

Distinguished
May 27, 2009
16
0
18,560
I was having the same conversation over on Autodesk's 'The Area' forum. Migenius have updated their Iray benchmarks, and they tell a very different story, with the Titan X being the fastest card, slightly ahead of the M6000.

The tests were carried out in 3ds Max 2015 SP2, so I trust these results a lot more than the Tom's Hardware DE tests, which used 3ds Max 2013. Maybe Tom's DE modified the DLL library but didn't do a very good job of it.

There are still people saying the Titan X doesn't work in 2015 and that the Maxwell patch does not currently include the Titan X in its library, but the guys over at Migenius must have got the Titan X to work somehow, so I think it's only a matter of time before there is an updated patch.

The link to the Migenius benchmark is below.

http://www.migenius.com/products/nvidia-iray/iray-benchmarks-2015
 

CCIDgfx

Estimable
May 14, 2015
1
0
4,510
Honestly, I'm just trying to get a straight answer on how effective the Titan X is for rendering, particularly for Octane Render. I am currently running a 780 with a 970 in a multi-GPU setup. While I would love a Quadro 6000, the Titan X is a little closer in price for me. Is it worth purchasing one to add to my multi-GPU setup?
 

DoDidDont

Distinguished
May 27, 2009
16
0
18,560
Hi CCIDgfx,

I was having the same conversation over on Autodesk's 'The Area' forum. Migenius have updated their Iray benchmarks, and they tell a very different story, with the Titan X being the fastest card, slightly ahead of the M6000.

The tests were carried out in 3ds Max 2015 SP2, so I trust these results a lot more than the Tom's Hardware DE tests, which used 3ds Max 2013. Maybe Tom's DE modified the DLL library but didn't do a very good job of it.

The issue was with Nvidia-developed Iray, not with other renderers, and the Titan X should work fine in Octane. The problem was caused by an inaccurate Iray benchmark posted on Tom's Hardware DE that turned out to be a rotten egg, and by a few people posting about the card not working in Max 2015 SP2 with the Maxwell patch applied when they didn't even own the card yet! But all OK in the end.

The link to the Migenius benchmark is below.

http://www.migenius.com/products/nvidia-iray/iray-benchmarks-2015

Titan X benchmarks have been available for Octane for quite some time now over on the developer's website, and there are plenty of GPU combo results to look through.

The link is below. You sometimes have to wait a few seconds for the page to load completely. Hope that helps.

https://render.otoy.com/octanebench/results.php

I only know of one person so far who has come forward and tested the Titan X for me in Max 2015 and 2016 using Iray. We carried out the same tests, with the same render iterations on the same scene with no changes, and it's confirmed that the Titan X is approx. 46% faster than the older 2688-core Titan for Iray rendering.
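
That figure lines up reasonably well with a naive cores x clock estimate (a rough sketch; the boost clocks below are approximate published figures, and real GPU Boost behaviour pushes both cards a little higher, which is consistent with the measured ~46%):

# Naive speedup estimate for a single-precision CUDA renderer: ratio of (cores x boost clock).
titan_cores, titan_clock_mhz = 2688, 876      # original GTX Titan (GK110), approx. boost clock
titanx_cores, titanx_clock_mhz = 3072, 1075   # GTX Titan X (GM200), approx. boost clock
speedup = (titanx_cores * titanx_clock_mhz) / (titan_cores * titan_clock_mhz)
print(f"~{(speedup - 1) * 100:.0f}% faster")  # ~40% on paper; real boost clocks nudge it toward the observed ~46%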

There is, however, a new issue affecting all Nvidia cards in Iray/Max 2016. It appears that GPU scaling is not as effective as in previous versions. GPU scaling in previous versions is almost perfect. One GPU is around 40% faster in Max 2016 than in previous versions, but the more GPUs you add, the worse the scaling gets.

200-iteration test on a high-poly model with a basic reflective floor.

Max 2015 : GPUs only (GTX Titan, 2688 cores)

1) 49 secs
2) 25 secs (ideal: 49/2 = 24.5)
3) 18 secs (ideal: 49/3 = 16.33)
4) 14 secs (ideal: 49/4 = 12.25)

Max difference from ideal scaling = +/- 2 secs

Max 2016 : GPUs only (GTX Titan, 2688 cores)

1) 30 secs
2) 20 secs (ideal: 30/2 = 15)
3) 16 secs (ideal: 30/3 = 10)
4) 15 secs (ideal: 30/4 = 7.5)

Max difference from ideal scaling = +/- 8 secs (this would be minutes on an HD, noise-free, 11,000-iteration image, so quite a difference)

You can see that one GPU is around 39% faster than in Max 2015, but add more GPUs and the scaling goes way off.
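
To put a number on the scaling, you can compare each measured time with the ideal time of (single-GPU time / number of GPUs); here is a minimal sketch in Python using the timings above:

# Parallel scaling efficiency = single-GPU time / (n_gpus x measured time); 1.0 means perfect scaling.
def efficiency(times_secs):
    t1 = times_secs[0]
    return [round(t1 / (n * t), 2) for n, t in enumerate(times_secs, start=1)]

max_2015 = [49, 25, 18, 14]   # 1-4x GTX Titan (2688 cores), 200 iterations
max_2016 = [30, 20, 16, 15]

print(efficiency(max_2015))   # [1.0, 0.98, 0.91, 0.88] - close to ideal
print(efficiency(max_2016))   # [1.0, 0.75, 0.62, 0.5] - falls away quickly as GPUs are added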

So Nvidia/Autodesk's claim that Iray 2015 in Max 2016 is approx. 2x faster is almost true, but only if you use one GPU; add more and you can expect results similar to previous versions.

Autodesk have been made aware of the issue and are now investigating it. A 40% software increase would be great; upgrade to the Titan X for another 46% and you are talking some seriously sweet render times. Here's hoping Autodesk/Nvidia fix the scaling problems.
 

DoDidDont

Distinguished
May 27, 2009
16
0
18,560
Autodesk support came back to me and have confirmed that there is a GPU scaling issue in 3ds Max 2016, and they are handing the information over to the development team. Hopefully there'll be a fix sooner rather than later :) Autodesk had to wait for the necessary hardware to arrive to replicate the problem themselves.

Anyway, glad it's not just my system.

I will have to continue with Max 2015 SP3 for Iray rendering for the time being. With multiple GPUs it's about the same speed as Max 2016, if not faster in some scenes, and at least I will be able to accurately calculate render times.

If Autodesk can get the same 39% speed increase they achieved with one GPU to work on a multi-GPU setup in Max 2016, that would be awesome :)
 

MrChuck72

Estimable
Apr 27, 2015
7
0
4,510
Dear all,

This is my benchmark with 1 Titan X and 2 GTX 980s; in 3ds Max I used a very simple scene.

MAX 2015 SP3 (512 iterations)
DualXeon (16c/32t) = 2'40''
DualXeon + TITANX = 45''
DualXeon + TITANX + GTX980 = 30''
DualXeon + TITANX + GTX980 + GTX980 = 21''

MAX2016 (512 iterations)
DualXeon (16c/32t) = 2'40''
DualXeon + TITANX = 35''
DualXeon + TITANX + GTX980 = 25''
DualXeon + TITANX + GTX980 + GTX980 = 20''
 

DoDidDont

Distinguished
May 27, 2009
16
0
18,560
I just received a message from Tom's Hardware today saying that Random Stalker's answer is the solution?

I did not select this as the solution, as it is completely wrong; I solved the problem myself.

The Titan X is the fastest card currently available for Iray rendering with 3ds Max. This problem was solved with a software patch and driver updates, and the original benchmark on Tom's Hardware DE was incorrect.

There is a big difference between viewport driver optimisation, which Quadro cards are usually best at because of their optimised drivers, and CUDA compute performance, and unfortunately Random Stalker wrongly conflates the two. The M6000 and Titan X are identical in hardware specs, and the Titan X has higher clocks, so it renders fastest in Iray.

Check the benchmarks here!

http://www.migenius.com/products/nvidia-iray/iray-benchmarks-2015

The Titan X has also been proven to be slightly faster in 3ds Max for viewport performance this time around, according to SpecPerf benchmarks, so the solution that I didn't pick is wrong on both counts!

If Tom's is letting people choose solutions for threads they have not solved, at least make sure those people know the facts!
 

Gurmukh23

Estimable
Aug 21, 2015
1
0
4,510


Mauro, how did you resolve the Titan X issue? I'm having the same problem: Windows 10, Titan X, 3ds Max 2016. It shows no CUDA cores present.