MPEG-4 and P4

Guest
Okay, now Tom has mentioned the quality loss of the MMX-based encoder vs. the high-quality IEEE-based one. But this really makes me wonder what extent of quality loss we are talking about here (all the way back to VCD quality??). I mean, if the loss is not very noticeable, then the P4, with its outstanding MMX performance, could still make a nice DVD ripper once prices come down... =/
 

lakedude

I posted this on the CPU board, but it is related to your post too:

"Tom waffles and will again

As you may know, in Tom's original P4 review he basically said he liked the P4 because, in the areas where speed is important, the P4 is fast, but he also mentioned that its floating-point unit is not the best. Then a very short time later he turns the MPEG-4 test into a big old floating-point test and says, surprise, the P4 sucks! Of course it was already known from the first review that the P4's floating-point unit was not the best, so naturally on a floating-point-intensive benchmark the P4 will not excel. This is the first waffle. The second will come when Tom gets hold of a benchmark that uses SSE2. The reason Intel does not care about the P4 FPU is because of SSE2. When the P4 is tested with SSE2 software it will be about twice as fast as current processors, and that will render the floating-point problem a non-issue. Then Tom will waffle again and claim the P4 is good and fast."

I use the MMX setting and it works fine for viewing on a small display.
 
Guest
I have just completed two encodings of the movie Aliens: one with the MMX iDCT option, and one with the IEEE-1180 option. Both encodings were done with DivX Low Motion at 1200 kbit. I could not see ANY difference between the two encodings. Anyone else out there who can actually see the difference? I'll bet that Tom did not bother to compare the two encodings visually; he just looked at the benchmark numbers!!
 
Guest
Thanks. I am reading it now.
Hey, those are some nice technical emails Tom received there. But as Jim Quinlan pointed out: has anybody tried to visually compare the effects of the different iDCT options?
 
Guest
Tom's remark about using the IEEE 1180 reference-quality iDCT has totally shaken the whole community. Everyone has been using the MMX iDCT all the while. Theoretically speaking, the IEEE 1180 reference-quality iDCT should be more accurate, but to me, MMX is good enough.

Doom9 did a comparison of these two options.
It can be found here: http://doom9.excelland.com/idct.htm?REC=0

If you can spot the difference, do let me know. By the way, the image has been zoomed in... and yet it is still hard to distinguish between the two. In my opinion they are just the same, except one takes longer and the other is quicker.
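
For anyone who would rather put a number on it than squint at screenshots: the IEEE 1180 test is basically this, feed random 8x8 coefficient blocks through the iDCT under test and through a double-precision reference, then check the peak and mean pixel error. Here is a rough C sketch of that idea (NOT the FlaskMPEG code, and the "fast" path below only models a cosine table rounded to 13 fractional bits, nothing like a real MMX routine, so treat the numbers as illustration only):

/* idct_err.c - toy IEEE-1180-style error check (a sketch, not the real
 * compliance test).  Compares a double-precision 8x8 iDCT against the same
 * formula using a cosine table rounded to 13 fractional bits, roughly what
 * a 16-bit integer/MMX iDCT has to work with. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 8
#define BLOCKS 10000
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Exact cosine for the reference, rounded to 1/8192 steps for the "fast" model. */
static double cosfn(int k, int n, int quantized)
{
    double c = cos((2 * n + 1) * k * M_PI / 16.0);
    if (quantized)
        c = floor(c * 8192.0 + 0.5) / 8192.0;
    return c;
}

/* Direct evaluation of the 2-D iDCT formula (slow but unambiguous). */
static void idct(int in[N][N], int out[N][N], int quantized)
{
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++) {
            double s = 0.0;
            for (int u = 0; u < N; u++)
                for (int v = 0; v < N; v++) {
                    double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
                    double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
                    s += cu * cv * in[u][v]
                       * cosfn(u, x, quantized) * cosfn(v, y, quantized);
                }
            out[x][y] = (int)floor(s / 4.0 + 0.5);   /* round to a pixel value */
        }
}

int main(void)
{
    long long err_sum = 0, err_sq = 0;
    int peak = 0;
    srand(1180);

    for (int b = 0; b < BLOCKS; b++) {
        int coef[N][N], ref[N][N], fast[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                coef[i][j] = rand() % 512 - 256;   /* random coefficients, -256..255 */

        idct(coef, ref, 0);    /* double-precision reference */
        idct(coef, fast, 1);   /* quantized-table stand-in for the MMX path */

        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                int e = fast[i][j] - ref[i][j];
                if (abs(e) > peak) peak = abs(e);
                err_sum += e;
                err_sq  += (long long)e * e;
            }
    }

    double pixels = (double)BLOCKS * N * N;
    printf("peak pixel error : %d\n", peak);
    printf("mean error       : %g\n", err_sum / pixels);
    printf("mean square error: %g\n", err_sq / pixels);
    /* IEEE 1180 (from memory) demands roughly this: no pixel ever off by
     * more than 1 from the reference, and very small mean errors. */
    return 0;
}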
 
Guest
There are differences. I've taken shots and sent them to Flask's author with detailed descriptions of them, and he said there was definitely an issue and it would be looked into. Quite a few other people have commented on quality differences to me as well, both before, and a lot more after, I wrote the letter mentioning it to Tom.

I also said the difference varies from source to source. Aliens, for one, shows very little difference; in fact I've done quality comparisons on it recently, and you're right, it's not much different. Why? Because it's not a very recent movie, and if you watch you can spot the analog artifacts it still has from when the DVD was mastered. This actually tends to be LESS of a problem than with near-perfect transfers, because the random noise tends to break up the iDCT "errors" that cause artifacts.

I've also got one of the newer "optimized" versions, and it's running close enough to MMX speed with an FPU algorithm that I don't see much appeal in running MMX anymore. I'm betting that by the time everything's done there will be several high-speed, high-quality iDCT routines to pick from: definitely a new x87, 3DNow!, possibly SSE1 and SSE2, maybe a new 16-bit fixed-point one (mentioned by the lead coder on the AMDzone version), maybe a rewritten MMX by Flask's original author, etc.
 
Guest
I tried it on MIB. I selectively picked a few scenes encoded with both MMX and FPU, at low and high bitrates, with the Low Motion and Fast Motion codecs. I could not really see the difference.

I studied a number of stills for a very long time, and finally I have to conclude: the MMX image is slightly... very, very slightly coated with a layer of transparent white, and in some areas the blocky parts of the image show more seriously mismatched colour, whereas in the FPU-encoded image the colour mismatch is not as serious as in the MMX-encoded one.

-- When you increase the bitrate, the MMX-encoded image will look better than a lower-bitrate FPU-encoded image --

That is the only difference I have found so far. I think it is nearly indistinguishable unless I examine it very closely.

I have been studying the FlaskMPEG iDCT code the whole night; the code itself was taken from an old version of Intel Application Note AP-922. (The newer version of AP-922 includes Pentium III optimizations.)

The initial code was identical to the new AP-922, but the bottom half of the code is not very well optimized. I am wondering if I could adopt the Chen-Wang algorithm and see whether it would be any faster. Although the IEEE 1180 reference is more precise, I think it is too long-winded to be that exact. The Chen-Wang algorithm is more approximate, but it is good enough.
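
For anyone else poking at the code: the reason AP-922, Chen-Wang and the rest can be fast at all is that the 8x8 iDCT is separable, so you run a 1-D pass over the eight rows and then over the eight columns instead of evaluating the full 2-D sum. Below is a minimal C sketch of just that row/column structure, with the 1-D pass left as a naive matrix multiply; the real algorithms replace that pass with a butterfly network (in fixed point for the MMX version), which is exactly where the precision trade-off comes from. This is not FlaskMPEG's code, just an illustration.

/* separable_idct.c - sketch of the row/column split that fast 8x8 iDCTs
 * (AP-922, Chen-Wang, etc.) are built on.  The 1-D pass here is a plain
 * cosine-matrix multiply in doubles; real implementations replace it with
 * a butterfly network, usually in 16-bit fixed point. */
#include <stdio.h>
#include <math.h>

#define N 8
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static double C[N][N];   /* C[n][k] = 0.5 * c(k) * cos((2n+1)k*pi/16), c(0)=1/sqrt(2) */

static void init_cos(void)
{
    for (int n = 0; n < N; n++)
        for (int k = 0; k < N; k++) {
            double ck = (k == 0) ? sqrt(0.5) : 1.0;
            C[n][k] = 0.5 * ck * cos((2 * n + 1) * k * M_PI / (2.0 * N));
        }
}

/* 1-D 8-point iDCT: out[n] = sum_k C[n][k] * in[k] */
static void idct_1d(const double in[N], double out[N])
{
    for (int n = 0; n < N; n++) {
        double s = 0.0;
        for (int k = 0; k < N; k++)
            s += C[n][k] * in[k];
        out[n] = s;
    }
}

/* 2-D iDCT = a 1-D pass over every row, then over every column:
 * 16 passes of 8x8 multiplies instead of the 8^4 of the direct formula. */
static void idct_2d(double blk[N][N])
{
    double tmp[N];
    for (int r = 0; r < N; r++) {               /* rows */
        idct_1d(blk[r], tmp);
        for (int c = 0; c < N; c++) blk[r][c] = tmp[c];
    }
    for (int c = 0; c < N; c++) {               /* columns */
        double col[N];
        for (int r = 0; r < N; r++) col[r] = blk[r][c];
        idct_1d(col, tmp);
        for (int r = 0; r < N; r++) blk[r][c] = tmp[r];
    }
}

int main(void)
{
    init_cos();
    double blk[N][N] = { { 480.0 } };   /* DC-only block: should come out flat at 60.0 */
    idct_2d(blk);
    printf("corner pixel = %f (expect 60.0)\n", blk[0][0]);
    return 0;
}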