Archived from groups: rec.audio.pro
"Natalie Drest" <mccoeyHAT@netspaceCOAT.net.au> wrote in message
news:cmkutq$113b$1@otis.netspace.net.au
> "Arny Krueger" <arnyk@hotpop.com> wrote in message
> news:QoidnefLBLWNLBbcRVn-tw@comcast.com...
>> "Bobby Owsinski" <polymedia@earthlink.net> wrote in message
>> news:polymedia-1FFCCC.08334505112004@news1.west.earthlink.net
>>
>>> You know, I've followed Dan's claims and newsgroup threads and I
>>> must admit that he presents a good case. But having done a fair
>>> amount of 192k recording (as well as recording the same program at
>>> 44.1, 48, 96 and 192k), I can tell you that everyone involved in
>>> these recordings is always very partial to the 192, especially
>>> after hearing the same program at a lower rate.
>>
>> Tell you what, Bobby. Send me as much of any high sample rate
>> file(s) as you think you need to make your point. My *real* email
>> address is arnyk at comcast dot net .
>>
>> Comcast has a 10 meg final file size limit, or about a 7.6 meg limit
>> on the attachment itself, according to
>> http://faq.comcast.net/faq/answer.jsp?name=17627&cat=Email&subcategory=1
>> If email won't handle the file size, I think I can provide you with
>> some FTP upload space and a userid and password.
>>
>> I'll downsample your sample(s) to various far lower sample rates
>> and then upsample them back to whatever high sample rates they
>> started out at. I'll then put up a web page at www.pcabx.com where
>> people can download them and listen for themselves.
> Why the down/upsampling?
The purpose of the downsampling is to provide examples of what
low-sample-rate digital data formats do to high-sample-rate audio data.
The purpose of the subsequent upsampling is to provide samples that people
can compare using the same converters operating at the same sample rates.
> Why not just post the samples?
Because you can't isolate the sonic signatures of sample rates from the
sonic signature of hardware operating at different sample rates that way.
> I'm guessing editing...
No, it's all about comparing just the sonic properties of digital
formats operating at different sample rates.
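If anyone wants to reproduce the round trip themselves, here's a rough
sketch of how it can be done in Python with the third-party soundfile and
scipy packages. The filenames and the 44.1k target rate are just examples,
and this isn't necessarily the exact resampler I use:

    # Down/up round trip: the output keeps the original high sample rate,
    # but carries only the information a 44.1k channel can pass.
    from fractions import Fraction

    import soundfile as sf
    from scipy.signal import resample_poly

    SRC = "bobby_96k.wav"            # hypothetical 96 kHz / 24-bit source
    DST = "bobby_96k_via_44k.wav"    # hypothetical output name
    LOW_RATE = 44100                 # the "far lower" rate under test

    audio, rate = sf.read(SRC)       # float array, shape (frames, channels)

    # Exact rational ratio, e.g. 44100/96000 reduces to 147/320.
    ratio = Fraction(LOW_RATE, rate)

    # Downsample (this is where any audible damage happens)...
    down = resample_poly(audio, ratio.numerator, ratio.denominator, axis=0)

    # ...then upsample straight back, so original and round-trip files
    # both play through the same converter at the same rate.
    restored = resample_poly(down, ratio.denominator, ratio.numerator, axis=0)

    sf.write(DST, restored, rate, subtype="PCM_24")

Note that resample_poly does the rate conversion entirely in software, so
no converter hardware is involved in making the comparison files.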
If you want to cut to the chase - I'll tell you what happens when you do
proper listening tests. You find out what has been shown many times - that
44/16 is actually sonic overkill. Audible artifacts of too few bits per
sample and too few samples per second largely disappear once you go much
higher than about 14/38, presuming a good, clean, modern monitoring
environment. Ironically, substandard monitoring environments can be more
*sensitive* to high sample rate music, but that is because of artifacts
they introduce through their own technical inadequacies.
The usual argument against tests with results like these is that the
original music was not pristine enough, that the monitoring environment
was not clean enough, or that someone's ears aren't good enough.
Therefore, it is helpful to get the person making the naive assertions to
provide the original music for testing, perform the tests with their own
monitoring system, and of course use their own ears.
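To be concrete about what "proper listening tests" means here - blind,
forced-choice, statistically scored - here's a bare-bones ABX sketch in
Python (scipy for the statistics; the playback routine is a stub you'd
have to wire up to your own level-matched monitoring chain):

    # Bare-bones blind ABX: X is randomly A or B on each trial, the
    # listener guesses, and a one-sided binomial test checks the hit
    # rate against coin-flipping. Requires scipy.
    import random

    from scipy.stats import binomtest

    def play(label, source):
        # Stub: hook this to your own playback chain. `label` is what
        # the listener sees; `source` is which file actually plays and
        # must never be shown to the listener.
        print(f"(playing {label})")

    def run_abx(trials=16):
        hits = 0
        for t in range(1, trials + 1):
            x = random.choice(["A", "B"])    # hidden assignment
            play("A", "A")
            play("B", "B")
            play("X", x)
            guess = input(f"Trial {t}: is X the same as A or B? ").strip().upper()
            hits += (guess == x)
        p = binomtest(hits, trials, p=0.5, alternative="greater").pvalue
        print(f"{hits}/{trials} correct, p = {p:.4f}")
        # p < 0.05 is the usual bar for claiming an audible difference.

    if __name__ == "__main__":
        run_abx()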
Here is an example of what happens when *name* people do their own tests
like these:
"George Massenburg" <gmlinc@ix.netcom.com> wrote in message
news:dc15750e.0301091707.5e40d7ce@posting.google.com
> Speaking of 'differences'. I hope that I live long enough to craft and
> demonstrate what a scientific listening/evaluation test is and what it
> isn't.
> What it isn't is what you might call the [golden-ear pantload name
> here] demonstration where this guy sits you down and plays you a
> couple of things (could be anything: the levels aren't calibrated and
> could be anywhere). [G.E.P.L.] proceeds to switch sounds for you
> saying, "O.K., listen to this. RIght, NOW listen to THIS!" (maybe he
> actually turns the monitor gain up) "Wow, that's great, huh?" And this
> other? HEY, you couldn't possibly like THAT, could you??? I mean,
> c'mon, you'd be an IDIOT not to hear the difference...
> Any test where you know which piece of gear you're listening to...any
> test that's not perfectly blindfolded and well-controlled cannot
> possibly be called scientific. As much as I don't like the downsides
> of the A-B-C-Hidden Reference, it's a very useful discipline to reveal
> modest differences.
> The best listening tests demand that you objectify what you hear.
> An example of a useful, forthright listening test is the high-octave
> test suggested and implemented by Bob Katz, where he takes a 96/24
> file (presumably rich in >20kHz content), and filters it at 20kHz or
> so. Then he listens (through exactly the same hardware, and under
> exactly the same circumstances, removing conversion, to name one
> factor, as a possible variant) to see if he can tell the difference
> between the two (filtered and unfiltered) files. Can I be brave here
> and tell you the truth? Neither of us has had significant success
> differentiating between the samples. (Incidentally, this is a
> test that I proposed several years ago at the AES Technical Committee
> on Studio Production and Practices, and have finally implemented on
> the EdNet web site. Stay tuned.)
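Katz's high-octave test is easy to replicate in software. Here's one
hedged sketch in Python (soundfile and scipy again); he doesn't say
exactly what filter he used, so a steep linear-phase FIR at 20 kHz stands
in here:

    # Katz-style high-octave test: make a copy of a 96/24 file that is
    # identical below ~20 kHz but has nothing above it, then ABX the
    # two through the same hardware at the same sample rate.
    import soundfile as sf
    from scipy.signal import firwin, filtfilt

    SRC = "program_96k.wav"          # hypothetical 96 kHz / 24-bit source
    DST = "program_96k_lp20k.wav"    # hypothetical output name

    audio, rate = sf.read(SRC)       # e.g. rate == 96000

    # Steep FIR low-pass at 20 kHz; filtfilt runs it forward and
    # backward, so the filtered copy stays time-aligned with the
    # original.
    taps = firwin(1023, 20000, fs=rate)
    filtered = filtfilt(taps, [1.0], audio, axis=0)

    sf.write(DST, filtered, rate, subtype="PCM_24")
    # Any audible difference between SRC and DST can only come from the
    # >20 kHz content: sample rate, converters, and levels are unchanged.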
"Natalie Drest" <mccoeyHAT@netspaceCOAT.net.au> wrote in message
news:cmkutq$113b$1@otis.netspace.net.au
> "Arny Krueger" <arnyk@hotpop.com> wrote in message
> news:QoidnefLBLWNLBbcRVn-tw@comcast.com...
>> "Bobby Owsinski" <polymedia@earthlink.net> wrote in message
>> newsolymedia-1FFCCC.08334505112004@news1.west.earthlink.net
>>
>>> You know, I've followed Dan's claims and newsgroup threads and I
>>> must admit that he presents a good case. But having done a fair
>>> amount of 192k recording (as well as recording the same program and
>>> 44.1, 48, 96 and 192k), I can tell you that everyone involved in
>>> these recordings are always very partial to the 192, especially
>>> after hearing the same program at a lower rate.
>>
>> Tell you what, Bobby. Send me as much of as any high sample rate
>> file(s) as you think you need to make your point. My *real* email
>> address is arnyk at comcast dot net .
>>
>> Comcast has a 10 meg final file size, or about 7.6 meg file size
>> limit for email attachments according to
>> http/faq.comcast.net/faq/answer.jsp?name=17627&cat=Email&subcategory=1
>> If
>> email won't handle the file size, I think I can provide you with
>> some FTP upload space and a userid and password.
>>
>> I'll downsample your sample(s) down to various far lower sample rate
>> and then upsample them back to whatever high sample rates they
>> started out at. I'll then put up a web page at www.pcabx.com where
>> people can download them from, and listen for themselves.
> Why the down/upsampling?
The purpose of the downsampling is to provide examples of what
low-sample-rate digital data formats do to high-sample-rate audio data.
The purpose of the subsequent upsampling is to provide samples that people
can compare using the same converters operating at the same sample rates.
>Why not just post the samples?
Because you can't isolate the sonic signatures of sample rates from the
sonic signature of hardware operating at different sample rates that way.
> I'm guessing editing...
No, its all about doing a comparison of just the sonic properties of digital
formats operating at different sample rates.
If you want to cut to the chase - I'll tell you what happens when you do
proper listening tests. You find out what has been shown many times - that
44/16 is actually sonic overkill. Audible artifacts of not enough data per
sample, and not enough samples per second sort of cut out when you go much
higher than about 14/38, presuming a good clean modern monitoring
environment. Ironically, substandard monitoring environments can be more
*sensitive* to high sample rate music, but that is due to artifacts that
they introduce due to their technical inadequacies.
The usual argument against tests with results like these, is that the
origional music was not pristene enough and/or that the monitoring
environment was not clean enough, or someones ears aren't good enough.
Therefore, it is helpful to get the person making the naive assertions to
provide the origional music for testing and perform the tests with their own
monitoring system, and of course use their own ears.
Here is an example of what happens when *name* people do their own tests
like these:
"George Massenburg" <gmlinc@ix.netcom.com> wrote in message
news:dc15750e.0301091707.5e40d7ce@posting.google.com
> Speaking of 'differences'. I hope that I live long enough to craft and
> demonstrate what a scientific listening/evaluation test is and what it
> isn't.
> What it isn't is what you might call the [golden-ear pantload name
> here] demonstration where this guy sits you down and plays you a
> couple of things (could be anything: the levels aren't calibrated and
> could be anywhere). [G.E.P.L.] proceeds to switch sounds for you
> saying, "O.K., listen to this. RIght, NOW listen to THIS!" (maybe he
> actually turns the monitor gain up) "Wow, that's great, huh?" And this
> other? HEY, you couldn't possibly like THAT, could you??? I mean,
> c'mon, you'd be an IDIOT not to hear the difference...
> Any test where you know which piece of gear you're listening to...any
> test that's not perfectly blindfolded and well-controlled cannot
> possibly be called scientific. As much as I don't like the downsides
> of the A-B-C-Hidden Reference it's a very useful discipline to reveal
> modest differences.
> The best listening tests demand that you objectify what you hear.
> An example of a useful, forthright listening test is the high-octave
> test suggested and implemented by Bob Katz, where he takes a 96/24
> file (presumably rich in >20kHz content), and filters it at 20kHz or
> so. Then he listens (through exactly the same hardware, and under
> exactly the same circumstances, removing conversion, to name one
> factor, as a possible variant) to see if he can tell the difference
> between the two (filtered and unfiltered) files. Can I be brave here
> and tell you the truth? Neither of us have had significant successes
> with differentiating between the samples. (Incidentally, this is a
> test that I proposed several years ago at the AES Technical Committee
> on Studio Production and Practices, and have finally implemented on
> the EdNet web site. Stay tuned.)