Archived from groups: rec.audio.pro
"Monte McGuire" <monte.mcguire@verizon.net> wrote in message
news:monte.mcguire-696CC0.10275711082004@news.verizon.net
> In article <2ntrmdF4fcvkU1@uni-berlin.de>,
> "Phil Allison" <philallison@tpg.com.au> wrote:
>
>> "Monte McGuire".
>>
>>> honestly trying to find out exactly
>>> what the savings are of having optimally bad performing gear, stuff
>>> that has weaknesses, but which are just below our threshold of
>>> perception.
>>
>> ** Since when is 0.003% THD @ 10 kHz and 5 volts rms into 3 kohms
>> load just below perception ?
> I didn't claim that. You claim that it's below the threshold of
> perception. I claim that a device exhibiting such performance in a
> THD+N test might not sound completely transparent given all possible
> musical signals.
That has to be true for a number of reasons. One is that it says nothing
about frequency response, and another relates to band-limited
measurements. However, both of these issues beg the question, because we
all know these things well enough not to be confused by them. It's like
saying that top speed is not the only measure of a racing car.
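For reference, here's a quick back-of-envelope conversion (my own arithmetic,
in Python, not a claim either of us made) of what 0.003% THD works out to as a
level relative to the fundamental:

import math

thd_percent = 0.003
thd_ratio = thd_percent / 100.0            # 0.00003 as a voltage ratio
level_db = 20 * math.log10(thd_ratio)      # roughly -90.5 dB
print(f"0.003% THD sits about {abs(level_db):.1f} dB below the fundamental")

That puts the distortion products roughly 90 dB down, which is the sort of
number the audibility argument turns on.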
> The reason I introduced the concept of "optimally bad performing gear"
> is that if you use something like DBT to qualify each and every piece
> of gear, you don't know whether the performance is just barely
> satisfactory, or if it's good enough to remain inaudible with several
> such devices chained together.
You simply do a test of a number of such devices chained together, such as the
tests I've done of 20 TL074 stages in series. I've also demonstrated
another way to solve this problem at
http://www.pcabx.com/product/amplifiers/index.htm .
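As a rough sketch of why a chained test is informative (my own back-of-envelope
Python, illustrative bounds rather than measurements from the actual TL074
test): uncorrelated noise from N identical stages adds in power, while
worst-case correlated distortion can add in voltage, so a 20-stage chain bounds
the degradation at roughly +13 dB and +26 dB respectively:

import math

stages = 20
noise_growth_db = 10 * math.log10(stages)        # uncorrelated noise adds in power: ~ +13 dB
worst_distortion_db = 20 * math.log10(stages)    # fully correlated distortion adds in voltage: ~ +26 dB
print(f"{stages} stages: noise floor up ~{noise_growth_db:.0f} dB, "
      f"worst-case distortion up ~{worst_distortion_db:.0f} dB")

If 20 stages in series still can't be heard, a single stage has at least that
much margin in hand.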
> You can't know unless you test each and every combination explicitly.
That's true of any test, including the *sacred* unmatched sighted
evaluation. If you wish to cut your own figurative throat, Monte, be my
guest.
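Since DBT/PCABX-style tests keep coming up, here's a sketch of how such a test
is scored (my own illustration in Python; the 14-of-16 score is just an
example, not a result from any particular test). The question is simply how
likely the listener's score would be if he were guessing:

from math import comb

def abx_p_value(correct, trials):
    """One-sided chance of scoring at least this well by guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(14, 16))   # about 0.002 -- very unlikely to be guessing

A score near 8 of 16 is what guessing looks like; a score with a small p-value
is evidence that a difference was actually heard.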
> For the work I do, I prefer to have a signal chain that is as good as
> possible.
Preferences are fine. However, I submit, Monte, that you rarely, if ever, come
close to doing this.
> I make recordings through this chain, and the end of my
> signal path is the beginning of the listener's path. I'm not
> concerned with the cost so much. It's cheaper for me to use a $3 op
> amp rather than a $.50 op amp if it'll possibly give me better
> results. Not that it's only about money either; oddly enough,
> certain kinds of $.50 op amps work better than much more expensive op
> amps, and I choose those in those situations.
If you apply this thinking to a high-end recording chain and run the
numbers, you end up putting recording out of just about everybody's reach,
including your own. And as I've pointed out elsewhere, you've ignored the law
of diminishing returns, the law of the weakest link, and the iron law of the
cosmic-sized flaws inherent in all modern loudspeakers and rooms.
> What I can't do is run every recording session twice (or five times or
> whatever) in order to construct rigorous listening tests.
What about the tests you use to evaluate and recommend equipment on RAP?
> I only have
> one shot at it, so I try to find out what gear has the best chance of
> working well and then use that. I suspect that people who do a lot of
> testing really can't get much real work done. If so, then I'd love to
> know how, but the only other option is to test things that are
> theorized to be relevant and make extrapolations from there to the
> actual work. How is that any more rigorous than using test equipment?
> It's still a leap, since it never refers to a completely realistic
> situation.
The problem is largely a matter of priorities and epistemology.
"Monte McGuire" <monte.mcguire@verizon.net> wrote in message
news:monte.mcguire-696CC0.10275711082004@news.verizon.net
> In article <2ntrmdF4fcvkU1@uni-berlin.de>,
> "Phil Allison" <philallison@tpg.com.au> wrote:
>
>> "Monte McGuire".
>>
>>> honestly trying to find out exactly
>>> what the savings are of having optimally bad performing gear, stuff
>>> that has weaknesses, but which are just below our threshold of
>>> perception.
>>
>> ** Since when is 0.003% THD @ 10 kHz and 5 volts rms into 3 kohms
>> load just below perception ?
> I didn't claim that. You claim that it's below the threshold of
> perception. I claim that a device exhibiting such performance in a
> THD+N test might not sound completely transparent given all possible
> musical signals.
That has to be true for a number of reasons. One is that it says nothing
about frequency response, and another is relates to band-limted
measurements. However, both of these issuses beggar the question because we
all know all these things well enough to not be confused by them. It's like
saying that top speed is not the only measure of a racing car.
> The reason I introduced the concept of "optimally bad performing gear"
> is that if you use something like DBT to qualify each and every piece
> of gear, you don't know whether the performance is just barely
> satisfactory, or if it's good enough to remain inaudible with several
> such devices chained together.
You simply do a test of a number of such devices chained together, such as
tests I've done of 20 TL074 stages chained together. I've also demonstrated
another way to solve this problem at
http/www.pcabx.com/product/amplifiers/index.htm .
> You can't know unless you test each and every combination explicitly.
That's true of any test, including the *sacred* unmatched sighted
evaluation. If you wish to cut your own figurative throat Monte, be my
guest.
> For the work I do, I prefer to have a signal chain that is as good as
> possible.
Preferences are fine. However, I submit Monte that you rarely if ever even
come close to doing this.
> I make recordings through this chain, and the end of my
> signal path is the beginning of the listener's path. I'm not
> concerned with the cost so much. It's cheaper for me to use a $3 op
> amp rather than a $.50 op amp if it'll possibly give me better
> results. Not that it's only about money either; oddly enough,
> certain kinds of $.50 op amps work better than much more expensive op
> amps, and I choose those in those situations.
If you apply this thinking to a high end recording chain and run the
numbers you end up putting recording out of just about everybody's reach,
including your own. And as I've pointed out elsewhere you've ignored the law
of diminishing returns, the law of the weakest link, and the iron law of the
cosmic-sized flaws that are inherent in all modern loudspeakers and rooms.
> What I can't do is run every recording session twice (or five times or
> whatever) in order to construct rigorous listening tests.
What about the tests you use to evaluate and recommend equipment on RAP?
> I only have
> one shot at it, so I try to find out what gear has the best chance of
> working well and then use that. I suspect that people who do a lot of
> testing really can't get much real work done. If so, then I'd love to
> know how, but the only other option is to test things that are
> theorized to be relevant and make extrapolations from there to the
> actual work. How is that any more rigorous than using test equipment?
> It's still a leap, since it never refers to a completely realistic
> situation.
The problem is largely a matter of priorities and epistemology.