Archived from groups: rec.audio.high-end
As for your camera example, if CR did weight camera weight as heavily as you
say, they were at least making an objective judgement within that aspect,
which is fine. In point of fact, camera weight (and size) are very important
to much of the camera-buying public, if not most. It is a perfectly
justifiable rating criterion. Once you get into something like which style of
menu or other controls is better or worse, you are into subjective
evaluation, since some people will like one style while others will prefer
another. It is hard to judge which is better unless the system is really bad.
As for your statement about their ratings of cars, I really don't understand
what your objection is to their methodology, so I can't comment. I do know
that, as a mainstream (non-buff) magazine, they do a very good job of rating
cars for non-enthusiasts, and they make their judgements based on aspects of
performance that mainstream buyers really care about, without bias. (On the
other hand, there are clearly some car guys on their staff, given their
general love affair with BMWs and obvious disappointment with BMW
reliability.)
Your argument against their testing protocol is not good given the target
audience of CR. They are not testing for high-end heads. They are testing
for Mr. and Mrs. middle-America who have no other source of objective
evaluation of hi-fi components. The accuracy score plus the objective
frequency response curve they give (plus the short text evaluation) are
completely adequate for that audience. Far better than anything they will
find in a big-box store's audio department, and more trustworthy than audio
salons where the snake-oil flows freely. I am curious what your source is
for the statement you make about those good-scoring Panasonic speakers that
did not sound good.
Finally, I'm not sure I understand your statement about the CR tests of old:
"They even pointed out that the audibly smooth sound of a speaker, perhaps
the factor actually resulting in its high rating, was actually due to a
broad, shallow (around 1dB) dip in measured frequency response centered at
around 1000 Hz.". Using the CR rating system, flat frequency response is
rewarded with a high score. As such, your statement only makes sense to me if
you mean that the near flat response of that speaker (1 dB broad dip is
basically nothing) yielded both good sound and a high score, validating
their test protocol. I assume you meant this as a compliment to CR on their
test protocol?
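For what it's worth, here's a quick back-of-the-envelope sketch (in Python) of just how little a broad, shallow 1 dB dip moves a flatness-based score. The scoring formula, band spacing, and dip shape here are entirely my own invention for illustration; CR has never published its actual weighting:

```python
import math

# Hypothetical illustration only: the RMS-deviation "score" below is NOT
# Consumer Reports' actual (unpublished) method, just a generic flatness metric.

def rms_deviation_db(response_db):
    """RMS deviation (in dB) of a response curve from perfectly flat (0 dB)."""
    return math.sqrt(sum(d * d for d in response_db) / len(response_db))

# Simulated measurement bands, 100 Hz to 10 kHz (hypothetical spacing).
bands = [100 * 10 ** (i / 10) for i in range(21)]

flat = [0.0] * len(bands)  # a perfectly flat speaker

def dip(freq_hz, center=1000.0, depth_db=1.0, width_octaves=2.0):
    """Broad, shallow dip: -1 dB at 1 kHz, tapering to zero ~2 octaves away."""
    octaves = abs(math.log2(freq_hz / center))
    return -depth_db * max(0.0, 1.0 - octaves / width_octaves)

dipped = [dip(f) for f in bands]

print(f"flat speaker: RMS deviation = {rms_deviation_db(flat):.2f} dB")
print(f"1 dB dip:     RMS deviation = {rms_deviation_db(dipped):.2f} dB")
```

Run it and the "dipped" speaker comes out well under half a decibel of RMS deviation from flat, which is why I say a 1 dB broad dip is basically nothing by any flatness-based scoring.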
--
- GRL
"It's good to want things."
Steve Barr (philosopher, poet, humorist, chemist,
Visual Basic programmer)
"Gene Poon" <sheehans@ap.net> wrote in message
news:4cic.8694$0u6.1627749@attbi_s03...
> GRL wrote:
>
> > Well, you may "all" hear how awful they sound, but they do pretty good in
> > comparison tests that Consumer Reports does and the company does manage to
> > sell a lot of speakers.
> >
> > Ever occur to you that maybe, just maybe, they are not as bad as you make
> > out?
> ==============================
>
> I have a difficult time taking Consumer Reports seriously for anything
> beyond what meaningful performance aspects they can measure objectively.
>
> For instance, in one test of single-lens reflex cameras, Consumer
> Reports used, as the primary Ratings factor beyond their objective
> measurements of performance (lens sharpness and flare, and shutter
> accuracy)...CAMERA WEIGHT! Never mind that convenience and versatility
> of controls are more important in something like a camera, than an ounce
> or so of weight, one way or another.
>
> In their ratings of automobiles, their ratings of individual performance
> factors sometimes do not agree with the relative overall quality ratings
> of the cars being tested. Their response, whenever challenged, has been
> that they "weight" some factors differently than others when deciding
> overall quality. Yet, even this "weighting" seems to change from one
> test to the next. At times it is almost as though they decide which one
> they like best, subjectively; then "rig" the individual performance
> factors to approximately support this judgment.
>
> Their "benchmarks" of Good, Very Good, Excellent, etc., also are
> inconsistent. Sometimes they have actually ADMITTED it; at one point in
> the late 70s or early 80s, they changed their ratings of how automobiles
> ride, in one swoop making the prior month's Good into the next month's
> Very Good. If you missed the small article that said so, you'd never
> have known.
>
> Getting back on topic, over the last couple of decades, Consumer
> Reports's loudspeaker ratings, which for their target audience distills
> to the "Accuracy Score," have seemed inadequate. Their writers do
> mention that two speakers with the same Accuracy Score may sound quite
> different, but not enough recognition is given to what causes these
> differences, since the Accuracy Score is essentially based on
> steady-state frequency response measurement in an anechoic chamber. A
> speaker could measure near-perfect in such conditions and yet have
> compressed dynamics; have horrific hangover on bass transients; extreme
> roughness in treble response with irregularities too narrow for the
> measuring methodology; and perform poorly/differently on varying
> amplifiers due to uneven impedance vs. frequency, and capacitive
> loading; and yet still have a high Accuracy Score. A couple of years
> ago, some cheap Panasonic loudspeakers built for their stack systems
> seemed to have gotten their high Accuracy Scores in the anechoic chamber,
> when in real life they actually didn't sound good in normal listening.
>
> In an earlier age, during the mid 1960s, Consumer Reports staffers
> actually LISTENED to components, as well as measuring them. They even
> pointed out that the audibly smooth sound of a speaker, perhaps the
> factor actually resulting in its high rating, was actually due to a
> broad, shallow (around 1dB) dip in measured frequency response centered
> at around 1000 Hz.
>