
When SoundStage! founder Doug Schneider and I created SoundStage! Solo, we decided to try some ideas that other SoundStage! Network sites had never explored. We expanded the publishing schedule, added a comments section to the articles, and -- at Doug’s suggestion, and despite my hesitation -- introduced ratings. Why did Doug’s suggestion worry me? Because I’d asked the “ratings or no ratings?” question many times since I started as a tech journalist in 1989, and I’d never come up with a satisfactory answer.

Many, many headphones

As an editor or writer, I’ve worked with (and against) ratings systems at Video magazine, Home Theater magazine, etown.com, Sound & Vision, Wirecutter, About.com, and Home Theater Review. And those are just the ones I can remember; there may have been others. Every one of them was conceived by well-meaning editors who sought to make product comparisons simpler for readers. But every one of them created problems.

What problems could befall a ratings system? Lots. For example:

  1. Keeping all the ratings consistent: a product that gets an 8 should always be better than a product that gets a 7, even if the two weren’t reviewed by the same person.
  2. The “reviewer’s fallacy”: the tendency of reviewers to give above-average ratings to average products.
  3. Accounting for performance improvements: the 10 from two years ago might be an 8 today.
  4. Not using the whole scale: ratings on 5-point scales tend to cluster around 4, and ratings on 10-point scales cluster around 7 and 8.
  5. Alienating readers who think the product they bought deserves a 9, but you gave it a 7.
  6. Alienating manufacturers, who consider any rating below 4 on a 5-point scale, or 8 on a 10-point scale, to be the kiss of death from a marketing standpoint.
  7. Varying perceptions of what the scale means: On a 10-point scale, is a 5 average? Or is it like a grading scale in school, where 75% (or 7.5 out of 10) is average?

Some of the problems of using ratings scales were recently highlighted in “A Survey and Analysis of Consumer and Professional Headphones Based on Their Objective and Subjective Performances,” a paper by Sean Olive, Omid Khonsaripour, and Todd Welti (all of Harman Research), delivered at the October 2018 Audio Engineering Society convention in New York City. In the paper, the scientists compared the results of their research-based headphone evaluations against the subjective ratings of five different publications. “The average correlation between our objective headphone sound quality ratings and those reported by five different headphone review organizations was disappointing[ly] weak (r = 0.5),” they said. Note that a correlation of 1 would be perfect. “Agreement varied from 0.17 (PC Magazine) to as high as 0.75 (Consumer Reports).”

Sean Olive paper

I expect some audio writers will protest that they don’t care if their ratings conform to those of a scientific research program, but what’s important to realize here is that the publications’ ratings don’t conform to each other, either. Nor, in many cases, can an outside observer identify any clear trends within a single publication’s ratings, or even a single reviewer’s ratings. The less structured the review program is, the greater the likelihood that products with widely differing sonic characteristics will get the same rating -- or that very similar products will get different ratings.

Ever since we launched the ratings on SoundStage! Solo, we’ve been debating what they mean and what kinds of products should get what ratings. We realized it’s time to come up with a clear explanation of what our ratings mean, so that readers, manufacturers, and SoundStage! editors and writers can understand them clearly.

So going forward, here’s the scale we’ll use for our Sound ratings. Note that this is an absolute scale; we won’t be elevating a product’s performance score because its price is low.

  1. Dysfunctional, can’t be said to fulfill its stated purpose
  2. Poor, barely functions as a product
  3. Weak, not useless, but hard to imagine who would buy it
  4. Significantly flawed, but may have a few attributes that make it worth a look
  5. Average, but a good buy only for certain tastes/needs
  6. Average, but still worth considering
  7. Good, some flaws, but still a good buy for most people
  8. Solid performance with a couple of notable flaws; a good buy all-around
  9. Truly outstanding, with a minor flaw or two
  10. Among the very best you can buy, not necessarily perfect but very close

For our Value ratings, we’ll have a similar scale:

  1. A total, obvious rip-off no one on Earth should buy
  2. Ridiculously overpriced; no way to make a case that it’s a good deal
  3. Overpriced for what it offers; hard to think of a reason why someone would buy it
  4. Not such a good buy unless it has some feature you especially want/need
  5. An OK buy, but you can definitely find a better value
  6. A fair deal; you’re getting what you paid for but nothing more
  7. A good deal, better performance/features than you’d expect for the price
  8. A great deal, although it’s possible you could find a better one
  9. A fantastic bargain for what it offers
  10. An unbelievable steal

Just to make sure everyone understands these new scales, we’ll be adding them to all future reviews on SoundStage! Solo. Meanwhile, if something’s not clear about the way we’re rating products going forward -- or if you think you might have a better idea! -- please let us know in the comments section below.

. . . Brent Butterworth

