
In the latest round of debate about MQA, I was dismayed to see the company once again tout its endorsements from mastering engineers. This is an “appeal to authority,” a common logical fallacy. It’s often seen in ads for audio products—the advertiser uses the endorsement of an authority figure (such as a musician or recording engineer) to supplement or substitute for marketing claims based on demonstrable features and benefits. Appeals to authority are even more common in promotions for things like books, movies, and countless consumer products.

[Image: The Wall Street Journal article]

What’s wrong with a manufacturer touting praise from an expert? A recent article in The Wall Street Journal, “Why Apple, Amazon and Spotify Are Embracing Hi-Def Music: A Guide to ‘Lossless’ Streaming,” provides a great example. Right up in the deck, the article claims that lossless streaming is “vastly improving sound quality,” and it bases this statement in large part on comments from authorities. The article leads off with an anecdote from Emily Lazar, one of the world’s best-known mastering engineers, who likens the sound of typical streaming services (presumably meaning those using lossy compression) to visiting the Louvre to see the Mona Lisa, only to find “a photocopy of a photocopy of a photocopy of the painting, shrunken down to a postage-stamp size, and then photocopied again.” The article goes on to state, “She’s describing, in effect, what happens when the gargantuan, detail-rich music files she works with get shrunken down—or compressed—for streaming.”

I’m sure that if pressed, Lazar would acknowledge the hyperbolic nature of her statement. But The Wall Street Journal didn’t press her, and instead presented her statement as an accurate assessment—because she’s an authority.

Yes, as two decades of Audio Engineering Society papers show, listeners generally express a preference for uncompressed, 16-bit/44.1kHz audio compared with MP3 encoded at 128kbps, and trained and expert listeners can usually distinguish MP3 from uncompressed 16/44.1 at the higher data rates of 192 and 256kbps. But science doesn’t support the idea that there’s a dramatic audible difference between uncompressed and lossy-coded audio at any of the data rates commonly used by music streaming services.

Now here’s the Mona Lisa comparison mentioned before, with a 5000-pixel-high JPEG image of the painting on the left (resolution reduced to fit this web page), and an 8″ by 10″ high-quality print of that image, photocopied twice, reduced to postage-stamp size (1″ by 1.25″), and then scanned and restored to the same size as the original JPEG—the closest approximation I could make of what’s described in the WSJ article. (I did the printing and copying at a FedEx Office because their machines are a lot better than mine.)

[Image: Mona Lisa comparison]

You don’t need to be an art or graphics expert to tell which image is which. You don’t need anything brighter than candlelight to see the difference. It’s an extremely misleading exaggeration—even in a tossed-off, casual comment—to suggest that lossy codecs, at commonly used bitrates, have a comparable effect.

Mastering engineers are authorities on applying the appropriate EQ, compression, etc., to make music mixes sound their best and make albums sound consistent from tune to tune. Are they authorities in controlled A/B comparison tests of audio technologies? They might be, but it’s not their job. Did the mastering engineer’s opinion, in this case, emerge from a valid test, comparing a lossy streaming service to an original file, with the identities of the sources concealed and the levels precisely matched—or just from a casual listen? We don’t know. Does a mastering engineer possess a scientist’s understanding of what constitutes a judicious and technically supportable statement? Again, they might, but it’s not what they’re paid to do.

Appeals to authority also raise questions about the authority’s motives and conflicts of interest. In this case, it’s a Wall Street Journal story that promotes non-proprietary technology, and the authority is a world-famous Grammy winner who’s obviously not hurting for work, so there’s no reason to suspect a conflict of interest. But usually, it’s wise to wonder if and how the authority was compensated for their endorsement of a product or technology. I recently heard comedian Marc Maron ask musician Steve Miller why he used Ibanez guitars—at the time, a second-tier brand—in the 1970s, and Miller replied to the effect of, “Because Gibson and Fender wouldn’t give us anything.” Clearly, Miller’s endorsement didn’t mean as much as guitarists of the time might have thought. Nor did Carlos Santana’s endorsement of Gibson . . . then Yamaha . . . then Paul Reed Smith guitars, especially when you consider that no matter which guitar he played, he always sounded like Carlos Santana (which also suggests that no matter which guitar you play, you won’t sound like Carlos Santana). Even if no money or gear is offered, musicians and audio professionals will sometimes make endorsements in large part for the promotional value.

[Image: Carlos Santana]

In headphones, it’s been common for musicians and audio pros to lend their names and endorsements to specific products. But again, we don’t know the terms of those endorsements. Years ago, when it seemed every headphone company had to have a model endorsed by a hip-hop artist, a friend in the headphone industry asked me to look over a draft of an endorsement contract they were preparing for a major hip-hop artist. I wasn’t surprised to see clauses regarding how the artist’s name would be used on the headphones, and how the artist would be compensated for their endorsement—but I was surprised to see no stipulation whatsoever regarding the artist’s approval of the headphones’ sound quality.

In fact, two of the worst headphones I’ve ever heard carried musicians’ names: the Justin Bieber-endorsed Beats Justbeats and the Simon Cowell-endorsed Sony MDR-X10, of which my colleague Geoffrey Morrison wrote, “I’d put money on the fact that Simon Cowell has never heard these headphones and possibly doesn’t know of their existence. If he does, or has, that gives me serious doubts about his ears.”

[Image: Sony MDR-X10]

Appeals to authority aren’t necessarily without merit. After all, we’d surely respect a mastering engineer’s opinion of a set of headphones more than we’d respect the opinion of some random person off the street—or perhaps even more so, the opinion of Justin Bieber or Simon Cowell. I chose an Upton double bass in large part because Lynn Seaton—one of the most technically dazzling double bassists I’ve ever heard—plays one, and I figured any bass that works for him surely wouldn’t hinder my progress. I chose the iZotope Ozone 9 mastering plug-in for my digital audio workstation in large part due to endorsements from mastering engineers—including Lazar.

But in every case where we’re told we should like an audio product or technology because some authority likes it, we need to ask ourselves how qualified that authority is to make that endorsement, what their procedures were in determining the merit of what they’re endorsing, how the endorsement might benefit them, and whether a credible pitch could be made for the product or technology without relying on an appeal to authority.

I’ll close with a phrase copped from The New York Times columnist David Brooks, who gets paid the big bucks because he can express better in 14 words what I said in more than 1000: “Who you are doesn’t determine the truth of what you say; the evidence does.”

. . . Brent Butterworth

  • Ragav · 1 month ago
    Hi Brent! This is the promo video for the Sony MDR-X10 headphones:

  • Jim Weir · 1 month ago
    Our art form is stereo reproduction, a completely different art form from live music performance.
    The unique feature of our art form is that the end product is an illusion: a multidimensional illusion created in our minds from two somewhat correlated pressure variations at our ears. The media used to distribute the art are created by mastering engineers, in rooms and on gear completely different from our own. The art form, then, is the illusion created on that mastering system, biased by the engineers' expectations and lifetimes of listening experience, along with their confidence that the gear used is competent.
    The mastering engineer's or producer's rendering is as biased as that of any of us who participate in the hobby, expert or not.

  • Mark D Waldrep · 1 month ago
    Thanks Brent for clearly stating what should be obvious to music consumers. The industry is full of hype and hyperbole. MQA is a hoax just as hi-res audio is a marketing ploy by the industry to get people to buy new gear and new copies of their favorite music. I ran a survey comparing legit hi-res audio files vs. CD-spec versions of the same masters a couple of years ago. Of 500 respondents and 8000 data points, the results confirmed that the chances of picking a hi-res audio file over a CD version were no better than flipping a coin. Ms. Lazar may be a busy mastering engineer but I would venture she couldn't beat the odds in the HD-Audio Challenge II. I mastered albums for KISS, Bad Company, the SFO, and The Allman Brothers and I couldn't reliably pick out the HD version.
    • Vicki Melchior · 1 month ago
      While I agree with several of Brent's ideas and disagree with others, this comment from Mark, I believe, requires a response.  First, I would think that an unsupported statement that "MQA is a hoax just as hi-res audio is a marketing ploy" is, in itself, hyperbole and opinion.  There have been numerous peer-reviewed listening tests showing the audibility of high resolution audio versus CD.  While the differences often aren't large, they increase notably when listeners undergo training.   Second, the listening test Mark conducted was based on files placed for download on the internet.  The test had no structure and no controls, so any result would not be valid by ordinary test standards.  However, Mark submitted the result to the AES as a (precis) convention paper.  The irony is that, because of the high number of participants (500), his tallies of percent correct showed exactly the opposite of what is said above.  The listeners could plainly distinguish high res from CD with very high significance!  I wouldn't overdo the importance of this given the flaws in the work and analysis, which are detailed in e-library comments by me and others.  But it should stress the importance of good testing technique and analysis before drawing conclusions on audibility.

      This isn't intended as a comment on the MQA listening tests by mastering engineers discussed in the article, which are unpublished but, as I understand it, were done extensively over multiple years. 
      • Dustin · 1 month ago
        From what I understand, the best any of these studies showed was 60% correct in distinguishing between the two formats?  Not sure I would call that significant.

        As well, just because listeners could distinguish 60% of the time, I don't think there was any correlation with preference.

        Frequency response is the single most important determinant in sound quality.  By far.  I don't think there is any evidence that demonstrates hi res can improve the frequency response.
        • Vicki Melchior · 1 month ago
          Taking your points separately:

          Read the meta-analysis of high-resolution testing by Josh Reiss, which covers some 80 papers, around 20 of which are used in the meta-analysis. Percent-correct data (fig. 2) extends to ~70-77% with listener training. Be careful with terminology: "significance" is different from how big the percent-correct value is. A 60%-correct measurement can be very significant (high accuracy) or low in significance, depending on the size of the data set. The (averaged) value of percent correct depends on intrinsic audibility and varies according to psychoacoustic parameters, testing protocol, equipment and room setup, training, etc. Everyone recognizes that getting ~100% right means comparing a violin to a tuba. But finer discrimination tests yield lower percent-correct numbers, yet certainly show audibility. And audibility improves with training, i.e., experience. Reiss's paper and the ones below are open access (free to download): https://www.aes.org/e-lib/browse.cfm?elib=18296

          You're correct that published tests in high res have been limited to discrimination tests and not (yet) preference tests.  There are various reasons.

          I disagree on frequency response.  The reason frequency response is so prominent in our thinking is that Shannon-based sampling theory and Fourier analysis have been the basis of analysis in audio for the past 70 years.  In contrast, thought in high res over the last 35 years has led to the idea that the response of the auditory system to event timing and resolution is equally, if not more, important than to frequency.  Recent research in psychoacoustics and neuroscience (cognition) supports this.  As a result, there is ongoing active interest in audio research relating to time factors.  I outlined some parts of hi res history in (a) and Stuart/Craven cover some of the neuroscience ideas in (b), along with references, but there is still more to this literature.
          a) https://www.aes.org/e-lib/browse.cfm?elib=20455
          b) https://www.aes.org/e-lib/browse.cfm?elib=20456

          • Dustin · 1 month ago
            Interesting.  Thank you for replying.

            I remain sceptical, though. I have done some ABX testing myself between CD-quality songs and hi-res versions of the same songs. In most instances, I could not reliably tell the difference, with one exception: a Rush album. I believe the song was 2112 Overture. If I remember correctly, I scored 8/10 correct. The hi-res version sounded like it had ever so slightly more treble. However, there was no way for me to know whether this was because the hi-res version contained more high-frequency information, or because it was simply mastered that way. In either case, I felt the difference was too small to care about.

            Do you have any conflict of interest in promoting hi res?
            • Vicki Melchior · 1 month ago
              I do audio DSP and have designed a good bit of high res processing over 25 yrs. Beyond that, I'm (highly) involved in the Audio Eng. Soc. in high resolution. Does experience and advocacy imply a conflict of interest? You'll have to trust that the AES has high standards of evidence based on science and engineering and tries to maintain them throughout publications, workshops and so on. Contrary evidence and discussion are always part of good engineering.
              • Dustin · 1 month ago
                I do generally trust information that comes out of AES.  I do remain sceptical of the results of this meta study, though.  I believe more study may be required to demonstrate audible benefits of hi res music.  I asked the question because I didn't know if you were involved with any companies that promote or sell hi res.  If so, then perhaps I would consider that a conflict of interest.  

                I would be curious to hear from Brent Butterworth on this topic too.
      • Sarah · 1 month ago
        If people want to hear what George Massenburg had to say, it's available on YouTube, and it's why people take notice of experts who are highly trained and have decades of experience:
  • Jeanette L. DePatie · 1 month ago
    Hear, hear! I've had similar experiences in the video space, comparing videos with slightly different encoding practices and bitrates. The differences can be so difficult to discern, and so many different things can affect the outcome. It's really difficult to set up an apples-to-apples test, and I would venture to guess that even really good directors or cinematographers wouldn't know how to best manage those tests—not like a video-testing engineer or compression expert would. This article makes so much sense!
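Editor's note: the significance-versus-sample-size point debated in the thread above can be made concrete with a quick calculation (my illustration, not from any commenter). A one-sided binomial test shows why "60% correct" means almost nothing in a short personal ABX session but is decisive across a 500-listener survey. A minimal Python sketch using only the standard library:

```python
from math import comb

def p_value_at_least(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of scoring at least
    `correct` out of `trials` in an A/B test by pure guessing (p = 0.5)."""
    hits = sum(comb(trials, k) for k in range(correct, trials + 1))
    return hits / 2**trials

# 60% correct in a short personal session (12 of 20 trials):
print(p_value_at_least(12, 20))    # ~0.25 -- entirely consistent with guessing

# 60% correct across a large survey (300 of 500 trials):
print(p_value_at_least(300, 500))  # far below 0.001 -- very hard to explain as luck
```

The same percent-correct figure thus supports opposite conclusions depending on how many trials produced it, which is why neither a raw score nor a casual listen settles an audibility question.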
