If you are not aware of it, at this very moment, a war over loudness is being waged on the consumer. What do I mean by that?
Simply stated, there is a very strong push within the marketing arms of the music industry to make music (or, as many industry types refer to it, "product") seemingly more enticing by making it louder. The idea here, and it's certainly not conjecture, is that most consumers, when presented with two versions of a recording, will prefer the louder version... and by louder I mean the one that plays louder for a given setting of the volume (gain) control on the preamp / receiver / MP3 player.
This phenomenon is not new to the audio community. As many of us know, when comparing speakers (or headphones, for that matter), it's important to normalize one speaker's loudness to the other's. In point of fact, I'm really talking about normalizing their efficiencies, which then manifest themselves as loudness. For example, when comparing two speakers, if one of them is 10 dB more efficient than the other (i.e. the SPL observed at 1 m with 2.8 Vrms input to speaker "B" is 10 dB higher than that of speaker "A" under the same conditions), the tendency, especially over short listening intervals, is to prefer the louder of the two, primarily because, unless normalized, the speaker that is 10 dB more efficient will be perceived as roughly twice as loud as the other.
This is human nature, and in fact, we probably all know that a speaker's efficiency is only one ingredient in a soup of factors that make a speaker palatable. I think we have all experienced this phenomenon at one time or another. Anyway, by normalizing the loudness (again, the efficiency) of the speakers, you're able to make a fair comparison between them, because you are removing a source of bias (differences in loudness) and comparing more of the salient differences between them. In effect, you are comparing the timbre of loudspeaker "A" versus loudspeaker "B".
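(If you want to put numbers on that normalization, here's a minimal sketch in Python; the sensitivity figures are made up purely for illustration:)

```python
import math

def db_to_voltage_gain(db: float) -> float:
    """Convert a level difference in dB to a linear voltage (amplitude) ratio."""
    return 10 ** (db / 20.0)

# Hypothetical sensitivities (dB SPL at 1 m, 2.8 Vrms in) for the example above.
sens_a = 85.0   # speaker "A"
sens_b = 95.0   # speaker "B" is 10 dB more efficient

# To normalize, attenuate speaker "B" by the difference:
diff_db = sens_b - sens_a                  # 10 dB
atten = 1.0 / db_to_voltage_gain(diff_db)  # ~0.316x the drive voltage into "B"
print(f"Drive speaker B at {atten:.3f}x the voltage to match speaker A")
```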
I'm not saying that efficiency is bad. All that I am saying is that it must be accounted for if fair comparisons are to be made. Also, I don't really want to go off-topic here, but I wanted to use the speaker analogy to make what follows a bit more understandable. So, let's get back to it.
What controls the apparent loudness (for a given setting of your gain (volume) control) in a digital recording? Basically, it comes down to a few key factors (dynamic range, etc.), but fundamentally, the way one makes one version of a given track louder (at the source) is to re-scale the digital file.
Remember, in a digital file, the maximum value that can be achieved (that is, all bits high) is 0 dBFS. That's the ceiling. Thus, the signal will always sit some number of dB below 0 dBFS. How far the signal (let's just call it 'music' instead) sits below 0 dBFS is really up to the recording Engineer and the Producer, but ultimately, it's the mastering Engineer who holds sway (as they have the final say as to gain, EQ, effects, etc.). Some music will spend its life no higher than 10, 15, or 20 dB down from 0 dBFS. Mind you, there are many ways to look at the value - we could consider its peak value (which is of great concern) as well as its mean (average) value, or any number of statistical analyses of the music (e.g. crest factor, percentiles, etc.) pertaining to its relationship to 0 dBFS.
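For the curious, here's a minimal sketch of how those statistics might be computed; it assumes the samples have already been normalized so that full scale is ±1.0:

```python
import numpy as np

def level_stats(samples: np.ndarray) -> dict:
    """Peak, RMS, and crest factor of a signal normalized to full scale = 1.0."""
    x = samples.astype(np.float64)
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "peak_dBFS": 20 * np.log10(peak),              # 0 dBFS = all bits high
        "rms_dBFS": 20 * np.log10(rms),                # average level
        "crest_factor_dB": 20 * np.log10(peak / rms),  # peak-to-RMS ratio
    }

# A sine wave peaking at -15 dBFS; the crest factor of a sine is ~3 dB.
t = np.linspace(0, 1, 44100, endpoint=False)
sine = 10 ** (-15 / 20) * np.sin(2 * np.pi * 440 * t)
print(level_stats(sine))
```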
Let's fire up the way-back machine and return to when the CD format was new (some of us remember that era).
At the time, many record companies simply re-issued LP titles (in CD format) from their catalogs without doing much to the signal at all (for better or worse). At the time, it (apparently) was the norm to respect the limits of the digital medium (at least in terms of full scale) and, like a physician's first axiom, "do no harm" to the signal.
As that market changed and MP3s et al. became ever more prevalent in the marketplace, many of these record companies sought new sources of revenue based on their holdings. There's nothing wrong with that - the music business is, after all, a business. However, due to the changing landscape of the industry, the rise of the internet, and most importantly, affordable recording gear for 'indie' artists, the old revenue streams were quickly going dry.
Here's where the trouble starts...
Company "A" holds the rights to Artist "B's" most popular recordings, and while "The Artist" may indeed receive revenue (Royalties...pardon me if this is not the correct legal jargon) from subsequent re-issues, "The Artist" may in fact have no say in how the re-issue is remastered.
Uh oh...there's trouble afoot...
What if Company "A" sees an opportunity to make something "old" become something "new" by virtue of remastering...and what if...perhaps...Comapny "A" ' s main interest is revenue and not necessarily fidelity?
Now, before people start 'hating', please keep in mind that remastering is, by its very nature, not necessarily a bad thing. There are many, many exceptionally talented mastering Engineers out there for whom the music is their "raison d'être" and who thus show great respect for the music, the artist, and overall, for the artistic process. However, like most of us, they work for someone... someone who ultimately signs their paychecks, and as such, there are times where, though we may disagree (even vehemently) with what's been asked of us, we do it because we have to do it (Milgram had something to say about this, but I digress).
Back to Company "A"...
So, Company "A" realizes a few things about the present market conditions... First, most people just don't care about fidelity, or at least, this is what they hold as a belief. Second, that by making the "old" become "new", the profit margin is huge - all the costs associated with having developed the original "product" have long since been recovered - coutless times over in fact.
So, Company "A" opens up the valuts and pulls from it a version of Artist "B" 's recording, and they notice that in order to get to a 'reasonable' (read "marketable") listening level with this version, the volume control has to be advanced rather high. The light bulb goes off...and the mastering engineer is instructed to raise its apparent loudness (or more likely "
just...make it louder...").
So, the original version, which may have been issued with its music peaking at -15 dBFS, is raised in magnitude by 15 dB, effectively utilizing all of the bits in the medium. By itself, this isn't necessarily bad, because at this point, the axiom of "first, do no harm" has been respected. Granted, the "new" is now a bit more than twice as loud as the original (remember - for a given setting of the gain (volume) control), but the recording itself has not been otherwise altered; its dynamics have been left intact, no additional equalization has been done, and most importantly, no additional distortion has been introduced by the process.
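In code terms, that "harmless" step is nothing more than a uniform multiply; here's a sketch (again assuming samples normalized to ±1.0):

```python
import numpy as np

def rescale_to_full_scale(samples: np.ndarray) -> np.ndarray:
    """Apply one uniform gain so the highest peak just touches 0 dBFS.

    Every sample is multiplied by the same factor, so the dynamics and
    relative levels are untouched - no clipping, no compression.
    """
    peak = np.max(np.abs(samples))
    return samples / peak  # e.g. a -15 dBFS peak gets ~5.62x gain (+15 dB)
```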
However...upon auditioning this first attempt...someone with control in Company "A" says ... "I like it, but can you make it a bit
more [insert adjective of choice here]?"
This is where the real trouble starts. The remastering Engineer may stand on principle and say, "anything else that I could do to it would alter it, and most likely, not for the better" - but, as noted above, the person signing the paychecks usually wins that argument. Mind you, I am not saying that remastering with fidelity as the ultimate goal is a bad thing - not at all. What I am saying is that taking existing great recordings, driving them into distortion (literally, driving the digital medium into distortion - the very thing that digital was envisioned and engineered to prevent), and level-compressing their dynamic range... well... those things are, in my opinion, bad.
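To make "driving the medium into distortion" concrete, here's a sketch of what happens when you ask an already full-scale file for more gain - the only place for the waveform to go is into a hard clip (the 6 dB figure is just an example):

```python
import numpy as np

def overdrive(samples: np.ndarray, extra_gain_db: float) -> np.ndarray:
    """Push an already full-scale signal 'louder'.

    The medium can't represent anything above 0 dBFS, so everything
    past full scale is flattened - the waveform's peaks are sheared off.
    """
    gain = 10 ** (extra_gain_db / 20.0)
    return np.clip(samples * gain, -1.0, 1.0)  # hard clip at 0 dBFS

# Example: a full-scale sine pushed 6 dB "hotter" clips about two-thirds
# of its samples.
t = np.linspace(0, 1, 44100, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)
clipped = overdrive(sine, 6.0)
print(np.mean(np.abs(clipped) >= 1.0))  # fraction of samples at full scale
```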
Regrettably, more and more of this is occurring. I have noticed (in my own CD collection) that certain remastered issues of some of my favorite CDs actually sound worse than the originals, and oftentimes for this very reason. But don't take my word for it - there is plenty of grist for this mill (i.e. "The Loudness War") out there on the web; you need only search, and you will find a mountain of posts and data on the subject.
Several of the websites (and if I can dig them up, I'll add them to this post) compare the original .wav files to the same song on a "remastered" disc, and in many, many instances, you can see that the remastered version has waveforms that are visibly clipped. I'm not joking - visibly clipped.
Granted, it does happen that the occasional extremely short-lived peak in some recordings hits 0 dBFS, but that's the thing... you see, something that is visibly distorted may in fact never be heard, especially if it lasts a small fraction of a second, as there isn't time for your ear to truly notice it (and even more so if other parts of the music mask it). However, when the entire song spends most of its life as a clipped signal, then something is horribly wrong.
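If you want to distinguish the "occasional short-lived peak" from wholesale clipping programmatically, a simple heuristic is to look for runs of consecutive samples stuck at (or very near) full scale; the 0.999 threshold here is just an assumption:

```python
import numpy as np

def longest_clip_run(samples: np.ndarray, threshold: float = 0.999) -> int:
    """Length of the longest run of consecutive samples at/near full scale.

    A run of one or two samples is an ordinary peak touching 0 dBFS;
    runs of dozens of samples are the flat-topped (clipped) waveforms
    you can see in an editor.
    """
    at_ceiling = np.abs(samples) >= threshold
    longest = current = 0
    for hit in at_ceiling:
        current = current + 1 if hit else 0
        longest = max(longest, current)
    return longest
```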
You can prove this for yourself if you have some editing tools, many of which are packaged with CD burning / ripping software (I know that Nero has an editor built in, and I suspect others such as Roxio or iTunes do as well).
So, if you have an original version of a CD and its remastered version, you can extract ("rip") the .wav file of a track from the original, then the same track from the remastered version, and compare them in the editor. If you see that they are more or less the same percentage of full scale, then chances are some noise reduction and EQ were the only things done during the remastering process. However, if you see that one is visibly larger (and thus louder) than the other, then that recording has been re-scaled in an attempt to make it sound 'better' than the original by (at a minimum) making it louder.
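You don't even need a visual editor for the comparison; here's a sketch that pulls peak and RMS levels from two ripped files (the file names are hypothetical, and it assumes 16-bit PCM .wav rips):

```python
import wave
import numpy as np

def wav_levels(path: str) -> tuple[float, float]:
    """Peak and RMS levels (in dBFS) of a 16-bit PCM .wav file."""
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2, "this sketch assumes 16-bit PCM"
        raw = w.readframes(w.getnframes())
    x = np.frombuffer(raw, dtype=np.int16).astype(np.float64) / 32768.0
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak), 20 * np.log10(rms)

# Hypothetical rips of the same track from the original and remastered discs.
for name in ("track_original.wav", "track_remastered.wav"):
    peak_db, rms_db = wav_levels(name)
    print(f"{name}: peak {peak_db:+.1f} dBFS, RMS {rms_db:+.1f} dBFS")
```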
Mind you...you are very likely (depending on your musical tastes - I suspect jazz and classical are less prone to these tactics, though I could be wrong) to see that some of the remastered versions of your favorite music are in fact being driven into distortion.
I have a problem with this, simply because I like to think that the original release was how the Artist wanted his or her music to sound; if they had wanted it distorted and level-compressed, they certainly could have done so the first go-round. Granted, this is not iron-clad, and I'm also sure that there are many artists who wish that the recording technology of their day had been capable of something different, but fundamentally, to me anyway, it's about respect for the artist and that vision.
I might think that the Mona Lisa would look better were she wearing more blush (just an example here...), but that doesn't give me the right to bust into the Louvre at night dressed in black and apply some to the painting just to suit my tastes. If Leonardo were around and wanted to do so, then he alone would have the right; but if I didn't enjoy it as much after the fact, I could choose to no longer gaze upon said painting.
The problem is pretty big, though, because the "Remastered" label generally evokes a response in the customer that the disc must sound better than the original, because, logically enough, why would they remaster it if it was only going to make it sound worse?
There has been a lot of talk recently in the industry (with a lot of push-back from the record companies) about labeling remastered versions with indicators of their fidelity - things such as mean dBFS levels and crest factors (the relationship between peak and RMS levels) - but so far, none of these proposals seem to be gaining traction. Moreover, I am not sure just how many consumers would be anything but befuddled by such indicators on the packaging.
UPDATE: Here's a link that does a pretty nice job of explaining (graphically) what's going on:
http://spectrum.ieee.org/computing/software/the-future-of-music