For audiophiles, iTunes has, for all of its history, been seen as the Apocalypse. The death of “physical music”, the rise of downloads. The beginning of music file compression, and the end of lossless audio. Higher prices for weaker value. The death of the album as a whole, and the reign of cherry-picked songs. To be fair, iTunes isn’t to blame for everything. The limited speed and bandwidth of the first consumer-accessible internet connections, and the parallel development of file sharing, are the main reasons data compression was born. Among others, the .mp3 format was developed, based on a rather clever algorithm for its time: the idea was to delete inaudible information contained inside the music files to drastically reduce their size, while still offering a great and enjoyable listening experience. In fact, for most people, and except on really high-end equipment, the difference between an uncompressed .wav file ripped directly from a Redbook CD (1411 kb/s) and a high-bitrate .mp3 file (320 kb/s) is very hard to hear. So the original idea was great, and its ideal execution is good too: a similar audio experience, with more songs on the memory-limited portable devices of the time. As memory prices dropped, storage capacities grew and the need for file compression became less obvious. But marketing teams keep using the “up to x000000 songs on a hand-sized device” and “your entire CD collection in your pocket” lines, only mentioning in very small print that you have to compress the files to fit that much music on a player. And while a 320 kb/s .mp3 file is enjoyable, a 64 kb/s one is absolutely horrendous.
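If you want to check the math on those numbers, here’s a quick back-of-the-envelope sketch in Python (the 4-minute track length is just an illustrative assumption):

```python
# Back-of-the-envelope math behind the bitrates quoted above.

def pcm_bitrate_kbps(sample_rate_hz: int, bit_depth: int, channels: int = 2) -> float:
    """Bitrate of uncompressed PCM audio, in kilobits per second."""
    return sample_rate_hz * bit_depth * channels / 1000

def track_size_mb(bitrate_kbps: float, seconds: float) -> float:
    """Approximate file size in megabytes for a track of a given length."""
    return bitrate_kbps * seconds / 8 / 1000

cd_kbps = pcm_bitrate_kbps(44_100, 16)  # Redbook CD: 44.1 kHz, 16-bit, stereo
print(f"CD-quality .wav: {cd_kbps:.1f} kb/s")  # 1411.2 kb/s

four_min = 4 * 60  # a hypothetical 4-minute track
print(f".wav:          {track_size_mb(cd_kbps, four_min):5.1f} MB")  # ~42.3 MB
print(f"320 kb/s .mp3: {track_size_mb(320, four_min):5.1f} MB")      # ~ 9.6 MB
print(f" 64 kb/s .mp3: {track_size_mb(64, four_min):5.1f} MB")       # ~ 1.9 MB
```

That’s the whole pitch: the same song at roughly a quarter of the size at 320 kb/s, or about a twentieth at 64 kb/s, which mattered a lot on a 5 GB first-generation iPod.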
So how can you prevent the consumer from noticing that putting one’s entire collection on a device isn’t worth it if one has to drastically alter the sound? Well, you change production techniques. I really believe everything is quite linked. As we said, the .mp3 compression algorithm is quite well done, only deleting very subtle sound nuances: the more you compress the file, the less subtle the deleted aspects are. So you produce music that has fewer nuances. You boost bass frequencies to mask the other ones: a bass-heavy song suffers less from extreme file compression than a layered one. Less subtle music, but less damage suffered from file compression.
And here come the loudness wars. Basically, any vinyl record, CD, or data file is recorded and mastered at a certain volume. Back in the 80s, when the CD format was introduced, music was mastered quite “quietly”: mostly at about half the maximum level, meaning that, when played back on a device, this music was “crankable”. You could turn the volume nearly all the way up, if you wanted to, and the music still sounded great. Lots of dynamic range, meaning that quiet parts were quiet, and loud ones, louder. Seems logical and obvious, huh? Yeah, it is.
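For the curious, “about half the maximum level” has a precise meaning in digital audio: engineers measure levels in dB relative to full scale (dBFS), and peaks at half the maximum sample value sit roughly 6 dB below the digital ceiling. A tiny sketch of that math:

```python
import math

def dbfs(peak_fraction: float) -> float:
    """Peak level relative to digital full scale (0 dBFS), in dB."""
    return 20 * math.log10(peak_fraction)

print(f"peaks at half the maximum level: {dbfs(0.5):+.1f} dBFS")  # about -6.0
print(f"peaks right at the ceiling:      {dbfs(1.0):+.1f} dBFS")  # 0.0
```

Those ~6 dB of headroom are what made those masters “crankable”: the volume knob, not the master itself, decided how loud things got.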
When portable devices became the norm, back in the early 2000s, mastering “norms” also changed. Companies started to master for a consumer who didn’t own an expensive CD player, but who was going to rip the CD to put the files on an iPod. But a portable device has a limited playback volume. So record companies decided to release remastered CDs, which doesn’t mean better-sounding ones. The word itself only means that a new master was created, in this case with the base volume level significantly louder, often flirting with the maximum one, and in lots of cases producing clipping, which means the signal goes even beyond this maximum level, inducing distortion. Not very ear-friendly, is it? Of course, these records sound like shit on a decent CD player. But they do indeed sound quite good on a laptop, or listened to through earbuds on an iPod. There goes the excess of the loudness wars, meaning your music always had to be louder, often at the expense of dynamic range. That’s brickwalling for ya. And it’s pretty sad that popular music, whether back-catalogue reissues or new artists, suffers from that. How sad is it that the currently in-print AC/DC CDs sound compressed as hell, or that, even when targeting audiophiles, Universal releases high-res versions of a brickwalled mastering for their new “Blu-Ray High Fidelity Pure Audio” brand?
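To picture what clipping and brickwalling actually do, here’s a minimal sketch: a pure tone gets 6 dB of extra gain, and everything pushed past digital full scale is simply flattened off. The crest factor (peak-to-RMS ratio) used below is my own rough stand-in for dynamic range, not an official measurement like the DR rating:

```python
import math

def hard_clip(sample: float, ceiling: float = 1.0) -> float:
    """Anything pushed past digital full scale gets flattened off: distortion."""
    return max(-ceiling, min(ceiling, sample))

def crest_factor_db(samples) -> float:
    """Peak-to-RMS ratio in dB, a crude stand-in for dynamic range."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A pure tone, then the same tone with 6 dB of extra gain slammed into the ceiling.
tone = [math.sin(2 * math.pi * i / 100) for i in range(1000)]
brickwalled = [hard_clip(2.0 * s) for s in tone]

print(f"original tone: crest factor {crest_factor_db(tone):.1f} dB")         # ~3.0 dB
print(f"brickwalled:   crest factor {crest_factor_db(brickwalled):.1f} dB")  # ~1.1 dB
```

Same tone, two fewer dB of crest factor: the peaks are chopped off, and those flattened tops are what you hear as distortion.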
But things seem to be changing. A few years ago, Apple introduced a “Mastered for iTunes” label, explained in a downloadable .pdf file on their site. They give advice to sound engineers, mainly not to send Apple a compressed master and to check the dynamic range, or else Apple could refuse the files. Yeah. Surprising as it may seem.
But sometimes it’s hit or miss. AC/DC apparently finally entered the download platforms with the same crappy mastering as the current CD versions. But some others are really, really good. A good example is the hot topic of the upcoming Led Zeppelin remasters. The entire discography appeared a few months ago “labeled” as Mastered for iTunes files, and comparisons showed that they were not issued from the current in-print 1994 George Marino CD mastering. That one is quite good in its own right. But the MFiT files are on another level entirely. With the release of the same mastering as Redbook lossless files on Qobuz.com (meaning 16-bit/44.1 kHz, 1411 kb/s .wav files), people seem to think that it’s probably the same mastering Jimmy Page used for the upcoming CD remasters. And that’s a really, really good surprise. Until the final remasters are released, of course, we don’t really know. We don’t know if the files were mastered with the iTunes guidelines in mind. But if that were the case, it would be great news for the music industry.
With the rise of high-resolution downloads (meaning higher resolution than CD, for example 24-bit/96 kHz or even 24-bit/192 kHz files), maybe mastering is becoming an art once again. Our ears would sure be thankful.
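As a footnote on what “higher resolution” means in raw numbers, the same PCM bitrate math as in the first sketch applies:

```python
# Uncompressed PCM bitrate: sample_rate x bit_depth x channels (stereo assumed).
for rate_hz, bits in [(44_100, 16), (96_000, 24), (192_000, 24)]:
    kbps = rate_hz * bits * 2 / 1000
    print(f"{bits}-bit/{rate_hz / 1000:g} kHz: {kbps:,.1f} kb/s")
# 16-bit/44.1 kHz: 1,411.2 kb/s  (Redbook CD)
# 24-bit/96 kHz:   4,608.0 kb/s
# 24-bit/192 kHz:  9,216.0 kb/s
```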
If you want to read more about it, or to see different perspectives: What is 'Mastered For Itunes'?, Does “Mastered for iTunes” matter to music, Mastering engineer proves “Mastered for iTunes” doesn’t ‘sound closer to the CD’