Why rip CDs or download music at high bitrates (e.g. beyond 192 Kbps)?

Here are the facts as I have read them:

  1. Most humans cannot hear the difference beyond ~192Kbps (a full, scientific study would be great)
  2. CD audio is encoded at about 1411.2 Kbps (44,100 samples/s × 16 bits × 2 channels), or 1378.125 Kbps if you divide by 1024

Okay, so the latter makes it sound like there is plenty of data available, so you can rip at 256Kbps or 320Kbps without interpolating (let alone download a song from an e-store which has access to the original sources at even higher than CD fidelity).
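
For reference, here is the arithmetic behind those numbers, as a quick sketch in Python (the 1378.125 figure comes from dividing by 1024 rather than 1000):

    # Raw CD audio (Red Book PCM): 44,100 samples/s, 16 bits/sample, 2 channels
    cd_bps = 44_100 * 16 * 2           # 1,411,200 bits per second
    print(cd_bps / 1000)               # 1411.2   ("Kbps" with a 1000 divisor)
    print(cd_bps / 1024)               # 1378.125 (with a 1024 divisor)

    # How much the common MP3 bitrates squeeze that raw rate down
    for kbps in (128, 192, 256, 320):
        print(kbps, "Kbps is roughly 1/%.1f of the CD data rate" % (cd_bps / (kbps * 1000)))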

However, if most people cannot detect any difference between something sampled at 192Kbps and the same thing sampled at 224Kbps, then why do I keep seeing so many people ripping things at 256Kbps and 320Kbps? (I’m not talking about “audiophiles” who say that they can hear dog-whistles and can detect the difference between 320Kbps and 321Kbps; I mean most, normal people.) Yes, 128Kbps is noticeably different from 192Kbps, but I find that beyond 192Kbps, the speakers/earphones play a much bigger role in the sound than the bitrate.

Whenever I rip one of my CDs, I usually just rip it at 192Kbps (CBR). (Personally speaking, I consider downloading, on the other hand, to be different. I would download the highest bitrate that the music store offers, since it would be my “master” copy, the next closest thing to the CD, and then down-sample it to 192Kbps if I need to save space—or maybe even regardless.)
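
A minimal sketch of that "keep a high-bitrate master, down-sample a working copy" step, assuming ffmpeg is available and using hypothetical filenames:

    import subprocess

    # Hypothetical master file bought from a store at its highest offered bitrate.
    master = "track_master_320.mp3"

    # Make a smaller 192 Kbps CBR working copy for a portable player.
    # (ffmpeg's -b:a option sets the audio bitrate of the output.)
    subprocess.run(
        ["ffmpeg", "-i", master, "-b:a", "192k", "track_portable_192.mp3"],
        check=True,
    )

Note that going from one lossy file to another is a second lossy generation, which is exactly the concern one of the answers below raises.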

Is there any tangible reason to bother going higher?

Synetech

Posted 2011-06-12T01:21:00.933

Reputation: 63 242

I treat my digital copies of CDs as my master copy - it's not like the extra space usage is a real issue these days, so the real question is Why not? What DISadvantage is there to going for higher bitrate copies off the bat? – Phoshi – 2011-06-12T01:26:01.490

How is space not an issue? My mother’s 512MB iPod shuffle filled up long ago. Even my 4GB Mini filled quickly. Not everyone can afford 3TB hard-drives and 128GB iPads. – Synetech – 2011-06-12T01:29:09.017
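
To put that space trade-off in rough numbers (a back-of-the-envelope sketch; the 4-minute average track length is an assumption):

    TRACK_SECONDS = 4 * 60   # assumed average track length

    def tracks_that_fit(capacity_bytes, kbps):
        bytes_per_track = kbps * 1000 / 8 * TRACK_SECONDS
        return int(capacity_bytes // bytes_per_track)

    for device, capacity in [("512 MB shuffle", 512 * 10**6), ("4 GB Mini", 4 * 10**9)]:
        for kbps in (128, 192, 256, 320):
            print(device, kbps, "Kbps:", tracks_that_fit(capacity, kbps), "tracks")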

(When I mentioned 321Kbps and 1024Mbps, I was being sarcastic, sort of; I know that those are not valid MP3 bitrates, but I never said I was talking specifically about MP3s anyway—or any existing format for that matter.) – Synetech – 2011-06-12T05:52:46.603

Oh, for portable devices I'd absolutely use a fairly low bitrate, because space is a massive issue there. However, space isn't an issue on any of my less portable computers, where I keep all my music. Re-encoding a track to put it on a PMP is less hassle than re-ripping one's entire library if one ever needs higher bitrate, especially with modern software making it so very easy. – Phoshi – 2011-06-12T09:56:52.310

Answers

5

Is there a reason? No, not really.

However, I've had a few CDs with a few songs in the industrial genre (a remix by Nine Inch Nails) which have very rapid, rhythmic sounds overlaid with very chaotic sounds like an electric guitar. There is one particular section of a song which features this type of music. Unless the encoder is set to a very high bitrate, the music will skip beats or stretch them out, both of which are jarring to the listener. This may be due more to the sample rate being too low rather than the encoding bitrate, but I found that it would only work well at very high lossy bitrates or with lossless encoding. That said, this particular song is fairly unusual, and most people would probably consider the song indistinguishable from noise.

The outro to this song beginning shortly after 5:10 is a good indication of the type of sound which codecs seem to handle poorly. It's not exactly what I'm thinking of, but I can't remember the name of the other song. Even this YouTube video seems off to my ears, though, and it's a copy of the studio album.

Bacon Bits

Posted 2011-06-12T01:21:00.933

Reputation: 6 125

Interesting, so the actual composition of the audio could interact with the way that the encoder works (as IVA mentioned) to manifest in aberrations that become audible to the human ear (like the way that normally-invisible infra-red or ultra-violet light becomes visible to the human eye when combined with certain pigment formulas to create the “fluorescent” look). – Synetech – 2011-06-12T03:04:43.610

Okay, so my take-away is that under certain (rare) circumstances, it may be desirable, or even necessary, to encode a song at a higher bitrate to accommodate a peculiar audio pattern. So I guess I’m fine with normally sticking with 192Kbps. (Wouldn’t 192Kbps then be sufficient if you use a lossless format—do bitrates even apply to lossless; wouldn’t it be only compression ratios instead?)

+Accept for the tangible example. – Synetech – 2011-06-12T03:07:49.520

Bitrates are somewhat nonsensical in terms of a lossless codec. At that point, only the sample rate (for FLAC, 1 Hz to 655350 Hz) and bits-per-sample (for FLAC, 4 to 32 bits per sample) matter. However, for a lossless codec, the sample rate and bits-per-sample are determined by the source audio data (usually PCM) and not the codec. If the raw audio is 44,100 Hz and 16 bits per sample, so is the encoded result. Lossless is lossless.

– Bacon Bits – 2011-06-12T04:12:56.643

Right, so bitrate is for sampling and lossy codecs, but a lossless codec is a compression routine instead (though I guess lossy codecs are effective compression routines too). – Synetech – 2011-06-12T05:23:12.457

Lossy is not compression, as such. Compression is packing, reducing the amount of empty space. If you're packing shipments of fruit and you ditch half your load to make it fit in one truck, and you claim you compressed it, you're a liar and a cheat. – djeikyb – 2011-06-12T07:02:17.870
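
As a toy illustration of that lossless-versus-lossy distinction (not an audio codec, just zlib and crude rounding standing in for the two ideas):

    import zlib

    samples = bytes(range(256))            # stand-in for raw PCM data

    # Lossless: compress, decompress, and get back exactly the original bytes.
    restored = zlib.decompress(zlib.compress(samples))
    print(restored == samples)             # True: "lossless is lossless"

    # Lossy: round away detail before packing; the original cannot be recovered.
    quantized = bytes((b // 16) * 16 for b in samples)
    print(quantized == samples)            # False: detail was discarded, not just packed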

4

Yes. The issue is not so much with a first-generation copy, such as listening to the ripped file, as with the potential quality loss when re-compressing later.

For example, imagine a file that's been compressed using a lossy algorithm such as MP3. You then use this file to, say, record a DJ mix, which is then compressed again to MP3. You email this mix file to a friend of yours who wants to transmit it over satellite or internet radio.

You're now looking at three compression-decompression steps. Each time you're going to lose different data. Even if they're all, say, 192kbps, it's going to sound a LOT worse than an original 192kbps compression.
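
A crude way to see why generations matter, as a sketch that simulates each lossy pass with coarse rounding on a slightly shifted grid (this is not how MP3 actually works; it just shows that repeated lossy steps throw away different data each time):

    import math

    # A simple test signal: one second of a 440 Hz tone at an 8 kHz sample rate.
    signal = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]

    def lossy_pass(samples, step, offset):
        """Quantize on a shifted grid: a stand-in for one encode/decode generation."""
        return [round((s + offset) / step) * step - offset for s in samples]

    once = lossy_pass(signal, 0.05, 0.0)
    thrice = lossy_pass(lossy_pass(lossy_pass(signal, 0.05, 0.0), 0.05, 0.013), 0.05, 0.027)

    err_once = max(abs(a - b) for a, b in zip(signal, once))
    err_thrice = max(abs(a - b) for a, b in zip(signal, thrice))
    print(err_once, err_thrice)   # the second number is larger: each pass adds its own error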

Disks are cheap and your music collection will, in theory, last you your lifetime. Who knows if one day you'll want to use your 192kbps files (which sound fine today) as the background music for a DVD or Blu-ray Disc sometime down the road? (These compress their audio using lossy codecs as well.)

Huge hard disks are incredibly cheap these days, so storage isn't really a concern. You should archive your audio in the highest possible quality so that it's viable for uses other than direct listening in the future, by you or others.
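
For a rough sense of what "highest possible quality" costs in disk space (a sketch; the album count, album length, and FLAC compression ratio are all assumptions):

    # Rough library-size estimate: 500 albums of ~45 minutes each (assumptions).
    ALBUMS, ALBUM_SECONDS = 500, 45 * 60
    CD_BPS = 44_100 * 16 * 2          # raw PCM rate of CD audio
    FLAC_RATIO = 0.6                  # FLAC typically lands around 50-70% of the PCM size

    def gigabytes(bits_per_second, ratio=1.0):
        return ALBUMS * ALBUM_SECONDS * bits_per_second * ratio / 8 / 10**9

    print("FLAC (lossless): ~%.0f GB" % gigabytes(CD_BPS, FLAC_RATIO))
    print("320 Kbps MP3:    ~%.0f GB" % gigabytes(320_000))
    print("192 Kbps MP3:    ~%.0f GB" % gigabytes(192_000))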

You also point out that when purchasing music online you download the highest quality, and that you transcode accordingly for mobile use, but only rip your CDs as 192kbps. This presumes that you still keep your CDs around as the "master" copy, should you need the original data. For many people, getting rid of the CDs is the whole idea - optical media is slow and bulky compared to modern storage systems like 2.5" hard drives or flash. The representation on the disk becomes the master -- and it sounds like you already recognize the value of having the highest quality master available (versus a lower quality "working" copy).

PS: You are misusing the term "sampled" when you say "sampled at 192kbps". The sampling rate and the data rate of the compression algorithm are entirely different things. The sampling rate (the number of audio samples per second) does not change regardless of how the data is compressed.

sneak

Posted 2011-06-12T01:21:00.933

Reputation: 141

Yes, obviously if you’re getting rid of the CDs, then you want to rip at the highest possible bitrate (or just use a lossless codec), since it will then be your only “original”. I just assumed it was clear that I was talking about a working copy for use in MP3 players; I’ll edit the question.

And I said sampled because the data rate is in effect the amount of audio information present, which in turn is just how many samples are stored. The encoder is in effect sampling from the original audio file, so technically it’s a valid usage. – Synetech – 2011-06-12T06:00:49.270

1

MP3 doesn't encode all data the same, so even at 192kbps some artifacts can be introduced given specific audio input. Encoding at a higher rate mitigates this.

Ignacio Vazquez-Abrams

Posted 2011-06-12T01:21:00.933

Reputation: 100 516

Sure, but that’s why you use the latest version of the best encoder you can, so that it encodes the most relevant data. So what if a peak at one point in the song is clipped or a single cymbal crash is slightly flat? Why double the size of the file for the occasional one-off instance? – Synetech – 2011-06-12T02:11:39.150

There are people for whom even that matters. – Ignacio Vazquez-Abrams – 2011-06-12T02:14:10.437

Yes, but obviously I’m not one of them; I’m asking about normal people, not show-off-y, dog-whistle-hearing, OCD audiophiles. – Synetech – 2011-06-12T02:24:58.697

How do you know that none of them will listen to the music? – Ignacio Vazquez-Abrams – 2011-06-12T02:29:55.110

Because I don’t lend out my iPod, and even if I did, so what? They can rip it at 1024Mbps for themselves if they like. Besides, the earphones are probably going to be the limiting factor anyway. – Synetech – 2011-06-12T02:57:09.617