CD Ripping Terminology

Confusion often arises when terms such as “Jitter” and “Error Correction” are used when referring to digital audio. The terminology used in CD ripping is particularly confusing as there appears to be a lack of consistency (and in some cases, accuracy) between the various software providers and user forums on the Internet. This document is an attempt to clarify some of these terms and show how they relate to the fidelity of ripped audio.

Jitter

Jitter can have many meanings in digital audio, but it generally refers to a timing error of some sort. Some forms of jitter can have a considerable effect on audio quality; others are benign as long as they remain below a certain level. It is worth describing three forms of jitter so that the differences between them can be seen:

1. EFM Jitter

2. Sampling Jitter

3. Read Offset Jitter

EFM Jitter

EFM (Eight-to-Fourteen Modulation) is the final part of the encoding scheme used to produce the spiral of microscopic bumps on a CD. When a CD player reads this spiral pattern, bumps will be interpreted as one logic level (e.g. ‘1’) and the spaces between the bumps as another (e.g. ‘0’). This stream of 1s and 0s is called the EFM signal and in an ideal case it should be identical to the original signal used to record the disc.

Unfortunately, because the CD mastering and replication processes are not perfect, errors occur in the shape and position of the bumps. These physical errors translate into timing errors in the recovered EFM signal and this is what is termed EFM Jitter. Because EFM jitter is always present, the CD reading process is designed to be immune to it – up to a point.

The EFM data stream is ‘self-clocking’, meaning that the original timing of individual ones and zeros can be recovered using a phase-locked loop (PLL). The PLL generates a bit clock that locks on to the EFM signal such that, on average, the edges of both signals coincide. The EFM signal is then re-sampled using this clock and the resulting data fed into an elastic buffer. A second fixed-frequency clock controls the rate at which data exits this elastic buffer and the disc speed is controlled to keep the average buffer occupancy at 50%. This 2-stage re-timing process completely decouples the output (i.e. audio) clock from the jittery EFM signal.
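
As a rough illustration of this second re-timing stage, the following minimal Python sketch (with purely illustrative constants, not a model of any particular drive) fills an elastic buffer at a jittery rate, drains it with a fixed clock, and servoes the disc speed to hold the average occupancy at 50%:

 import random

 BUFFER_SIZE = 1000        # elastic buffer capacity, in data words
 occupancy = BUFFER_SIZE / 2
 disc_speed = 1.0          # normalised; 1.0 = nominal speed

 for step in range(10_000):
     # Data arrives at a rate set by the disc speed, disturbed by
     # low-frequency jitter (e.g. an eccentric or out-of-balance disc).
     in_rate = disc_speed * (1.0 + 0.05 * random.uniform(-1.0, 1.0))
     out_rate = 1.0        # the fixed-frequency output clock
     occupancy = min(BUFFER_SIZE, max(0.0, occupancy + in_rate - out_rate))

     # Servo loop: trim the disc speed to hold occupancy at 50%.
     error = (BUFFER_SIZE / 2 - occupancy) / BUFFER_SIZE
     disc_speed += 0.01 * error

     if occupancy == 0.0:
         print(f"step {step}: buffer underrun -> audio dropout")

Because the output clock is fixed, input-side jitter shows up only as a variation in buffer occupancy; the audio is disturbed only if the buffer runs completely empty or completely full.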

Excessive high-frequency EFM jitter levels may result in bit errors (e.g. a ‘1’ being read as a ‘0’), but these can generally be handled by subsequent error correction processes. Excessive low-frequency jitter (as may be encountered with an eccentric, or out-of-balance, disc) may exceed the capacity of the elastic buffer, and this can cause dropouts in the audio stream. One advantage of ROM drives is that they typically have large buffers that can tolerate large amounts of low-frequency EFM jitter.

EFM jitter can be measured and reported by a number of modern CD and DVD ROM drives. Plextor’s ‘PlexTools Professional’ software, for example, includes a ‘Beta/Jitter’ test that measures EFM jitter (via a Plextor drive). Nero’s ‘CD-DVD Speed’ toolkit can also reportedly measure EFM jitter from a select number of drives as part of its ‘Disc Quality’ test.

Sampling Jitter

In digital audio, each sample represents the audio signal amplitude at a single point in time. The sample rate then defines the time interval between each sample. In order to recreate the signal in the analog domain, both the amplitude and the sampling interval must be recreated with sufficient accuracy. An error in either amplitude or sampling interval will result in distortion of the original signal.

All digital-to-analog converters (DACs) use some form of oscillator to generate a clock at the required sample rate. This clock may be either free-running, or locked to some incoming digital audio stream (e.g. SPDIF). In either case, the period of the clock signal will not always be exactly the same as the original sampling period. The error between the two is termed Sampling Jitter.
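
For a sinusoidal signal the size of the resulting error is easy to estimate: a sample taken Δt seconds early or late on a sine of amplitude A and frequency f is wrong by at most 2πfAΔt (the peak slew rate multiplied by the timing error). A small Python sketch, using illustrative figures:

 import math

 def peak_jitter_error_db(freq_hz: float, jitter_s: float) -> float:
     """Worst-case sample error, in dB relative to full scale, for a
     full-scale sine sampled with the given timing error. The bound
     is the peak slew rate times the timing error: 2*pi*f*dt."""
     return 20 * math.log10(2 * math.pi * freq_hz * jitter_s)

 # A full-scale 10 kHz tone sampled with 1 ns of jitter:
 print(peak_jitter_error_db(10_000, 1e-9))   # approx. -84 dB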

Sampling Jitter can have a serious effect on audio quality and most audio equipment manufacturers go to great lengths to minimise it. It is important to note, however, that Sampling Jitter is only important at the point of conversion between the digital and analog domains. Whilst the signal is in the digital domain, the sample period is just a number, and as such has no jitter. It is impossible, therefore, for Sampling Jitter to be generated by any lossless CD ripping process. It is, however, possible for Sampling Jitter to be generated by lossy processes such as sample rate conversion.

Read Offset Jitter

The term ‘Read Offset’ requires some explanation before Read Offset Jitter can be described.

The information stored on an audio CD consists of a number of data channels. The main channel carries the audio data but there are also a number of low-bandwidth ‘subcode’ channels that carry information such as disc time and track number. The original purpose of these subcode channels was to enable simple navigation and track/time display. Because an audio CD player doesn’t need to navigate to a high degree of accuracy, the resolution of these channels was limited to 1/75th of a second. This means that when audio data is requested from a CD ROM drive, it can only be accessed with an accuracy of 1/75th of a second.
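
At the CD sample rate of 44.1 kHz this 1/75th-of-a-second unit (a ‘frame’, or sector) corresponds to a fixed number of samples and bytes, as the following arithmetic shows:

 SAMPLE_RATE = 44_100       # stereo samples per second
 FRAMES_PER_SECOND = 75     # subcode addressing resolution

 samples_per_frame = SAMPLE_RATE // FRAMES_PER_SECOND   # 588
 bytes_per_frame = samples_per_frame * 2 * 2            # 2 ch x 16 bit = 2352

In other words, audio on a CD can only be addressed in 588-sample (2352-byte) units.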

The separation of audio and subcode channels introduces further uncertainty to the navigation process. Because the audio data channel undergoes more processing (error correction, buffering, etc.) than the subcode channels, there is generally an offset (in samples) between where the player thinks it is and where it actually is. In CD ripping terminology this is termed the Read Offset.

A consistent Read Offset is important for audio ripping as data is typically transferred as a sequence of small blocks rather than as one continuous data stream. If the Read Offset is consistent then all the data in the blocks will line up perfectly. If the Read Offset is inconsistent then the data in some blocks may overlap, or there may be data missing between blocks - this is Read Offset Jitter.

A ROM drive that has a consistent Read Offset is sometimes termed ‘Accurate Stream’ capable. This just means that it is capable of accessing audio data repeatably to within a single sample. Some ripping software can perform tests to determine whether a drive has a consistent Read Offset.
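
When the offset is known and constant, compensating for it is straightforward: every read is simply shifted by the offset. A hedged sketch, in which the read_sectors interface and the surrounding padding are hypothetical:

 SAMPLES_PER_SECTOR = 588

 def rip_range(drive, first_sector, sector_count, read_offset_samples):
     """Rip a run of sectors, compensating for a constant drive read
     offset. 'drive.read_sectors' is a hypothetical raw-read call
     returning audio as a flat sequence of stereo samples."""
     # Read one extra sector at each end so the shifted window still
     # falls inside the fetched data (enough for typical offsets of
     # a few hundred samples).
     raw = drive.read_sectors(first_sector - 1, sector_count + 2)
     start = SAMPLES_PER_SECTOR + read_offset_samples
     return raw[start : start + sector_count * SAMPLES_PER_SECTOR]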

A drive that has an inconsistent Read Offset can still be used with certain ripping applications, which attempt to get round the problem by requesting overlapping blocks of data and performing a realignment process in software. Some ripping applications refer to this process as ‘Jitter Rejection’.
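
The realignment idea can be sketched as follows: each new block is requested so that it overlaps the previous one, and the software then slides the new block until it matches the tail of the data assembled so far. The function below is illustrative, not any particular ripper's algorithm:

 def align_and_append(assembled, block, max_shift=588, match_len=32):
     """Append 'block' (a list of samples, read with deliberate
     overlap) to 'assembled', searching the first 2*max_shift samples
     of the block for a run matching the tail of 'assembled'."""
     tail = assembled[-match_len:]
     for shift in range(2 * max_shift):
         if block[shift : shift + match_len] == tail:
             return assembled + block[shift + match_len :]
     raise ValueError("no alignment found - re-read the block")

Real implementations use longer match windows and repeated reads to avoid false matches in silent or repetitive passages.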

As an aside, CD-ROM discs have additional synchronisation patterns and sector headers embedded in the data channel which enable absolute navigational accuracy. It is therefore quite possible for a ROM drive to function perfectly when reading ROM discs, but still exhibit Read Offset Jitter when reading audio discs.
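
The synchronisation pattern in question is a fixed 12-byte sequence at the start of every CD-ROM sector (one 00 byte, ten FF bytes, one 00 byte), so a data sector boundary can be located exactly:

 # The sync field that starts every CD-ROM sector. Audio sectors have
 # no such marker, which is why audio navigation relies on subcode.
 SECTOR_SYNC = bytes([0x00] + [0xFF] * 10 + [0x00])

 def find_sector_starts(raw: bytes):
     """Scan a raw data stream for CD-ROM sector boundaries."""
     pos = raw.find(SECTOR_SYNC)
     while pos != -1:
         yield pos
         pos = raw.find(SECTOR_SYNC, pos + 1)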

Error Correction

Error correction is a vital part of all CD playback systems and without it CDs would be unusable. As mentioned earlier, the CD replication process is far from perfect, and single-bit errors (e.g. a ‘1’ being read as a ‘0’) occur quite frequently. Noise and other imperfections in the read channel will also contribute to the bit error rate. The microscopic nature of the medium means that scratches, fingerprints, and other surface defects can result in prolonged bursts of erroneous bits.

The error correction system chosen for CD is called Cross-Interleaved Reed-Solomon Code (CIRC). This system adds additional bytes to the original data which allow almost all errors to be detected and for most errors to be corrected. A special interleaving technique is used to distribute the data on the disc to minimise the impact of burst errors. Burst errors of up to 4000 data bits (equivalent to approximately 2.5mm track length on a disc) can be completely corrected. CD-ROM discs have an additional layer of error correction embedded in the data stream which improves data integrity beyond that achievable with the basic CD format.
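
The benefit of interleaving can be illustrated without a full Reed-Solomon implementation. The toy block interleaver below (CIRC actually uses a more elaborate convolutional cross-interleave) shows how a burst of errors on the disc becomes a scattering of isolated errors after de-interleaving, each small enough for the decoder to correct:

 def interleave(data, rows):
     """Block interleaver: write row-by-row, read column-by-column."""
     cols = len(data) // rows
     matrix = [data[r * cols : (r + 1) * cols] for r in range(rows)]
     return [matrix[r][c] for c in range(cols) for r in range(rows)]

 def deinterleave(data, rows):
     # Reading column-by-column is undone by interleaving again with
     # the dimensions swapped.
     return interleave(data, len(data) // rows)

 data = list(range(12))
 on_disc = interleave(data, rows=3)    # [0, 4, 8, 1, 5, 9, ...]

 for i in range(3):                    # a 3-symbol burst on the disc...
     on_disc[i] = 'X'

 print(deinterleave(on_disc, rows=3))
 # ['X', 1, 2, 3, 'X', 5, 6, 7, 'X', 9, 10, 11]  ...isolated errors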

On playback, the data stream from the disc passes through a CIRC decoder which attempts to detect and correct both random bit errors and large burst errors. The CIRC decoder is typically split into two layers termed C1 and C2. The C1 decoder uses the inner layer of error correction coding to correct random errors and detect burst errors. The C2 decoder then uses the outer layer, together with information passed from the C1 decoder, to correct burst errors and random errors that the C1 decoder was unable to correct. If the C2 decoder encounters errors that cannot be corrected it will flag them for subsequent concealment (i.e. interpolation or muting).

The term ‘C2 error’ is often encountered in CD ripping, but the meaning of this term is often unclear. Some ROM drives are capable of reporting C2 error information along with the audio data and some ripping software can use this information to determine whether the retrieved audio data is valid or not. A standardised mechanism for ROM drives to report C2 error information is documented in the Multi-Media Command (MMC) standard, but this fails to provide a clear definition of what constitutes a ‘C2 error’. The only definition that makes sense, however, is: ‘An error that the C2 decoder was unable to correct’.
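
In practice, a drive that supports this mechanism returns, alongside each 2352-byte audio sector, a block of C2 error pointer bits: one bit per audio byte (294 bytes of flags in total), set where the C2 decoder could not correct the data. A sketch of how ripping software might inspect such a block (the raw MMC READ CD transaction itself is drive- and OS-specific and omitted here):

 def c2_error_positions(c2_pointers: bytes):
     """Yield byte positions within a 2352-byte audio sector that the
     drive's C2 decoder flagged as uncorrectable. 'c2_pointers' is
     the 294-byte pointer block (2352 bits, MSB first) returned with
     the sector data."""
     assert len(c2_pointers) == 294       # 2352 / 8
     for index, flags in enumerate(c2_pointers):
         for bit in range(8):
             if flags & (0x80 >> bit):
                 yield index * 8 + bit

A sector whose pointer block is all zero can be accepted immediately; anything else can trigger a re-read or concealment.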

Two questions often arise in relation to CD error correction:

1. Is the CIRC error detection/correction process perfect?

2. What happens if an uncorrectable error is encountered?

Is the CIRC error detection/correction process perfect?

No practical error detection system can detect all possible error patterns. It will always be possible for random errors to mimic ‘good data’ which then appears correct to the error detection system. A good error detection system will try to minimise the probability of an error going undetected, but this probability will never be zero. The likelihood of the CIRC system missing an error depends on the average bit error rate (BER). At a BER of 10⁻³ (i.e. 1 error in every 1000 bits), there will be less than one undetected error every 750 hours (i.e. one bad sample in 750 CDs). A new disc will typically have a BER of around 10⁻⁴, and at this rate the probability of an undetected error is negligible.

The error correction capabilities of CIRC are limited by the inherent capabilities of the code, and by the implementation of the decoder. Even though CIRC is theoretically capable of correcting large burst errors, it can still be defeated by real-life discs. These tend to have a variety of error sources that, when combined, compromise the effectiveness of the CIRC encoding scheme and make uncorrectable errors more likely. The way in which the decoder is implemented also has an effect on error correction performance. Not all decoders exploit the full capabilities of CIRC, and some may only do so when running at low speeds. The C2 decoding process, for example, is capable of correcting up to 4 erroneous bytes in every 24, but some implementations can correct only 2.

What happens if an uncorrectable error is encountered?

The CIRC system was designed to work in conjunction with error concealment processes so that a ‘reasonable’ audio signal could be maintained even in the presence of uncorrectable data errors. The interleaving process used to encode the data means that even if samples are lost, the samples immediately adjacent to the missing samples are likely to still be present. This enables an interpolation algorithm to calculate replacement samples and hence ‘mask’ the error. Obviously, such an interpolation process can never exactly reconstruct the original data but it does make small errors much less noticeable.
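
A minimal sketch of linear-interpolation concealment, assuming the bad samples have already been identified (for example from C2 flags):

 def conceal(samples, bad):
     """Replace flagged samples by linear interpolation between the
     nearest good neighbours. 'bad' is a set of sample indices.
     Real players fall back to muting when a gap is too long."""
     out = list(samples)
     i = 0
     while i < len(out):
         if i in bad:
             j = i
             while j < len(out) and j in bad:
                 j += 1                    # find the end of the gap
             left = out[i - 1] if i > 0 else 0
             right = out[j] if j < len(out) else 0
             for k in range(i, j):         # fill the gap linearly
                 frac = (k - i + 1) / (j - i + 1)
                 out[k] = left + (right - left) * frac
             i = j
         else:
             i += 1
     return out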

CD players will perform interpolation automatically, often without the user being aware of it. Most ROM drives, however, will not perform interpolation (although some may provide it as an option), so it is the responsibility of the ripping software to decide what (if any) concealment should be done. Software concealment relies on accurate C2 error reporting by the ROM drive, but not all ROM drives support C2 error reporting. This inconsistent approach by ROM drive manufacturers has prompted the development of ‘paranoid’ ripping software. These applications use a number of techniques, such as checksums and multiple reads, to try to minimise the likelihood of erroneous data. Unfortunately, the physical nature of most uncorrectable errors means that they often persist no matter how many times the disc is read.
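
The multiple-read technique can be sketched as follows, again with a hypothetical read_sector interface: each sector is read several times and the data is accepted only when enough reads agree.

 from collections import Counter

 def paranoid_read(drive, sector, attempts=5, min_agree=3):
     """Read a sector repeatedly and accept the most common result
     only if it occurred at least 'min_agree' times. As noted above,
     a genuine physical defect tends to defeat this: the bad data
     simply recurs (or keeps changing) on every pass."""
     reads = Counter(drive.read_sector(sector) for _ in range(attempts))
     data, count = reads.most_common(1)[0]
     if count < min_agree:
         raise IOError(f"sector {sector}: reads never converged")
     return data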

Recommendations for achieving accurate rips

When choosing a ROM drive for audio CD ripping there are two features worth looking out for:

1. ‘Accurate Stream’ – This feature means that the drive should be capable of sample-accurate reads and hence should have no Read Offset Jitter.

2. ‘C2 Error Flags’ – This feature means that the drive should be capable of flagging uncorrectable data errors.

Both of these features will help the ripping software make quick decisions about the integrity of the audio data. The Accurate Stream feature removes the need for overlapping reads and can significantly improve rip times. C2 error flagging removes the need for secondary error checking processes and can be used to initiate specific re-reads or concealment processes when data errors occur.

Changing the speed at which a disc is read can often have an effect on disc reading performance. The servo systems that control the focus and tracking of the laser pick-up have a finite bandwidth and can only correct disturbances below a certain frequency. Reducing the spin-speed of the disc reduces the frequency of these disturbances and hence improves reading accuracy. Reading at a slower speed may also mean that the drive has time to employ a more thorough error correction strategy.

Somewhat perversely, increasing the spin speed can also sometimes improve reading accuracy. This can occur when surface defects cause the servo systems to deviate from the optimum scanning position. In such cases increasing the spin speed can increase the frequency of the disturbance to a point where the servos no longer respond to the defect.

If the quality of a particular rip is in doubt then a simple checksum operation can be performed on the data and compared with other rips of the same disc. Some ripping applications can do this automatically via a shared database on the Internet. Matching checksums, whilst they provide an extra degree of confidence, are no guarantee that the data is correct. They do, however, reduce the probability of errors to a very low level.
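
A sketch of the checksum idea using a plain CRC32 (shared databases such as AccurateRip define their own checksum algorithms, so this is illustrative only):

 import zlib

 def track_checksum(path: str) -> int:
     """CRC32 over a ripped track's raw PCM data, computed in chunks
     so that large files need not be held in memory."""
     crc = 0
     with open(path, "rb") as f:
         while chunk := f.read(65536):
             crc = zlib.crc32(chunk, crc)
     return crc

 # Two independent rips of the same pressing should match:
 # track_checksum("rip_a/track01.pcm") == track_checksum("rip_b/track01.pcm")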


Useful links:

“The Numerically-Identical CD Mystery: A Study in Perception versus Measurement”

“Introduction to Error-Control Coding”

T10 Working Drafts – Various standards pertaining to ROM drive interfaces, in particular the Multi-Media Command set (MMC)

AccurateRip – an online database of rip checksums