1. PCM interface.
For digital audio subsystems, several interfaces have emerged between microprocessors or DSPs and the audio devices that perform digital conversion. The simplest is the PCM (Pulse Code Modulation) interface, which consists of a bit clock (BCLK), a frame-synchronization signal (FS), receive data (DR), and transmit data (DX). A rising edge on FS marks the start of a data word, and the FS frequency equals the sampling rate. The bits of the word are then transmitted sequentially, one data bit per BCLK cycle, starting with the MSB (most significant bit). The MSB is sent first because the devices at the two ends of the link may use different data word lengths; with MSB-first transmission any mismatch truncates only the low-order bits, so the loss of precision is kept to a minimum.
The PCM interface is easy to implement and in principle capable of supporting any data scheme and any sample rate, but requires a separate data queue for each audio channel.
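The bit-level behavior described above can be modeled in a few lines of host-side C. The sketch below is purely illustrative (the function names and the set_dx() callback are assumptions, not part of any real driver API): it shifts a 16-bit sample word out MSB-first, one bit per simulated BCLK cycle, as a PCM interface would do after an FS pulse.

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative model of MSB-first serialization on a PCM-style interface:
 * after the FS pulse, the 16-bit sample word is shifted out on DX one bit
 * per BCLK cycle, MSB first. pcm_shift_out_word() and set_dx() are
 * hypothetical names used only for this sketch.
 */
static void pcm_shift_out_word(uint16_t sample, void (*set_dx)(int bit))
{
    for (int i = 15; i >= 0; i--)       /* bit 15 (MSB) goes out first   */
        set_dx((sample >> i) & 1);      /* one data bit per BCLK cycle   */
}

/* Stand-in for the DX line: print each bit as it would appear on the wire. */
static void print_bit(int bit)
{
    putchar(bit ? '1' : '0');
}

int main(void)
{
    pcm_shift_out_word(0xA5F0, print_bit);   /* prints 1010010111110000 */
    putchar('\n');
    return 0;
}
```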
2. I2S interface.
The I2S (Inter-IC Sound) interface was introduced by Philips in the 1980s for consumer audio. On top of PCM it multiplexes a left/right clock signal, LRCLK (Left Right Clock), so that the two stereo channels share a single data queue: one LRCLK level selects the left channel and the other selects the right (in the original Philips specification, LRCLK low carries the left channel and LRCLK high the right). Compared with PCM, I2S is therefore better suited to stereo systems. For multi-channel systems, several data queues can also run in parallel under the same BCLK and LRCLK, as sketched below.
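As a rough illustration of LRCLK multiplexing, the hedged sketch below interleaves left and right samples into the single word stream that would appear on the I2S data line; the function name and layout are assumptions for demonstration, not a real hardware driver.

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Illustrative model of I2S channel multiplexing: the LRCLK level selects
 * which channel's word occupies the current slot, so a stereo pair travels
 * over a single data line. Following the Philips convention, the word sent
 * while LRCLK is low is the left channel and the word sent while LRCLK is
 * high is the right channel. i2s_mux_stereo() is a hypothetical name used
 * only for this sketch.
 */
static void i2s_mux_stereo(const int16_t *left, const int16_t *right,
                           size_t nframes, int16_t *wire)
{
    for (size_t i = 0; i < nframes; i++) {
        wire[2 * i]     = left[i];    /* word slot while LRCLK is low  */
        wire[2 * i + 1] = right[i];   /* word slot while LRCLK is high */
    }
}

int main(void)
{
    int16_t l[2] = { 100, 200 }, r[2] = { -100, -200 };
    int16_t wire[4];

    i2s_mux_stereo(l, r, 2, wire);    /* wire = {100, -100, 200, -200} */
    return 0;
}
```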
3. AC97 interface.
AC97 (Audio Codec 1997) is a standard jointly proposed by five companies: Intel, Creative Labs, NS, Analog Devices, and Yamaha. Unlike PCM and I2S, AC97 is not just a data format: it also specifies the internal architecture of the codec and provides control functionality. An AC97 controller connects to the external codec through the AC-Link, which consists of a bit clock (BITCLK), a synchronization signal (SYNC), and two data lines, SDATA_IN (codec to processor) and SDATA_OUT (processor to codec). An AC'97 data frame begins with a SYNC pulse and consists of a 16-bit "tag" slot followed by 12 slots of 20 bits each (the standard assigns each slot a specific purpose), for a total of 256 bits. For example, slots 1 and 2 are used to access the codec's control registers, while slots 3 and 4 carry the left and right audio channels, respectively. The "tag" slot indicates which of the other slots contain valid data. Dividing the frame into time slots makes it possible to carry control signals and up to 9 channels of audio data, or other data streams, over only 4 wires. Compared with an I2S solution plus a separate control interface, AC97 significantly reduces the overall pin count. AC97 codecs are generally supplied in TQFP48 packages.
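The frame layout described above can be sketched as a simple C structure. This is only a logical model of the slot arrangement given in the text (real controllers serialize the bits on SDATA_OUT, and the field and function names here are illustrative):

```c
#include <stdint.h>

#define AC97_SLOTS_PER_FRAME 12

/*
 * Logical model of one AC-Link output frame: a 16-bit tag slot followed by
 * 12 data slots of 20 bits each (stored here in 32-bit fields), 256 bits in
 * total. The names below are illustrative only.
 */
struct ac97_frame {
    uint16_t tag;                         /* slot 0: frame/slot-valid bits  */
    uint32_t slot[AC97_SLOTS_PER_FRAME];  /* slots 1..12: 20 bits used each */
};

/* Mark slot n (1..12) as carrying valid data; bit 15 flags a valid frame. */
static void ac97_tag_slot_valid(struct ac97_frame *f, int n)
{
    f->tag |= (uint16_t)(1u << 15);
    f->tag |= (uint16_t)(1u << (15 - n));
}

int main(void)
{
    struct ac97_frame f = { 0 };

    ac97_tag_slot_valid(&f, 3);   /* slot 3: left PCM channel  */
    ac97_tag_slot_valid(&f, 4);   /* slot 4: right PCM channel */
    return 0;
}
```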
Summary: PCM, I2S, and AC97 each have their own advantages and areas of application. For example, portable CD, MD, and MP3 players mostly use the I2S interface, mobile phones use the PCM interface, and PDAs with audio capability mostly use the same AC97 codecs as PCs.
Audio device interfaces include PCM, I2S, and AC97, each suited to different applications. For audio devices, the Linux kernel provides two audio driver frameworks, OSS and ALSA. The former exposes dsp and mixer character-device interfaces and is programmed from user space with ordinary file operations; the latter is organized around cards and components (PCM, mixer, etc.), and user-space programs use the alsa-lib library instead of the file interface.
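To make the contrast concrete, here is a minimal user-space OSS playback sketch using nothing but file operations; it assumes a /dev/dsp device (classic OSS or an OSS-emulation layer) is present and omits most error handling:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include <unistd.h>

/*
 * Minimal OSS-style playback: the device is driven entirely through
 * open()/ioctl()/write() file operations on /dev/dsp.
 */
int main(void)
{
    int fmt = AFMT_S16_LE, channels = 2, rate = 44100;
    short silence[4096];
    int fd = open("/dev/dsp", O_WRONLY);

    if (fd < 0) {
        perror("open /dev/dsp");
        return 1;
    }

    /* Configure sample format, channel count and sampling rate. */
    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
    ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
    ioctl(fd, SNDCTL_DSP_SPEED, &rate);

    /* Blocking write() calls push PCM data and throttle the caller. */
    memset(silence, 0, sizeof(silence));
    for (int i = 0; i < 10; i++)
        write(fd, silence, sizeof(silence));

    close(fd);
    return 0;
}
```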
Audio device drivers almost always rely on DMA. The DMA buffer is divided into a chain of segments, and each DMA operation transfers one segment at a time. Blocking reads and writes through an OSS driver provide flow control by themselves, so no explicit traffic scheduling is needed in user space, but the application must still write (playback) and read (record) data in time to avoid buffer underflow or overflow.
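For comparison, a minimal alsa-lib playback sketch is shown below. It requests a period size (the per-transfer DMA segment discussed above) and relies on blocking snd_pcm_writei() calls for flow control; the "default" device name, the chosen period size, and the silence data are assumptions for illustration (build with -lasound):

```c
#include <alsa/asoundlib.h>
#include <errno.h>
#include <string.h>

#define FRAMES 1024   /* requested period size (frames per DMA segment) */

/*
 * Minimal alsa-lib playback: the hardware buffer is split into periods,
 * and blocking snd_pcm_writei() calls provide flow control.
 */
int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    unsigned int rate = 44100;
    snd_pcm_uframes_t period = FRAMES;
    short buf[FRAMES * 2];               /* one period of stereo S16 samples */

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, 2);
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, 0);
    snd_pcm_hw_params_set_period_size_near(pcm, hw, &period, 0);
    snd_pcm_hw_params(pcm, hw);

    memset(buf, 0, sizeof(buf));         /* silence */
    for (int i = 0; i < 100; i++) {
        /* Blocks until enough buffer space is free for these frames. */
        if (snd_pcm_writei(pcm, buf, FRAMES) == -EPIPE)
            snd_pcm_prepare(pcm);        /* recover from an underrun */
    }

    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}
```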