herolegend :
Is Optical the best cable to use for audio? What is 6/8 channel LPCM?
LPCM stands for Linear Pulse-Code Modulation. It's a method of representing analog waveforms digitally. When applied to audio, the audio amplitude from a particular source taken at a particular instant in time is called a sample. These samples are stored sequentially without any encoding or compression.
For audio recording, the audio is captured (or sampled) by a microphone at a constant rate. For CD audio, this rate is 44,100 times per second, or 44.1kHz. For DVD quality audio (and for Dolby Digital) this rate is 48,000 times per second, or 48kHz. Studio quality is anything higher than that, with common rates of 96kHz and 192kHz. However, since a given sample rate can only capture frequencies up to half its value (the Nyquist limit) and human hearing tops out around 20kHz, sample rates above 48kHz don't capture anything we can actually perceive.
Since human hearing is based on perceived pressure, there's a massive difference in amplitude between hearing a mouse skitter across a floor and hearing a space vehicle lift off the launch pad. Thus, in physics we measure sound intensity on a logarithmic scale using real numbers. However, computers have finite storage and cannot represent an infinite range of real numbers, so an audio sample must somehow be stored digitally using the data formats computers do support, integers and floats. The method of doing this is called quantization, and the "linear" in LPCM means the quantization steps are evenly spaced. It requires two reference amplitudes. The first reference amplitude is zero, or no intensity. The second reference point is a floating "maximum intensity", the maximum signal strength as determined by the sound recording or sound playback device. Thus, the real amplitude of any audible sound source can be fitted to a point between zero and the maximum signal strength of the recording device. Digitizing this real amplitude is done by taking it as a fraction of that maximum and converting it into the same fraction of a digital maximum.
Imagine a real sample in real time that is 25% of the maximum perceivable by the recording device. If the device were to use 8 bit digital values to represent that analog value, then the minimum digital value would be zero (fixed as stated above) and the maximum would be 255 (2^8 - 1). 25% of this would be 64, so the digital value of that particular sample would be 64 if 8 bit samples are used. 256 different intensity levels (0 through 255) do not leave a lot of range for values between a mouse and a rocket, so 8 bit audio samples are rather useless for music and movies and are rarely used today.
If 16 bit samples are used instead, the minimum digital value would be zero, and the maximum would be 65,535. This provides significantly more resolution. The same 25% intensity sample above would take the value 16,384. 16 bit samples are the standard for most recorded audio, but 24 and 32 bit samples may also be used.
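To make that arithmetic concrete, here's a minimal Python sketch of the quantization step; the function name and the 0-to-1 amplitude convention are just my own illustration.

```python
# A minimal sketch of the quantization step described above: an analog
# amplitude, expressed as a fraction of the device's maximum, is mapped
# to an integer between 0 and 2^bits - 1.
def quantize(fraction, bits):
    """Map an amplitude fraction (0.0 to 1.0) onto a digital sample value."""
    max_value = (1 << bits) - 1          # 255 for 8 bit, 65,535 for 16 bit
    return round(fraction * max_value)

print(quantize(0.25, 8))    # 64      (25% of 255)
print(quantize(0.25, 16))   # 16384   (25% of 65,535)
```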
This method of waveform replication is exactly what's used in .wav files. A .wav file is nothing more than a header which contains the sample rate, sample resolution (bits per sample), and number of samples, followed by the samples in sequence. This is a truly lossless way of storing audio because everything is stored exactly as it was received from the analog to digital converter. If we were so inclined, we could take 6 separate .wav files recorded from 6 different point sources and play them back through 6 different speakers and we would get 6 channel surround sound just like Dolby Digital. However, this isn't a good idea; let's find out why.
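As a rough illustration of that structure, the sketch below uses Python's standard wave module to write one second of a 440Hz tone at DVD-style 48kHz / 16 bit mono. One wrinkle glossed over above: 16 bit WAV samples are actually stored as signed integers (-32,768 to 32,767) rather than 0 to 65,535, but the principle is the same. The filename and tone are arbitrary choices for illustration.

```python
# A .wav file: a header describing the format, followed by raw LPCM samples.
import math
import struct
import wave

SAMPLE_RATE = 48_000          # samples per second (DVD / Dolby Digital rate)
AMPLITUDE = 0.25              # 25% of full scale, as in the example above

with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)               # mono
    wav.setsampwidth(2)               # 2 bytes = 16 bits per sample
    wav.setframerate(SAMPLE_RATE)
    frames = bytearray()
    for n in range(SAMPLE_RATE):      # one second of audio
        value = AMPLITUDE * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)
        # 16 bit WAV uses signed samples, so full scale is -32,768..32,767
        frames += struct.pack("<h", int(value * 32767))
    wav.writeframes(bytes(frames))
```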
Using the standard DVD audio specification of 48kHz / 16 bits per sample, a one minute recording of uncompressed audio on a single channel will consume:
48,000 samples per second * 16 bits per sample * 60 seconds per minute / 8 bits per byte = 5,760 kilobytes (a bitrate of 768kbps)
Almost 6 megabytes for one minute of audio on a single channel. If that same method was used to store 6 channels worth of audio for a 90 minute movie, we'd have:
48,000 samples per second * 16 bits per sample * 60 seconds per minute * 90 minutes * 6 channels / 8 bits per byte = 3,110 megabytes (a bitrate of 4.6Mbps). That's over 3 gigabytes of uncompressed audio to cram into a 4.7GB DVD.
Encoding those same 6 channels of uncompressed audio into a Dolby Digital bitstream drops the 48kHz / 16 bit 6 channel bitrate from 4,608kbps (the 4.6Mbps mentioned above) to a maximum of 448kbps (the ceiling in the Dolby Digital specification). That's roughly a 90% reduction in bitrate with minimal loss in quality.
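If you want to check that arithmetic yourself, here's the same back-of-the-envelope math as a short Python snippet (the variable names are just for illustration):

```python
SAMPLE_RATE = 48_000        # samples per second
BITS = 16                   # bits per sample
CHANNELS = 6
MINUTES = 90
DD_MAX_BITRATE = 448_000    # Dolby Digital maximum, in bits per second

one_channel_bps = SAMPLE_RATE * BITS                 # 768,000 bits per second
six_channel_bps = one_channel_bps * CHANNELS         # 4,608,000 bits per second
movie_bytes = six_channel_bps * 60 * MINUTES // 8    # ~3.1 billion bytes

print(f"one channel: {one_channel_bps / 1_000:.0f} kbps")
print(f"six channels: {six_channel_bps / 1_000_000:.3f} Mbps")
print(f"90 minute movie: {movie_bytes / 1_000_000_000:.2f} GB uncompressed")
print(f"Dolby Digital cuts that by about {1 - DD_MAX_BITRATE / six_channel_bps:.0%}")
```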
Now, on to answering your question.
The protocol used to send and receive audio over optical connections (Sony/Philips Digital Interface Format, or S/PDIF) does not allow more than two LPCM channels to be present at the same time. It does, however, allow a Dolby Digital or DTS bitstream to be sent. HDMI has similar features, but allows up to 8 LPCM channels to be present at the same time. So, when connecting a surround system via optical, the source must be in the form of a Dolby Digital or DTS bitstream, either recorded or generated in real time. When connecting a surround system via HDMI no such constraint exists, so the source can be up to 8 channel LPCM, Dolby Digital, or DTS. Most receivers will figure out which format they are receiving and handle it appropriately.
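To put those constraints in one place, here's a tiny sketch encoding the rules from the paragraph above; the function and its names are purely illustrative and not part of any real API.

```python
def fits_over(connection, audio_format, channels):
    """Return True if the given audio can be carried over the connection."""
    if audio_format in ("Dolby Digital", "DTS"):
        return True                                   # encoded bitstreams fit on both
    if audio_format == "LPCM":
        limit = 2 if connection == "optical" else 8   # S/PDIF vs HDMI
        return channels <= limit
    return False

print(fits_over("optical", "LPCM", 6))           # False - needs an encoded bitstream
print(fits_over("optical", "Dolby Digital", 6))  # True
print(fits_over("HDMI", "LPCM", 8))              # True
```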
The other thing I want to touch on briefly is 3.5mm speaker jacks. Each 3.5mm jack carries three wires: one common ground and two signal wires that feed two separate speakers. Left / Right on one, Rear Left / Rear Right on another, Side Left / Side Right on a third, and Center / Sub on a fourth. The signals sent on these wires are simply the 6/8 LPCM channels discussed above, converted back to analog through a digital to analog converter.
Hope that long-winded explanation helped!