For real-world ADCs, is sampling done instantaneously, or ..

Kurt

Archived from groups: rec.audio.high-end

Everyone,

I'm curious about real-world ADCs used for professional-level audio
digitization. Specifically, I'd like to know if such ADCs are able to
determine the amplitude of an analog signal at a specific instant of
time (the sample), or if they simply "average" the amplitude value of
the analog signal for a small, finite time span ("delta t") around the
sample time?

I ask this question because, on another forum, a person is arguing
that current ADCs for high-fidelity audio essentially determine the
average amplitude between samples. Since the analog signal itself is
inherently non-linear (while averaging assumes linearity), this
introduces a small sampling error (different from the quantization
error) leading to audible -- and to some audiophiles harsh sounding --
artifacts at 44.1k sampling. He then argues that only by going to very
high sampling rates, such as 192k, will this effect become inaudible.

Obviously, I'm dubious on this claim, since I would assume audio
engineers designing real-world ADCs are able to effectively sample
"instantaneously" rather than averaging over a fairly long time
interval (such as equal to 1 divided by the sample rate). But I'll
leave it to the ADC experts here to clarify the gentleman's particular
argument.

Thanks.

Kurt
 
Guest

In article <M5wMc.152489$a24.20066@attbi_s03>,
Kurt <kurt_fonseca@hotmail.com> wrote:

> Everyone,
>
> I'm curious about real-world ADC used for professional-level audio
> digitization. Specifically, I'd like to know if such ADCs are able to
> determine the amplitude of an analog signal at a specific instant of
> time (the sample), or if they simply "average" the amplitude value of
> the analog signal for a small, finite time span ("delta t") around the
> sample time?

It's sampled at a specific instant in time.

> I ask this question because, on another forum, a person is arguing
> that current ADCs for high-fidelity audio essentially determine the
> average amplitude between samples. Since the analog signal itself is
> inherently non-linear

This is a somewhat odd statement (on his part I assume?). Signals
aren't linear or non-linear. They just are. Circuits may have
nonlinear response to signals, but the term linear just doesn't apply to
the signals themselves.

> (while averaging assumes linearity),

Another odd one. I can't see how one can say that averaging assumes
linearity.

> this
> introduces a small sampling error (different from the quantization
> error) leading to audible -- and to some audiophiles harsh sounding --
> artifacts at 44.1k sampling. He then argues that only by going to very
> high sampling rates, such as 192k, will this effect become inaudible.
>
> Obviously, I'm dubious on this claim, since I would assume audio
> engineers designing real-world ADCs are able to effectively sample
> "instantaneously" rather than averaging over a fairly long time
> interval (such as equal to 1 divided by the sample rate). But I'll
> leave it to the ADC experts here to clarify the gentleman's particular
> argument.

You are right to be dubious because most if not all ADCs have a
"sample-and-hold" circuit, which this fellow has apparently never heard of.
The very purpose of the circuit is to deal with the possibility of the
signal's analog level changing during an ADC conversion (which usually
takes enough time that the possibility is very real). Or, put another
way, it's there to make sampling awfully close to the sampling theory
ideal of taking a snapshot of the signal level at a specific instant in
time.

Sample-and-hold circuits are usually something along the lines of a
small on-die capacitor which may be selectively connected to either the
ADC's analog input pin or the ADC conversion circuit using low
on-resistance FETs or other switch-like circuit elements. Sampling is
performed by connecting the capacitor to the signal long enough for the
cap to charge and match the signal's voltage. The capacitor must be
sized such that the circuit's RC time constant is small enough to avoid
low-pass filtering the signal. In other words, it's usually quite
small, which suits, because it's hard to build large capacitors on chips.

To begin the actual conversion, the cap is disconnected from the signal
and connected to the ADC conversion circuit. The moment of
disconnection is the instant in time where the signal is sampled. Since
the conversion circuits typically behave a lot like op-amps in that they
do not need to draw any noticeable current from the input signal, the
capacitor will not discharge much during the conversion, holding a
stable voltage. (Obviously care must also be taken in the chip design
to reduce the sample capacitor's leakage current to the minimum
possible.)
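To put a rough number on how small that RC time constant has to be, here's a
quick back-of-the-envelope Python sketch. The bit depth and acquisition
window below are illustrative assumptions on my part, not figures from any
specific part:

```python
import math

def settling_time_constants(n_bits):
    # Number of RC time constants the hold cap needs to charge to within
    # 1/2 LSB of an n-bit converter: solve e**(-k) < 2**-(n + 1) for k.
    return (n_bits + 1) * math.log(2)

def max_rc(t_acquire_s, n_bits):
    # Largest RC product that still settles inside the acquisition window.
    return t_acquire_s / settling_time_constants(n_bits)

# Illustrative case: a 16-bit converter given half of a 44.1 kHz sample
# period to track the input before the hold switch opens.
t_acq = 0.5 / 44100
rc_limit = max_rc(t_acq, 16)
print(f"{settling_time_constants(16):.1f} time constants to settle")
print(f"RC must be under {rc_limit * 1e9:.0f} ns")
```

With those numbers the RC product has to stay under about a microsecond,
which is exactly why the sample cap and switch on-resistance are kept tiny.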

Now, I'm not really an ADC expert. But I have worked with ADCs enough
to say that it's almost incomprehensible that anybody would produce any
kind of ADC without a sample-and-hold circuit. They're present even on
ancient slow 8-bit ADCs. 44.1 kHz audio ADCs should be no exception.
An ADC without a sample-and-hold circuit would fit into the category of
"broken by design".

--
Tim
 
Guest

On Sat, 24 Jul 2004 16:27:24 GMT, Kurt <kurt_fonseca@hotmail.com>
wrote:

>Everyone,
>
>I'm curious about real-world ADC used for professional-level audio
>digitization. Specifically, I'd like to know if such ADCs are able to
>determine the amplitude of an analog signal at a specific instant of
>time (the sample), or if they simply "average" the amplitude value of
>the analog signal for a small, finite time span ("delta t") around the
>sample time?

All AD converters manufactured in the last decade use a very high
sampling rate, at least 64 times the base sampling rate. Even my
PC sound card samples at 6.144 MHz, and the stream of digits then
gets down-converted to 96 kHz.
>
>I ask this question because, on another forum, a person is arguing
>that current ADCs for high-fidelity audio essentially determine the
>average amplitude between samples.

In order to reduce the sample rate, some low-pass filtering must be
applied to the data stream to avoid aliasing. Low-pass filtering
always results in some kind of averaging, but that differs from
calculating a "statistical" average.
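To make "averaging is a crude low-pass filter" concrete, here's a small
Python sketch of the frequency response of an m-point moving average, the
simplest possible decimation filter. The rates are illustrative; real
converters use flatter multi-stage FIR filters, but the principle is the same:

```python
import math

def boxcar_gain(f_hz, fs_hz, m):
    # Magnitude response of an m-point moving average at frequency f,
    # sampled at fs: |sin(m*x) / (m*sin(x))| with x = pi*f/fs.
    x = math.pi * f_hz / fs_hz
    if x == 0.0:
        return 1.0
    return abs(math.sin(m * x) / (m * math.sin(x)))

fs_in = 64 * 44100  # 64x-oversampled input rate (illustrative)
for f in (1_000, 20_000, 44_100):
    print(f"{f:6d} Hz: gain {boxcar_gain(f, fs_in, 64):.4f}")
```

The null at 44.1 kHz is no accident: the frequencies that would fold onto DC
after decimating to 44.1 kHz sit exactly on the filter's zeros. The several-dB
droop at 20 kHz is why a plain average by itself isn't good enough, and why
real parts follow it with sharper filter stages.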

Norbert
 
Guest

On Mon, 26 Jul 2004 05:56:43 GMT, Norbert Hahn
<hahn@hrz.tu-darmstadt.de> wrote:

>On Sat, 24 Jul 2004 16:27:24 GMT, Kurt <kurt_fonseca@hotmail.com>
>wrote:
>
>>Everyone,
>>
>>I'm curious about real-world ADC used for professional-level audio
>>digitization. Specifically, I'd like to know if such ADCs are able to
>>determine the amplitude of an analog signal at a specific instant of
>>time (the sample), or if they simply "average" the amplitude value of
>>the analog signal for a small, finite time span ("delta t") around the
>>sample time?

All conventional ADCs take a 'snapshot' at a precise moment in time.

>All AD converters manufactured in the last decade use a very high
>sampling rate, at least 64 times the base sampling rate. Even my
>PC sound card samples at 6.144 MHz and the stream of digits will
>get down-converted to 96 kHz.

Note that this means that they take at least 64 precise 'snapshots' in
between two output sample values.

>>I ask this question because, on another forum, a person is arguing
>>that current ADCs for high-fidelity audio essentially determine the
>>average amplitude between samples.
>
>In order to reduce the sample rate some low pass filtering must be
>applied to the data stream to avoid aliasing. Low pass filtering
>always results in some kind of averaging, but that differs from
>calculating a "statistical" average.

In the case of high-oversampling converters however, the output sample
is indeed an average of the 64-plus input samples which are taken for
every output sample. I'm not sure however, why anyone would consider
that to be a problem. It is mathematically the correct thing to do.
--

Stewart Pinkerton | Music is Art - Audio is Engineering
 

Kurt


Stewart Pinkerton wrote:
> Norbert Hahn wrote:
>> Kurt wrote:

>>> I'm curious about real-world ADC used for professional-level audio
>>> digitization. Specifically, I'd like to know if such ADCs are able
>>> to determine the amplitude of an analog signal at a specific
>>> instant of time (the sample), or if they simply "average" the
>>> amplitude value of the analog signal for a small, finite time span
>>> ("delta t") around the sample time?

> All conventional ADCs take a 'snapshot' at a precise moment in time.


>> All AD converters manufactured in the last decade use a very high
>> sampling rate, at least 64 times the base sampling rate. Even my
>> PC sound card samples at 6.144 MHz and the stream of digits will
>> get down converted to 96 kHz.

> Note that this means that they take at least 64 precise 'snapshots' in
> between two output sample values.


>>> I ask this question because, on another forum, a person is arguing
>>> that current ADCs for high-fidelity audio essentially determine
>>> the average amplitude between samples.

>> In order to reduce the sample rate some low pass filtering must be
>> applied to the data stream to avoid aliasing. Low pass filtering
>> always results in some kind of averaging, but that differs from
>> calculating a "statistical" average.

> In the case of high-oversampling converters however, the output
> sample is indeed an average of the 64-plus input samples which are
> taken for every output sample. I'm not sure however, why anyone
> would consider that to be a problem. It is mathematically the
> correct thing to do.

[Playing Devil's Advocate role here.]

Huh?

As I see it, averaging the 64-plus samples which follow a non-linear
curve (amplitude versus time is non-linear in audio signals) will
yield a sampling value, but that sampling value will not correspond
to the instantaneous value at the exact midpoint of the time range --
it will reflect some value away from the exact midpoint in time. This
then leads to an error. How large the error is depends upon the
delta-t represented by the averaged samples and how close to linear
the amplitude is in that delta-t. (Looking at this another way, the
average of a set of samples is not necessarily the same as the
instantaneous value at the midpoint, e.g., try determining the
amplitude of the crest of a sine wave by averaging n samples over
delta-t on both sides of the crest.)
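For what it's worth, the crest case is easy to simulate. The Python sketch
below (illustrative rates, my own toy model) averages 64 consecutive samples
of a sine centered on its crest. The result does sit below the true peak,
but the shortfall is a fixed, frequency-dependent gain droop of the
averaging filter, not a random error:

```python
import math

def crest_average(f_hz, fs_in_hz, m):
    # Average m consecutive samples of sin(2*pi*f*t), with the sampling
    # window centered on the crest at t = 1/(4f).
    t_crest = 1.0 / (4.0 * f_hz)
    dt = 1.0 / fs_in_hz
    times = (t_crest + (k - (m - 1) / 2.0) * dt for k in range(m))
    return sum(math.sin(2.0 * math.pi * f_hz * t) for t in times) / m

fs_in = 64 * 44100  # 64x oversampling of 44.1 kHz (illustrative)
print(f"1 kHz crest:  {crest_average(1_000, fs_in, 64):.5f}")   # ~0.1% low
print(f"20 kHz crest: {crest_average(20_000, fs_in, 64):.5f}")  # ~3 dB low
```

Because that droop is deterministic and known in advance, a designer can
compensate for it in the decimation filter rather than using a plain average.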

The other question is why the need for high-oversampling anyway? If
it is possible to nearly instantaneously sample at any point, why
not just use that value corresponding to the final sampling time?
Or is this oversampling done to minimize random hardware errors?

The last question: Is there a spec which, for any ADC device, gives
an estimate of the error of sampling (besides quantization error)
based on the various real-world effects in trying to determine the
instantaneous amplitude of a signal at any moment in time? In
addition to the linear/non-linear averaging (as noted above), there
are no doubt other sources of error (e.g., the sample-and-hold
capacitor, during charging, cannot exactly follow the voltage --
there is a small lag there.) I find this "faith" that an ADC will
determine the exact value (within bit-roundoff-error) of a signal at
any instant of time to be somewhat troubling. If the designers of ADC
considered all the sources of errors and minimized them by clever
design to be below the bit-roundoff-error (quantization error), then I
have no difficulty. But in what I've heard so far, I do not get warm
fuzzies that anyone here knows for sure that professional-grade ADCs
get the sampling error due to circuit design below that of the
unavoidable bit-roundoff-error. How do we know that a sample which
should be +123456 (when rounded off, assuming 24-bit here) will not be
sampled instead as +123458 due to these various real-world hardware
errors?

Again, just playing Devil's Advocate here.

Kurt
 

marcus


Most audio ADCs (at relatively low sampling rates) are based on the
oversampling sigma-delta structure. There are no explicit
sample-and-holds, but many devices are switched-capacitor circuits,
involving implicit sampling.

Most other ADCs are preceded by SHs. When the clock gives the command
to freeze the input, the actual sampled instant is often slightly
different, introducing the aperture error. The ADC core effectively
sees a held DC signal.

There used to be separate SH ICs (like the LF398), but nowadays higher
integration means SHs are combined with ADC cores and play a critical
role (being the very first stage). SH-related errors are fully
accounted for when measuring ADC performance.
 
Guest

Kurt <kurt_fonseca@hotmail.com> wrote:

[snip]

>The last question: Is there a spec which, for any ADC device, gives
>an estimate of the error of sampling (besides quantization error)
>based on the various real-world effects in trying to determine the
>instantaneous amplitude of a signal at any moment in time? In
>addition to the linear/non-linear averaging (as noted above), there
>are no doubt other sources of error (e.g., the sample-and-hold
>capacitor, during charging, cannot exactly follow the voltage --
>there is a small lag there.)

There are a couple of different types of error for the designer of an
ADC to consider. You may have a look at

http://www.dcsltd.co.uk/Adcs.htm

that describes how dCS has found a solution for *one* error. A more
in-depth discussion can be found in chapter 4.3 in

http://www.hfm-detmold.de/texts/de/hfm/eti/projekte/diplomarbeiten/2004/dsdpcm/xdslindex.htm

This is a diploma thesis on the perception of differences between
PCM at 192/24 and DSD.

An older diploma thesis dealing with lower sample rates is

http://www.hfm-detmold.de/texts/de/hfm/eti/projekte/diplomarbeiten/1998/seite1.html

In depth discussion of the devices used can be found in chapter 4.

HTH
Norbert
 
Guest

On 27 Jul 2004 23:57:24 GMT, Kurt <kurt_fonseca@hotmail.com> wrote:

>As I see it, averaging the 64-plus samples which follow a non-linear
>curve (amplitude versus time is non-linear in audio signals) will
>yield a sampling value, but that sampling value will not correspond
>to the instantaneous value at the exact midpoint of the time range --
>it will reflect some value away from the exact midpoint in time. This
>then leads to an error. How much the error is depends upon the
>delta-t represented by the averaged samples and now close to linear
>the amplitude is in that delta-t. (Looking at this another way, an
>"average" is not necessarily the same as a "mean", e.g., let's
>determine the amplitude of the crest of a sine wave using this
>n-samples over delta-t on both sides of the crest.)

Sounds convincing intuitively, but you'd need to find someone with
better maths than mine to prove the point, since in practice it *does*
work, and high-oversampling converters can be shown to have
vanishingly low error levels in the top octave, where you'd expect
such problems to be very obvious.

>The other question is why the need for high-oversampling anyway? If
>it is possible to nearly instantaneously sample at any point, why
>not just use that value corresponding to the final sampling time?
>Or is this oversampling done to minimize random hardware errors?

It is indeed done to minimise random hardware errors. Basically, we
can measure time *much* more accurately than we can measure amplitude,
so a 5-bit converter (only needs to be 5% accurate) oversampled x64
and noise-shaped back to 20-bit resolution in the audio band, is
significantly more accurate than a straight 20-bit converter (needs to
be 0.0001% accurate!) can ever be.
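A toy version of that trade-off is easy to demonstrate. The Python sketch
below is a first-order, 1-bit sigma-delta modulator -- far cruder than any
real converter, and the loop details are my own simplification -- but it
shows the idea: each output "sample" is only plus or minus one, yet the
running average of the bitstream tracks the input level to high precision:

```python
def sigma_delta_1bit(xs):
    # First-order sigma-delta modulator: integrate the error between the
    # input and the fed-back 1-bit output, then quantize the integrator's
    # sign to produce the next output bit.
    integrator = 0.0
    feedback = 0.0
    bits = []
    for x in xs:
        integrator += x - feedback
        bit = 1.0 if integrator >= 0.0 else -1.0
        bits.append(bit)
        feedback = bit
    return bits

# A constant input of 0.25: every individual output bit is wildly "wrong"
# on its own, but the mean of the bitstream converges on the input value.
bits = sigma_delta_1bit([0.25] * 4096)
print(f"mean of bitstream: {sum(bits) / len(bits):.4f}")
```

In a real part the bitstream then goes through the noise-shaping and
decimation filters that turn this coarse, fast stream into accurate
samples at the audio rate.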

>The last question: Is there a spec which, for any ADC device, gives
>an estimate of the error of sampling (besides quantization error)
>based on the various real-world effects in trying to determine the
>instantaneous amplitude of a signal at any moment in time?

You can certainly measure its non-linearity and monotonicity.

> In
>addition to the linear/non-linear averaging (as noted above), there
>are no doubt other sources of error (e.g., the sample-and-hold
>capacitor, during charging, cannot exactly follow the voltage --
>there is a small lag there.)

Yes, that is still the bane of professional engineers in the
measurement field, but high-oversampled single-bit converters have
much less of a problem with pedestal voltage, and it's not such a big
deal anyway with AC signals.

> I find this "faith" that ADC will
>determine the exact value (within bit-roundoff-error) of a signal at
>any instant of time to be somewhat troubling.

It's not a matter of faith - these devices do get *tested* when
they're being developed, y'know!

>If the designers of ADC
>considered all the sources of errors and minimized them by clever
>design to be below the bit-roundoff-error (quantization error), then I
>have no difficulty.

They do - that's what they get paid for. Note however that you can't
achieve a full 24-bit dynamic range, due to thermal noise. Devices
such as the dCS RingDAC can however be shown to have true 24-bit
*linearity* by using narrow-band measurements.

> But in what I've heard so far, I do not get warm
>fuzzies that anyone here knows for sure that professional-grade ADCs
>get the sampling error due to circuit design below that of the
>unavoidable bit-roundoff-error.

Ask the technical departments of Prism Sound, Apogee and dCS. These
guys are generally very forthcoming about technical details.

> How do we know that a sample which
>should be +123456 (when rounded off, assuming 24-bit here) will not be
>sampled instead as +123458 due to these various real-world hardware
>errors?

We measure it...........................

BTW, I'd be surprised if *any* real-world 21st century audio ADC was
only 16-bit accurate, as you imply above. Most can achieve 18-19 bit
accuracy with ease - which is more than enough for audio.

>Again, just playing Devil's Advocate here.

Does no harm to check these things!
--

Stewart Pinkerton | Music is Art - Audio is Engineering
 
Guest

On 7/29/04 1:21 AM, in article 4P%Nc.46126$8_6.25263@attbi_s04, "Stewart
Pinkerton" <patent3@dircon.co.uk> wrote:

> Sounds convincing intuitively, but you'd need to find someone with
> better maths than mine to prove the point, since in practice it *does*
> work, and high-oversampling converters can be shown to have
> vanishingly low error levels in the top octave, where you'd expect
> such problems to be very obvious.

I figure that a bad recording would contribute far more dissonance than even
a typical CD player. I have found that a good recording will shine through
even on a CD player that is designed for power conservation rather than high
fidelity.
 

ban


Norbert Hahn wrote:
>
> http://www.dcsltd.co.uk/Adcs.htm
>
> that describes how dCS has found a solution for *one* error. A more
> in-depth discussion can be found in chapter 4.3 in
>
> http://www.hfm-detmold.de/texts/de/hfm/eti/projekte/diplomarbeiten/2004/dsdpcm/xdslindex.htm
>
> This is a diploma thesis on the perception of differences between
> PCM at 196/24 and DSD.
>
> An older diploma thesis dealing with lower samples rates is
>
> http://www.hfm-detmold.de/texts/de/hfm/eti/projekte/diplomarbeiten/1998/seite1.html
>
> In depth discussion of the devices used can be found in chapter 4.
>
Very interesting results indeed -- if you are fluent in German. The latter
link is about whether differences between analog/48k/96k and 44.1/16 can be
heard. The very finest equipment and DBT have been used. BTW, here you find
how to scientifically set up a DBT.
The very finest differences could only be heard with Stax headphones; the
ABX test is sensitive enough to reveal very fine details. These guys found
out that 44.1/16 can be clearly distinguished from analog or 48/24. But not
48/24; it seems transparent. Again, the strange thing is that 96/24 is
worse than 48/24, and could be recognized only by very few.

They also used only very trained and young listeners, and something like a
"golden ear" seems to exist: one guy out of 150 or so test persons could
even 100% distinguish between SACD and PCM 192/24. Otherwise those two
formats were absolutely identical to the other 146 test persons.

I hope this is translated one day, so all of us can make up our minds
ourselves.
THX for the links, Norbert. BTW, do you study in Detmold?

--
ciao Ban
Bordighera, Italy
 
Guest

On Sat, 31 Jul 2004 03:33:10 GMT, "Ban" <bansuri@web.de> wrote:

>Very interesting results indeed, ---if you are fluent in German.

I couldn't find anything else in netland. Thank you very much for this
summary.

>The latter
>link is about whether differences between analog/48k/96k and 44.1/16 can be
>heard. The very finest equipment and DBT have been used.

Finest equipment indeed, except for the 44.1/16 device. This was a
TASCAM DA-30 mkII DAT deck; it had a lot more noise than the rest of
the chain and was thus easy to distinguish from the other equipment.
It was chosen on purpose, to have a long-term pro/studio standard
for 44.1/16.

>BTW here you find
>how to scientifically set up a DBT.
>The very finest differences could only be heard with Stax headphones. Well,
>the ABX-test is so sensitive to reveal very fine details. These guys found
>out, 44.1/16 can be clearly distinguished from analog or 48/24. But not
>48/24, it seems transparent. Again the strange thing is 96/24 is worse than
>48/24, and could be recognized by very few.
>
>They also use only very trained and young listeners, and something like a
>"golden ear" seems to exist, one guy out of 150 or so test persons could
>even 100% distinguish between SACD and PCM 192/24. Otherwise those two
>formats were absolutely identical to 146 test persons.
>
>I hope this is translated one day, so all of us guys can make up our minds
>ourselves.
>THX for the links Norbert, BTW do you study in Detmold?

Audio recording is my hobby. I live about 140 mi south of Detmold.

Norbert