On 10/11/01 9:35 AM, "Harvey Newstrom" <mail@HarveyNewstrom.com> wrote:
>
> Also, counter-intuitively, adding noise actually makes it *easier* to
> eliminate the noise. The more noise there is, the more statistically
> predictable it becomes. Statistical analysis works better the more samples
> you have. Lots of static over a longer period of time actually makes it
> easier to eliminate the static.
This isn't quite right. Adding noise increases the noise floor in exchange
for reducing quantization distortion. The improvement in signal quality is
more apparent than real. The human brain has an easier time rejecting noise
than correcting quantization distortion, so adding noise to mask the
distortion is a cheap solution. In practice, sufficient bit depth will render
quantization distortion inaudible, and in many cases there is no good reason
not to use sufficient bit depth. Adding noise is therefore only valuable when
the dynamic range of a signal is being reduced to a level where the
quantization would become perceptible. The noise serves only to mask the
quantization distortion; it does not correct it.
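To make this concrete, here is a rough numpy sketch (the names are mine, and
I'm assuming TPDF dither, the common choice). It quantizes a 1 kHz tone
sitting below one quantization step: without dither the quantizer erases the
tone entirely, turning all of it into signal-correlated error; with dither
the tone survives, at the cost of a higher total error floor:

    import numpy as np

    fs = 48000                                     # 1 second at 48 kHz
    t = np.arange(fs) / fs
    step = 2.0 / 2**16                             # one 16-bit quantization step
    x = 0.4 * step * np.sin(2 * np.pi * 1000 * t)  # test tone below 1 LSB

    def quantize(sig):
        return np.round(sig / step) * step

    # Triangular-PDF dither: two uniforms summed, +/- 1 LSB peak, zero mean.
    tpdf = (np.random.uniform(-0.5, 0.5, fs) +
            np.random.uniform(-0.5, 0.5, fs)) * step

    for name, y in (("undithered", quantize(x)),
                    ("dithered", quantize(x + tpdf))):
        tone = 2 * abs(np.fft.rfft(y)[1000]) / fs  # recovered 1 kHz amplitude
        err = np.sqrt(np.mean((y - x) ** 2))       # total error RMS
        print(f"{name:10s}  tone = {tone:.2e}  error RMS = {err:.2e}")

The dithered version reports a *higher* error RMS, which is exactly the
trade described above: more total noise, but none of it correlated with the
signal.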
A concrete example is the audio CD, which uses 16-bit encoding. At 16 bits,
there is enough quantization distortion that the human ear can detect it.
However, most recording these days is actually done at 24 bits, which has
sufficient dynamic range that there is no detectable quantization; at 24
bits, the theoretical dynamic range runs into the practical noise floor of
our environment at the atomic level. Therefore, the 24-bit signals are
"dithered" (the technical term) down to 16 bits, minimizing the apparent
quantization distortion by adding noise that our brains can automatically
filter out. The real effect is a reduction of the effective dynamic range of
the signal, but the apparent effect is a reduction of quantization
distortion.
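As a sketch of that 24-to-16 step (the function name is mine, and real
mastering dithers usually add noise shaping on top of plain TPDF), taking
samples as 24-bit integer values:

    import numpy as np

    def dither_24_to_16(samples24):
        lsb = 256.0  # one 16-bit LSB spans 2^(24-16) = 256 24-bit units
        # Zero-mean TPDF dither, +/- one 16-bit LSB peak.
        noise = (np.random.uniform(-0.5, 0.5, np.shape(samples24)) +
                 np.random.uniform(-0.5, 0.5, np.shape(samples24))) * lsb
        # Add the noise, then re-quantize onto the 16-bit grid.
        out = np.round((samples24 + noise) / lsb)
        return np.clip(out, -32768, 32767).astype(np.int16)

Truncating without the added noise would commit the bottom eight bits of
error in a signal-correlated way; the dither trades that for a flat,
featureless hiss.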
Note that this is a problem specific to digital recording of analog signals.
Analog recordings already have a high noise floor (even high-end analog
systems are equivalent to only about 12 bits of digital) and are therefore
dithered by nature; the signal gently disappears beneath a sea of noise.
Digital conversion has a much lower "self-noise" floor, so quantization
distortion is more noticeable: pretty much by definition, the system doesn't
produce enough noise of its own to mask the quantization artifacts at the
lower extremes of its dynamic range. Dithering makes the digital system
behave like an analog system with respect to noise, while still retaining
the extremely high S/N that digital systems can support.
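For scale, the usual rule of thumb for an ideal N-bit converter driven by a
full-scale sine puts the theoretical dynamic range at about
6.02*N + 1.76 dB:

    # Theoretical dynamic range of an ideal N-bit quantizer (full-scale sine).
    for bits in (12, 16, 24):
        print(f"{bits:2d} bits: {6.02 * bits + 1.76:6.1f} dB")
    # 12 bits:  74.0 dB  (roughly the best analog chains, per the above)
    # 16 bits:  98.1 dB  (audio CD)
    # 24 bits: 146.2 dB  (past any practical environmental noise floor)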
-James Rogers
jamesr@best.com