From: James Rogers (jamesr@best.com)
Date: Sun Oct 14 2001 - 15:20:35 MDT
On 10/12/01 6:52 AM, "Harvey Newstrom" <mail@HarveyNewstrom.com> wrote:
> James Rogers wrote,
>>
>> This isn't quite right. Adding noise increases the noise floor in
>> exchange for reducing quantization distortion. The improvement in
>> signal quality is more apparent than real. The human brain has an
>> easier time rejecting noise than correcting quantization distortion,
>> so adding noise to mask the distortion is a cheap solution.
>
> Your comments are quite right when addressing the ability of the human brain
> to perceive the distortions. However, I was discussing a computer's ability
> to statistically analyze the low-order bits that are imperceptible to
> humans.
>
> Human brains will indeed ignore additional noise, because they tend to
> mask it out. The more noise there is, the closer it falls to a
> predictable curve, and the easier it is to detect and ignore. The human
> brain does this automatically, and computers can be programmed to do
> the same. The additional noise gets filtered out by the human brain
> precisely because it becomes easier to detect and isolate.
Actually, I'm having difficulty fitting the "adding noise makes it
easier to eliminate noise" concept into the algorithm families I'm
familiar with that deal with exactly these problems. There are
algorithms like that, but you can't use them in the digital domain (at
least not in the way you are describing), and they don't add any
information to the source signal. Some links and references would be
helpful, because what you are suggesting seems contrary to most of the
theory and "best practices" I know, but you haven't really given me
enough information to judge one way or the other. It sounds like you
are describing some type of digital-domain super-sampling technique,
which is puzzling because I don't see how it would provide what you
are claiming.
Accurate noise reduction algorithms don't use statistical methods in
any meaningful sense. The very best digital noise reduction algorithms
add no noise at all and are highly adaptive -- they don't require any
real history. There are super-sampling techniques that inject noise to
lower the effective noise floor of a signal, but these apply only to
the analog-to-digital conversion process and are really a substitute
for using higher-resolution converters (not much of a factor these
days). Other than dithering, which masks quantization distortion with
noise, I can't think of a purely digital algorithm that reduces noise
by adding noise. Adding noise certainly wouldn't make good noise
reduction algorithms work any better.
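For concreteness, here is roughly what dithered requantization looks
like. This is only a minimal sketch in Python/NumPy under my own
assumptions -- the function name, the +/- 1 LSB TPDF dither, and the
8-bit example are mine, purely for illustration, and clipping at full
scale is ignored:

import numpy as np

def requantize(x, bits, dither=True):
    # Requantize samples in [-1.0, 1.0) to `bits` bits.  With
    # dither=True, TPDF (triangular) dither of +/- 1 LSB peak is
    # added before rounding, so the quantization error becomes
    # signal-independent noise (a higher noise floor) instead of
    # distortion correlated with the signal.
    q = 2.0 ** (bits - 1)              # quantization steps per unit
    if dither:
        # TPDF dither: sum of two uniform variables, each +/- 0.5 LSB
        x = x + (np.random.uniform(-0.5, 0.5, x.shape) +
                 np.random.uniform(-0.5, 0.5, x.shape)) / q
    return np.round(x * q) / q         # round to the nearest step

# A low-level tone makes the trade-off obvious at, say, 8 bits:
t = np.arange(48000) / 48000.0
x = 0.01 * np.sin(2 * np.pi * 1000.0 * t)
undithered = requantize(x, 8, dither=False)  # harmonics of the input
dithered = requantize(x, 8, dither=True)     # featureless hiss

The total error energy is larger in the dithered version, but it is
featureless noise rather than harmonics of the input -- which is
exactly the "more apparent than real" improvement described above, and
the one place where adding noise is standard practice.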
Cheers,
-James Rogers
jamesr@best.com