James Rogers wrote:
> I think you are a bit off-base with respect to the technique and the
> technology..
>
> First of all, the correct way to do this is *not* to compress the spectrum
> to fit within the human range. With regard to this technique, the problems
> mentioned above are valid. The correct technique is to overlay our natural
> frequency window with shifted windows from other parts of the spectrum.
> Audio spectrum overlays are a lot more processing intensive, but they are
> much more effective at representing information outside the human range,
> and do so with little or no loss of information.
Yes, I've thought of this. It has some potential for audio, as you correctly point out, because we have a robust ability to filter out unwanted audio data and concentrate on the important parts. You can't do that with video, of course, but you could set up a system that lets you switch your vision between different slices of the spectrum. However, this kind of approach is still severely limited in the amount of enhancement it can support.
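To make the overlay idea concrete, here is a minimal sketch of how one shifted window might be mixed into the audible band. It assumes a 96 kHz-capable microphone feed in a numpy array; the band edges, shift amount, and mix level are arbitrary illustrations, not anyone's actual design:

    import numpy as np
    from scipy.signal import butter, sosfilt, hilbert

    def overlay_ultrasonic(x, fs=96000, band=(20000.0, 40000.0),
                           shift_hz=-18000.0, mix=0.5):
        # Isolate one ultrasonic window with a band-pass filter.
        sos = butter(6, band, btype='bandpass', fs=fs, output='sos')
        ultra = sosfilt(sos, x)
        # Heterodyne it down via the analytic signal (a single-sideband
        # shift), so the 20-40 kHz window lands at roughly 2-22 kHz.
        t = np.arange(len(x)) / fs
        shifted = np.real(hilbert(ultra) * np.exp(2j * np.pi * shift_hz * t))
        # Overlay the shifted window on top of the unmodified audible signal.
        return x + mix * shifted

The listener then has to rely on the same source-separation ability mentioned above to pick the shifted material out of the natural signal, which is exactly where the approach runs out of headroom as you stack on more windows.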
With either audio or video we can easily build sensors that will collect thousands of times as much data as our natural sensors. With audio you can increase the range of frequency response by more than an order of magnitude, increase sensitivity by several orders of magnitude, and implement selective amplification and roving remote microphones. With video we can get vision of varying resolution all the way from radio frequencies to soft x-rays, in all directions, with telescopic and microscopic enhancements for large parts of the spectrum. In both cases, no possible mapping scheme is going to let you keep up with more than a small fraction of that data.
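As a rough sense of scale for the audio case alone (round numbers, assumed rather than measured):

    # Nyquist-rate samples per second, one channel each
    natural_hearing = 2 * 20_000      # ~20 Hz - 20 kHz natural window
    ultrasonic_mic  = 2 * 200_000     # a 200 kHz ultrasonic front end
    print(ultrasonic_mic / natural_hearing)   # -> 10.0, before extra channels,
                                              #    amplification, or remote mics

and the video case is far worse, since every additional spectral slice and viewing direction multiplies the pixel stream again.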
Ultimately, if you want the best senses that technology can build, you will have to resort to creating new brain structures to process the data. Otherwise you aren't significantly better off than someone with a good wearable computer and a bunch of cameras & microphones.
Billy Brown, MCSE+I
bbrown@conemsco.com