At 01:15 AM 5/19/98 +0000, Nicholas Bostrom wrote:
>If we are thinking of AI to interpret the output of all these
>cameras, then it will probably take at least ten, fifteen years
>before that is possible on a large scale.
It is presently feasible to identify faces, out of a set of a few dozen, in
real time, with a device costing a few hundred dollars. The Nielsen
company uses it to count television ratings with an inconspicuous set-top
box. Using such technology to retrospectively pick out who went where from
a set of surveillance camera tapes is nearly feasible today. Nobody's
bothered to develop it because it is more cheaply accomplished with humans.
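(For the curious, here is a rough sketch of how that kind of small-gallery
matching can be done. This is not the Sarnoff design; the crop size and the
match threshold below are placeholder assumptions, and real systems do a lot
more work to find and align the face first.)

import numpy as np

FACE_SHAPE = (64, 64)          # assumed size of an aligned grayscale face crop
MATCH_THRESHOLD = 0.35         # assumed rejection threshold; tune on real data

def normalize(face):
    """Flatten a face crop, remove its mean, and scale it to unit length."""
    v = face.astype(np.float64).ravel()
    v -= v.mean()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def build_gallery(named_faces):
    """named_faces: dict mapping a person's name to a 64x64 grayscale array."""
    names = list(named_faces)
    vectors = np.stack([normalize(named_faces[n]) for n in names])
    return names, vectors

def identify(face, names, vectors):
    """Return the closest gallery name, or None if nothing is close enough."""
    v = normalize(face)
    dists = np.linalg.norm(vectors - v, axis=1)
    best = int(np.argmin(dists))
    return names[best] if dists[best] < MATCH_THRESHOLD else None

With only a few dozen people in the gallery, a brute-force nearest-neighbor
comparison like this runs comfortably in real time on cheap hardware.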
The obstacle to universal surveillance right now is the cost of (a) putting
cameras on buildings, and (b) storing all the video from these cameras in
an accessible form for later analysis. Videotape is still too expensive.
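Some back-of-the-envelope arithmetic shows why storage is the bottleneck.
Every number below (camera count, frame size, frame rate, compression ratio)
is an illustrative assumption, not a figure from any real deployment:

cameras = 10_000
bytes_per_frame = 640 * 480          # assumed 8-bit grayscale frame
fps = 5                              # assumed low surveillance frame rate
compression = 20                     # assumed compression ratio
seconds_per_day = 24 * 3600

daily_bytes = cameras * bytes_per_frame * fps * seconds_per_day / compression
print(f"{daily_bytes / 1e12:.1f} TB per day")   # tens of terabytes per day

Tens of terabytes per day for a modest city-sized deployment, which is why
nobody is keeping it all around for later analysis yet.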
Here's a neat application: the Tokyo traffic department is installing
cameras looking at intersections. The video is continuously streamed onto
a disk with enough room for 14.4 seconds of video. When a microphone attached to
the camera hears the distinctive noises of screeching brakes followed by
crumpling metal, the camera records another 7.2 seconds of video, then
stops recording. The result: a video record of the collision from 7.2
seconds before to 7.2 seconds after.
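In other words, a pre/post-trigger ring buffer. Here is a minimal sketch of
that scheme; the 14.4/7.2-second split is from the article, while the frame
rate and the rest of the plumbing are assumed:

from collections import deque

FPS = 10                     # assumed frame rate
BUFFER_SECONDS = 14.4        # total clip length (from the article)
POST_SECONDS = 7.2           # time recorded after the trigger

class CrashRecorder:
    def __init__(self):
        self.buffer = deque(maxlen=int(BUFFER_SECONDS * FPS))
        self.post_frames_left = None   # None means not triggered yet

    def on_frame(self, frame):
        """Called once per captured frame; returns the finished clip, if any."""
        self.buffer.append(frame)
        if self.post_frames_left is not None:
            self.post_frames_left -= 1
            if self.post_frames_left <= 0:
                # Oldest half has been overwritten, so the buffer now holds
                # 7.2 s before the crash plus 7.2 s after it.
                return list(self.buffer)
        return None

    def on_crash_sound(self):
        """Called when the microphone hears brakes followed by crumpling metal."""
        if self.post_frames_left is None:
            self.post_frames_left = int(POST_SECONDS * FPS)

Because frames keep overwriting the oldest part of the buffer after the
trigger, the saved clip ends up centered on the crash with no need to store
anything continuously.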
How do I know all this? I used to do computer vision research. The
Nielsen box was developed five years ago by the Sarnoff Research Center; I
talked to the guys who developed it. The Tokyo cameras are reported in the
latest issue of "Photonics".
--CarlF