updated 2009-01-03.
Information I've found on "Machine Vision", which I have broadly defined as "machines that are sensitive to photons, the software that runs on them, and objects specifically designed to be viewed by such machines".
This includes
related pages:
I am particularly interested in real-time mobile-platform machine vision. If you find any relevant links, I'd appreciate you letting me know.
bouncing photons off the moon
Satellite Link Budget Evaluation http://www.wettzell.ifag.de/publ-cgi-bin/linkbudget.py has a list of retroreflector satellites and an on-line calculator ``calculating the absolute return signal strength in terms of photoelectrons with respect to various system, atmospheric and satellite parameters.''
LASER/PACKETS or simply XENON COMMUNICATIONS "FIRST CONTACT" by Brad Guth, July 05, 2003 http://guthvenus.tripod.com/laser-com.htm
From: Markus Imhof
Subject: Re: Simple/interested electronics project for high school work experience
Date: 16 Jun 2000 00:00:00 GMT
Organization: Siemens AG
Newsgroups: sci.electronics.design

Brad Albing wrote:
> "Russ.Shaw" wrote:
> > What about modulating a laser diode led with audio pwm, and using a lens and
> > photodiode for receive. It'd be interesting to see what range one could get
> > at night etc.
> Hmmm... that would be a good one. If your air quality down there is better than
> up here, your line of sight distance would probably be pretty good.
> Here's a variation on that that's probably not going to work, but just might.
> Increase the laser power a bunch.

Irrelevant :-) But what you do need for that experiment is a laser with very low opening angle, which can (in the range you're going to need for that) only be achieved with a large aperture. 'Large' in this case meaning a couple of feet, e.g. a suitable telescopic mirror.

> Point it at the moon. Point a telescope at the
> moon with your photo-detector installed on the eyepiece (suitably refocused to
> produce an image on the photo-diode face instead of inside your eyeball). Do this
> with a new or nearly new moon so you can visually spot your outbound laser-light.
> Move around on the moon's surface a little so as to find a good, flat spot for
> best reflectivity.

Not necessary - see above. For a practical laser, the opening angle will be large enough to cover a good sized portion (if not all) of the moon's disk, so moving the laser around accomplishes pretty little. As for looking for a flat spot for good reflectivity - at an average reflectivity below 10% (the moon's surface is essentially black), that's pretty futile. But unless they've degraded too far by now (ask one of the astronomy newsgroups), at least one, maybe several, of the Apollo missions left some nice high quality retroreflectors on the moon - for exactly that purpose (bouncing a laser off them for distance measurements).

Bye
Markus

From: Roy McCammon
Subject: Re: Simple/interested electronics project for high school work experience
Date: 16 Jun 2000 00:00:00 GMT
Organization: C4F
Newsgroups: sci.electronics.design

Markus Imhof wrote:
> ... at least one, maybe several of the Apollo missions
> left some nice high quality retroreflectors on the moon - for exactly
> that purpose (bouncing a laser off them for distance measurements).

I used to work for McDonald Observatory. We fired a 3 Joule ruby laser at the moon. The chief scientist estimated that our average return (gathered by the 107 inch telescope) was one photon. As for finding the reflectors, it was difficult. It turned out that no one over age 20 could find the target and hold the telescope on the target. We hired high school kids as night assistants (gophers etc). It turned out that they were the only ones who could hit the target.

From: John Larkin <jjlarkin@highland_SnipThis_technology.com>
Subject: Re: Simple/interested electronics project for high school work experience
Date: 15 Jun 2000 00:00:00 GMT
Newsgroups: sci.electronics.design

On Thu, 15 Jun 2000 18:03:03 +1000, "Paul E" wrote:
>Hi all,
>I have a couple of high school students (16 year old) at work for work
>experience & are looking for suggestions for simple & interesting electronic
>projects that they can build.
>Any ideas?

Paul, this is kinda fun: Poke a pair of electrodes into the ground and drive with voice, music, or tones through a power amp. Some distance away, poke another pair, connected to a high-gain amp and headphones. This is earth sheet-resistance communications. It's fun, you get to go outside, you can experiment with impedances, potential line shapes, and such stuff, and you get to hear all sorts of interesting extraneous sounds. The math of the sheet resistance potentials is complex enough to lead into other stuff for those inclined. This is audio, and kids like audio.

VLF receivers are interesting, too: a loop antenna and amp is all you need. Lots of cool atmospheric sounds might interest a bunch of teenagers.

John
http://oil.okstate.edu/
Telephone:
ES405: 405.744.7590 (David Cary, Zhongxiu Hu, Andrew Segall)
ES404: 405.744.7687 (more research assistants)
icbmto: 36.139584 N, 97.063035 W
The Oklahoma Imaging Laboratory Engineering South 405 Stillwater OK 74078-5032
is where David Cary is working for Lucent http://www.lucent.com/netsys/worldmap/no_america/us/okm.html
Scott T. Acton, Ph.D. Associate Professor School of Electrical & Computer Engineering 202 Engineering South Oklahoma State University Stillwater, Oklahoma 74078-5032 Phone: (405) 744-5250 Fax: (405) 744-9198 Email: sacton at master.ceat.okstate.edu Oklahoma Imaging Laboratory: http://OIL.okstate.edu/
...capable of both taking and displaying pictures in color ...
The camera has a resolution of about 25,000 pixels (176 x 144) and is capable of taking 16-bit color pictures that can be displayed on the 12-bit STN color display of the watch. Up to a maximum of 100 pictures can be stored in the built-in memory...
Newsgroups: rec.photo.digital
Subject: Kodak DC20 : bad experience

I gave back for refund my 29-day-old experience with the Kodak DC20 because:
+ The resolution 493x373 is NOT the real resolution of the camera but a software interpolation. The real absolute per-color resolution seems to be 120 lines ONLY (you can extract b&w information at 240 lines, but 373 is software-only; this explains why images of sharp contours, like text, are not sharp/crisp at all).
+ I wanted to use my camera from any computer, and in particular be able to transfer the pictures from my notebook running Linux and convert them to standard formats. Technical protocols (serial com. and image format) were hard/impossible to get:
+ I could get no answer from Kodak support (besides "here is the url for dev. plan", when I just wanted a technical protocol description and not a plan... which I finally filled out, with no further information coming from them...)
+ The 1 MB RAM, which stores only 8 *uncompressed* images (thus the low quality and the 'big' resolution obtained by software), is not enough for most uses.
+ Serial link transfers are too slow (even on a PC at 115 kbps, and even slower on the Mac, where it seems limited to 57.6 kbps).
+ You can only erase everything or nothing.
+ I made no more than 50 pictures before having to buy a $10 battery; this is almost as expensive as using film!
...
On the positive side:
+ The camera is extremely light and fits into a pocket; that's great!
+ The sensitivity is very high, suitable for low light indoors.
+ 2 Japanese folks who own the Chinon ES1000 (same hardware) have been very kind and shared the information they have! (No credit to Kodak for that!)
Some screen shots and detailed technical infos are on my web page: http://www.lyot.obspm.fr/~dl/dc20/

Now my wish list for my next digital camera:
+ 640x480 real resolution (i.e. 640x480x3 sensible areas)
+ extendable memory (PCMCIA standard card) readable from a standard card reader
+ standard filter ring, for close-up lens and accessories...
+ standard published format for the images stored on cards
+ serial (or better, parallel) port transfer of images, only as a backup / convenience over fast direct card reading, using a published protocol
+ maybe SCSI port for real performance?
+ possible erase of last shot from the camera
+ low cost battery (rechargeable?) / small consumption
+ manual exposure / bulb
+ auto and manual focus
+ TTL / exchangeable lenses (ideally it would be a 'digital back' for a camera I already own, like my Canon EOS-100...)
+ less than $2000 (if it has all the above, and $500 if it misses some features)

Regards
dl
--
Laurent Demailly * http://hplyot.obspm.fr/~dl/ * Pobox email: <dl at mail.dotcom.fr> ** job hunting! ** please see http://hplyot.obspm.fr/~dl/cv.html or mail me
Date: Fri, 10 Jul 1998 13:01:26 -0400 From: Ajay Shekhawat <ajay at cedar.buffalo.edu> To: Gerd Truschinski <agni at zedat.fu-berlin.de> Cc: quickcam-drivers at crynwr.com ... I would recommend the EggCam over a QuickCam after seeing the EggCam in action. A colleague bought a refurbed EggCam (w/board) for $71 (+$8 shipping), from CompSource at http://www.c-source.com (search for "EggCam") He has been delighted with the frame rate (over 5 full frames/sec including saving to disk on Linux). The chipset of the capture board (BT848) is well supported in Linux. Ajay
From: Brian Scearce <bls at pathetique.com>
Subject: Re: I am trying to get some specific information
To: finn at sicorp.com, quickcam-drivers@crynwr.com
Date: Sun, 12 Apr 1998 00:47:59 -0700 (PDT)

Wilho N. Suominen JR asks:
> How does Brightness, Contrast, and White level quantitatively work?

I have one bit of quantitative information, and two pieces of qualitative information.

> I understand that Brightness - 255 is auto exposure
> and - 254 is 2.62 seconds
> how does the rest of the range work.

I've appended my code that turns exposure values into exposure times for the B&W QuickCam.

> Contrast question: does this set a rail for the ADC or does it
> do something else?
> White Level: does this set bottom, top, or median of levels for the
> ADC?

The docs cover this qualitatively. The CCD signal is a DC bias (fixed per device) with an AC ripple riding on top; the ripple is proportional to the light falling on the CCD during the exposure period. "White level" is used to subtract the DC bias from the signal; this is why there is a single correct white level per Cam. "Contrast" is a scaling factor for the remaining AC ripple. I don't know what the actual quantitative values are, but I suppose you could measure them with an oscilloscope if you really needed to know.

Brian

    /* This function says how long an exposure takes (in microseconds).
       It exists to aid synchronization. The Connectix docs don't include
       accurate formulas, but Connectix sent me the exposure formula.
       -- bls 02/21/97 */
    unsigned long qc_exposure_time(const struct qcam *q)
    {
        int value;
        int shifter;
        unsigned long retval;

        value = (q->brightness & 0x0f) | 0x10;
        shifter = (q->brightness & 0xf0) >> 4;
        retval = ((unsigned long) value) << (1 + shifter);
        retval *= 4;
        retval /= 3;
        retval /= q->transfer_scale;
        retval += 150;
        return retval;
    }
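As a sanity check, the formula in that C function reproduces the "254 is 2.62 seconds" figure quoted in the same message. A Python transcription (assuming transfer_scale = 1):

```python
# Python transcription of Brian's qc_exposure_time() above, to check
# the "brightness 254 = 2.62 seconds" figure from the message
# (assuming transfer_scale = 1).
def qc_exposure_time_us(brightness, transfer_scale=1):
    value = (brightness & 0x0F) | 0x10
    shifter = (brightness & 0xF0) >> 4
    retval = value << (1 + shifter)
    retval = retval * 4 // 3       # C's integer arithmetic
    retval //= transfer_scale
    retval += 150
    return retval                  # microseconds

print(qc_exposure_time_us(254) / 1e6)   # ~2.62 seconds
```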
a program that uses the Connectix black and white QuickCam as the detector for autoguiding an astronomical telescope. http://www.ameritech.net/users/mniemi000/auto.html
ash's Cheap-O Astrocam Page II http://members.cox.net/roeckelein_aap/quickcam.html heavily modified QuickCam: peltier coolers, metal case, etc.
Commercial CCD chips; info on how to assemble them and getting single images. see webcam.html for info on putting those images online.
".BIF - The Best Imaging Format" is an image file format optimized for astronomical images ... yet can be viewed with standard TIFF viewers. http://astro.martianbachelor.com/BIF/ "BIF's basic compression strategy is to use Run Length Encoding (RLE) for the upper byte of an image's data and LZW (Lempel-Ziv & Welch) compression for the lower byte. This assumes 2-byte (16-bit) image data." [FIXME: move to data_compression.html#file_formats ] [other ideas: would it help to ... differencing sounds like a good thing to try out ... convert to gray code before compression (should have no effect on hi byte, but might give better compression on lo byte) ... split into 8 bit "upper image" and 8 bit "lower image"; use lossless .png compression on each one ... ... fractal compression with blurring instead of dilation ... ]
From: sales at pixelcam.com To: d_cary@my-deja.com Subject: (O) RE: Need 'raw' RGB "Bayer" color test image files Date: Tue, 22 Jun 1999 17:48:32 -0700 ... Most CCDs for digital cameras use RGRGRG GBGBGB as their RGB color filter pattern. ... > Rudi Wiedemann > PixelCam, Inc.
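A minimal sketch of the RGRGRG / GBGBGB layout Rudi describes, to make the geometry concrete (my own illustration): each sensor pixel records only one color channel, and software must interpolate the other two.

```python
# The RGRG/GBGB ("Bayer") color filter layout described in the message
# above: which color channel the pixel at (row, col) records.
def bayer_channel(row, col):
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'   # even rows: RGRGRG
    else:
        return 'G' if col % 2 == 0 else 'B'   # odd rows:  GBGBGB

# First two rows of the mosaic:
print(''.join(bayer_channel(0, c) for c in range(6)))  # RGRGRG
print(''.join(bayer_channel(1, c) for c in range(6)))  # GBGBGB
```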
From: "Phil Humphreys" <Phil at humphreysastro.freeserve.co.uk> Subject: Re: cookbook ccd power supply question Date: 28 Nov 1999 00:00:00 GMT Newsgroups: sci.astro.ccd-imaging ... Since you are building a CB245 you probably should subscribe to the CCD list at WWA.COM . This is mainly composed of cookbook builders. Send a message to LISTSERV@LISTSERV.WWA.COM In the body of the message place SUBSCRIBE CCD Scott Hahn That should get you connected. ...
Path: hermes.rdrop.com!user
From: d.cary at ieee.org (David A. Cary)
Newsgroups: comp.robotics.misc
Subject: CCD with digital output
Date: Sun, 24 Nov 1996 14:15:50 -0700
Organization: rarely.

... I always thought vision would be way too expensive for my robot until I read the comp.robotics.* FAQ. It mentioned a "$10... commercially available image sensor... built-in A/D converter". ...

-- David Cary Future Technology, PCMCIA FAQ.

Path: hermes.rdrop.com!user
From: d.cary at ieee.org (David A. Cary)
Newsgroups: comp.robotics.misc
Subject: Re: Direct access to CCD camera.
Date: Fri, 29 Nov 1996 17:59:51 -0700
Organization: rarely.

Yes, you're right, the accumulated charge is an analog level, and it must be digitized. However, I want to sample each pixel exactly once with my ADC (why would I want anything different?), and it's not obvious to me how to get my ADC properly synchronized with the pixel outputs. It seems pretty silly to convert the 300x400 (or whatever) array of pixels into a continuous TV-style analog "composite video output" signal, then convert *back* into a 299x401 array in my CPU.

The comp.robotics.* FAQ mentioned a "$10... commercially available image sensor... built-in A/D converter". How convenient, a built-in A/D converter; I can just directly wire it to the digital lines of my CPU. Unfortunately, when I called them up, the salesman claimed they didn't sell that chip any more.

RWoodward at gnn.com (Ron Woodward) wrote:
+In article <329ed78b.363673244@138.77.1.23> Paul Hannah wrote:
+>Has anyone tried to access the CCD array directly, instead of running
+>through a video format of some kind then back to a bitmap.
+
+A CCD outputs a stream of voltage/current levels representing the amount
+of charge accumulated in each pixel well. Because this comes out in a serial
+stream it must be digitized to be put into a bit map. Essentially the only way
+you can get it out of the CCD is in some form of analog video format without
+the sync signals.
... -- David Cary Future Technology, PCMCIA FAQ.

Date: Thu, 22 Sep 1994 07:23:00 -0800
From: "Steve Meaders" <omnix at infoserv.com>
Subject: Re: New CCD Available
Newsgroups: comp.robotics

...
>: >The new Digital Video Camera Chip, VVL1070, is ideal for all kinds of
>: >low-cost computer video imaging applications, such as robotics, pattern
>: >recognition, highway monitoring of traffic flow, weather conditions and
>: >consumer applications such as computer snapshots and video telephones.
>
>: The designer of the 1070 sensor is VVL. VVL specializes in CMOS sensor
>: technology rather than CCD. Other products from VVL include the Peach
>: camera (CCIR CMOS camera) and the imputer (CMOS sensor with built in
>: image processor).
>
>: The $10 price is applicable to volume OEM orders. End users should
>: be prepared for higher prices for small quantity orders. If interested,
>: I can provide VVL's phone, email, etc.

Iain Kyle
VLSI Vision, Ltd.
Aviation House, 31 Pinkhill
Edinburgh EH12 8BD
Scotland
tel: 031-539 7111
fax: 031-539 7140
email: ifk at vvl.co.uk

Hope this is of some help.
Toshiba linear CCDs: 128 pixels: $50 each (in 100s). -- _EDN_ 1995 Oct 12 "CCDs" article by John Gallant

Panasonic MN3776RE ... 640 x 480 square pixels ... expected price $30 in "production quantities" by 1996. -- _EDN_ 1995 Oct 12 "CCDs" article by John Gallant

Sony Semiconductor Co of America ... 1/5 inch color and black-and-white CCDs ... 362 x 492 pixels, and come in a 14-pin DIP. ... $19 each (in 10,000s). -- _EDN_ 1995 Oct 12 "CCDs" article by John Gallant

From: bales at athena.mit.edu (James W Bales)
Newsgroups: sci.electronics.components,sci.electronics.design
Subject: Re: CCD datasheets and ordering information
Date: 4 Jan 1996 14:23:31 GMT
Organization: Massachusetts Institute of Technology

In article <4cfjfv$nlo@tigger.cc.uic.edu> blin1 at icarus.cc.uic.edu (Bor-tyng Lin) writes:
> Where can I get the data sheets for CCD chips?

There is a TI data book - "Area Array Image Sensor Products." Published in 1992, the back cover bears the code "SOCC030." Lots of data on 2-d CCD arrays. If you are just starting, consider the TC211, a 192 x 165 pixel CCD in a 6-pin DIP. It's about the simplest CCD to operate. For the data book or chips, try a TI authorized distributor such as: Marshall Electronics, 1-800-522-0084, Web page at http://www.marshall.com/ .

Jim Bales
MIT Sea Grant Underwater Vehicles Laboratory
bales at mit.edu

From: Mike Hoffberg <hoffberg at aps.anl.gov>
Newsgroups: alt.image.medical, rec.photo.digital, sci.electronics.design, sci.engr.advanced-tv, sci.engr.television.advanced, sci.image.processing, sci.physics.accelerators, sci.techniques.xtallography
Subject: high speed CCD camera
Date: Wed, 22 May 1996 16:19:50 -0500
Organization: Argonne National Laboratory, Advanced Photon Source

We have developed a CCD camera system capable of over 100 frames/second with a 12 bit dynamic range (sigma 1.4). The software consists of an easy-to-use user interface, and allows the recording and playback of images, basic image visualization, and real-time display of the image on the computer's screen. The camera is based upon the Thomson TH7895 CCD (512x512, 19 um pixel, 2 output). If anyone is interested in more information about this camera, please feel free to contact me:

Michael Hoffberg
Argonne National Laboratory
tel 708-252-9173
fax 708-252-9303
hoffberg at aps.anl.gov
Image Processing Software, Digital Image Processing, Machine Vision.
David Cary has some of the fundamental tools (erode, dilate, Huffman compress, linear filtering via FFT) written in MatLab at http://rdrop.com/~cary/program/image_processing/ /* was http://oil.okstate.edu/~caryd/program/ */ .
[FIXME: want an easy-to-understand and mathematically correct definition of erode, dilate, open, close here, including gray-scale and binary versions; MatLab implementations can be found at http://rdrop.com/~cary/program/image_processing/ ].
Let b be a structuring element, typically a small picture of a circle or square or horizontal line or vertical line, typically centered on coordinate (0,0). ...
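Pending that FIXME, here is one way to make the binary definitions concrete: a minimal erode/dilate sketch in Python (my own illustration, not the MatLab code linked above; a set of (row, col) pairs stands in for a binary image). Erosion keeps a pixel only if the structuring element, centered there, fits entirely inside the foreground; dilation stamps the element around every foreground pixel.

```python
def erode(img, se):
    """Binary erosion: pixel (r, c) survives only if every offset in
    the structuring element 'se' lands on a foreground pixel of 'img'.
    'img' is a set of (row, col) foreground pixels; 'se' is a set of
    (dr, dc) offsets centered on (0, 0)."""
    return {(r, c) for (r, c) in img
            if all((r + dr, c + dc) in img for (dr, dc) in se)}

def dilate(img, se):
    """Binary dilation: every foreground pixel stamps a copy of the
    structuring element around itself."""
    return {(r + dr, c + dc) for (r, c) in img for (dr, dc) in se}

def opening(img, se):
    """Opening = erosion followed by dilation (closing is the reverse)."""
    return dilate(erode(img, se), se)

# Structuring element: a 3-pixel horizontal line centered on (0, 0).
se = {(0, -1), (0, 0), (0, 1)}
line = {(0, c) for c in range(5)}   # a 5-pixel horizontal line
print(sorted(erode(line, se)))      # erosion shrinks each end by one
```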
(related to human brain ?)
DSP Development Corporation http://www.dadisp.com/ "markets DADiSP, a popular graphical data analysis software program for scientists and engineers." FREE Student Edition Download.
"robot IR"
one method:
* assume a standard broad-band black-body-radiation curve
* measure with 2 sensors with different frequency responses ...
* by convolving the black-body-radiation curve with the 2 sensors' response curves at various temperatures, construct a ratio vs. temperature curve.
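That construction can be sketched numerically; the two rectangular response bands below are made-up stand-ins for real sensor response curves:

```python
# Sketch of the two-sensor ratio method above: integrate Planck's
# black-body curve over two sensor response bands and tabulate the
# ratio vs. temperature. The bands here are made-up examples.
import math

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Spectral radiance of a black body (Planck's law)."""
    a = 2 * H * C**2 / wavelength_m**5
    return a / math.expm1(H * C / (wavelength_m * K * temp_k))

def band_signal(center_um, width_um, temp_k, steps=200):
    """Sensor output: Planck curve integrated over a rectangular
    response band (a stand-in for a real sensor curve)."""
    lo = center_um - width_um / 2
    dw = width_um / steps
    return sum(planck((lo + i * dw) * 1e-6, temp_k)
               for i in range(steps)) * dw

# Ratio of a 2 um band sensor to a 4 um band sensor, vs temperature:
for t in (300, 600, 1200):
    ratio = band_signal(2.0, 0.5, t) / band_signal(4.0, 0.5, t)
    print(t, ratio)   # the ratio rises monotonically with temperature
```

Because the ratio is monotonic in temperature, inverting the tabulated curve recovers temperature from the two sensor readings.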
Other methods ?
Date: Tue, 11 Feb 97 14:41:24 PST
From: Rodrigo Dib <1RFD1642 at ibm.MtSAC.edu>
Organization: Mt. San Antonio College
Subject: Re: I capture the IR signals thru my SB16, now what ?
To: David Cary <cary at agora.rdrop.com>

On Tue, 11 Feb 1997 08:23:03 -0700 you said:
>Would you mind telling me how to use a SB16 to capture signals like this ?

I used the "Line-in" of the SB16. I have an old IR Rx module, so I connected its output to the "line-in" of the card. Then I ran Wave Studio, which comes with the SB16, and recorded at the highest sampling rate (44 kHz) while pressing the key on the remote control whose signals I wanted to see. I had to press the key for just a fraction of a second, so I just hit it a couple of times. (The first time, I kept on pressing, got the signal a bunch of times, and thought it didn't work.) In the bottom window of the Wave program I cut everything else but one burst of signal. As soon as I started to cut, it fit the signal in the window, until I was left with just the pulses which were in the burst but which I couldn't see before, because I was looking at them in a "zoomed-out" way; you can't see any details until you zoom in.

Oh, one more thing: while pressing the remote, the speakers are going to make one hell of a noise! You could mute them, but then you won't know if you got one burst or more.

For other URLs... no clue; thanks for yours.
circuit board inspection
DAV did a project with infrared circuit board inspection ... I thought I had collected more links on the subject.
infrared communication
see also infrared sensors #infrared
[Do I have more IrDA info in my "Organizations" file ?] [Do I have more IR protocol information in a "DAV's photons FAQ" file ?]
pulsed LED communication (IR serial communication).
-- Charles H. Small, 1998-02:
IEEE 802.11 ... allows an IR link ... at 850 to 950 nm. One point in IR data communication's favor is that it does not require FCC certification, which has often slowed the introduction of RF wireless products.
Unlike IrDA version 1.0's point-to-point nature, the 802.11 IR link uses diffuse-infrared transmission. So the receiver and the transmitter ... don't need a clear line-of-sight. ...
IEC 825 is the standard that sets safety limits for IREDs. Currently, IREDs are grouped with lasers. The emission limits for IREDs are 325 mW/steradian to 500 mW/steradian ... work is under way to break out IREDs from lasers and put them with visible-light devices. The allowed power levels for IREDs will then probably rise to 1000 mW/steradian. ...
... an increasing fraction of cellular phones, digital watches, palmtop computers, and notebook PCs will begin sporting IrDA ports. ...
very good, step-by-step instructions with lots of pictures ... so even someone relatively new to electronics should be able to build one up on a solderless breadboard. [FIXME: move to circuits ? Or move all IR circuits here, and just leave link from there to here ?] Compares using a 555 timer to a NAND oscillator. No significant difference for this application.
some 38 KHz IR communication circuits (38 KHz infrared transmitter). ... has both a 555 oscillator and a 74HC00 NAND oscillator; both drive the IR LED identically, and the NAND oscillator is cheaper, uses less power when "disabled" (but more while oscillating), ... 555 needs 4 external components (1 pot, 2 caps, 1 resistor ... ... NAND needs 6 external components (1 pot, 2 caps, 3 resistors) ... ... very detailed photographs
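As a quick sanity check on the 555 half of that comparison, the standard 555 astable frequency formula, f = 1.44 / ((R1 + 2*R2) * C), lands near 38 kHz with plausible component values (my own picks, not the values from the linked circuit):

```python
# Standard 555 astable frequency formula, checked against the 38 kHz
# IR carrier the circuits above need. Component values are my own
# plausible picks, not those of the linked circuit.
def astable_freq(r1_ohms, r2_ohms, c_farads):
    return 1.44 / ((r1_ohms + 2 * r2_ohms) * c_farads)

f = astable_freq(1_000, 18_000, 1e-9)   # R1 = 1k, R2 = 18k, C = 1 nF
print(round(f))   # about 38.9 kHz, close enough to trim with the pot
```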
STV3012 infrared remote control transmitter for audio and video applications http://us.st.com/
2.19 Where can I find information on IR standards? * http://falcon.arts.cornell.edu/~dnegro/IR/IR.html * http://www.hut.fi/~then/electronics.html#irremote * http://149.170.200.3/Physics/Acorn/About.html For a linux driver, see http://www.thp.uni-koeln.de/~rjkm/lirc/lirc.tar.gz * computer controlled IR remote control: http://www.geocities.com/CapeCanaveral/6552 * There's an application note entitled "IrDA-Compliant Transmitter/ Receiver," with lots of detail about the standard, at the SHARP web site, http://www.sharpmeg.com/datasheets/rf-ir/#0 and data sheets at http://www.sharpmeg.com/datasheets/rf-ir/#4 * http://www.irda.org/ The home of the IRDA standards organisation, somewhat messy, but they do provide the standards free for downloading. * ftp://ftp.armory.com./pub/user/rstevew/IR/ * http://www.misty.com/~don/irfilter.html
From: (Tom Maier)
Newsgroups: comp.robotics.misc
Subject: Re: Digital Remote Control
Date: Thu, 20 Mar 1997 13:35:29 GMT
Organization: MindSpring Enterprises, Inc.

Sevcik wrote:
>> I've been able to get them to work up to 35 feet. You probably
>> need to boost your transmission power.
>Tom, you may be right. Do you remember what your LED current was?

Hi Sevcik, about 200 milliamps peak, if I remember right.

>> I get them to run at 1200 Baud with no problems. Are you
>> using the type that Radio Shack is selling?
>I did not get mine from Radio Shack. I get 1200 baud, but can not
>send a string of 9 bits of light on. The AGC reduces the gain till
>the last bits drop off. So - I send every bit followed by its
>complement. The line speed is 1200 baud, but half is redundant.
>How did you avoid this problem?

I haven't noticed any of the dropping of the last bits. The modules I was using were from LITEON and I bought them from Digikey. It's solid as a rock at 1200 Baud. I did some designs for a company and they run it at 1200 Baud and have not reported any problems. Are you sure this is the AGC kicking in? Could it be that the output of your transmit LED is actually drooping during the transmission? One thing to check is to see if you are getting droop on your drive voltage to the LED during the last bits of the transmission. Connect a scope to the limiting resistor and see if this is happening. A large cap right across the transmit LED circuit from V+ to ground is also important. About 100 uF or greater. This helps to give better instantaneous current to the transmit circuit. This is especially important if you are using a battery power source. Maybe it's just a difference in the receiver module.

Tom
IR components:
photon_sources (except for lasers laser.html )
[FIXME: consider moving this entire section to laser.html ?]
??? http://www.zetex.com/pdf/design/dn5.pdf http://www.zetex.com/design.shtm [FIXME: short description]
(uses #spread_spectrum )
Here I just talk about getting the (x,y,z,t) information. Technically interesting, but useless by itself. This is just the first step -- next you either find your location on a map 3d_design.html#maps or you use your location in building a new map.
This is mostly technical details for people who want to *build* a GPS receiver. See #gps_receivers if you want to *buy* a GPS receiver.
http://www.geek.com/news/geeknews/2001oct/gee20011001008112.htm talks a lot about the ethical implications of the FCC-mandated enhanced 911 location. Some more-or-less plausible scenarios discussing what happens when this information falls into the wrong hands.
DAV: Because of my Brinist leanings, I think this is over-reacting. Yes, people could do bad things with that information -- if it's one-sided. However, if we can get enough information flowing the other way, then the threat can be negated. [FIXME: do I need a new ``societal implications of GPS'' section for stuff like this ?]
On Fri, 28 Feb 1997 11:07:23 +0000, Keith Stein <sthbrum at sthbrum.demon.co.uk> wrote:
> Does anyone happen to know what the actual figure for
>the total time dilation on the GPS satellites per day is ?

It's around 38,000 nanoseconds per day if uncorrected. See "General relativity in the global positioning system": http://vishnu.nirvana.phys.psu.edu/mog/mog9/node9.html
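A quick order-of-magnitude check of why that figure matters: multiply the uncorrected clock error by the speed of light to get the ranging error it would cause.

```python
# Uncorrected relativistic clock drift on GPS satellites, turned into
# the ranging error it would cause (clock error times speed of light).
C = 2.998e8                  # speed of light, m/s
dt_per_day = 38_000e-9       # ~38,000 ns/day, from the post above

error_m = C * dt_per_day
print(round(error_m / 1000, 1))   # ~11.4 km of ranging error per day
```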
| NAVSTAR GPS | GLONASS |
|---|---|
| Week of validity | Day of validity |
| Identifier | Channel number |
| Eccentricity | Eccentricity |
| Inclination | Inclination |
| Time of almanac | Equator time |
| Health | Validity of almanac |
| Right ascension (RA) | Equator longitude |
| Rate of change of RA | --- |
| Root of semimajor axis | Orbital period |
| Argument of perigee | Argument of perigee |
| Mean anomaly | --- |
| --- | Luni-solar term |
| Time offset | Time offset |
| Frequency offset | --- |
reviews and general information on off-the-shelf GPS receivers.
[FIXME:]
See also A few GPS receivers that caught my eye .
see also 3d_design.html#maps for mapping software.
DAV thinks that a *real* compass you can actually see (really nice ones for $40) is still superior to the ``digital compass'' some GPS units have built-in (typically tacking $100 onto the price). We recommend always using a map and compass with a GPS receiver. Using a compass to navigate to a bearing is quicker and more accurate than relying on the GPS to lead you to your destination. Going anywhere in the backcountry without an adequate map is just stupid.
a few GPS receivers that caught my eye See also reviews and general information on off-the-shelf GPS receivers .
"GPS-MS1 - World's smallest GPS Receiver" http://www.u-blox.ch/gps-ms1/ "the form factor of a PLCC84 package (30mm x 30mm), the module provides complete GPS signal processing from antenna input to serial data output (NMEA or SiRF proprietary format) at a single operating voltage of 3.3 Volts. A second serial port accepts differential GPS data (RTCM). And, featuring the GRF1/LX RF front-end chip with integrated low-noise amplifier (LNA), the module connects seamlessly to low-cost passive antennas. General purpose I/Os and sufficient CPU power allow integration of additional customer specific functionality into the module. Based on Hitachi's SH-series RISC CPU ... low-power"
Search for GPS on cnet: http://cnet.search.com/search?timeout=3&q=gps and you'll see reviews of GPS modules for the Handspring Visor
http://www.visorcentral.com/content/Products/detail-41.htm lists receivers (and prices) from 4 different companies (this is the only place I've seen the ``WebGPS by Nexian'' )
apparently, no matter who you buy it from, as long as it's before 2002-12-31, Magellan will give a $50 rebate http://www.magellangps.com/en/store/promotions/visor.asp | http://www.marcosoft.com/
$142.78 ... $129.94 http://zdnetshopper.cnet.com/shopping/resellers/0-9491702-1411-5894190.html (the low price is from Amazon, but they only seem to have 1 used device, no new ones ...)
latest prices from several distributors: ($129.94 as of 2002-12) http://shopper.cnet.com/shopping/resellers/1,10231,0-9491702-311-5894190,00.html unfortunately, 2 of them seem to be offline, and http://www.cdw.com/shop/products/default.asp?edc=286017 tells me ``GPS Companion for the Handspring Visor Note: This product has been discontinued as of Tuesday, December 17, 2002.''
$153.08 Magellan GPS Companion Springboard Module http://www.expansys.com/product.asp?code=90270320&curr=USD 2002-10-12
automobile PDA Navigation System http://www.gpsw.co.uk/acatalog/GPS_Categories___PDA_Navigation_Systems___54.html lists the Magellan GPS Companion (for Handspring Visor) (using Springboard slot) and other GPS receivers for Palm, iPAQ, EPOC, and other PDAs (using serial port or Compact Flash slot).
$229.99 (with Street Finder) $199.00 (with UbiGo)
$172.26 Nexian HandyGPS Pro Springboard Module http://www.expansys.com/product.asp?code=NEX_HGPS1&curr=USD 2002-10-12 (without either UbiGo or Street Finder ?)
??? is this the same thing ? http://www.visorvillage.com/hardware/Nexian-HandyGPS-2001-1-11-visor-.html claims that it's useless without UbiGo or other additional-cost software ... which no longer comes with the unit.
reviews of them:
GPS related software for Palm OS (including Handspring)
Photons travel at about 1 foot / ns =~= 1 mm / 3 ps. I used to think that would be far too difficult to measure accurately with reasonably-priced equipment, but I'm starting to hear rumors that clever tricks can do it with low-cost equipment. How ?
Can the same tricks be used to reduce the cost of transillumination unknowns_faq.html#transillumination equipment ? Or does transillumination equipment *already* use these tricks ?
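A quick back-of-the-envelope check of those rules of thumb, using the exact speed of light:

```python
# Sanity-check "1 foot / ns =~= 1 mm / 3 ps" against the exact speed of light.
C = 299_792_458.0  # m/s, exact by definition

ns_per_foot = 0.3048 / C * 1e9  # light travel time over 1 foot, in ns
ps_per_mm = 0.001 / C * 1e12    # light travel time over 1 mm, in ps

print(f"1 foot of path = {ns_per_foot:.3f} ns")  # → 1.017 ns
print(f"1 mm of path   = {ps_per_mm:.3f} ps")    # → 3.336 ps
```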
Date: Thu, 19 Mar 1998 01:25:00 -0800 (PST) From: Robert Freitas ...It is fairly well-known that the mean free path of a photon in soft human tissue is 10-100 microns, "depending...". Thus in typical soft tissue, after ~150 microns some 99% of all photons have suffered at least one scattering event. By contrast, the characteristic range for absorption in typical human soft tissue is on the order of a few millimeters. So what you're seeing coming through your thumb is photons that have been scattered many many times, but have not yet been absorbed by the tissue. That's why you get a generalized diffuse glow rather than a sharp image showing internal structures like an x-ray. (Blue is preferentially absorbed, which is why the glow appears red.)
It may interest you to know that transillumination is being actively investigated as a way to look for subdermal tumor masses noninvasively. They use extremely fast flashes and various shutter-timing tricks to filter out the scattered photons, so that their sensors can preferentially accumulate those exceedingly few "ballistic photons" that have not yet been scattered, and which therefore still contain useful information about absorbers (e.g. bones, dense tumor masses, etc.) lying in the beam path.
There's tons of literature references on all this stuff -- it's really quite "old hat"!
...
Robert A. Freitas Jr.
...
Next, the light source is pulsed for a length of time equal to the round-trip time for the maximum distance of interest, and the electronic shutter is gated with the same pulse. The accumulated charge would then be inversely related to distance and proportional to intensity. But the intensity contribution could be subtracted out, since it is known from the reference frame.
... either direct control of the electronic shutter is needed, bypassing any synchronous logic, or a "sync" output from the camera must be available. Also note that the charge integration times involved -- 10s or 100s of ns -- are orders of magnitude smaller than those normally used on all but very specialized CCD cameras, even with a fast shutter. ...
Since the electronic shutter is held open precisely the same amount of time that the laser is pulsed, pixels corresponding to very close objects "catch" the entire pulse, while pixels corresponding to distant objects "catch" less of the pulse -- by the time the photons near the end of the pulse travel all the way out and back, the shutter has already closed. Pixels corresponding to very distant objects may miss the entire pulse -- so "inversely related to distance" is not exactly the right equation.
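A hypothetical numerical sketch of that gated-shutter geometry (all names are mine, not from any real camera API): the pulse and gate both last T = 2*d_max/c, a pixel at distance d catches only the first T - 2d/c of the returned pulse, so the ratio of gated charge to full-pulse reference charge is 1 - d/d_max, which we invert to recover distance.

```python
# Hypothetical sketch of gated-shutter ranging as described above.
C = 299_792_458.0  # speed of light, m/s

def simulate_charges(d, d_max, intensity=1.0):
    """Charge collected in the gated frame and the full-pulse reference frame."""
    T = 2.0 * d_max / C                  # pulse / gate duration, seconds
    overlap = max(0.0, T - 2.0 * d / C)  # time the echo overlaps the open gate
    return intensity * overlap, intensity * T

def gated_range(q_gated, q_ref, d_max):
    """Invert the charge ratio to get distance (valid for 0 <= d <= d_max)."""
    return d_max * (1.0 - q_gated / q_ref)

q_g, q_r = simulate_charges(d=12.5, d_max=50.0)
print(gated_range(q_g, q_r, d_max=50.0))  # ~12.5 metres recovered
```

Note that intensity cancels in the ratio, which is exactly why the reference frame lets the "proportional to intensity" term be subtracted out.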
[FIXME: seems to be overlap / mixing between TOF and spread spectrum. Should I just mash them into one big category ? How to properly separate into 2 or more categories ?]
related local links:
Spread-Spectrum, patented in 1942 by Hedy Lamarr (actor) and George Antheil (composer).
Also miscellaneous semi-related information on range finding, and linear-feedback shift-registers (LFSRs).
DAV: pulse communication and pulse radar have some similarities to spread spectrum: (a) both are wideband, and (b) both can be used to tell the distance between 2 communicators (or 1 station and a reflector).
DAV: While pulse radar is certainly easier to understand than fully-general spread-spectrum, it seems to have about the same (and in some cases worse) performance than other modulation schemes.
Qualcomm http://www.qualcomm.com/ sells the MSM3000 chipset: "Together, the MSM3000, an IF chip [the IFT3000 or IFR3000 IF chips], and RF front end comprise all the system hardware required to implement an entire IS-95A or IS-95B CDMA-compliant subscriber unit."
Binsfeld Engineering, Inc. http://www.binsfeld.com/ "Binsfeld Engineering specializes in data transmission from rotating sources. ... telemetry systems and rotary transmitters"
up: spread spectrum
Bluetooth radio modules ... ... use frequency hopping and "small" packets ... ... Forward Error Correction ... ... Bluetooth radios operate in the unlicensed ISM band at 2.4 GHz. ... shaped, binary FM modulation ... ... "The gross data rate is 1 Mb/s. A Time-Division Duplex scheme is used for full-duplex transmission." The Bluetooth protocol ... handles both voice channels and data channels. ... Up to 3 time slots can be reserved for voice channels; each voice channel supports 64 Kb/s synchronous (reserved time slots) in both directions. ... the remaining time slots are used for data channels. Asymmetric data channels are asynchronous, supporting 721 Kb/s in either direction while permitting 57.6 Kb/s in the return direction ... ... "Voice channels use the Continuous Variable Slope Delta Modulation (CVSD) voice coding scheme, and never retransmit voice packets. The CVSD method was chosen for its robustness in handling dropped and damaged voice samples. Rising interference levels are experienced as increased background noise: even at bit error rates up to 4%, the CVSD coded voice is quite audible." ... The Bluetooth air interface complies with the FCC rules for the ISM band at power levels up to 0 dBm. Spectrum spreading has been added to facilitate optional operation at power levels up to 100 mW worldwide. Spectrum spreading is accomplished by frequency hopping in 79 hops displaced by 1 MHz, starting at 2.402 GHz and stopping at 2.480 GHz. Due to local regulations the bandwidth is reduced in Japan, France and Spain. This is handled by an internal software switch. The maximum frequency hopping rate is 1600 hops/s. ... TX/RX turnaround time: 220 us. More Bluetooth info might be available at _Wireless Systems Design_ magazine http://www.wsdmag.com/
[This is an old backup; the latest version is at http://visual.wiki.taoriver.net/moin.cgi/BarCode ]
This is everything I know about bar codes. If you have any corrections or additions, please tell me. But what I'd *really* like is for someone else to maintain a FAQ.
Some related ideas at idea_space.html#improved_alphabets .
Many barcodes use some sort of ECC data_compression.html#ecc .
``All Bar Codes Are Not Created Equal'' http://www.hhp.com/hhp/experts/whitepapers.tpl compares and contrasts many 1D and 2D barcodes.
I am most interested in 2D barcodes:
I especially think their "2-Dimensional Bar Code Page" http://www.adams1.com/pub/russadam/stack.html | mirror http://www.aaa-barcode.com/2Dsymbols/2D.htm | translation into French http://users.skynet.be/dje/codebar/ is pretty cool. It lists a bunch of different 2D barcodes, with pointers to patents governing each one, and pointers to GPL source code for some public domain bar codes such as PDF417. It even lists 2 unusual "color codes": HueCode (proprietary) and Ultracode (public domain). [These use 3 human-distinguishable wavelengths; Chromatic Alphabet idea_space.html#chromatic_alphabet is even more extreme by using 26 wavelengths ... ] I wish they had a prettier illustration of the "DataGlyph" (perhaps "serpentone").
MaxiCode FAQ http://www.maxicode.com/maxicde4.htm a public domain 2D symbology. MaxiCode employs Reed-Solomon error correction.
http://www.symbol.com/ST000121.HTM#PDF417 free program here that creates PDF417 images "which then can be pasted into almost any Windows word processor"
PARC claims that DataGlyphs have
High data density (nearly twice the capacity of PDF417)
--
http://www.parc.com/research/asd/projects/dataglyphs/
I don't understand how that's possible.
I think the "serpentone" pattern is very aesthetically pleasing.
2d encoders: many of them use some sort of 2d barcode.
DAV: I've seen linear and rotary encoders;
uses a huge 2D barcode on the floor of the room as a 2D position encoder.
[[ DAV thinks a ``better'' one could be made using concepts from spread spectrum...
[FIXME: write up any clever ideas I have, and email jonh]
[ You know that GPS uses DSSS (direct-sequence spread spectrum) to get the precise distance from the satellite (oversimplifying). I think I understand how to build a N-bit LFSR in 1D to get a sequence of bits of length (2^N)-1, such that I only need to examine N consecutive bits to discover my exact location in the sequence. I suspect there exists a 2D generalization of this, so I only need to examine a square of NxN bits to discover my exact 2D location (and orientation ?). ]
]]
See also "Hardware Hacker: New shaft encoder designs: Binary chain code secrets" by Don Lancaster, September, 1994 http://63.140.207.28/musev.pdf/hack80.pdf
[stochastic labels seem like an interesting concept ... perhaps we wouldn't need *every* label to be unique. Say we had only 100 unique labels. If we're standing over label 99, and we just passed a 43, then we look up in the big centralized database how many times 99 is right next to 43. If that only happens once, then we're at a globally unique position. Even if that happens 3 or 4 times, we've at least narrowed down our position to one of a few discrete locations. ]
[slash codes ... position of crosses gives local position ... ... somehow encode global position in the 0/1 direction of crosses ?]
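A sketch of the 1D half of that idea. I'm assuming a standard 4-bit maximal-length Fibonacci LFSR with taps for the primitive polynomial x^4 + x^3 + 1 (any primitive polynomial works): the output has period 2^4 - 1 = 15, and every window of 4 consecutive bits is distinct, so reading any 4 adjacent bits of the track pins down your position.

```python
# Maximal-length LFSR track: N consecutive bits reveal absolute position.
def lfsr_sequence(n=4, taps=(4, 3)):
    """One full period (2**n - 1 bits) of a Fibonacci LFSR."""
    state = [1] * n  # any nonzero start state works
    out = []
    for _ in range(2**n - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:  # feedback = XOR of the tapped bits (1-indexed taps)
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

seq = lfsr_sequence()
# Build the window -> position lookup (cyclic windows of 4 bits):
position = {}
for i in range(len(seq)):
    w = tuple(seq[(i + j) % len(seq)] for j in range(4))
    position[w] = i
assert len(position) == 15  # all 15 four-bit windows are distinct
```

A reader that sees any 4 consecutive track bits just looks them up in `position` to get its absolute location; the hoped-for 2D generalization would do the same with an NxN window.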
It looks like someone else already thought up the idea of using DataGlyph (slash codes) to give global position, and gave the idea a name: "address carpet" (I think I first read about this in GlyphChess:)
GlyphChess by Jeff Breidenbach http://www2.parc.com/asd/members/jbreiden/glyphchess.html
a 2D positioning barcode could be used for:
2003-06-07:DAV: I wonder what the "ideal best" barcode would look like. Some features we want:
Is that it ?
Some limitations that influence bar code design:
If the *only* limitation were the pixel size of the reader, then obviously the most dense "barcode" would be to represent the data with 1 "digit" per pixel. Since most readers have a square pixel array, this naturally leads to square data arrays. However, if the resolution is limited by other optical elements, or if we need to read the "barcode" at any angle, we can get slightly more dense barcodes by using a hexagonal array.
It seems clear to me that the ideal bar code covers the available area with small equal-sized, equal-spaced regions (either square or hexagonal) and completely fills each region with a single color. I think these regions are commonly called "modules" in the barcode community, "tiles" in the tessellation community.
Obviously the minimum resolution is a scanner with 1 bit per pixel. The best method of getting 1 bit per pixel is to use black or white.
But I'm not convinced that this is necessarily required for the "ideal" barcode.
Grayscales could make each "digit" represent more than 1 bit. (How many grayscale levels would be the "best" number of levels ?)
Using human-readable color bands could easily boost density by another factor of 3 (3 bits of saturated color per pixel compared to black or white; 3 digits of full color per pixel compared to grayscale) -- but that's so human-centric.
Using 26 color bands (boosting density by another 8 times over human-readable colors) has been suggested idea_space.html#chromatic_alphabet .
If we discover the "ideal" single-wavelength (grayscale) barcode, would the "ideal" N-color barcode be simply stacking N grayscale barcodes, one at each of the N wavelengths ?
Sometimes we can't independently set each module's color.
Some inks don't stick well, causing "dotloss" -- all black dots less than a certain minimum size simply fall off the paper.
Sometimes heavy blurring makes tiny black or white areas less than some minimum size disappear. Some readers have enough resolution that they can tell whether an isolated white area is 5 modules wide or 6 modules wide, but an isolated white area 1 module wide in a black region looks just like a solid black region.
(The effect of "___", ink wicking a little bit outside the region it was printed, gives the same results as "blurring" if the printer compensates by reducing the printing region ....)
The simple solution: simply make the module size larger than this minimum size. The ideal solution (because it gives much higher density): keep the module size small and add restrictions of the form "any black module must be part of a group of at least N black modules".
The effect of blurring on the sensing of neighboring modules is even more obvious with grayscales. Sometimes we can distinguish between 64 grayscale levels when they're in large patches, but only 4 grayscale levels when it's in a tiny module sandwiched between a "white" module and a "black" module. It's not clear what the ideal solution is in this case.
One "nearly-ideal" solution in both the binary and grayscale case is clear though: Tile the data section of the barcode (not including the non-data features) into larger multi-module "symbols", then enumerate all the possible distinguishable symbols. When the symbol size is the size of the entire barcode, we get maximum information density; but it's much simpler to make them much smaller. (This is similar to Partial Response Maximum Likelihood http://en.wikipedia.org/wiki/PRML )
The maximum-density method is to add a few more error-detection bits to the message, then constantly scan all possible alignments until the decoded error-detection bits give such a small error that it's probably a barcode.
It takes far less computational ability, though, if we add something that makes it easier to align, such as one or more of
There doesn't seem to be any reason to me to build these features out of spots *smaller* than a module. (but some good reasons *not* to build them out of standard multi-module "symbols").
Simple solution: error-detection bits. CRC-32 only takes 32 bits, and seems entirely adequate. (Since even the smallest processors can handle the calculation these days, the only reason to choose something simpler is to let humans be able to calculate it quickly).
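For example, a minimal sketch of that check using Python's built-in CRC-32 (the payload here is arbitrary):

```python
# CRC-32 as a cheap "is this really a valid message?" check.
import zlib

payload = b"hello, barcode"
tagged = payload + zlib.crc32(payload).to_bytes(4, "big")  # append 32 check bits

def looks_valid(msg):
    """True when the trailing 32 bits match the CRC-32 of the rest."""
    body, crc = msg[:-4], int.from_bytes(msg[-4:], "big")
    return zlib.crc32(body) == crc

print(looks_valid(tagged))             # → True
print(looks_valid(b"x" + tagged[1:]))  # → False (a single-byte error is caught)
```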
Some minor sorts of damage, such as coffee stains, can be compensated close to the module level *if* we use (binary) black or white modules, *and* we have some way of run-length-limiting the size of solid black or solid white areas. Some possibilities:
Ways of compensating when the damage is a change in intensity or contrast, and symbols are small enough that this is practically constant over any particular symbol:
With 3 modules/symbol (the minimum), this gives 1/3 bits/module. With 4 modules/symbol, this gives 1/2 bits/module. With 6 modules/symbol, this gives 2/3 bits/module. With 8 modules/symbol, this gives 3/4 bits/module. With 12 modules/symbol, this gives about 0.83 bits/module.
With N modules in a symbol, this gives N-2 bits/symbol = (N-2)/N bits/module.
(example: slash codes: the center pixel is always black, other known areas are always white). (example: RS-232: the "start bit" is always a logical "0", the "stop bit / idle time" is always a logical "1").
(Symbols "close enough" to the fixed target/alignment features could use the known white and black areas of that as a reference. But local references are better -- using a nice big reference area at the perimeter of the barcode doesn't help much if the coffee stain isn't completely even inside the barcode).
Given other restrictions on a symbol, sometimes you don't give up much using a fixed reference. For example: RS-232 has a "start bit" and "stop bit / idle time" that can be used as level references, and that are also used for alignment.
With 2 modules/symbol (the minimum), this is Manchester coding: 1/2 bits/module. With 4 modules/symbol, this gives about 3/4 bits/module. With 6 modules/symbol, this gives about 5/6 bits/module. With 8 modules/symbol, this gives about 7/8 bits/module. With 12 modules/symbol, this gives about 11/12 bits/module.
With N modules/symbol, there is 1 parity bit and N-1 data bits, giving (N-1)/N bits/module.
With 2 modules/symbol (the minimum), this is Manchester coding: 1/2 bits/module. With 4 modules/symbol, this gives 6 possible symbols: about 0.65 bits/module. With 6 modules/symbol, this gives 20 possible symbols: about 0.72 bits/module. With 8 modules/symbol, this gives 70 possible symbols: about 0.77 bits/module. With 12 modules/symbol, this gives 924 possible symbols: about 0.82 bits/module.
With N modules/symbol, this gives N! / ((N/2)! * (N/2)!) symbols: about [log2( N! / (((N/2)!)^2) ) / N] bits/module.
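The bits/module figures quoted above for the three schemes (fixed reference, parity, balanced black/white counts) can be reproduced in a few lines:

```python
# Reproduce the per-symbol density figures for the three coding schemes.
from math import comb, log2

for n in (4, 6, 8, 12):
    fixed = (n - 2) / n                   # 2 modules per symbol held at known values
    parity = (n - 1) / n                  # 1 parity bit per symbol
    balanced = log2(comb(n, n // 2)) / n  # symbols with equal black/white counts
    print(f"{n:2d} modules/symbol: fixed={fixed:.2f} "
          f"parity={parity:.2f} balanced={balanced:.2f}")
```

For n = 12 this prints fixed=0.83, parity=0.92, balanced=0.82, matching the numbers above.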
[FIXME: move to #math. Is this really monotonic ? If it's true that That can't be right. ]
This method gives the optimum threshold value,
I was surprised to find this also gives better density (bits/module) than the "fixed reference" for symbols of less than 12 modules. It always gives worse density than "parity", which is not surprising -- the symbols this method selects are a subset of the method the "parity" method selects.
(example: slash codes: In the 4 "arms" of the slash, 2 arms are always dark, 2 arms are always light).
Naturally, all these get closer to the maximum of 1 bit / module as symbols get larger and larger, but the sorts of damage that appear relatively constant over a very small symbol (and easily corrected) may change too much to be corrected over a large symbol. This seems to argue for making a "symbol" a compact region (say we choose 12 modules/symbol in a tradeoff between decoder complexity and density; then we want to arrange those modules in a 3x4 array rather than a linear string).
At a global level, if we make the *entire* barcode very small, perhaps that will increase the chances that random damage to a physical object will leave the barcode area completely undamaged. However, it also increases the chances that random damage will completely obliterate the barcode.
If the barcode is very robust (such as slash codes), then it doesn't need a special area set aside for it -- it can be spread out over nearly all of the physical object. With a greater area to work with, this gives more room for error correction bits which makes error correction much simpler -- even though areas of the object that *must* be a certain color means that now even "perfectly printed" bar codes have a few "internal" correctable errors. -- Another benefit: If some areas *must* be a certain color (say, solid white areas and solid black areas when we're embedding the barcode into a color image), perhaps we can use that to help adjust the black/white contrast, letting us use more of the available pixels for bits rather than black/white reference). Perhaps we can use the outer edges of the object and areas that *must* be a certain color as target / alignment features.
When we have damage that completely "knocks out" a few entire "symbols" (holes punched through barcode or ink scratched off; doodles with opaque ink pen; bits of debris stuck to barcode, etc. ), it's clearly impossible to reconstruct the "intended" module values by looking at one symbol at a time. However, with proper barcode design, it *is* possible to reconstruct the original message by taking into account values across the entire barcode image. This is called ECC.
If there happens to be tiny super-bright noise particles on a barcode printed on grey paper ... some of the methods for picking a threshold get horribly confused. Others corrupt the entire symbol that they land in (but decode all other symbols fine), while a few methods only allow this to corrupt 1 or 2 bits of that symbol.
With "knock out" damage, we need ECC coding. At a higher level, error-correction bits. Some very sophisticated algorithms use "soft errors" -- they take information about which pixels are more likely to be erroneous (pixels that look least like solid black or solid white) to help recover the "intended" value of the entire symbol. Algorithms that do this for grayscales are even more sophisticated (although I've only seen them for 1D voltages from wireless or wired communications; I haven't seen anyone try to apply this to 2D grayscale barcodes yet).
More sophisticated forms of ECC are even better at correcting errors than simply "copy". data_compression.html#ecc
This leads to "stretched" codes: maximum density is to make the module a long, skinny box, long enough for the pen to make a mark, and wide enough to include all possible tiny variations in pen position (so the small jitter in pen position doesn't end up making a long, skinny mark just inside the edge of the *next* box). Or perhaps make the module *shorter* than this minimum length, and add the restriction that lines must continue for N modules (although spaces can be as short as 1 module) (run-length-limiting).
1D barcodes and mag stripe.
http://www.mcgillismusic.com/demo_tape.htm [FIXME: add this link to #busines_considerations ?] For information on how to obtain a UPC symbol, contact the Uniform Code Council, Inc., 8163 Old Yankee Road, Suite J, Dayton, Ohio 45458, or call (513) 435-3870. This is a non-profit company which gives out UPC codes to everyone in the country. The fee is $300.
Has a good explanation of how to "round off" the measured bar code widths. David Cary's paraphrase:
Software debug loop:
* Comment out all but one scan.
* Step through the software recognition loop.
* Tweak code until it decodes that scan properly.
DAV: Is there any benefit to trying to use knowledge of the element widths of the preceding or following digits?
Symbology, Inc. http://www.symbology.com/ sells lots of bar code products.
stuff I thought particularly relevant:
Keep in mind also that barcode scanners read the X dimensions of some barcode types better than others. For example, the Symbol Cobra Laser Scanner dependably and easily read our Code 39 barcode font printed at 6 points, but only reads the Code 128 barcode font printed at 8 points and above. This is most likely because Code 39 uses just 2 bar widths and Code 128 uses 4.
DAV: Somehow I thought that if the scanner had enough resolution to resolve the "module size", all linear barcodes (1D barcodes) were readable. Is there a good way of expressing this "problem" to help me in my quest for maximum barcode density ?
I'm not surprised that the 2D barcodes are denser than the 1D barcodes, but I was expecting them to be *far* denser. The PDF417 barcode looks like about 6 stacked 1D barcodes. It's about 3/4 the length of the Code 128 linear barcode. I expected it to be about 1/6 the length of the Code 128 linear barcode (maybe 1/3 the size, once you add in the extra guard rails and other EDAC).
[FIXME: since they do some 2D stuff, should I link to them under 2D also ?]
There are pages here on
See also international mailing addresses .
From: jhwhite at delphi.com Newsgroups: sci.electronics Subject: Re: Snail-mail envelope barcode format? Date: Sun, 26 Jun 94 15:20:49 -0500

William Lewis <wiml at netcom.com> writes:
>I'm looking for information or pointers to sources of information on the
>format of the bar codes that are found on US snail mail. (They're
>along the bottom edge and consist of a number of short vertical
>lines of varying height. They encode the full 9-digit ZIP code and
>probably some other stuff as well, judging from their length.)

I worked at a USPS remote bar-coding facility for over 3 years, and here is the information you want using an ASCII-representable format, where "/" represents the long bars, and "." represents the short bars.

Every USPS bar-code starts with a long bar ("/") and ends with a long bar. Each decimal digit of the zip code is represented by 2 long bars and 3 short bars ("."), as is the check digit. Zip codes of 5 digits, 9 digits (ZIP+4), and 11 digits are commonly used. The first 3 digits of a zip are sufficient to determine the state or U.S. territory. The 4th and 5th digits determine the city or region. The +4 digits give the block number and postal carrier route identification for street addresses, but in some cases identify a P.O. box. The 10th and 11th digits identify the residence, which in many cases are just the last 2 digits of the house number.

Zip codes are read from left to right (when the mail is oriented properly for automatic sorting). The five bars of each digit are assigned place values. The long bars of each 5-bar digit are given their place value and added, while the short bars are given a value of 0. The place values of the five bars as read from left to right are 7, 4, 2, 1, and 0. Because each digit must contain exactly 2 long bars and 3 short bars, the digit 0 is a special case: what would add to 11 represents 0. Here are the single digit bar codes:

//... 0 (Zero)
...// 1 (One)
.././ 2 (Two)
..//. 3 (Three)
./../ 4 (Four)
././. 5 (Five)
.//.. 6 (Six)
/.../ 7 (Seven)
/../. 8 (Eight)
/./.. 9 (Nine)

The check digit when summed with the 5, 9, or 11 digits of the zip code gives a multiple of 10. I will use the following real address as an example. It just happens to be the return address (no bar code to confuse the automatic sorter) on a piece of mail I received this week.

CFS UNIT
UNITED STATES POST OFFICE
PO BOX 31769
LOUISVILLE KY 40231-9769

Starting with a long bar and following with the 5-bar representations of each digit, you should get the following before appending the check digit and closing bar:

/ ./../ //... .././ ..//. ...// /./.. /.../ .//.. /./..
  4     0     2     3     1     9     7     6     9

Adding the 9 digits together results in 41. Adding 9 will result in a multiple of 10, or 50. So the check digit is 9 ("/./.."). The bar codes are evenly spaced, and thus the actual bar code of 40231-9769 would be represented as:

/./..///....././..//....///./../..././/.././.././../
  4    0    2    3    1    9    7    6    9    9

There are specifications on where to locate the bar code on the mailpiece, and what sizes the long and short bars should be. I do not remember what those are.

Jeff White
jhwhite at delphi.com
All the specifications can be found in http://pe.usps.gov/cpim/ftp/pubs/Pub25/Pub25.pdf
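The encoding Jeff White describes is easy to sketch in code; this reproduces his 40231-9769 example, check digit and all:

```python
# POSTNET sketch per the post above: "/" = long bar, "." = short bar.
# Each digit is 2 long + 3 short bars with place values 7,4,2,1,0
# (the sum 11 is the special case for 0); a check digit brings the
# digit sum up to a multiple of 10; frame bars at both ends.
DIGITS = {
    0: "//...", 1: "...//", 2: ".././", 3: "..//.", 4: "./../",
    5: "././.", 6: ".//..", 7: "/.../", 8: "/../.", 9: "/./..",
}

def postnet(zipcode):
    """Encode a 5-, 9-, or 11-digit zip code as POSTNET bars."""
    digits = [int(c) for c in zipcode if c.isdigit()]
    check = (10 - sum(digits) % 10) % 10
    bars = "".join(DIGITS[d] for d in digits + [check])
    return "/" + bars + "/"

print(postnet("40231-9769"))
# → /./..///....././..//....///./../..././/.././.././../
```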
United States Postal Service http://www.usps.gov/
Proposal for Hospital Barcoding hbc.txt [FIXME: isn't this elsewhere on the net ?]
From: Bob Greene, 72233,1015 To: Ronald L. Wan, 75740,2230 Topic: Barcode Total Solution? Msg #197963, reply to #197956 Section: General [A] [0] Forum: IBM Applications Date: Wed, Mar 30, 1994, 12:13:22
I believe that we are using a scheme known as 3 of 9. As the Cheshire Cat said: "If you don't know where you're going, then it really doesn't matter which way you go." As long as you are not a POS then you will not have to concern yourself with the UPC format unless you want to. Choose one that suits your needs and standardize on it. I'm still looking for my old documentation to answer another users question. As soon as I find it, I'll plagiarize some text for you on the various formats available.
TMS uses a character field to store the bar code as ASCII text. When this field is printed for labels, a TSR monitors the print stream for the code and interprets it for printing. This way information is easily modifiable.
BTW: We found that our wands would not read the codes reliably unless they were 3 rows high or more. Keep this in mind when selecting labels. Otherwise, you'll end up with a few thousand that are unreadable like we did.
From: Bob Greene, 72233,1015 To: Ronald L. Wan, 75740,2230 Topic: Barcode Total Solution? Msg #198003, reply to #198001 Section: General [A] [0] Forum: IBM Applications Date: Thu, Mar 31, 1994, 11:06:10
Our problem with the height of bar codes was related to the arcing motion that most people have when they move their arm. You may be able to get by with less. It just makes missed scans much less frequent to have the codes larger. In our case, missed scans sometimes equated to no scans because they would not try again to correct it. It led to several extra inventories for the sanity of our numbers.
smart cards are an alternative to "dumb" cards that merely have a magnetic stripe or bar code, although there are many hybrid smart cards that also have magnetic stripe.
Biometric systems often (but not always) rely on some sort of "image" of a human, which is often stored in a Smart Card .
Articles:
The palm is scanned without direct contact by an infrared scanner. This hygienic procedure is of particular interest for use in hospitals or for frequently used devices such as ATMs. At a distance of only a few centimeters, i.e. without direct surface contact, near infrared rays scan the palm ... the veins are under the skin; ... authentication is still possible even if the hands are soiled.
There are pages here on
Companies:
I have heard of the trade journal
BioCard International 7500 Old Oak Blvd Cleveland OH 44130 (440) 243-1800
has information on hand print and related security devices,
Biometric fingerprint scanner http://webusers.anet-stl.com/~wrogers/biometrics/biodigest.html
A far-overrated little sliver of the electromagnetic spectrum, in my opinion. see for the rest of the spectrum.
Um... humans aren't machines, so technically this doesn't belong on my ``machine vision'' web page. Where else can I put this ?
contents:
Some humans have tetrachromate vision.
[FIXME: lots of other HDTV/television stuff scattered around in other files ...]
[FIXME: (Future History)] "David Layne, director of broadcast operations and engineering at KCNC-TV4, the CBS affiliate in Denver. ... For 30 years, CBS television has shot primetime shows on 16:9 (widescreen) high-definition 35mm film, "but until now we had no way to deliver the images to the public," ... ... The other networks will need to upconvert all their SDTV 4:3 content, a critical expense that CBS won't have to pay." -- "Detailing Rocky Mountain Digital TV" by Ken Freed in _TV Technology_, 1997 Nov. 6. "U.S. consumers spent $9.5 billion on television sets in 1996 alone, and more than a third of those were for big-screen home systems (30 inches or more). ... 40 percent of the big-screen TVs were purchased for households with annual income of less than $20,000. ... Layne thinks the FCC's date of 2002 for full HDTV implementation is unrealistic, and he expects market realities will push back the FCC's announced deadline of 2006 for turning off all NTSC broadcasts. "I think NTSC will be with us for another 10 to 15 years, at least." ... ... TV4 and the other Denver stations plan to begin HDTV operations by the fourth quarter of 1999."
"Quadrant Int. in Pennsylvania has a real-time video capture PC card available. Check out http://www.cris.com/~videoguy/" -- chrise at intelligraphics.com (Chris Edgington cedgingt at holli.com)
http://zippy.sonoma.edu/kendrick/webfx/ Enter the URL of some image already on the web, then choose one of many special effects ... pointillism ... ... some create animated GIFs that move -- swirl and wavy.
Pattern Recognition Letters http://www.elsevier.nl/locate/patrec
University of Wisconsin - Madison Computer Vision Group http://www.cs.wisc.edu/computer-vision/
The Oklahoma Imaging Laboratory http://oil.okstate.edu/~caryd/ 405 Engineering South Stillwater OK 74078-5032
Opto Technology sells ultra-high-intensity LEDs, including (prices from EEPN 1998 July p. 17):
National Instruments
http://www.natinst.com/
sells the
$2495 IMAQ Vision ActiveX Controls
library of image processing and machine vision functions.
(1998-07)
Intel's PC Imaging Web site http://intel.com/imaging/
Toshiba Imaging Systems http://www.toshiba.com/taisisd/
Video/Imaging Image Processing Chips http://www.nasatech.com/Advertisers/hvideo.html
the Speech, Vision and Robotics Group at the Cambridge University Engineering Department http://svr-www.eng.cam.ac.uk/reports/index-full.html .
the Computer Vision Homepage http://www.cs.cmu.edu/afs/cs/project/cil/www/vision.html has pointers to many test images: color stereo pairs, fingerprints, human faces, real and synthetic motion analysis sequences, ...
Yale University Artificial Intelligence Group http://www.cs.yale.edu/HTML/YALE/CS/AI/VisionRobotics/YaleAI.html has some interesting visual tracking projects.
Dixon, p. 210 "Two conditions must be satisfied for any filter to be "matched": 1. Its impulse response must be a time-reversed replica of the signal to which it is matched. 2. Any particular unit of information must be independent of that preceding and succeeding it. ... integrate-and-dump filters meet these conditions." DAC: this is the best possible matching for a *linear* filter; but are there better possibilities for nonlinear circuits ? "if the input is a square wave ... For any other signal shape, however (assuming the peak voltage is the same), the charge at the end of delta_t is lower" [DAV: so, I guess Dixon assumes the AGC functions on *peak* rather than *average*] DAC: cool concept -- a high-Q parallel-tuned circuit for RF integrate-and-dump at IF, rather than the standard RC integrator at baseband. "Consider the RC integrator, with both its switches open. An impulse at its input charges the capacitor to a level (I[peak]/dt)/C, which remains until dumped. This is exactly the same as the desired flat-topped square pulse. For the integrator that uses a high-Q tuned circuit, the input impulse causes the circuit to ring at the tuned frequency -- again forming the duplicate of a desired input signal."
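Dixon's two conditions can be illustrated in a few lines (my own sketch, not Dixon's code): a filter matched to a known pulse is just correlation with a time-reversed copy of that pulse, and for a flat-topped square pulse this collapses into an integrate-and-dump.

```python
def matched_filter_output(received, template):
    """Correlate the received samples with the known pulse template --
    equivalent to convolving with the time-reversed template."""
    n = len(template)
    return [sum(received[i + j] * template[j] for j in range(n))
            for i in range(len(received) - n + 1)]

def integrate_and_dump(received, n):
    """For a square pulse, the matched filter degenerates to a running sum
    over the bit interval (integrate), sampled then cleared (dump)."""
    return [sum(received[i:i + n]) for i in range(len(received) - n + 1)]

square = [1.0] * 4                               # flat-topped pulse, 4 samples
rx = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0]   # noiseless pulse at offset 2

mf = matched_filter_output(rx, square)
iad = integrate_and_dump(rx, 4)
assert mf == iad          # identical outputs for a square pulse
# the peak of mf marks the offset where the pulse aligns with the template
```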
ring laser gyroscopes
Marshall Electronics, Inc. Optical Systems Division http://www.mars-cam.com/ http://www.mars-cam.com/cmos-c.html $29 V-X009 single-chip NTSC(or PAL) color camera, 352x240 color pixels, better than 40 dB signal/noise ratio (price from p. 71 1997 Oct _Advanced Imaging_)
Wavelet Resources http://www.mathsoft.com/wavelets.html
What is morphometrics http://www.infoseek.com/Titles?qt=morphometrics&col=WW&sv=IS&lk=noframes ?
> **** New Machine Slashes Time, Cost To Make Unique Lenses - EurekAlert > http://www.scienceguide.com/News/News_Articles/5798Article_8.html > Engineers from the University of Rochester and seven corporate partners, > working with the U.S. Department of Defense, have developed the first > system to automate the manufacture of unusually shaped lenses known as > aspheres. Its developers say the machine is capable of producing these > aspheric lenses in minutes, not days, at a fraction of the current cost; > the savings should trickle down to products ranging from 35-millimeter > cameras and medical endoscopes to military-grade night-vision goggles.
search for TI CCD sensor chips http://www-search.ti.com/search97cgi/s97r_cgi? Action=FilterSearch& Filter=TI.filter.hts& SCOPE=Semiconductors& KEYWDS=CCD [this search tool has a broken parser that doesn't recognize ``;'', see html.html#semicolon ]
the Robot FAQ has a "Imaging for Robotics" section http://www.frc.ri.cmu.edu/robotics-faq/10.html#10.3 has lots of information on frame grabbers and other robotic imaging applications.
Johns Hopkins University School of Medicine Medical Imaging Laboratory http://www.mri.jhu.edu/ seems to be interested in "Computer Integrated Surgery" http://www.cs.jhu.edu/~qmc-ta/reserves.html .
Image Processing and Artificial Intelligence Research http://ece.clemson.edu/iaal/ at the Department of Electrical and Computer Engineering Clemson University
Image Processing within Hexagonal Grids http://www.hexnet.org/~hexnet/randd/hex1.htm
http://www.research.microsoft.com/vision/ has a freely available "Vision Software Development Kit" and describes some interesting research projects, including "Vision-Based User Interfaces allow computers to recognize people and interpret what they are doing, using fast algorithms for real-time detection and recognition of people and their gestures."
http://www.lirmm.fr/~dandrea/index_us.html ???
Cybernetic Vision in the WWW http://www.ifqsc.sc.usp.br/ifsc/ffi/grupos/instrum/visao/group/members/pinda/pinda1.htm#CYBW ???
-- Tristan Savatier (President, MpegTV LLC) MpegTV: http://www.mpegtv.com MPEG.ORG: http://www.mpeg.org
http://www.idsystems.com/ "automatic identification and data capture"
Robotic Vision Systems, Inc. http://www.acuityimaging.com/ machine vision and 2D symbologies
http://www.visionshape.com/bartop.html ???
Latest Yahoo listing of Computer Vision web sites http://dir.yahoo.com/Science/Computer_Science/Computer_Vision/
Why We See Three-Dimensional Objects: Another Approach http://www.ccs.neu.edu/home/feneric/msdsm.html "the human mind's tendency to simplify inputs and find patterns even where there are none" "the human mind has an in-built compression system of sorts; it will always store the data in a minimal fashion." "the program was converted to compile and build using standard X-Windows and GNU's gcc and could then be built and run on virtually any UNIX box."
Segmentation by a Scalespace Approach http://axon.physik.uni-bremen.de/~rdh/research/segmentation/ includes cool online demo: "Try it out: Online-Segmentation of your data!"
Texture Segmentation: An Introductory Primer http://robotics.stanford.edu/~ruzon/tex_seg/ by Mark A. Ruzon mentions "Steerable pyramids" "Steerable pyramids ... are called steerable because, by using only a small number of filters corresponding to a few directions, the output of a filter in any direction can easily be computed as a weighted sum of the filters that have already been calculated."
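The "steerable" property Ruzon describes follows from the linearity of convolution; here is a minimal sketch (my own illustration, using a first-derivative basis): the response to a filter oriented at angle theta is just a weighted sum of the precomputed x- and y-filter responses, so no new convolution is needed per direction.

```python
import math

def filter2_valid(img, k):
    """2-D correlation ('valid' region); linearity is all this demo needs."""
    kh, kw = len(k), len(k[0])
    return [[sum(img[r + i][c + j] * k[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(len(img[0]) - kw + 1)]
            for r in range(len(img) - kh + 1)]

gx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # Sobel-style x derivative
gy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # Sobel-style y derivative

img = [[(r * 3 + c * 7) % 11 for c in range(6)] for r in range(6)]
theta = math.radians(30)

# (a) steer the *responses*: weighted sum of two precomputed outputs
rx, ry = filter2_valid(img, gx), filter2_valid(img, gy)
steered = [[math.cos(theta) * a + math.sin(theta) * b
            for a, b in zip(ra, rb)] for ra, rb in zip(rx, ry)]

# (b) steer the *kernel* first, then filter once
gk = [[math.cos(theta) * a + math.sin(theta) * b
       for a, b in zip(ka, kb)] for ka, kb in zip(gx, gy)]
direct = filter2_valid(img, gk)

# linearity of convolution makes the two routes agree
assert all(abs(a - b) < 1e-9
           for ra, rb in zip(steered, direct) for a, b in zip(ra, rb))
```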
Advanced Imaging Solutions http://www.pixeldata.com/ segmentation, medical imaging (ultrasound), etc.
http://www.owlnet.rice.edu/~elec539/Projects97/ The WDEKnow project has Matlab code for image segmentation.
"low cost frame grabber" www.ellips.nl "machine vision hardware and software" http://www.datx.com ??? http://www.mutech.com ??? http://www.imaging.com/ ??? http://www.epixinc.com/epix
Learning and Vision Group Robotics and Machine Vision http://www.ai.mit.edu/projects/
Digital Cameras From: Jeff Burton <jeff at plugin.com> Newsgroups: rec.photo.digital Subject: Re: (semi-faq) for all digital cameras Date: 10 Oct 1995 05:29:49 GMT Organization: Plug-In Systems In article <45b76t$m8u@news.internetmci.com> Kirk Membry, kmembry at internetMCI.COM writes: >I'm going to attempt to compile a list of all digital cameras >(including models) and put them on a web page I'm creating. > >I've just thought of this now, so I don't have any info (yet) > >Please help me out on this. you might take a look at the digital camera guide at http://rainbow.rmii.com/~jburton/PlugInSystems/DigitalCameraGuide.hmtl ( maybe all you really need is just a link :) ---------------------------------------------------------------------- Jeff Burton + Photographic Solutions for + Plug-In Systems jeff at plugin.com + the Digital Age + Boulder, CO USA
The R Package: http://alize.ere.umontreal.ca/~casgrain/en/labo/R/v3/index.html a collection of small programs for Multidimensional analysis, spatial analysis.
ImagingConcepts http://silo.riv.csu.edu.au/~conrad/imc/ Imaging Concepts is an image processing package designed for use by Medical Imaging students, Remote Sensing Image Analysis students and anyone else that needs to manipulate and analyze images. Unsupervised Migrating Mean Classification public domain software.
1280x1024 square pixel CCD camera http://www.for-a.co.jp/
http://www.kineticimaging.com/ 1000fps 128x128 image acquisition
Visionary Solutions Inc. http://www.visionary-imaging.com/ "designs and builds customized state of the art digital video cameras and related video components"
MDDSP - The Multidimensional Signal Processing Research Group http://www-mddsp.enel.ucalgary.ca/
Inventions Ltd. http://www.inventions.u-net.com/ Imaging Software claims to have an "optimal palette reduction" algorithm better than any other way of reducing a true-color image to a palette image. (The 16-color demo is very impressive !) (I'm guessing that they are using the K-means "migrating means" clustering algorithm).
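If the guess above is right, the core of such a palette reducer looks something like this k-means ("migrating means") sketch -- purely my own toy illustration, not their algorithm: cluster the image's RGB pixels into k groups and use each cluster's mean color as a palette entry.

```python
def kmeans_palette(pixels, k, iterations=10):
    """Toy k-means color quantizer: pixels is a list of (r, g, b) tuples."""
    means = [list(p) for p in pixels[:k]]     # crude deterministic seeding
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in pixels:                      # assign each pixel to nearest mean
            d = [sum((a - b) ** 2 for a, b in zip(p, m)) for m in means]
            clusters[d.index(min(d))].append(p)
        for i, c in enumerate(clusters):      # "migrate" each mean
            if c:
                means[i] = [sum(ch) / len(c) for ch in zip(*c)]
    return means

# two well-separated color groups converge to the two group averages
pixels = [(250, 10, 10), (245, 5, 12), (10, 10, 240), (8, 12, 250)]
palette = kmeans_palette(pixels, 2)   # one reddish mean, one bluish mean
```

A real quantizer would add smarter seeding and dithering, but the migrating-means loop is the heart of it.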
Vismod Tech Reports and Publications http://dbecker.www.media.mit.edu/cgi-bin/tr_pagemaker lots of papers related to machine vision, including "Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video" ftp://whitechapel.media.mit.edu/pub/tech-reports/TR-466-ABSTRACT.html
The latest threads on the sci.image.processing newsgroup http://www.dejanews.com/bg.xp?level=sci.image
A "Kohonen neural network", vector quantization, and k-means clustering are all different names for the same thing, according to the Neural Network FAQ ftp://ftp.sas.com/pub/neural/FAQ2.html#A_unsup .
"Media Cybernetics: The Imaging Experts" http://www.mediacybernetics.com/ develops image analysis software, sells "Image-Pro Plus(TM)".
Rockwell Semiconductor, personal imaging division, claims its CMOS imaging sensors (Active Pixel Sensor and Passive Pixel Sensor configurations) are less expensive to manufacture than CCDs, and can deliver comparable image quality (352x288 ... 640x480 ... and 960x720 sensors already available) while using much less power (1998 March)
The Robotics Institute at Carnegie Mellon University has done some very interesting work and published many good papers. You might check out their web site for [information on machine CCD vision]
WINNOV http://www.winnov.com/ internet videophone cameras
http://pc45.informatik.unibw-muenchen.de/ some machine vision information, and C source code. (mostly in German).
The "SUSAN" principle http://www.fmrib.ox.ac.uk/~steve/susan/ algorithms to perform edge detection, corner detection and structure-preserving image noise reduction. This site includes papers, a patent, and C source code (reads and writes PGM format images).
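The SUSAN principle is simple enough to sketch (my own toy version of the idea, not their C code): for each pixel (the "nucleus"), count how many pixels in a small surrounding mask have nearly the same brightness -- the USAN area. Flat regions give a large USAN, edges roughly half the mask, and corners even less, so a *small* USAN flags a feature.

```python
def usan_area(img, r, c, t=20, radius=1):
    """Count mask pixels whose brightness is within t of the nucleus."""
    area = 0
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(img) and 0 <= cc < len(img[0]):
                if abs(img[rr][cc] - img[r][c]) <= t:
                    area += 1
    return area

# a vertical step edge: left half dark, right half bright
img = [[0, 0, 0, 255, 255, 255] for _ in range(5)]

flat = usan_area(img, 2, 1)   # interior of the dark region: full 3x3 mask
edge = usan_area(img, 2, 2)   # nucleus just left of the step
assert flat == 9
assert edge == 6              # only the dark half of the mask assimilates
```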
http://www.cs.cmu.edu/~quake/triangle.voronoi.html Voronoï diagrams, Delaunay triangulations, 2D high-quality mesh generation, 3D mesh generation.
Wavelet Image Compression http://www.wired.com/wired/archive/3.05/geek.html quick news article on wavelets.
"Plotting and Scheming with Wavelets" http://www.spelman.edu/~colm/papers.html
_The World According to Wavelets: 2nd ed._ book by Barbara Burke Hubbard "A well written introductory book that explains the mathematics of wavelets" -- recc. Chuck Carlson <sanna at wco.com> To: Computing as Compression <casc at sanna.com> Date: Thu, 20 Aug 1998
http://www.a1.nl/phomepag/markerink/mainpage.htm lots of infrared photography information; the Infrared-Photography Mailinglist
Evolutionary Computation Research: A GA Toolbox for use with MATLAB http://www.shef.ac.uk/~gaipp/gatbx.html I hope to make a few image processing tools available similar to the way this page makes some GA tools available.
http://www.intellicast.com/weather/pdx/sat/ has satellite images centered on Portland, OR; http://www.intellicast.com/weather/sfo/satloop/ has images centered on California; etc.
Dr. Jim Bezdek http://www.coginst.uwf.edu/~jbezdek/ "Current research topics: multiple prototype classifier designs, mixed fuzzy-possibilistic c-means clustering models, rule extraction with clustering, generalized nearest prototype classifier networks, fusion of heterogeneous fuzzy data, target recognition with LADAR data, mammographic image analysis, topics in cluster validity, robotic control models, fuzzy learning vector quantization, clustering with genetic algorithms, and acceleration of image processing algorithms."
fuzzy image processing (?) http://www.dbai.tuwien.ac.at/marchives/fuzzy-mail97/0626.html
The Robotics and Image Analysis (RIA) Laboratory at the University of West Florida http://www.cs.uwf.edu/~msutton/lab/
Wavelet Resources http://www.mathsoft.com/wavelets.html includes "Introductions to Wavelets", "General Theory", "Frame Decompositions", "Wavelets and General Signal Processing", "Wavelets and Image Processing", "Wavelets and Computer Graphics", "The FBI Wavelet Fingerprint Compression Standard", "Wavelets and Econometrics", "Wavelets and Fractals", "Hardware and Software Implementation of Wavelet Transforms", etc.
SkyTel http://www.skytel.com/ seem to have a lot of hack value in their products:
PennWell Media http://www.broadband-guide.com/unicompub.html has some related publications: _Wireless Integration_ _Laser Focus World_
Liquid Crystal Displays http://dvorak.mse.vt.edu/faculty/hendricks/mse4206/projects97/group06/intro.htm#home very good explanation. Lots of good graphics.
Optical Switch Corporation http://www.opticalswitch.com/ ???
"Bell Labs scientists create "T-Rays" to see inside objects" http://www.lucent.com/press/0595/950525.bla.html mirror: http://www.att.com/press/0595/950525.bla.html 1998 Fall: David Cary took "Ultrafast Optoelectronics" and learned a little about terahertz radiation from Dr. Grischkowsky. Here's some more info. T-Ray Images http://www.att.com/press/0595/950525.trays.html . [FIXME: email link to Dr. Gr.:] "Lawbreakers?" article 2000-08-17 http://www.economist.com/displayStory.cfm?Story_ID=318546 "John Singleton and his team at the Clarendon Laboratory are ... to build a tabletop device which is expected to ... do two ... emit radiation across a vast range of the electromagnetic spectrum; and the intensity of some of this radiation will diminish with distance much more slowly than is usually the case. ... Sceptics say the so-called "polarisation synchrotron" cannot possibly work ... The theory behind the device is the brainchild of Houshang Ardavan, an astrophysicist at Cambridge University who dreamed it up in the early 1990s to explain the behaviour of pulsars. These are dense, spinning neutron stars ... Current artificial radiation sources suffer from a problem called the "terahertz gap". There are not many devices which emit radiation at several million million cycles per second, and those that do exist are unwieldy. A tabletop pulsar tuned to emit at these frequencies would thus be useful for all sorts of things that rely on terahertz radiation, from medical imaging to satellite-based detection of atmospheric pollutants. "
Don's light, lamp and strobe site http://www.misty.com/~don/ Blue LEDs, incandescent and fluorescent lighting theory, lasers, Xenon flash and strobe, Visible line spectra of various elements and a few common light sources.
Hyperception, Inc. http://www.hyperception.com/ "engineering software, hardware, and system solutions for DSP, Image Processing, and Virtual Instrumentation" some free demos available for download.
Data Translation http://www.datx.com/ "PC based data acquisition, imaging, and machine vision systems" "PCI frame grabbers" "PCI data acquisition hardware" "ISA data acquisition & DSP hardware" "PCMCIA data acquisition" "HP VEE visual programming language" (some of the hardware and software works with the Macintosh) nice app notes (Data Acquisition Tutorials)(Imaging/Machine Vision Tutorials) http://www.datx.com/news/appnotes/appnotes.html
Open Data Acquisition Association (ODAA). http://www.opendaq.org/ "A non-profit organization whose objective is to provide users of data acquisition systems with a universal, open standard that allows interoperability between PC-based data acquisition hardware and software solutions from multiple vendors."
ATS http://www.firlan.com/ sells FiRLAN "Fast Infrared LAN", a wireless network.
Weather and Satellite Images http://www.nexor.com/places/weather.html ???
Fire Finder, Infrared Heat Detectors http://baytechinc.com/optic.htm
Research in Image Processing and Computer Vision at UC Davis http://www.ece.ucdavis.edu/research/image.html
$ 2 000 PCI bus-mastering frame grabber Cognex Corp. http://www.cognex.com/
American Bright Optoelectronics http://www.americanbrightled.com/
Robots with a Vision http://www.circuitcellar.com/articles/misc/sergent-92.pdf "This robot, equipped with a color-conscious vision system, can chase down your wayward tennis balls." ???
Bill and Jacks Robot and Video Page http://www.angelfire.com/pa/videoandrobots/index.html robot plans, surplus $35 color CCD video cameras for sale, a few other cheap surplus electronics. ... (cool !)
Using a Kodak DC-120 for Astrophotography http://www.nezumi.demon.co.uk/astro/dc120/dc120.htm "the Kodak DC-120 is not ideal as an astronomical camera, but if you happen to have one there is a possibility to take simple shots of constellation star fields and telescopic shots of the moon and planets."
Exploratory Computer Vision and Intelligent Robotics (at IBM) http://www.research.ibm.com/mathsci/cb/cb_visrob.htm
http://ourworld.compuserve.com/homepages/hpwidmer/ spectrum analyzer uses the Fast Fourier Transform (FFT); can do stereo FFT spectrum using PC sound card or Wave file input; also has a stereo wave output generator.
Toshiba http://www.toshiba.com/ Imaging Systems Division http://www.toshiba.com/taisisd/
"There are certain kinds of patterns the eye doesn't see," explains Chai Wah Wu (working with Charles Tresser) http://www.research.ibm.com/resources/magazine/1997/issue_4/print497.html
used in tomography http://mathworld.wolfram.com/Tomography.html
Date: Mon, 03 May 1999 08:00:05 -0500 To: ecenfaculty at master.ceat.okstate.edu, hzhongx at fajita.ecen.okstate.edu, bosworj at fajita.ecen.okstate.edu, d.cary at ieee.org, caryd at tacoma.ceatlabs.okstate.edu, markkar at fesi.com, raghuna at fajita.ecen.okstate.edu, dipti at fajita.ecen.okstate.edu, ajunaid at fajita.ecen.okstate.edu From: "Scott T. Acton" <sacton at master.ceat.okstate.edu> Mime-Version: 1.0 Hi Click on the address below to see a mosaic of OSU, produced by Hongyu Chi. This was part of his term project -- he utilized simulated annealing to optimize the mosaicking parameters. http://www.ionet.net/~chongyu_osu/panorama/index.html Scott
figure; colormap(gray(32)); image; x = get(findobj(gca,'Type','image'),'CData'); image(x); axis off image;
But who is he? :-) ... You might be interested in learning that there's more to this image "than meets the eye." (2 dogs, 3 guys, a girl, ...) > > 6 - Number 17 > > 11 - Pig with number 17 (the number is the same as in 6) > > 16 - Pig with number 17 (same as 11) Try displaying 6, 11, and 16 as the red, green, and blue planes of an RGB image.
Date: Wed, 23 Jun 1999 08:48:42 -0400 From: "Paul J. Pacheco" <pjp at mathworks.com> Subject: Re: Q: How to reference matrix created by 'load'? Organization: The MathWorks, Inc. To: d_cary at my-deja.com Alternatively, you can use the code below which allows you to use any name for the variable stored in the MAT-file.
FNAME = input('Enter name of file to be loaded: ','s');
struc = load(FNAME);
f = fieldnames(struc);
var = ['struc.',f{1}];
size(eval(var))
HTH -Paul ----- ... Paul Pacheco mailto:pjp@mathworks.com DSP Development tel: (508) 647-7520 The MathWorks, Inc. http://www.mathworks.com
Date: Fri, 05 Feb 1999 08:19:15 -0200 From: Andre Fernando da Silva <andrefer at decom.fee.unicamp.br> Subject: Re: sourcecode in Mathematical Morphology operations To: d_cary at my-dejanews.com, ee6fp at bath.ac.uk d_cary at my-dejanews.com wrote: > I've put the source to the low-level morphology primitives I use most often > ("open", "close", "align") > online at > http://OIL.okstate.edu/~caryd/program/ > . > > Please tell me if you find any other sources of morphology source code. Hi, I've found a pretty good program/library about mm in C. A description can be viewed at http://hpux.ced.tudelft.nl/hppd/hpux/Maths/Misc/morph-4.2/man.html and the ftp site is at ftp://image.vuse.vanderbilt.edu/pub/ I've just modified it to compile under Borland C 5.02, but not tested it enough. André.
From: "Peter Zechmeister" <zechm002 at gold.tc.umn.edu> Subject: Re: Linear CCD sensor devices Date: 12 Sep 1999 00:00:00 GMT Organization: University of Minnesota, Twin Cities Campus Newsgroups: sci.electronics.design I don't know if these would work for you, but I am cleaning out some old inventory, I have about 300 of the following part, all new, never used, gold pins, stored in environmental controlled storage. NEC uPD791D 4096-Bit CCD Image Sensor 24-pin ceramic DIP Date code around 1989 First 5, $10 each, after first five, $5 each additional. Add actual S/H/I. Data Sheets Available, e-mail me, will send one with every order . These were used in a optical scanner used to inspect PCB layers during manufacturing to locate open/shorts in the metalization. The scanner used 12 of these in a row to scan a 24 inch swath at 2000 pixels per inch, at a speed better than one linear inch per second. It worked VERY good. Have other ICs used in product too, email me if interested. -- Peter Zechmeister - info at linkautomation.com
``HyPerspectives To Find and Identify Camouflaged Objects For Air Force'' article by Bozeman - June 10, 2002 http://spacedaily.com/news/radar-02c.html ``"hyperspectral" data and radar images of the earth''
??? [FIXME: email this link to Joe Bosworth ?]
how Magnetic Resonance Works http://www4.ncsu.edu/eos/users/w/wes/homepage/MRI/mrilesso.html . Has a few Thermal IR photos.
MCM20014 : 640 x 480; 0 to 30 frames per second; 48x programmable variable gain; Monochrome Sensitivity: 3.0 V/Lux-sec
MCM20027 : 1280 x 1024; 0 to 10 frames per second; 20x programmable variable gain; Monochrome Sensitivity: 1.8 V/Lux-sec
both: on-chip 10-bit ADC (giving 10 bit digital output); Bayer-RGB color filter array; Digitally programmable via I2C interface;
Motorola suggests using the ``MPC823 : The PowerPC Mobile Computing Microprocessor'' http://e-www.motorola.com/webapp/sps/prod_cat/prod_summary.jsp?code=MPC823 as a ``image co-processor'' with these chips.
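Sensors like these output one color sample per pixel through a Bayer filter mosaic, so the host processor must demosaic. A minimal nearest-neighbor sketch (my own illustration, assuming an RGGB tile layout; the actual parts' layout and any Motorola reference code may differ):

```python
def bayer_demosaic_nn(raw):
    """raw: 2D list of sensor values under an assumed RGGB Bayer mosaic.
    Returns an RGB image by copying the nearest sample of each color
    within every 2x2 tile -- crude, but it shows the bookkeeping."""
    h, w = len(raw), len(raw[0])
    out = [[None] * w for _ in range(h)]
    for r in range(0, h, 2):
        for c in range(0, w, 2):
            red = raw[r][c]            # R at even row, even col
            g1 = raw[r][c + 1]         # G at even row, odd col
            g2 = raw[r + 1][c]         # G at odd row, even col
            blue = raw[r + 1][c + 1]   # B at odd row, odd col
            green = (g1 + g2) // 2     # average the two green samples
            for dr in (0, 1):
                for dc in (0, 1):
                    out[r + dr][c + dc] = (red, green, blue)
    return out

raw = [[10, 200, 12, 210],
       [190, 80, 180, 90],
       [11, 210, 13, 220],
       [200, 85, 190, 95]]
rgb = bayer_demosaic_nn(raw)
# each 2x2 tile collapses to one (R, G, B) triple, e.g. the top-left tile
# gives (10, (200 + 190) // 2, 80) = (10, 195, 80)
```

Real demosaicing interpolates rather than replicates, but the tile layout above is the essential part.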
``We do have 50-plus open cases where we have had signals detected at ground zero since the attack,'' said Karl Rauscher, who is heading up a coalition of wireless companies helping look for survivors. ...
The ad-hoc ``wireless emergency response team,'' composed of technicians from some of the major telecommunications companies, has begun searching for activity on about 2,000 wireless phone numbers of people believed trapped in the wreckage.
As wreckage is cleared away, the team hopes its ``radio frequency sniffers'' will be able to pick up signals blocked by the tons of concrete and steel.
Wireless numbers for people missing after Tuesday's attack can be provided to BellSouth operators at 877 348-8579. The numbers were being passed along to rescue teams in New York.
[FIXME: link from molecule#microscopes ?] The region of the infrared spectrum which is of greatest interest to organic chemists is the wavelength range 2.5 to ~15 micrometers (µm).
...
Molecular asymmetry is a requirement for excitation by infrared radiation and fully symmetric molecules do not display absorbances in this region unless asymmetric stretching or bending transitions are possible.
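Organic-chemistry IR data is usually quoted in wavenumbers (cm^-1) rather than wavelength; the conversion from the 2.5 to ~15 micrometer range above is just wavenumber = 10^4 / wavelength_in_µm:

```python
def um_to_wavenumber(wavelength_um):
    """Convert a wavelength in micrometers to a wavenumber in cm^-1."""
    return 1e4 / wavelength_um

assert um_to_wavenumber(2.5) == 4000.0      # short-wavelength end: 4000 cm^-1
assert round(um_to_wavenumber(15)) == 667   # long-wavelength end: ~667 cm^-1
```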
``this web site provides a Windows driver as well as a C/C++ software library at no charge. ... for IEEE-1394 digital cameras, which are very different from DV (digital video) cameras. Although both types of cameras communicate over the fast IEEE-1394 serial link, ... DV cameras compress image data in order to record longer movies onto a DV tape. Unfortunately, the image compression is lossy. Although the image compression is optimized for human vision, the image compression artifacts can pose serious problems for machine vision applications. In contrast, IEEE-1394 digital cameras do not compress the image data and provide the raw image data. In addition, IEEE-1394 digital cameras allow you to control most of their parameters. ''
... see serialportdocs.html#ieee1394
``Canon EOS D30 and Fujifilm FinePix S1 Pro: CMOS vs. CCD'' article by Les Freed 2001-06-09 http://www.extremetech.com/article/0,3396,s%253D1009%2526a%253D1010,00.asp shows ``The standard ISO digital camera resolution chart'' as photographed by several cameras
describes exactly how to build 2 antennas for about $13, designed for 2.41 GHz ( 802.11b ). I'm increasingly obsessed with pushing more bits further and faster for less cost (I believe the unofficial goal of our community wireless project is to provide infinite bandwidth everywhere for free. Of course, there are problems with approaching infinity, but it's still fun to try!) ...
Over a clear line of sight, with short antenna cable runs, a 12db to 12db can-to-can shot should be able to carry an 11Mbps link well over ten miles.
http://www.esa.int/export/esaSA/ESAI8MZ84UC_index_0.html On 30 November, the first-ever transmission of an image by laser link from one satellite to another took place.
... 50 megabits per second. ...
ESA's Artemis spacecraft ... at an altitude of 31 000 km, and the CNES SPOT 4 satellite orbiting at 832 km.
A CCD That Can Stand Up To Cosmic Rays article (press release ?) from Berkeley, 2002-08-20
-- http://www.spacedaily.com/news/ccd-02a.html ... SNAP, the proposed SuperNova/Acceleration Probe, ... its billion-pixel astronomical camera, the GigaCAM. Built from an array of revolutionary Berkeley Lab CCDs developed by Stephen Holland and his colleagues in the Lab's Engineering and Physics Divisions, the GigaCAM will be the largest and most sensitive astronomical CCD imager ever constructed.
Standard astronomical CCDs are fragile affairs, and their ability to obtain high-quality images degrades quickly in the hostile radiation environment of space -- one reason why astronauts have already replaced all of the Hubble Space Telescope's original imaging instruments.
...
At 300 microns (millionths of a meter) thick, the Berkeley Lab "high-resistivity, p-channel" CCDs are much more rugged than conventional astronomical CCDs measuring only a few tens of microns thick. ...
...
... Computer processing and other electronic tricks can compensate for cosmetic flaws like a few damaged ("hot") pixels, endemic to all CCDs, but dark current and charge transfer efficiency pose more serious challenges.
Dark current is electronic noise caused by thermal motion of the atoms that make up the chip; ... The Berkeley Lab CCD is much thicker than ordinary astronomical chips so there is more material in which dark current can be generated, but its high purity, negative "doping," and low operating temperature work to suppress dark current. ...
Typically the most serious radiation damage to CCDs is a steady reduction in charge transfer efficiency. ...
...
... Available studies indicate that the charge transfer efficiency of conventional CCDs falls off rapidly with increased radiation, while the Berkeley Lab CCDs are little affected even at very high doses.
... the Berkeley Lab CCD descends from a long line of detectors designed to withstand radiation from colliding beams of particles in giant research accelerators -- "much more hostile than outer space," Kolbe remarks.
"The high-resistivity p-channel CCDs exhibit extremely low dark current at the operating temperature," ... "... Their potential lifetime in space is measured in decades, not years."
...
-- http://denbeste.nu/cd_log_entries/2002/10/GSM3G.shtml the three classic stages of Not Invented Here syndrome:
- It's impossible.
- It's infeasible.
- Actually, we thought of it first.
...
... At the RF layer, CDMA was obviously drastically superior to any kind of TDMA. For one thing, in any cellular system which had three or more cells, CDMA could carry far more traffic within a given allocation of spectrum than any form of TDMA. (Depending on the physical circumstances, it's usually three times as much but it can be as much as five times.) For another, CDMA was designed from the very beginning to dynamically allocate spectrum.
...
In CDMA, the amount of bandwidth that a given phone uses changes 50 times per second, and can vary over a scale of 8:1. When I'm silent, I'm only use 1/8th of the peak bandwidth I use when I'm talking. (But I don't actually send full rate most of the time even when I'm speaking.) That's very useful for voice but it's essential for data which tends to be extremely bursty, ... So when higher data rates were desired, it was possible to augment the cell and create new cell phones which could transmit 56 kilobits per second using the same frequency as existing handsets.
[FIXME: read some more stuff here. Perhaps some of it could be applied to my idea of fractal segmentation of an image ...] Research (in the Signal Processing Division) is concerned with novel tools, such as neural networks, fractals, wavelet transformations, chaos theory, and higher order statistics, to address problems in the fields of defence electronics, seismic data analysis, biomedical engineering and video and image processing.
http://citeseer.nj.nec.com/constantinides92novel.html [ultrasonic ? 1d_design ? variable star / radio telescope ?] The detection of sinusoids in noise is a recurrent problem in signal processing. When no a priori knowledge of the possibly time-varying sinusoidal parameters is available, an adaptive solution is preferred. ... It has found many application areas which include sonar, biomedical and speech ....
I suspect this is a *different* J. Chambers: John Chambers http://star.arm.ac.uk/~jec/ has lots of astronomy information [FIXME: astro ?]
[words]
with only 50 to 70 milliwatts of power, Time Domain's experimental PulsON chipset-powered systems have reached 40Mbps at ranges of up to 200 feet.
RealFaces ( http://www.realuser.com/ ) is actually surprisingly cool. ... the advantage that you *can't* write them down ... ... using faces as passwords ... 5 rounds of selecting 1 out of 9 faces ...
As a tutorial in machine vision, I decided that I would build a robot that will push a red playground ball down a hallway, and hopefully into the professor's office. Points to Improv - Image Processing for Robot Vision http://www.ee.uwa.edu.au/~braunl/improv/
Improv is a tool for basic real-time image processing with low resolution, e.g. suitable for mobile robots. It has been developed for PCs with the Linux operating system. Improv works with a number of inexpensive low-resolution digital cameras (no framegrabber required). Available for free download.
http://www.eisystems.be/astronomy/quickcam_uk.html This page is the place to be for modifying a b&w webcam into a cooled astro-cam
a Connectix B&W quickcam (grayscale) with a Texas Instruments TC255P CCD chip
http://www.closingthegap.com/ [braille ?]
Astrophotography with a QuickCam by Patrick Vanouplines http://www.vub.ac.be/STER/www.astro/AstroCCD/AstroQC.htm
http://baby.indstate.edu/CU-SeeMe/devl_archives/nov_94/0032.html mirror http://baby.indstate.edu/CU-SeeMe/devl_archives/oct_94/0386.html apparently Thom Hogan was involved in the original Mac QuickCam development effort
Astroimaging: How To Modify A QuickCam http://mywebpages.comcast.net/observatory/howto.htm [FIXME: !!!!!!!!!!!!!!!!!!!!!!!!!!! ]
discusses how CG (Computational Geometry) can be applied to
(medical imaging, microscopy, geology, and aerospace manufacturing are all heavy users of shape reconstruction) ... the idea of geometric hashing ...
... The affine hashing method [87] uses a redundant representation of a set of points in order to locate that point set under an affine transformation, in the presence of extraneous data points. ...
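The affine-invariance trick behind that hashing method can be sketched in a few lines (my own illustration of the idea, not the paper's code): express every point in the coordinates of an ordered basis triple (p0, p1, p2). Those (a, b) coordinates are unchanged by any nondegenerate affine map, so they can serve as hash keys that survive rotation, scaling, shear, and translation.

```python
def affine_coords(q, p0, p1, p2):
    """Solve q - p0 = a*(p1 - p0) + b*(p2 - p0) for (a, b)."""
    ux, uy = p1[0] - p0[0], p1[1] - p0[1]
    vx, vy = p2[0] - p0[0], p2[1] - p0[1]
    wx, wy = q[0] - p0[0], q[1] - p0[1]
    det = ux * vy - uy * vx            # nonzero for a valid basis triple
    a = (wx * vy - wy * vx) / det
    b = (ux * wy - uy * wx) / det
    return (a, b)

def affine_map(p, m, t):
    """Apply x -> M x + t, with M = [[m0, m1], [m2, m3]]."""
    return (m[0] * p[0] + m[1] * p[1] + t[0],
            m[2] * p[0] + m[3] * p[1] + t[1])

pts = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0), (3.0, 2.0)]
m, t = (1.0, 2.0, -1.0, 0.5), (4.0, -3.0)     # an arbitrary affine map
mapped = [affine_map(p, m, t) for p in pts]

before = affine_coords(pts[3], *pts[:3])      # coordinates of the 4th point
after = affine_coords(mapped[3], *mapped[:3]) # ... after the affine map
assert all(abs(x - y) < 1e-9 for x, y in zip(before, after))
```

Geometric hashing stores these invariant coordinates for every (basis, point) pair, which is the "redundant representation" the survey mentions; it buys robustness to extraneous points at the cost of storage.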
Some mesh generation goals vary with the application.
For example, long skinny elements, aligned with flow,
can be quite useful in computational fluid dynamics.
Moving features, such as shock fronts and vortices, require changing meshes.
...
Research and development groups in finite-element methods
tend to write their own mesh generators.
There are a few public-domain codes
(e.g., PLTMG, written by Randy Bank, and GEOMPACK, written by Barry Joe),
and some commercially available code (e.g., ICEM CFD),
but all large manufacturing companies use their own codes.
There is no publicly available package
that is adequate for generating 3D meshes for computational fluid dynamics.
Aerospace engineering experts go so far as to admit
the lack of adequate code for generating 2D meshes for viscous flows.
There lies an exciting window of opportunity ...
AUTOMATIC REMESHING:
A unified meshing system might be envisioned along the following lines:
first, the domain is meshed, based on the geometry alone;
then the differential equations are solved.
Finally, remeshing occurs, based on an automatic error analysis.
The last two steps can be repeated many times.
The advantage of such an approach is that
the final mesh is produced from an actual approximation of the solution and
not an educated guess.
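The mesh / solve / error-estimate / remesh cycle above can be sketched with a 1-D toy (my simplification: piecewise-linear interpolation of a known function stands in for the differential-equation solve, and midpoint interpolation error stands in for the automatic error analysis):

```python
def adapt_mesh(f, nodes, tol=1e-3, max_passes=20):
    """Adaptive 1-D remeshing loop: 'solve' (here, piecewise-linear
    interpolation of f), estimate the error on each element, and
    split any element whose error exceeds tol. Repeat until the
    estimated error is everywhere below tol."""
    for _ in range(max_passes):
        new_nodes = [nodes[0]]
        refined = False
        for a, b in zip(nodes, nodes[1:]):
            mid = 0.5 * (a + b)
            # Error indicator: interpolation error at the midpoint.
            err = abs(f(mid) - 0.5 * (f(a) + f(b)))
            if err > tol:
                new_nodes.append(mid)   # refine this element
                refined = True
            new_nodes.append(b)
        nodes = new_nodes
        if not refined:
            break                       # error below tol everywhere
    return nodes

# Smooth but steep function: nodes end up clustered where f curves.
mesh = adapt_mesh(lambda x: (x - 0.5) ** 4, [0.0, 1.0], tol=1e-3)
```

The point of the loop is exactly the advantage claimed above: the final mesh is driven by an error measurement on an actual approximation of the solution, not by an educated guess from the geometry alone.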
The design, analysis, and control of micro-electromechanical systems (MEMS) are inherently geometric in nature.
... How does one express a complex shape? Or more precisely, how can a human define unambiguously a complex shape to an automaton? This is ``the user interface problem'' in CAD.
...
there are two ``input-and-interaction'' modalities for CAD systems:
- (1) programming interfaces, and
- (2) ``pick-and-click'' graphics user interfaces.
... parameter passing, conditionals, etc. Language interfaces are nicely suited to defining parametric families of parts, but lack the visual feedback that most engineers consider essential. Graphics user interfaces provide plenty of feedback, but almost no abstractive power (names, parameters, etc.); they deal in instances rather than generics. An important challenge is to devise a user interface with the visualization power of modern GUIs and some of the abstractive power of a programming system.
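A tiny illustration of that abstractive power (hypothetical code: `bolt_circle` and its parameters are invented for this sketch, not any real CAD system's API):

```python
import math

def bolt_circle(center, radius, n_holes, hole_diameter):
    """Parametric family of parts: a bolt-hole circle defined by a
    name and parameters rather than by picking and clicking each
    hole. Returns the hole centers and diameters; a real CAD kernel
    would subtract cylinders from the part body at these locations."""
    cx, cy = center
    holes = []
    for i in range(n_holes):
        angle = 2 * math.pi * i / n_holes
        holes.append((cx + radius * math.cos(angle),
                      cy + radius * math.sin(angle),
                      hole_diameter))
    return holes

# Changing one parameter regenerates a whole new family member:
flange_6 = bolt_circle((0.0, 0.0), 40.0, 6, 5.0)
flange_8 = bolt_circle((0.0, 0.0), 40.0, 8, 5.0)
```

One named, parameterized definition yields a whole family of parts; a pick-and-click session deals in instances, placing each hole by hand, once per variant.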
... The currently popular schemes for representing solids -- boundary representations and constructive solid geometry (CSG) -- appear to be incapable of supporting anisotropy; thus, new means must be found, since customizable anisotropy is one of the most attractive properties of layered manufacturing processes.
OPTIMAS 6.5 is 32-bit software for image analysis in Windows 95/98 and Windows NT 4.0. It offers powerful processing and measurement capabilities, output to files or Excel, scripting, programming, hardware compatibility, and monochrome or color images. It supports high bit depth, image sequences, complex regions of interest, advanced morphology, Visual Basic, networks, and more. http://www.optimas.com/
DAV: how does this compare to MatLab and other image processing toolboxes?
bebner at earthlink.net on 09/05/2001 03:55:06 AM
To: dcary
Subject: New technology solves microelectronic contact stress problems!

Dear Engineer: I am writing to inform you of a new product of ours - a sensor film that, by changing color, reveals stress distribution and magnitude between any two surfaces that come in contact. The intensity of the resultant color is quantifiable and enables you to determine precisely what the PSI is at any point on the contacting surfaces. This product is extremely valuable for assessing PCB lamination roller planarity, revealing inconsistencies of contact between a heat sink and heat source, and determining uniformity problems during thermocompression in the TAB process (and in any kind of bonding process).

If you would like a FREE product sample, please describe your application and select one range from the six listed below. Please choose only one:
2 - 20 PSI (.01 MPa - .2 MPa) (the 2 - 20 PSI range shows relative distribution only)
28 - 85 PSI (.2 MPa - .6 MPa)
70 - 350 PSI (.5 MPa - 2.4 MPa)
350 - 1400 PSI (2.4 MPa - 9.7 MPa)
1400 - 7100 PSI (9.7 MPa - 49 MPa)
7100 - 19,000 PSI (49 MPa - 131 MPa)

It is imperative that you include your contact information (address, phone, fax, etc.). Without it we cannot process your request. Thank you for your time.
Bill Ebner, Sensor Products Inc., 188 Rt. 10 Ste. 307, E. Hanover, NJ 07936 USA
http://www.fuji-prescale.com ph 973.560.9092 fx 973.884.1699
video processing product suppliers. You will find suppliers of cameras, transmitters, receivers, monitors, video/audio accessories, frame grabbers, digitizers and software. http://www.robotics.com/video.html
Davin Milun ... dissertation in Image Processing/Computer Vision/Pattern Recognition. In particular, my research involved Markov Random Fields, and my dissertation is titled "Generating Markov Random Field Image Analysis Systems from Examples." http://www.cse.buffalo.edu/~milun/ [FIXME: to read]
Mailing-List: contact quickcam-drivers-help at crynwr.com; run by ezmlm
Date: Thu, 14 May 1998 18:48:35 -0400 (EDT)
From: Patrick <patrick at cs.virginia.edu>
To: Giulio Balestreri <gippo at dsi.uniroma1.it>
Subject: Re: linux (and color quickcams)

> I am trying to use our connectix color quickcam (we have two qcams) under linux.

Are they color quickcams, color quickcam 2s, or quickcam VCs? If they're VCs (a black ball instead of a white one), they won't work.

> So I have found in linux/apps/video somewere on the web, the tar compressed quickcam07c-5 groups of files

I recall having some problems getting that to recognize my camera also. Use a more up-to-date program like cqcam, qcam-0.9*, or qcread. You can find a list of available software -- these packages and more -- at http://www.cs.virginia.edu/~patrick/quickcam/

> a question: I would like to have a command that save the image into a gif or jpg file. is it possible to do ?

All of the packages I have used generate a PPM as output. This can be converted to any format using cjpeg (to make JPEGs) or convert (just about anything else). Convert comes from the ImageMagick package. JPEGs are better for Quickcam pictures, and cjpeg is quite efficient. Find it at: ftp://ftp.uu.net/graphics/jpeg
--Patrick

Date: Thu, 11 Jun 1998 14:20:56 -0400 (EDT)
From: Patrick <patrick at cs.virginia.edu>
To: "J. Neil Doane" <root at yeah.indstate.edu>
Subject: Re: WebCam?

> This is possible a FAQ for this list, but at the risk of repeating it again: Does anyone have experience setting up a WebCam under linux?

Yeah, some of us here have done that. ;)
Here's a mini HOWTO: http://www.dkfz-heidelberg.de/Macromol/wedemann/mini-HOWTO-cqcam.html
Here's a web page with a bunch of software: http://www.cs.virginia.edu/~patrick/quickcam/
All this information is for Quickcams. Other cameras are not as well supported under Linux.
--Patrick

To: quickcam-drivers at crynwr.com
Subject: 'Digital Radar' for linux
Date: Fri, 19 Jun 1998 14:23:35 EST
From: Anthony Forma <forma at ecn.purdue.edu>

I wrote (some might say 'attempted to write') a little 'digital radar' type of motion-detector program for the quickcam and Linux. You can get it at http://wcic.cioe.com/~forma/comp/ No guarantees, but it seems to work very well with my setup. Thought you might be interested.
Anthony

From: Nils Ulltveit-Moe <nu-moe at online.no>
Date: Wed, 1 Jul 1998 23:21:02 +0200 (DST)
To: Jouni Erola <jouni.erola at autotoivonen.fi>
Subject: QuickCam for Dos

Why not Linux??? You only need -one- floppy to run it (plus cqcam rpm): http://www.stuttgart.netsurf.de/~khk/lods.html
Best regards, Nils Ulltveit-Moe

Jouni Erola writes:
> This may be a dum question, but I haven't found what I'm looking for .. so ..
> Has anyone got a program that catches up a picture under plain DOS? I would use old 386sx to a webcam purpose and don't want to install linux for that machine.
> Thanks, Jouni Erola, Finland, Europe

From: "Dale Whitfield" <dale at elad.compulink.co.uk>
To: quickcam-drivers at crynwr.com
Date: Wed, 08 Jul 1998 14:24:38 +0000
Subject: Re: Quickcam and Java 1.1

My solution was to write a socket interface to my OS/2 quickcam support. This allows a Java app to use socket calls to control the camera and grab frames (in raw, BMP and JPEG formats) over a stream. Can even be done remotely if you're on a network.... Possibilities are endless.
Cheers, Dale.
QV2, the QuickCam Viewer for OS/2, is downloadable from (UK and US respectively): http://www.cix.co.uk/~elad/qv2.htm http://www.2d3d.com

Date: Thu, 09 Jul 1998 12:16:32 -0400
From: Eric Brager <eric at network.uhmc.sunysb.edu>
To: quickcam-drivers at crynwr.com
Subject: timedate stamp

Check out the tools that live in the netpbm package for adding time/date stamps to images. It's somewhere in: ftp://sunsite.unc.edu/pub/Linux/apps/graphics/convert/
-Eric
Eric Brager, UNIX Network Administrator, University Hospital and Medical Center at Stony Brook, Networking and Integration Department
[fractals][medical][to read]In theory, box counting fractal analysis is an elegant method for finding the fractal dimension of geometrical contours, ... We have written an algorithm that addresses many of the problems traditionally limiting box counting fractal analysis. Our algorithm has a high accuracy rate with theoretical fractals, and has proven useful for the analysis of ... microglial morphology. The algorithm is not limited to microglia; it is a generally useful morphological analysis tool providing other information (e.g., circularity and cell span) that can be used to characterize many biological and other forms.
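The bare box-counting recipe behind such algorithms can be sketched as follows (a naive version, deliberately ignoring the grid-placement and edge-effect problems that the algorithm above was written to address):

```python
import math

def box_count(points, step):
    """Count the boxes of side `step` that contain at least one
    point of a 2-D point set (a sampled contour)."""
    return len({(int(x // step), int(y // step)) for x, y in points})

def fractal_dimension(points, steps):
    """Estimate the box-counting dimension as the least-squares slope
    of log(count) versus log(1/step) over several box sizes."""
    xs = [math.log(1.0 / s) for s in steps]
    ys = [math.log(box_count(points, s)) for s in steps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sanity check: a densely sampled straight line has dimension 1.
line = [(i / 10000.0, i / 10000.0) for i in range(10000)]
d = fractal_dimension(line, [0.1, 0.05, 0.025, 0.0125])
```

A straight contour should come out at dimension 1; a genuinely fractal outline, such as a coastline or a microglial cell boundary, lands somewhere between 1 and 2.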
[FIXME: read more about gated viewing -- link to "ballistic photon"]
Fast cameras are needed to capture fast phenomena -- explosions, ballistic tests, plasma studies, and lightning bolts.
There are also many other interesting applications not directly related to fast events, such as very sensitive remote analysis using laser-induced fluorescence (LIF), where images are captured some time after the object was illuminated by a short laser pulse. Another interesting application is looking through muddy water, or through heavy snow at night when headlights do not help ( http://www.google.com/search?hl=en&q=gated+viewing ); also Flash LADAR technology, where fast laser illumination and gated image capture are used to recover 3-d information.
It is possible, for example, to put on the same rotating platform a laser, a gated camera, and a second, non-gated color camera. Such a system is suitable for automatically building a 3-d model of the scene around the camera -- the first camera is used to get the 3-d data, and the second to apply textures.
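The timing that makes gated viewing work is simple: fire the pulse, then open the shutter only while photons reflected from the chosen range band are arriving back. A sketch of that round-trip arithmetic (ignoring optics, pulse width, and atmospheric effects):

```python
C = 299_792_458.0  # speed of light, m/s

def gate_timing(range_m, depth_m):
    """For a gated-viewing / flash-LADAR setup: after firing the
    laser pulse, open the shutter at `delay` and keep it open for
    `width`, so that only light reflected from objects between
    range_m and range_m + depth_m is captured. Light travels out
    and back, hence the factor of 2."""
    delay = 2.0 * range_m / C   # seconds until the wanted photons return
    width = 2.0 * depth_m / C   # shutter-open duration
    return delay, width

# Example: image a 10 m deep slice of the scene starting 300 m away.
delay, width = gate_timing(300.0, 10.0)
```

The nanosecond shutter control is the hard part: a 10 m depth slice corresponds to a shutter window of only about 67 ns.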
This page started 1998-02-21 and has backlinks
David Cary
Return to index // end http://david.carybros.com/html/machine_vision.html http://rdrop.com/~cary/html/machine_vision.html