updated 2009-07-25.
Contents:
see also local pages
IEEE P1451.0 (TM), "Standard for a Smart Transducer Interface for Sensors and Actuators; Common Functions, Communication Protocols, and Transducer Electronic Data Sheet (TEDS) Formats," will be a new standard containing a common set of functions, communications protocols and TEDS formats for various physical communications media. It will aid interoperability among IEEE 1451 (TM) standards and simplify the creation of standards for different physical layers.
If you design a new serial protocol, here are some ideas and suggestions you may want to consider:
When a receiver is first turned on, how does it know where one transmitted bit ends and the next starts? And what is the bit rate? This is pretty simple for clean RS-485 (and most other hard-wired communications hardware), where you can just watch the edges; it is more difficult when there is noise (as in nearly all wireless hardware).
Often dedicated hardware synchronizes things to the nearest byte; then higher-level protocols in software deal with frame synchronization.
Say the device that is *supposed* to talk first starts talking, but the other device doesn't get turned on until the middle of a "sentence". How does the receiver know to throw away the (incomprehensible) rest of the "sentence", and re-synchronize at the start of a new sentence? (The same thing applies after a buffer overrun.)
Typically one re-synchronizes by transmitting a unique byte sequence (a "header" or "start-of-message") that will not (or "probably will not") be embedded in the middle of a sentence.
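A minimal receive-side sketch of this re-synchronization, assuming a hypothetical 2-byte 0xAA 0x55 start-of-message marker (the marker bytes are my invention, not taken from any of the protocols quoted below):

```c
#include <stdint.h>

/* Hypothetical 2-byte start-of-message marker -- any value pair that
   is unlikely to appear back-to-back inside a sentence will do. */
#define SOM0 0xAA
#define SOM1 0x55

/* Feed received bytes in one at a time; returns 1 when the marker has
   just been seen, i.e. the *next* byte starts a fresh sentence.  A
   receiver turned on mid-sentence simply discards bytes until this
   returns 1. */
int sync_on_header(uint8_t byte)
{
    static uint8_t prev;
    int found = (prev == SOM0 && byte == SOM1);
    prev = byte;
    return found;
}
```

The same loop handles both the power-on case and recovery after a buffer overrun: just keep feeding bytes until the marker shows up.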
Including a newline and/or carriage return in this byte sequence helps make it more human-readable (suggested by Dmitri Katchalov, 2000-01-27).
On the newsgroup comp.arch.embedded, sometime before 1999-01-29, Edward K asked:
>I need a small (low C code size), simple protocol to drive a proprietary
>network based on RS-485. I don't want to get into TCP/IP for obvious reasons.
>The clients on the RS-485 bus would be based on a small 8 bit micro so speed
>and efficiency are important.
In response, Robert Reimiller 1999-01-29 suggested:
I have a small network (about a dozen nodes right now) using PIC processors. Although it doesn't use RS-485 voltage levels, the operation is about the same. It uses a very simple message packet :
Length - 1 byte
Source ID - 1 byte
Destination ID - 1 byte
Message Type - 1 byte
<variable length message>
Additive Checksum - 1 byte
Exclusive-OR Checksum - 1 byte

This is a binary protocol, so I used SLIP framing characters to determine the packet boundaries; very simple to implement.
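Reimiller's framing and checksums can be sketched in C. The SLIP special characters below are the standard ones from RFC 1055; the checksum routine follows his description of an additive plus an exclusive-OR checksum (buffer handling and names are my own):

```c
#include <stddef.h>
#include <stdint.h>

/* SLIP special characters (RFC 1055). */
#define END     0xC0
#define ESC     0xDB
#define ESC_END 0xDC
#define ESC_ESC 0xDD

/* SLIP-encode src into dst; returns the encoded length.
   dst must be big enough for the worst case: 2*len + 2. */
size_t slip_encode(const uint8_t *src, size_t len, uint8_t *dst)
{
    size_t o = 0;
    dst[o++] = END;                 /* flush any line noise first */
    for (size_t i = 0; i < len; i++) {
        if (src[i] == END)      { dst[o++] = ESC; dst[o++] = ESC_END; }
        else if (src[i] == ESC) { dst[o++] = ESC; dst[o++] = ESC_ESC; }
        else                      dst[o++] = src[i];
    }
    dst[o++] = END;                 /* mark the end of the packet */
    return o;
}

/* Append the two checksum bytes Reimiller describes: an additive sum
   and an exclusive-OR over the packet body; returns the new length. */
size_t add_checksums(uint8_t *pkt, size_t len)
{
    uint8_t sum = 0, x = 0;
    for (size_t i = 0; i < len; i++) { sum += pkt[i]; x ^= pkt[i]; }
    pkt[len]     = sum;
    pkt[len + 1] = x;
    return len + 2;
}
```

Because SLIP escapes END and ESC inside the body, the receiver can hunt for a bare END byte to find packet boundaries no matter where it joins the stream.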
A practical next step would be to use UDP/IP packets; that way you could interface the system to a router that supports SLIP.
Bob
In response, Ian Wilson 1999-01-29 commented:
Length as the first byte can cause all sorts of mess in a noisy environment - consider what happens when the receiver picks up noise. It would read the noise as a length and go off happily looking for the next x bytes before trying to crack the packet and seeing a bad checksum. If the channel is so noisy that you are getting continuous transitions at your receiver, you can completely lock out your network. A better header for a noisy channel is a fixed header sequence. It does add overhead, but it will minimise the number of false packets.
We use a timed poll/response system quite often. The master polls a slave and then waits for a response - if the reply has not started within a few milliseconds, the master assumes no reply is coming. Since real replies take longer than a few milliseconds, this timeout increases throughput on networks with lots of devices (128 or 250) where the most common reply is "nothing to report".
Also consider adding a sequence number to the packet. If a slave sees the same sequence number it saw on the last poll, it knows the master did not receive its last reply - this saves having a separate ACK reply from the master to the slave. The master holds a different seq number for each slave and only increments it on a successfully decoded reply. Keep seq number = 0 as a special case that forces the slave to reset its seq number. Let the seq number increment wrap around from 255 to 1.
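Wilson's sequence-number scheme might look something like this in C (the enum names and function shapes are my guesses at an implementation, not his code):

```c
#include <stdint.h>

/* Slave side.  seq == 0 is the special reset case; a repeated seq
   means the master missed our last reply, so we resend it. */
static uint8_t last_seq = 0;

typedef enum { ACT_RESET, ACT_RESEND_LAST, ACT_NEW_POLL } action_t;

action_t handle_poll(uint8_t seq)
{
    if (seq == 0) { last_seq = 0; return ACT_RESET; }
    if (seq == last_seq) return ACT_RESEND_LAST;
    last_seq = seq;
    return ACT_NEW_POLL;
}

/* Master side: increment only on a successfully decoded reply,
   wrapping 255 -> 1 so that 0 stays reserved for "reset". */
uint8_t next_seq(uint8_t seq)
{
    return (seq == 255) ? 1 : (uint8_t)(seq + 1);
}
```

The master keeps one such counter per slave; a duplicated poll costs the slave only a resend, never a lost message.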
Ian Wilson
--------------------------------
Considered Solutions
considered@ozemail.no.spam.com.au (do the no spam thing to make a valid email address)
Mark Odell 1999-01-29 commented:
Xmodem sends the length byte followed by the ~length byte, then a checksum at the end. It is unlikely that the length and ~length would be randomly corrupted in a way that still matches. Besides, a practical maximum packet size can help prevent you from going too far off in the weeds.
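The length/~length sanity check Odell describes amounts to a couple of lines of C (the 64-byte maximum here is an arbitrary example of a "practical max" packet size):

```c
#include <stdint.h>

#define MAX_PAYLOAD 64  /* hypothetical practical maximum */

/* Accept a length field only if its one's-complement companion byte
   matches AND it is within the practical maximum -- the Xmodem-style
   check that keeps a noise byte from sending the receiver off
   hunting for hundreds of phantom payload bytes. */
int length_plausible(uint8_t len, uint8_t not_len)
{
    return (uint8_t)~len == not_len && len <= MAX_PAYLOAD;
}
```

A receiver that fails this check can drop straight back into hunting for the next packet header instead of swallowing garbage.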
My RS-485 protocol ran on 8051s with a single master that polled all 20 slave devices for status. I think the packet was destAddr, len, ~len, payload[len], 8-bit checksum. It worked okay for me over about a mile of total network distance.
- Mark
Subject: Re: Embedded RS-485 Protocol Needed!
Date: 31 Jan 1999 00:00:00 GMT
From: Ian Wilson
Newsgroups: comp.arch.embedded

<notjimbob at worldnet.att.net> (James Meyer) wrote:
>On Fri, 29 Jan 1999 04:15:07 GMT, leafs at ozemail.com.au (Ian Wilson)
>wrote:
>>bad checksum. If the channel is very noisy so that you are getting
>>continuous transitions at your receiver you can completely lock out your
>>network.
>
> One would think that if the channel were getting continuous
>transitions because of noise, that the channel would be completely
>useless and *no* combination of packet parameters would be any better
>than any other.

No - this is not the case. Many RF receivers do not squelch (turn off) their outputs when there is no transmission, as the carrier-detect circuit is often more complex, less reliable, and more power hungry than the receiver (in micropower applications). So you have continuous streams of noise-induced transitions. When a transmission arrives, it is much higher than the noise level, and so the receiver behaves correctly. What you don't want is the noise-induced stuff buggering up your network throughput.

This sort of thing can happen even on wired networks - where you have heavy machines turning on and off or thyristor circuits nearby. Maybe not to the same degree, but you can get false transitions. If a packet is not designed well, then the network can fail. This is where 1) fixed header sequences, 2) timeouts, 3) error detection/correction codes, and 4) ack/nack protocols make the difference between a system that works in good conditions only and one that is truly robust. In designing for volume, making a system robust at the expense of a few weeks of software and protocol design is definitely the way to go.

Ian Wilson
Subject: Re: Embedded RS-485 Protocol Needed!
Date: 01 Feb 1999 00:00:00 GMT
From: Bill Gatliff <gatliff at haulpak.com>
Organization: Komatsu Mining Systems
Newsgroups: comp.arch.embedded
SAE has a standard called J1708. Basically, all it says is that an idle period of ten or more bit times between bytes marks the end/begin of a packet, packets are at most N bytes in length, and that each packet starts with a destination byte and ends with a checksum. For the most part, you can put whatever you want inside.
In a noisy environment, or one where collisions can occur (like RS-485), you *do not* want to depend on receiving a length byte. Idle timing is the best way to go.
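A sketch of J1708-style idle-gap framing on the receive side, assuming byte timestamps in bit times from some free-running timer (the buffer size and names are my own; a real implementation would pull timestamps from a hardware counter):

```c
#include <stddef.h>
#include <stdint.h>

/* J1708-style framing: a gap of >= 10 bit times between bytes marks
   a packet boundary, so no length byte is needed. */
#define IDLE_GAP 10

static uint8_t  pkt[32];
static size_t   pkt_len = 0;
static uint32_t last_rx = 0;

/* Call once per received byte.  Returns the completed packet length
   when this byte arrived after an idle gap (i.e. the previous packet
   just ended), else 0 while still accumulating. */
size_t on_byte(uint8_t b, uint32_t now_bit_times)
{
    size_t done = 0;
    if (pkt_len > 0 && now_bit_times - last_rx >= IDLE_GAP) {
        done = pkt_len;          /* previous packet is complete */
        pkt_len = 0;             /* this byte starts a new one */
    }
    if (pkt_len < sizeof pkt)
        pkt[pkt_len++] = b;
    last_rx = now_bit_times;
    return done;
}
```

Because the boundary is carried by *time* rather than by a byte value, a corrupted byte can never send the receiver hunting for a phantom packet tail.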
Likewise, you don't want to deal with a negative acknowledgement (where the receiver tells the sender that the packet was corrupted). Instead, only acknowledge the correctly received ones, and let the transmitter resend a packet if it doesn't get a response.
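Gatliff's positive-acknowledgement scheme, sketched with stubbed-out link routines (send_packet() and wait_for_ack() here are fake stand-ins that simulate one lost attempt; real versions would drive the UART and a timeout):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_RETRIES 3

/* Hypothetical link stubs.  This fake "link" drops the first attempt
   so the retry path is exercised. */
static int attempts_seen = 0;

static void send_packet(const uint8_t *pkt, size_t len)
{
    (void)pkt; (void)len;
    attempts_seen++;
}

static bool wait_for_ack(void)
{
    return attempts_seen >= 2;   /* first try "lost", second ACKed */
}

/* No NAK anywhere: the receiver stays silent on a bad packet, and
   the sender simply resends until it hears an ACK or gives up. */
bool send_reliably(const uint8_t *pkt, size_t len)
{
    for (int i = 0; i < MAX_RETRIES; i++) {
        send_packet(pkt, len);
        if (wait_for_ack())
            return true;
    }
    return false;                /* give up; caller decides what next */
}
```

Dropping the NAK keeps the receiver trivially simple: it never has to decide whether a mangled packet was addressed to it.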
To do it right all you need is about 50 lines of C code-- I think even the timer is optional if you're clever.
Heavy-duty trucks aren't the worst network environment in the world, but they're pretty bad. Despite this, J1708 works very well-- nearly every truck on the road today uses it.
Just my $0.02.
b.g.
-- William A. Gatliff
Senior Design Engineer
Komatsu Mining Systems
[FIXME: move to http://en.wikipedia.org/wiki/Geoport ]
I have pushed the Mac serial ports to 1 Mbps; email me for more info. Comments?
The communications protocol for the Mac QuickCam takes advantage of the Mac serial port hardware to communicate at 918 Kbps (synchronous). /* the protocol description *was* available at http://www.connectix.com/connect/files/driver/mac.pdf */
[1.4] How fast can the Macintosh serial ports really go? --------------------------------------------------------
Originally the Mac OS supported asynchronous data rates only up to 57,600 bps, though the serial hardware could support much higher transfer rates when externally clocked (as much as 16 times that, synchronously). The AV Macs and Power Macs introduced a different SCC clock and a DMA-based serial driver, which allowed 115,200 and 230,400 bps. ...
While the ability to achieve these speeds was useful in the days of communications software (see [3.1]), its importance dwindled with the introduction of Internet communications and PPP (see [5.3]). The reason is that many non-text files on the Internet are already compressed, which renders the built-in MNP5 and V.42bis compression methods virtually useless. In addition, due to limitations in equipment and phone-line quality, even a 56K modem rarely gets a sustained throughput over 50K.
For these reasons the modem scripts that come with Open Transport have 57600 bps as the maximum serial speed for a modem.
-- comp.sys.mac.comm FAQ http://members.aol.com/BruceG6069/csm-comm-FAQ.txt
David Cary's experiences with a pulse generator hooked to HSKi: Mac data out (TxD+ and TxD-) changes only on the rising edge of the clock. (It works fine when *input* data changes only on the rising edge of the clock.)
Clock specs: 4 Vpp seems adequate, but it *must* go below ground. I used a green LED (all green LEDs drop around 2.0 V; it's a physical property of quantum electronics) in my level-shift circuit to give me +1 V and -4 V on my clock. It was very distorted (more triangular than square), but it worked OK transmitting stuff back and forth between 2 Macs (using custom software).
_Inside Macintosh IV_ has a nice circuit diagram on p. 249.
The Macintosh Toolbox provides support for serial speeds up to a maximum of 57 600 bps. All my standard modem software is set to 57 600 bps; the 2 modems I use can handle talking to the Mac at that speed.
Technically speaking, however, the Mac serial port hardware can go much faster. You just need special hardware and special software.
I've witnessed a (prototype) gizmo that plugs into the Mac modem port and talks at 1 000 000 bps (roughly 1 Mega-bit / second) with a (1 MHz) pulse generator hooked to HSKi. (Of course, the 2 Macs involved were doing *nothing* but run my communication code).
(The hardware guys cranked it to 2 Mbps -- most characters still made it through, but many were lost.) It went that fast whether plugged into a PowerBook 520 or a Quadra 605.
I got the information I needed to write software for a Mac to interface to this gizmo from the book _Inside Macintosh:Devices_ , a few paragraphs that talk about a "synchronous modem" connection.
_Inside Macintosh_ from Apple says that Mac serial port hardware can be made to go at 500 Kbps (900 Kbps or more on most recent models) by supplying an appropriate clock on HSKi. The book seems to indicate GPiA is used for devices with *different* transmit and receive rates. Weird.
The GPi line ... it appears that the Mac SE has one connected, while the Mac Classic and the Mac Plus have no connection to that line (?).
I used the "officially sanctioned" call from the new Inside Macintosh books,
const Byte external_clock = 0x40;
gOSErr = Control(gOutputRefNum, 16, &external_clock); /* set bit 6 to enable external clocking */
This works fine on my PowerBook520c, a Quadra, and a PowerPC (the only machines I tested my homebrew program, cable, and oscillator on). Mac data out (TxD+ and TxD-) changes only on rising edge of clock. Communication works fine when *input* data changes only on rising edge of clock. (This was the easiest to do -- I simply connect the same clock to both Macs). (I never got around to testing whether it would still work if input data changes on falling edge of clock -- I figure, if it works, don't fix it).
RS-232 is quiescent *low* (normally low, at -5 V on the Mac serial port).

I have succeeded in writing a program that uses a toolbox call to "set up Serial A to use HSKi as the receive and transmit clock". The normal serial toolbox calls only let me go up to 57 Kbps.

People say "it's easy -- just poke the appropriate values into the Z8530 SCC. If you don't know its address, just disassemble the Mac ROMs and figure out how Apple did it". But of course I want the thing to work plugged into the serial port of *any* Macintosh, including the 1996 models.

_Inside Macintosh_ from Apple says that Mac serial port hardware can be made to go at 500 Kbps (900 Kbps or more on most recent models) by supplying an appropriate clock (on HSKi, I think). I *think* it's easier to use the same clock for both transmit and receive, but the book seems to indicate GPiA is used for devices with *different* transmit and receive rates. Weird.

Surely someone, somewhere has done this before; I'd appreciate any programming tips, pointers to magazine articles, books, etc. I'll post a summary of emailed responses, as well as a report on how well they work on my Quadra and on my friend's Power Mac. -dc
Date: Tue, 5 Dec 89 16:36:44 EST
From: zben at umd5.umd.edu (Ben Cranston)
Subject: Serial port document (long)

MIT EE claims it is benign but confusing. Caveat Solderor...

This document contains notes on the Macintosh serial port and its use, with concentration on hardware interface issues.

*** DANGER WARNING WILL ROBINSON!!! ***

The DB-25 on the back of a Macintosh is NOT a serial port! It is a SCSI parallel port. Any attempt to use this connector as a serial port will NOT function correctly and may cause damage to the Macintosh and/or the equipment being connected.

The two serial ports of a Macintosh are mini-DIN-8 connectors, which are labeled with a telephone (the "modem port") and a printer (the "printer port"). This is the pinout of the serial connectors. We are looking at the back of the Macintosh (or alternatively at the BACK of a male plug):

    Macintosh Plus Serial Connectors (Mini-DIN-8)

        /------###------\       1  HSKo       Output Handshake
       /       ###       \                    (Zilog 8530 DTR pin)
      /                   \     2  HSKi/Clock Input Handshake or extern clk
     /   [|]  [|]  [|]     \                  (Depending on 8530 mode)
    /     8    7    6       \   3  TxD-       Transmit data (minus)
    |                       |   4  Ground     Signal ground
    |    ===  ===  ===      |
    |     5    4    3       |   5  RxD-       Receive data (minus)
    |                       |   6  TxD+       Transmit data (plus)
    \----+   ===  ===  +----/
     \###|    2    1   |###/    7  N/C        (no connection)
      \##|             |##/
       \|               |/      8  RxD+       Receive data (plus)
        \------###------/
               ###

Note this is an RS-422 interface, so the signals come in a balanced pair, a positive (plus) and a negative (minus), for each data signal. As we shall see below, there is an easy method for matching this to RS-232.

We buy the mini-DIN-8 connectors at our local electronics surplus store. They cost just under four dollars each, but are not quite as nice as the Apple molded plugs (for example, they don't have the nice orienting-D shape). We carefully remove the pins from the connector, solder the wires to the pins, then replace the pins in the connector body.
We fan out the end of the (stranded) wire into a little umbrella around the head of the pin, then we solder all around. A "third hand" reduces this task from impossible to merely tedious.

On the original 128K and the 512K upgrade machines (which have a DB-9 connector instead of the mini-DIN-8) the Output Handshake line was held in a "marking" condition by hardware (a small resistor to the appropriate power supply rail). On later Macintoshes there are logic and a line driver for this line. This change introduces the following incompatibilities:

1. SOME of the older terminal programs don't have the code to explicitly drive HSKo high.
2. SOME terminal programs drop HSKo when they close down.
3. SOME modems require DTR and will drop carrier if DTR goes away.

If the cable design given below, mapping HSKo to DTR, is used, there are two recognized pathological conditions which can happen:

A. Cannot use modem at all, because of 1 and 3 together.
B. Modem drops out when switching between terminal programs, 2 and 3 together.

Of course, some people consider B a feature, in that it will hang up the phone when you switch off the computer. Personally, I hang up the phone when I am done, and I like to switch from terminal program to terminal program. If one of the above conditions happens, there are only three alternatives:

I. If at ALL possible, set your modem up to IGNORE DTR and stay connected. Look for a DIP switch for this. I personally made this choice.
II. Use only terminal programs which "properly" drive HSKo. You get to operationally define "properly" :-)
III. Drive DTR from DSR at the modem end of the cable, as described below.
Macintosh to modem (or other DCE device):

         DIN-8 MALE                       DB-25 MALE
    GROUND        4 O--+-----------------O 7   GROUND
    RECV DATA +   8 O--+
    RECV DATA -   5 O--------------------O 3   RD  (Receive Data)
    XMIT DATA -   3 O--------------------O 2   TD  (Transmit Data)
    HANDSHAKE OUT 1 O--+
    HANDSHAKE IN  2 O--+-----------------O 20  DTR (Data Terminal Ready)

Note that in RS-232 the data signals are inverted (marking is minus) while the control signals are not (marking is plus). Thus the transmit data minus signal from the Mac is just right for driving the modem. Leave the transmit data plus signal disconnected. If you ground it you will short out a driver, and it will probably get hot. Similarly, the receive data signal from the modem/DCE is inverted, so it can drive the Mac's receive data minus line; but in this case the receive data plus line is grounded, to prevent any extraneous signals from being induced into the circuit. Note also that we are driving both HSKi and DTR from HSKo, so the problems described above can happen.
An alternative arrangement would drive these signals from the modem/DCE's source of DSR, like this:

                              +--O 6   DSR (Data Set Ready)
    HANDSHAKE IN  2 O---------+--O 20  DTR (Data Terminal Ready)

Some dumb modems might require Request To Send (RTS), which one would wire like this:

                              +--O 6   DSR (Data Set Ready)
    HANDSHAKE IN  2 O---------+--O 20  DTR (Data Terminal Ready)
                              +--O 4   RTS (Request To Send)

Finally, if you have only 3-wire cable and don't need DTR handshake, you can wire each side to be happy like this:

    HANDSHAKE OUT 1 O--+      +--O 6   DSR (Data Set Ready)
    HANDSHAKE IN  2 O--+      +--O 20  DTR (Data Terminal Ready)
                              +--O 4   RTS (Request To Send)

Macintosh to terminal (or other DTE device):

         DIN-8 MALE                       DB-25 FEMALE
    GROUND        4 O--+-----------------O 7   GROUND
    RECV DATA +   8 O--+
    RECV DATA -   5 O--------------------O 2   TD  (Transmit Data)
    XMIT DATA -   3 O--------------------O 3   RD  (Receive Data)
    HANDSHAKE IN  2 O--------------------O 20  DTR (Data Terminal Ready)

The same analysis applies with respect to the data signals, except that in this case the transmit and receive are switched around, since one guy's transmit should be the other guy's receive and vice versa. Note receive data plus is grounded while transmit data plus is left disconnected. For this particular cable we have wired the terminal/DTE's DTR back into the Macintosh's HSKi to implement a hardware handshake. Assume the terminal side is a printer that is being overrun. One of the things these printers can do is drop DTR. By wiring it through to the handshake input, we make it possible for the Macintosh software to temporarily pause in sending until the printer's buffers empty out and the printer reasserts the DTR signal. Some terminal devices may need to see DSR (Data Set Ready) or CD (Carrier Detect) or CTS (Clear to Send), in which case they may be driven from an appropriate source.
    +--O 20  DTR (Data Terminal Ready)    This is probably appropriate
    +--O 6   DSR (Data Set Ready)         for a communications terminal
    +--O 8   CD  (Carrier Detect)         in which DTR is a totally static
    +--O 4   RTS (Request To Send)        signal and does not move.
    +--O 5   CTS (Clear To Send)

or

    +--O 4   RTS (Request To Send)        This is probably appropriate
    +--O 6   DSR (Data Set Ready)         for a printer that flaps DTR
    +--O 5   CTS (Clear To Send)          as the buffer fills and empties.
    +--O 8   CD  (Carrier Detect)

The logic is to drive from whichever of DTR or RTS is NOT flapping around as buffers fill and empty or as the terminal transmits and receives... To connect directly to an IBM PC, we believe CD must be asserted. That is, an IBM PC will not accept data unless it also sees the CD signal.

128K/512K MACINTOSH

Somebody on comp.sys.mac.hardware asked for cables for a 128K/512K Mac! I didn't know there were any more of those out there!!! :-) Here are the corresponding connections; please use these in conjunction with the analysis and suggestions provided above.

128K/512K Macintosh to modem (or other DCE device):

         DB-9 MALE                        DB-25 MALE
    GROUND        3 O--+-----------------O 7   GROUND
    RECV DATA +   8 O--+
    RECV DATA -   9 O--------------------O 3   RD  (Receive Data)
    XMIT DATA -   5 O--------------------O 2   TD  (Transmit Data)
    + 12 Volts    6 O--+
    HANDSHAKE     7 O--+-----------------O 20  DTR (Data Terminal Ready)

128K/512K Macintosh to terminal (or other DTE device):

         DB-9 MALE                        DB-25 FEMALE
    GROUND        3 O--+-----------------O 7   GROUND
    RECV DATA +   8 O--+
    RECV DATA -   9 O--------------------O 2   TD  (Transmit Data)
    XMIT DATA -   5 O--------------------O 3   RD  (Receive Data)
    HANDSHAKE     7 O--------------------O 20  DTR (Data Terminal Ready)

FINAL CLOSURE

On the DB-25, pin 1 is the FRAME ground and pin 7 is the SIGNAL ground. Equipment that requires connection to pin 1 is badly designed (IMHO). As a very last resort you might try a 1-to-7 jumper.
As you can imagine from seeing all these alternatives, an RS-232 breakout box is real handy, since you can try all these patches without having to warm up a soldering iron.

The only other thing I can say is: IF IT DON'T WORK, DON'T LEAVE IT TURNED ON LONG ENOUGH TO GET HOT! Communications driver chips are built very ruggedly and will stand an amazing amount of mistreatment for a short period of time. But if you let two drivers fight for an hour, one or both of them will burn out...

I've read this over a dozen times to make sure there aren't any totally glaring errors, but I cannot be responsible for anybody's smoked hardware. Let's be careful out there!

Ben Cranston <zben at Trantor.UMD.EDU>
Network Infrastructures Group
Computer Science Center
University of Maryland at College Park
    Macintosh Serial Connector (Mini-DIN-8) (looking into *cable*)

        /------###------\       1  HSKo       Output Handshake
       /       ###       \                    (Zilog 8530 DTR pin)
      /                   \     2  HSKi/Clock Input Handshake or extern clk
     /   [|]  [|]  [|]     \                  (Depending on 8530 mode)
    /     6    7    8       \   3  TxD-       Transmit data (minus)
    |                       |   4  Ground     Signal ground
    |    ===  ===  ===      |
    |     3    4    5       |   5  RxD-       Receive data (minus)
    |                       |   6  TxD+       Transmit data (plus)
    \----+   ===  ===  +----/
     \###|    1    2   |###/    7  N/C        (no connection)
      \##|             |##/
       \|               |/      8  RxD+       Receive data (plus)
        \------###------/
               ###

    Pin #  Name   typical color   typical color
                  (white cable)   (grey cable)
    1      HSKo   red             brown
    2      HSKi   brown           red
    3      TxD-   green           orange
    4      gnd    yellow          yellow
    5      RxD-   orange          green
    6      TxD+   black           (nc)
    7      GPi    purple          (nc)
    8      RxD+   blue            black

(pin names from _Apple Mac Family Hardware Reference_, 1988)

Laplink accelerator: using HSKo (typically +4.2 V) and gnd for power, put a -1.5 V to 2.0 V, 0.75 MHz clock on (both) HSKi. Connected a Quadra 605 to a Mac PowerBook 160; a 900 kHz clock worked OK but 926 kHz failed.

    (1 HSKo)---diode>|--R--+-->+Vcc->oscillator>-||--+--->(2 HSKi)
                           |                         |
    (4 gnd)-gnd----diode>|-+         gnd-------LED>|-+

                                     oscillator>-||--+--->(2 HSKi)
                                                     |
                                     gnd-------LED>|-+
(another pinout document: ftp://ftp.armory.com/pub/user/rstevew/PINS/mac-232.pin )
Date: Wed, 28 Jun 1995 00:15:36 -0500 From: bill at scconsult.com (Bill Stewart-Cole) Subject: Info-Mac Digest V13 #69
In Info-Mac Digest V13 #69, iedh1 at agt.gmeds.com (Dan Hofferth) writes
>Responding to:
>
>>Date: Thu, 22 Jun 1995 17:13:47 +0200
>>From: thomas at mb.ks.se (Thomas H Eberhard)
>>
>>Fast modems with budget mac?
>>
>>LC II (this also apply to the LC and all the Classics including the II and
>>colors) According to "Guru" the modemport only support hardware handshake
>>on output. Does that mean that I can't get fast input speed even with
>>28800 modems??
>
>The Mac Plus, LC, and Classic do _not_ support hardware handshaking. The
>Classic II and LC II _do_ support it, both send and receive. So your "guru"
>is incorrect.
Not quite correct. *EVERY* Mac supports what is called "hardware handshaking" in the Mac world. Technically, some RS-232 purists will say this is strictly hardware flow control, and NO Mac can implement all the hardware handshaking of RS-232 (which includes hardware ring indication, hardware speed control, and other things).

EVERY Mac serial port has a pair of handshake lines, one in and one out (abbreviated HSKi and HSKo in pinout shorthand). SOME Macs (all but the 'classic'-sized models, the IIsi, and the LC & LC II) also have an additional input on pin 7 (which is disconnected in those other Macs), termed 'general purpose' by Apple and designated GPi. This is used by some programs like FirstClass (given the rare correct cable) to do hardware carrier detection.
Hardware flow control is done with the pair of handshake lines in all Mac serial ports. HSKo is run to the modem's RTS (Request to send) pin, and HSKi is run to the CTS (Clear to send) pin. A good modem cable runs HSKo also to the DTR pin, since that is useful for using DTR in uni-directional flow control and for older (non-compressing and slower) modems. A few cables will also wire GPi to CD, useful in a few programs, but sadly that is still rare in OEM cables. The effect of using an old-style modem cable on a fast or compression-capable modem is in fact that you can only implement flow control on the data stream from the computer, whereas the computer has no way (due to the cable, not the computer) to ever tell the modem to stop. This is a serious problem with slower Macs, which can easily fall behind the data rate of a fast modem.
The reason that many people think the Plus, LC, and other Macs cannot do flow control is that Apple's move to the better serial port (with GPi) happened at about the same time as the move from vanilla 2400 bps modems to v.42/v.42bis 2400 modems and 9600 bps modems. Pure coincidence, but it meant that Apple was making port changes right as people were buying modems that did not work with their old cables. (Some modems even shipped with old-style cables, and in 1990 it was hard to find a 'hardware handshaking' cable except via mail-order.) With Apple using a slightly enhanced serial port on high-end models of the day, and high-end modems not working right without a special cable, it is easy to mistakenly connect the two.
Of course I could have just said that I ran a Plus and used "hardware handshaking" for 4 years with 3 different modems that demanded it, but that's not so convincing perhaps. The proof is in the pinouts.
-- Bill Stewart-Cole
What is Stewart-Cole Consulting? Hell if I know. I'll find out when I finish the web page.

Newsgroups: comp.sys.mac.portables
From: nirvana at cruzio.com (Leo Baschy)
Subject: Re: 150 vs 160
Organization: Cruzio Community Networking System, Santa Cruz, CA
Date: Fri, 25 Nov 1994 09:28:11 GMT

rudolf mittelmann <rm at cast.uni-linz.ac.at> wrote:
> I tried to use MacRecorder (a serial-device external microphone) on
> a PB150 - but it did not work (with the newest MacRecorder driver
> installed!).
> I also could not get the CP Sound to work with it.
> Why?
> Did Apple cripple the ROMs to disallow sound input? Or what else
> is missing?

The problem is that "the" serial port device, the MacRecorder by Macromedia, has code that violates Inside Mac and therefore crashes. "The other" serial port device (a version of Voice Navigator from Articulate Systems) is no longer supported because the company now focuses on high-end automated dictation systems, I've been told.

A possible solution would be to fix the code of MacRecorder, but I've tried to convince the manufacturers (it's been transferred from one company to another) for more than four years without result. The new Connectix serial port camera is neat, but the sound is limited to 5 kHz, which is not so good.

If anybody knows a serial port sound input solution that works for the PB150, I'd be more than glad to know about it and to write about it. We could even help out if anybody wants to fix the MacRecorder code; we have the know-how to rewrite that code from scratch. After all, the manufacturer makes money on selling the hardware anyway, so they shouldn't mind if we write software that makes it work. It's just so much work, and little demand.

- Leo Baschy  nirvana at cruzio.com
Nirvana Research  (408) 459-9663
From cary at agora.rdrop.com Wed Feb 1 19:31:33 1995
Date: Wed, 16 Nov 1994 11:30:54 -0700 (MST)
From: Mark Lankton <LANKTON%PISCES at VAXF.Colorado.EDU>
Subject: synchronous serial port
To: d.cary at ieee.org

David,

There is a fairly recent Control call you can make to the serial driver that lets you use HSKi as a clock, at least for receiving. I have never tried transmitting that way; all I have to do is listen to an instrument. The call goes like this:

Control(driverNum, 16, 0x40); /* Set bit 6 to enable external clocking */

By the way, setting bit 7 with this call means DTR will be unchanged when you close the driver (if you ever happen to care).

I am right in the middle of building some new hardware to use this method; for years I have used a home-grown get-in-and-fiddle-with-the-Z8530 input driver. It always worked fine *except* for exploding on PowerBooks, and if I didn't have to work on PowerBooks and wasn't worried about maintaining it in the face of the changing device driver API, I would just keep on that way.

One important note: the external clock signal apparently wants to be 1x the bit rate, not 16x as you might guess. And I am still trying to decide how important the clock polarity is. Information comes from NIM:Devices.

Good luck, and please let me know how it goes for you.

Mark Lankton
Laboratory for Atmospheric and Space Physics
University of Colorado

From cary at agora.rdrop.com Wed Feb 1 19:32:31 1995
Date: 22 Nov 94 05:57 EST
From: intolabb at oasys.dt.navy.mil (Steven Intolubbe)
Subject: high speed serial stuff
To: d.cary at ieee.org

Hi David,

Do you need to use a synchronous protocol, or do you just need to externally clock the serial port? The other problem is the duty cycle of the data. Is it a constant stream, bursts, or do you request what you want when you want it?
There is a toolbox call for setting the external clock mode (on HSKi), but it is in the new Inside Mac books, which I don't have. I accessed the chip directly to set the external clock mode, and it appeared to work on the 840AV, but the toolbox would be the best way to do it.

As far as synchronous modes go, I have also done that, all with direct chip register access, which was very rude of me. If you want to run a machine in a synchronous mode, you should write a driver to replace Apple's serial manager (I didn't do that because I don't know how). Apple has a driver for one of their Personal LaserWriters that was supposed to be close to 1 Mbaud, so if you were in the APDA fold, I'm sure they could help you out.

Hope this helps,
Steve

APDA: (800) 282-2732
APDA developers support: (408) 974-4897
Are the 2 serial ports on your machine inadequate?
Date: Fri, Jun 3, 1994, 20:51:26
The only serial port boards for the Mac I know of are the "Hurdler" and the "Hustler" from
Creative Solutions Inc. 404-441-1617 4701 Randolph Dr. Suite 12 Rockville, Maryland 20852
There are *lots* of serial port boards for the PC.
http://thairobot.com/interface.htm seems to have information about all the same interfaces ...
http://www.beyondlogic.org/ seems to have information about all the same interfaces ...
Programming The Parallel Port In QBasic http://www.aaroncake.net/electronics/qblpt.htm
Programming The Parallel Port In Visual Basic http://www.aaroncake.net/electronics/vblpt.htm
[FIXME: this is about the parallel port, not the serial port ... do I have parallel port information elsewhere ?]
From: helge at siemens.co.at (Helge-Wernhard Suess)
Newsgroups: comp.os.ms-windows.programmer.misc
Subject: Re: Question about serial ports
Date: 12 Sep 1995 11:32:18 GMT
Organization: Siemens AG Austria

In <3253010398.350698279@ett.se>, <H.Olsen at ett.se> (Hakan Olsen) writes:
>I am writing a program that does some communication via modem. My problem is
>that I have not found any functions to initialize or communicate with the
>serial ports on a PC. Are there any libraries containing useful functions for
>serial communication? I would prefer if the libraries were for standard C
>as I am not very familiar with C++.
>
>I am using Borland C/C++ 4.0
>
>Any help would be much appreciated!

Look up OpenComm(), CloseComm(), WriteComm(), ReadComm() etc. in your SDK helpfile. For more elaborate communication (X-, Y-, Z-Modem) you should get (buy) a communication library supporting some of the common protocols.

Helge ;-)=)
helge at siemens.co.at -- VIENNA -- AUSTRIA
Subject: Re: ADB Physical Specs
From: st93urnu at post.drexel.edu (Aaron D. Marasco)
Date: Fri, 23 Feb 1996 17:24:21 -0500

In article <31233605.5C70@gtri.gatech.edu>, Bryan Dunn <bryan.dunn at gtri.gatech.edu> wrote:
>Hello all!
>
>I'm designing some hardware that will use the ADB interface. Can anyone
>suggest a good reference (on the net or in print) that describes ADB in
>detail?
>
>Thanks in advance!
>
>Bryan

"Guide to the Macintosh Family Hardware: Second Edition" tells everything about the Macs up to the IIfx, including ADB, and a standard is a standard, so it shouldn't have changed (however, this *IS* Apple we're talking about!). Anyway, it is a book I have for a school class, so I have all the info: $26.95 US, $34.95 CAN, Addison-Wesley, ISBN 0-201-52405-8. I haven't checked any of the online bookstores, you might get it cheaper! Also, it's a paperback, to letcha know.
notes from _Inside AppleTalk, Second Edition_ by Sidhu, Andrews, Oppenheimer [1], from AMD's _Analog and Communications Products 1983 Data Book_ [2], and from Apple's _Macintosh Family Hardware Reference_ (1988) [3].

LocalTalk uses the Synchronous Data Link Control (SDLC) frame format and a frequency modulation technique called FM-0. Each bit cell is nominally 4.34 usec (-> nominal 230.4 Kbit/sec). Differential, balanced voltage signaling over a maximum distance of 300 meters. ... transformer provides ground isolation as well as protection from static discharge. "In the preferred hardware implementation of LocalTalk, a Zilog 8530 Serial Communications Controller (SCC) is used." [1]

The classic Macs and the Mac SE [3] both use the 8530 SCC, a 26LS30 differential transmitter, a 26LS32 differential receiver, and a fistful of RFI filters. The Mac II replaces the 26LS32 with a 75175 differential receiver. (I have no info on other Macs.) (The Maxim MAX216 Low-Power AppleTalk Interface Transceiver looks interesting.) The SCC is clocked at 3.672 MHz hooked to /RTxCa, /RTxCb, and PCLK on the classic, Mac SE, and Mac II. (The Mac SE and Mac II have software switching of /RTxCa to the GPiA pin.) [3] "don't access the SCC chip more often than once every 2.2 us. ... on the Mac SE [and the Mac II] it is not necessary to do so because [other circuits ensure the proper delay]" [3]

"the transmitter and receiver hardware for LocalTalk is built into every Macintosh and Apple IIGS computer, .... and many peripheral devices ..."

"If you are designing your own AppleTalk hardware from scratch, it is easiest to use a 3.6864 MHz oscillator and a Z8530. This has been tested and works just fine." [HW 545 - Serial I/O Port Q&As - Technical Notes - Developer Support]

"LocalTalk hardware can detect a flag byte, the distinctive bit sequence 01111110 ($7E)." [1], but I can't find any reference to this capability in [2].

Well, the hardware guys have done it again.
They're making this gizmo that they're hooking to the Mac serial port. They say it communicates "like a synchronous modem". They want it to go at least 500 Kbits/sec. "See that Z85C30 on the Mac motherboard there? The databooks say it can go at 10 MHz ... cancha just poke the right values into it? Disassemble the Mac ROMs and figure out how Apple did it? ... But of course we want the thing to work plugged into the serial port of *any* Macintosh, including the 1996 models." I know it can be done; the hardware guys had LapLink for Mac running on 2 Macs (homebuilt cable between them). "See that O'scope? They're pumping 750 KHz into the HSKi input ... LapLink just transferred a megabyte file at roughly 70 KB/s."
[What is the endianness of CAN ? ]
"CAN we talk? : Distributed systems require protocols for communication between devices. CAN and SPI are two of the most common." article by Niall Murphy 2003-05-14 http://www.embedded.com/story/OEG20030509S0042
... a text-based protocol. This is how most of the Internet works; HTTP and SMTP are both built on text protocols. This approach allows the protocol to remain architecture agnostic. Text is less efficient than a protocol where each byte is given a meaning, but the upside is a protocol that's easy for a human to read and debug.
...
The biggest difference between CAN and SPI is that the CAN protocol defines packets. In SPI (and serial interfaces in general), only the transmission of a byte is fully defined. Given a mechanism for byte transfer, software can provide a packet layer, but no standard size or type exists for a serial packet. Since packet transfer is standardized for CAN, it's usually implemented in hardware. Implementing packets, including checksums and backoff-and-retry mechanisms, in hardware hides a whole family of low-level design issues from the software engineer.
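As a concrete illustration of such a software packet layer, here is a minimal sketch in C. The frame format here (a 0x7E header byte, a length byte, up to 32 payload bytes, and an 8-bit checksum chosen so the covered bytes sum to zero) is my own invention for illustration, not any standard:

```c
/* A minimal sketch of a software packet layer over a byte-oriented
 * link such as SPI or a UART. The frame format (0x7E header, length
 * byte, payload, 8-bit checksum) is an illustration, not a standard. */
#include <stddef.h>
#include <stdint.h>

#define FRAME_HEADER 0x7E
#define MAX_PAYLOAD  32

/* Build a frame into 'out' (must hold len + 3 bytes).
 * Returns the frame length, or 0 if the payload is too big. */
size_t frame_encode(const uint8_t *payload, size_t len, uint8_t *out)
{
    uint8_t sum = 0;
    size_t i;

    if (len > MAX_PAYLOAD)
        return 0;

    out[0] = FRAME_HEADER;
    out[1] = (uint8_t)len;
    sum += out[1];
    for (i = 0; i < len; i++) {
        out[2 + i] = payload[i];
        sum += payload[i];
    }
    /* Checksum chosen so all covered bytes plus it sum to zero mod 256. */
    out[2 + len] = (uint8_t)(0x100 - sum);
    return len + 3;
}

/* Receiver-side check: header present, length sane, bytes sum to zero. */
int frame_check(const uint8_t *frame, size_t frame_len)
{
    uint8_t sum = 0;
    size_t i;

    if (frame_len < 3 || frame[0] != FRAME_HEADER)
        return 0;
    if (frame[1] != frame_len - 3)
        return 0;
    for (i = 1; i < frame_len; i++)
        sum += frame[i];
    return sum == 0;
}
```

A receiver would hunt for the header byte, read the length, then verify the checksum before trusting the payload; anything that fails the check is discarded and the hunt for the next header resumes.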
...
There are a number of higher-layer protocols layered on top of the basic CAN specifications. These include SAE J1939, DeviceNet, and CANopen. ...
points to the Linux CAN-bus driver project http://home.wanadoo.nl/arnaud/
Open DeviceNet Vendor Association, Inc. http://www.odva.org/ (DeviceNet is a more detailed specification for CAN)
information from "Hard real-time connectivity: It's in the CAN" article by Bruce Boyes in _Computer Design_ 1998-01, p. 88
"CAN is a robust network designed for hard real-time distributed control systems in harsh environments. It's an open standard (and the subject of ISO 11898)..." ... "CAN is a peer-to-peer network ... packets with a maximum payload of 8 bytes ... ... adept at passing around simple commands or small amounts of data quickly... ... not well-suited to moving around large files ..." "can go up to 1 Mbit/s, CAN systems most commonly operate at 250 Kbit/s or 125 Kbit/s, because lower baud rates are more resistant to brief bursts of noise." " fiber-optic cable also is common. ..." "The ISO 11898 document specifies a 120 Ohm nominal impedance using terminated twisted-pair media. ... line length versus baud rate ... ... 40 m, 1 000 Kbits/s ... 1 000 m, 50 Kbits/s ..."
"low cost and relative simplicity" "Kevin Parkinson ... typically uses optically isolated chip-to-bus electronics using RS485 type drivers."
"Emphasis on content" "When data is transmitted by a CAN node, no other nodes are /addressed/; rather, the /content/ of the message (pressure, voltage, temperature, etc.) is designated by the identifier, which is unique throughout the network. The identifier also prioritizes the message. With care, this prioritizing guarantees the most important messages are transmitted with the least delay. ... Arbitration is nondestructive. In the case of an overloaded network, the highest-priority messages still get through. ... Latency ... is also very low. ... CAN hardware also provides message acknowledgment and automatic re-transmission in the event of an error."
"CAN data bits are either "dominant" or "recessive". ... A frame always begins with a dominant-level SOF (start-of-frame) bit. A dominant bit always "wins" over a recessive bit being transmitted at the same time. ... If 2 nodes start transmitting concurrently, each node performs bit-wise arbitration to resolve the conflict. ... A transmitter stops sending if it sends a recessive bit, but monitors a dominant bit. That guarantees a lower-priority CAN device immediately stops transmitting, while the higher-priority device continues unimpeded. The lower-priority device waits for the next bus idle time and tries again. ... 2 nodes [should] never send the same [arbitration field] followed by different data ..."
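The arbitration rule quoted above can be modeled in a few lines. This toy simulation (my own sketch, not production code) assumes 11-bit identifiers sent MSB first, with dominant = 0 and the bus acting as a wired-AND:

```c
/* Toy model of CAN bit-wise arbitration between two nodes sending
 * 11-bit identifiers MSB first. Dominant = 0, recessive = 1, and the
 * wired-AND bus carries the dominant level. A node that sends
 * recessive but reads dominant back drops out. */
#include <stdint.h>

/* Returns the identifier of the winning node. */
uint16_t can_arbitrate(uint16_t id_a, uint16_t id_b)
{
    for (int bit = 10; bit >= 0; bit--) {
        int a = (id_a >> bit) & 1;   /* 0 = dominant */
        int b = (id_b >> bit) & 1;
        int bus = a & b;             /* wired-AND: dominant (0) wins */
        if (a != bus)
            return id_b;             /* A sent recessive, read dominant: A drops out */
        if (b != bus)
            return id_a;
    }
    return id_a;                     /* identical IDs: no arbitration loser */
}
```

Because dominant is 0, the numerically lowest identifier always wins, which is why CAN designers assign the lowest identifiers to the most urgent messages.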
"... ACK ... indicates successful message ... reception by at least one receiver ... the hardware ACK is an important component of CAN's real-time capability."
"For example, a vehicle wheel-speed sensor should transmit its data properly and let other nodes, if any, handle the data as they wish."
"The RTR bit permits any node to request data from another node."
"receiving nodes which detect a problem ... transmit... in the end of frame space. The sender monitors the error flag, which triggers an automatic retransmission by the sending node."
"Honeywell's Smart Distributed System (SDS) ... DeviceNet (initiated by Allen-Bradley) ... ... DeviceNet and SDS also include specifications for rugged cable and connectors."
"The CAN in Automation (CiA) group (Erlangen, Germany) promotes the CAL"
"CAN specification 2.0B describes the extended message frame with a 29-bit ID."
Siemens (SAE81C91), Motorola (MC68376), Intel (i82527), Philips (SJA1000), and National Semiconductor (COP87L84BC) offer CAN controllers and/or microprocessors with CAN 2.0B capability.
CAN extended message format (bit field width is not to scale)
| Arbitration field | Control field | Data field | CRC field | Response field |
Arbitration field:
| SOF | 11 bit identifier | SRR | IDE | 18 bit identifier | RTR |
Control field:
| R1 | R0 | DLC |
Data field:
| Data: 0 to 8 bytes |
CRC field:
| 15 bit CRC |
Response field:
| ACK field | EOF 7 bits | INT 3 bits | bus idle (or another node starts transmitting) |
SOF: Start of Frame, a single dominant bit.
SRR: Substitute Remote Request.
IDE: the identifier extension bit is recessive for extended format.
ID fields: a total of 29 ID bits for extended format.
RTR: remote transmit request; dominant for data frames, recessive for remote frames.
R1, R0: reserved, dominant bits.
DLC: a 4 bit data length code indicating the number of bytes in the data field.
DATA: 0 to 8 bytes. A remote frame contains zero bytes.
CRC: a 15 bit CRC and a recessive CRC delimiter bit. (Covers what? Just the data? The entire packet before the CRC?)
ACK: acknowledgement, a single dominant bit followed by a recessive delimiter bit.
EOF: End of Frame, seven recessive bits.
INT: intermission, 3 bits between remote and data frames.
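On the CRC question above: the CAN 2.0 specification computes the CRC over the bits from start-of-frame through the end of the data field (before bit stuffing), using the generator polynomial 0x4599. A bit-at-a-time sketch, following the shift-register description in the spec:

```c
/* Sketch of the CAN CRC-15 (generator polynomial 0x4599, i.e.
 * x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1), fed one bit at a
 * time. In a real frame the CRC covers the bits from SOF through the
 * end of the data field, computed before bit stuffing; here we just
 * run it over an arbitrary bit string (one bit per array element). */
#include <stddef.h>
#include <stdint.h>

uint16_t can_crc15(const uint8_t *bits, size_t nbits)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < nbits; i++) {
        /* XOR the next message bit with the top of the 15-bit register. */
        uint16_t crcnxt = (uint16_t)((bits[i] & 1) ^ ((crc >> 14) & 1));
        crc = (uint16_t)((crc << 1) & 0x7FFF);
        if (crcnxt)
            crc ^= 0x4599;
    }
    return crc;
}
```

Real controllers do this in hardware; this is only for understanding the algorithm.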
For an extended frame, this is a total of 64 to 128 bits (depending on data size). With 8 bytes of data, overhead (non-data bits) is therefore 1/2.
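Summing the field widths listed above gives a slightly more precise count than the round figures. A small sketch (ignoring bit stuffing, which adds a variable number of extra bits on the wire, and counting EOF and intermission as overhead, a convention that varies by author):

```c
/* Back-of-the-envelope frame size for a CAN 2.0B extended data frame,
 * summing the field widths listed above. Bit stuffing is ignored, and
 * whether EOF and intermission count as "the frame" varies by author,
 * which is why round figures like "64 to 128 bits" also circulate. */
static int extended_frame_bits(int data_bytes)
{
    int overhead = 1    /* SOF */
                 + 29   /* 11 + 18 bit identifier */
                 + 3    /* SRR, IDE, RTR */
                 + 2    /* R1, R0 */
                 + 4    /* DLC */
                 + 16   /* 15 bit CRC + delimiter */
                 + 2    /* ACK slot + delimiter */
                 + 7    /* EOF */
                 + 3;   /* intermission */
    return overhead + 8 * data_bytes;
}
```

With 8 data bytes this gives 131 bits, of which 64 are data, so the quoted "overhead is about 1/2" holds.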
the various standard keyboard layouts, information on alternative keyboard-like data entry devices, and technical information on making your own devices that plug into the "keyboard socket".
[FIXME: consider moving this information to massmind: http://massmind.org/techref/io/keyboard.htm ]
The human-keyboard interface. Various standard keyboard layouts, information on alternative keyboard-like data entry devices [woefully incomplete]
alternatives to the standard keyboard are useful for wearable electronic devices wearable_electronic.html .
some chairs have a built-in keyboard 3d_design.html#furniture ; all furniture is (or should be) designed with ergonomics in mind.
I'm paranoid about wrist pain. #RSI
[FIXME: put all this ergonomics, keyboard+other, on another page] Other ergonomics
Locate the entire viewing area of the monitor between 15° and 50° below horizontal eye level.
...
... old guidelines ... recommended that the monitor be placed at eye level ... New evidence (and some that has been around for a while) shows that, while the eyes might be most comfortable with a 15° gaze angle when looking at distant objects, for close objects they prefer a much more downward gaze angle (Kroemer 1997). Figure 1 shows the optimum position for the most important visual display, 20 - 50° below the horizontal line of sight, according to the International Standards Organization (ISO 1998).
...
Perhaps the most famous study regarding performance and lighting conditions was done at Western Electric's Hawthorne Plant in Chicago (Mayo 1933). The researchers found that when they increased light level, productivity increased. They also found that when they decreased the light level, productivity still increased. In fact, no matter how they changed the lighting, productivity continued to increase.
The term "The Hawthorne Effect" is now used to refer to the principle that making any change in a workplace can improve short-term performance. The improvement results from just "paying attention" to the workers.
On the other hand,
"The Hawthorne defect: Persistence of a flawed theory" report by Berkeley Rice http://www.cs.unc.edu/~stotts/204/nohawth.html says
...
the Hawthorne effect has a life of its own that seems to defy attempts to correct the record. The story of this myth's growth and its recent debunking contains a moral of caution for behavioral researchers and those who uncritically accept their pronouncements.
... the results ... have been pretty much misinterpreted or ignored for 50 years. Those results conflict with, or at least fail to support, the notion of the Hawthorne effect.
...
subsequent research has failed to duplicate the supposed Hawthorne effect ...
Keyboard ergonomics
I ... like 2 mice / computer, one on each side of the keyboard. It helps control RSI for me. -- Eric Soroos 9/3/2000 http://static.userland.com/userLandDiscussArchive/msg020771.html
They have a section "small keyboards" (most are normal QWERTY style keyboards with regular-sized keys, but without the numeric keypad ... I wonder why they don't list the happy hacking keyboard #happy ).
Weird and wonderful keyboard layouts. [FIXME: make separate section for ``QWERTY-like'' layouts, that can be used with commodity keyboards with some remapping software, vs. stuff that needs completely different hardware ? ]
IBM no longer sells the keyboard I'm using ... Goldtouch designed the IBM keyboard, and they now sell a similar keyboard of their own:-- recc. Nicholas Riley http://static.userland.com/userLandDiscussArchive/msg020776.html
which are very close to standard flat QWERTY keyboards, to some radically different keyboard-like devices Chording Keyboards http://www.tifaq.com/keyboards/chording-keyboards.html such as
commentary:
http://slashdot.org/articles/00/11/08/2248210.shtml
.
One commenter has this interesting idea:
This looks like the kind of thing you could put footstraps on and type with your feet!
Yay, now I can drink beer AND eat pizza while I write.
and also Contoured Keyboards http://www.tifaq.com/keyboards/contoured-keyboards.html including the
[FIXME: consider mirroring that FAQ -- does that make most of this section now obsolete ?]
... captioners actually use stenographic keyboards instead of real computer keyboards. These keyboards allow them to type a whole syllable by pressing 2-4 keys at once, but they are phonetic; there is a K key and an S key, but no C key. (There are actually three sets of keys: consonants on the left, vowels in the middle, and consonants on the right; this allows them to type a whole syllable.) The output from the keyboard is sent to a laptop computer running software that can match a steady stream of syllables with a word list and figure out that and/now/the/we/ther is "And now the weather". Errors like "Loss Alamos" occur when the captioner didn't have the word "Los" programmed in the dictionary. Usually they have a chance to add words before a job to prepare for any unusual words used, but sometimes they don't have time, they forget, or the computer picks the wrong word.
Typing, Fastest. Mrs. Barbara Blackburn of Salem, Oregon can maintain 150 wpm for 50 min (37,500 key strokes) and attains a speed of 170 wpm using the Dvorak Simplified Keyboard (DSK) system. Her top speed was recorded at 212 wpm.
Source: Norris McWhirter, ed. (1985), THE GUINNESS BOOK OF WORLD RECORDS, 23rd US edition, New York: Sterling Publishing Co., Inc.
Barbara Blackburn, the World's Fastest Typist http://www.som.syr.edu/facstaff/dvorak/blackburn.html , http://www.extremespin.com/dvorak/dvorakint/spng96-1.htm#WFT interview
[FIXME: move to learning.html ? Find out more about Michael Shestor's teaching methods ?]
If a teaching strategy is wrong, no amount of practice will allow one to get better ... The Internet finally allows widespread dissemination of exciting learning technologies. With the advent of Internet downloading, everybody now has the opportunity to teach and learn the best systems, in a very efficient way.
the World's Fastest Typist, Greg Arakelian, is exclusively endorsing the SmartBoard. ... a "split" keyboard featuring Darwin's patented key layout ... http://www.darwinkeyboards.com/worldsfastest.htm
The familiar operations of typing, pointing, and clicking are combined seamlessly with multi-finger gesture in the same overlapping area of the TouchStream's surface. http://www.fingerworks.com/ full-size standard QWERTY ... also sells a "mini keyboard" that has the standard QWERTY layout in about half the space.
"University Of Delaware Researchers Develop Revolutionary Computer Interface Technology; Fingerworks System Uses Hand Motions" http://www.sciencedaily.com/releases/2002/10/021010072402.htm
commentary: "(Nearly) Zero-Force Keyboard" http://slashdot.org/articles/01/07/10/0220214.shtml [keyboard alternatives] "... get away from the impression that it is a keyboard, and look at it as a generic input surface." and also has some opinions on "a list of things I think the perfect keyboard would have." One opinion claims
contrary to popular belief, it is not the force of pressing keys which causes the problem. Your fingers are built to grab, press and hold things, so this kind of movement is seldomly a big problem. In fact it's the strain of constantly having to hold your fingers up above a too-sensitive keyboard to keep yourself from unwantedly pressing keys that causes most of typing-related injuries. If you want a keyboard that helps your RSI problems, get one that needs MORE force, so you're able to rest your fingers on the keyboard, relaxing the muscles and tendons on the back of your hand and arm.-- cheetah_spottycat [theory]
Another poster suggests:
my perfect keyboard would have ... a headphone jack.
...
It needs to be in the centre, on the edge facing you below the spacebar.
This would prevent the cord from tangling! Please, anyone who makes keyboards ...
-- D4rkm1lk
A 20-key, one-handed text/numeric keypad that uses patented technology to emulate a full-size keyboard ... requires a fraction of the physical space of a QWERTY or fold out keyboard
Infogrip's BAT Keyboard is a one-handed, compact input device;
IntelliKeys Keyboard - Programmable alternative keyboard that plugs into any Macintosh or Windows computer.;
BigKeys Keyboard - a standard keyboard with 1" square keys, 4 times larger than a standard key.
-- Eric Raymond 2001, http://en.tldp.org/HOWTO/Unix-Hardware-Buyer-HOWTO/laptops.html :
the most important aspect of any laptop is not the CPU, or the disk, or the memory, or the screen, or the battery capacity. It's the keyboard feel, since unlike in a PC, you cannot throw the keyboard away and replace it with another one unless you replace the whole computer. Never buy any laptop that you have not typed on for a couple hours. Trying a keyboard for a few minutes is not enough. Keyboards have very subtle properties that can still affect whether they mess up your wrists.
A standard desktop keyboard has keycaps 19mm across with 7.55mm between them. If you plot frequency of typing errors against keycap size, it turns out there's a sharp knee in the curve at 17.8 millimeters.
... the numeric keypad ... I write programs and prose, but I never enter in columns of numbers. ... Anyway, this vestige of adding machines had to go! It forces the mouse to be about 5 inches farther away from where I'm typing than it needs to be, which is a lot of unnecessary arm movement over the years. So I set about to chop the numeric keypad off the otherwise excellent Microsoft Natural Keyboard. It worked well (on my second try) and took about two hours.
Joystick Gesture Language For Techies: Instead of keyboards, install joysticks and learn a formalised gesture language. http://www.halfbakery.com/idea/Joystick_20Gesture_20Language_20For_20Techies
Some other interesting discussion on input devices: ``What IS a good way to get data into tiny things ?''.
I think of something more like Key-Glove http://wearables.blu.org/keyglove.html The cheapest wearable keyboard on earth.
or perhaps "Essential Reality's P5 glove controller" http://www.pcworld.com/news/article/0,aid,102200,00.asp ... http://essentialreality.com/
Using the right input device for the right job is crucial. Otherwise we will never be able to get the non-initiated to use them.
People not "in the know" still wonder how a Palm Pilot can survive without a keyboard. The answer is really simple: the software is written such that using the stylus becomes second nature. Same as with the Millipede example... the software was written for a specific input device.
Maybe neurocomputing will allow people to get information into a computer faster than is currently possible (I doubt so, but I'm willing to be proven wrong!), but that is not available right now. Keyboards have worked for a nice long time and will probably be ubiquitous for a time being.
Another reply:
Failure to notice awkwardness (Score:1, Insightful) by Anonymous Coward on Tuesday November 30, @03:52PM EST (#307)

There's another UI-related lesson I learned from Quake. Back when Quake 1 deathmatches were beginning to become popular, there was always a crucial turning point in everyone's gameplay: the point when they set up the Correct Bindings. The details varied from person to person, but there are some things that all Correct Bindings had in common. One hand was on the mouse, which was permanently in mlook mode. The other was on the keyboard. The keyboard hand could strafe left or right without repositioning; often, the "strafe" keys simply replaced the "turn" keys, since the mouse was perfectly adequate for turning. For Quake 1, this was the right tool for the job. Nothing else provided the speed, precision, and flexibility needed for deathmatch play.

Now, there's someone I know who didn't like using the mouse, even for Quake. I argued the point with him, and he claimed that the keyboard was perfectly good and easier to control. He didn't do deathmatch, but he had played through the entire game in single-player mode. I watched him play. When he wanted to aim up or down, he stopped stock-still and tapped the turn-upward and turn-downward keys lightly several times, often overshooting and tapping the opposite key. It looked incredibly awkward, and would have gotten him killed in deathmatch play, but - and this is the point - he was completely unaware of it. As far as he was concerned, he had a desire to do something, and this translated directly into action. It just so happened that the action was awkward and slow and only worked because the monsters were stupid, but it worked, and that was enough. His brainstem was satisfied.

There's a lesson in this for all of us who think our familiar user interfaces are "good enough". (Too bad there's no deathmatch mode for text editors. We'd settle the emacs vs. vi issue in a hurry then.)
And another:
The Right Interface for the Job (Score:4, Insightful) by jalefkowit (jalefkowit@NOSPAMaol.com) on Tuesday November 30, @09:57AM EST (#28) (User Info)

The problem isn't limited to input devices. This article got me thinking about something I've been wondering about for awhile -- the recent tendency to use a 'standard' interface for various tasks, rather than a purpose-built, optimal interface. It seems like there are dozens of companies these days that want their interface design to be the One True Interface to All Things. The best example of this is Microsoft, which every couple of years makes noise about how toasters and refrigerators should be controlled with some variant of Windows. But MS isn't the only offender -- lots of Internet companies do this too, by forcing you to use an HTML front-end to their resources rather than designing software for the purpose.

Don't get me wrong, I can see the reason for this approach -- once you've learned the One True Interface, you're set, you don't have to learn anything else. The problem is that trying to force all devices to share the same interface means that some of those devices are going to feel clunky -- or, worse, be downright unusable.

Take, for example, the whole WinCE vs. PalmOS war. On its face, you'd think people would prefer WinCE devices, since they're already familiar with the Windows interface. But (based on my observations, not any hard research) it seems to me that people vastly prefer the Palm interface, which is optimized for handheld devices, rather than Windows, which really wants you to have a big, roomy display to work well. In other words, people are willing to learn a new, unfamiliar interface if doing so offers them substantive productivity benefits -- which would seem to give savvy product developers an incentive to follow Mr. Christiansen's advice to optimize the interface for the task. This trend is only going to get worse as computing intelligence is embedded in more and more consumer devices.
The temptation will be very strong for those developing software for such embedded systems to leverage interface designs they already have, rather than create from scratch. With more and more of a car, for example, being run by software, it's not hard to imagine MS someday proposing that you run your AutoPC through a modified Windows interface, even though such an interface would be totally inappropriate for the task at hand. Let's hope that more product & software designers take note of the evidence that people prefer optimized interfaces and don't automatically rule them out. -- Jason A. Lefkowitz "A statesman... is a dead politician. Lord knows, we need more statesmen." -- Bloom County
all caps (Score:2) by kuro5hin (rusty@please.spam.me.intes.net) on Tuesday November 30, @12:10PM EST (#184) (User Info) http://www.itchmagazine.org/rusty No, I don't use caps for much. Only for global constants, really, and even then, sometimes I don't. I'm a perl programmer, by the way. I started, however, as an HTML jockey, and during my servitude with that miserable beast, I got so I can type in all caps, just by holding down the shift key, almost as fast as I can type without holding it down. I was always in the "HTML tags are capitalized and that's that" school. So, the caps lock key is thoroughly useless and should, indeed be banned outright. The only thing it appears to be good for is getting in the way of the tab key and making me capitalize a whole line instead of moving it four (that's pronounced "The One True Tab") spaces to the right. ----
Is this following on from the poll? (Score:1) by anthonyclark (anthony dot clark at adv dot sonybpe dot com) on Tuesday November 30, @10:04AM EST (#51) (User Info)

From my comment from the last poll. What I want in a keyboard:
- Ergo/split design.
- All the programming symbols on their own keys ($#|{}[]()<>?@)
- Silent, soft but clicky keys. I hate noisy typists!
- A whole bunch of keys with durable 32x32 LED panels on them, programmable to display different symbols. (so I can reprogram the windows key to be a penguin without buying a new keyboard) (or so I could program a key to do C-c C-f for open or C-c < for docbook)
- Lasts a lifetime
- Function keys below the space bar
- Large keys so I don't miss
- On-board memory for keymappings and symbols (see above)
- Statistically designed using only programmers as the sample. This should give a keyboard with all those funny symbols in nice convenient places.

AFAIK the dvorak keyboards were designed statistically with the most frequently used keys closer to the fingers. Why not do this now, using programmers as the sample? It could run a bit like the SETI programme, with users installing a little daemon that just records how many of each key was pressed and then sends that back to a central server... There should be no security risk as all that would be sent would be statistics, not something like the output from "script"... Or maybe I need to think this through more.

----- I'm an INFP, what are you?
For Windows 98:
Shifted:    !  @  #  $  %  ^  &  *  (  )  {  }
Unshifted:  1  2  3  4  5  6  7  8  9  0  [  ]

Shifted:    "  <  >  P  Y  F  G  C  R  L  ?  +
Unshifted:  '  ,  .  p  y  f  g  c  r  l  /  =

Shifted:    A  O  E  U  I  D  H  T  N  S  _
Unshifted:  a  o  e  u  i  d  h  t  n  s  -

Shifted:    :  Q  J  K  X  B  M  W  V  Z  |
Unshifted:  ;  q  j  k  x  b  m  w  v  z  \
For Windows 3.1, Macintosh, DOS, X Window, etc., instructions in "How to Remap Your Keyboard" are available from http://www.mit.edu:8001/people/jcb/Dvorak/ .
q w e r t y u i o p
a s d f g h j k l BSP
CAPS z x c v b n m ^ ENT
[] [] [] [] space [] < V >
| | | | | | | | | FUNC DEL arrow keys | | NUMLOCK | ALT CTRL
I imagine it would be pretty easy to convert to Dvorak.
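In software, such a conversion is just a lookup table from the QWERTY key legend to the Dvorak character. A sketch covering the unshifted keys of the layout above (the shifted pairs follow the same table):

```c
/* Sketch of a QWERTY -> Dvorak remap as a lookup table, covering the
 * unshifted letter and punctuation keys shown in the layout above.
 * Keys not in the table map to themselves. */
#include <string.h>

static char qwerty_to_dvorak(char c)
{
    /* Parallel strings: position i in 'qwerty' maps to position i in 'dvorak'. */
    static const char *qwerty = "qwertyuiop[]asdfghjkl;'zxcvbnm,./-=";
    static const char *dvorak = "',.pyfgcrl/=aoeuidhtns-;qjkxbmwvz[]";
    const char *p = (c != '\0') ? strchr(qwerty, c) : 0;
    return p ? dvorak[p - qwerty] : c;
}
```

For example, typing the QWERTY home-row keys "asdf" under this remap yields "aoeu".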
Having to hit 2 keys to get `;' is difficult for a C programmer; and having to hit 2 keys to get "esc" is difficult for a "vi user". Perhaps making the key to the left of z produce "ESC" would be better for a "vi user". software_david_uses.html#vi
... Three competing companies -- VKB of Jerusalem, Israel, Canesta of San Jose, CA, and Virtual Devices of Pittsburgh, PA -- are selling products that use lasers to project an image of a full-sized QWERTY keyboard on a flat surface. Optical sensors then track the user's finger movements and translate them into keystrokes on a screen. ... The machine-vision software that goes into such systems is so complex that it could easily handle other tasks such as facial recognition. ...
You'd need both hands to count the methods inventors have proposed for typing without keyboards. Pressure-sensitive gloves, finger rings and “air” gloves that use fiber optics to detect finger curvature are among the many that will never leave the lab. But once other manufacturers started making tiny, low-cost optical sensors, Canesta's engineers were convinced they had the answer, says Bamji.
let's declare the mouse obsolete ... worst pointing device ever invented. http://www.halfbakery.com/idea/Mouseless_20computers
Douglas Engelbart invented the computer mouse in 1963. [FIXME: keep lumping all human-to-computer input devices here, or make separate sections for keyboards, pointing devices, etc ?]
-- http://slashdot.org/comments.pl?sid=99/11/30/0954216&threshold=0&commentsort=0&mode=thread&pid=13#84 [FIXME: polish into haiku ?]
Re: This is no longer the case with me. (Score:1, Funny) by Anonymous Coward on Tuesday November 30, @11:29AM EST (#146)
You can drive nails into cement with this thing [original IBM XT keyboard], and it will still work. You can spill hot coffee and sweet sticky soda on it. It will work for years after that. But NEVER EVER EVER take one apart. Never. It will explode in a shower of little springs. You will never reassemble it.
The keyboard-PC interface. Technical information on making your own devices that plug into the "keyboard socket" (or making a socket to plug standard keyboards into your device). Includes serial protocol, numlock light commands, etc. See also "wedges" .
Subject: [PIC]: Keyboard decoder ... Lionel Theunissen <lionelth at BIG.NET.AU> on 2001-04-26 10:12:07 AM Please respond to pic microcontroller discussion list <PICLIST at MITVMA.MIT.EDU>
> Date: Sun, 22 Apr 2001 23:44:58 -0700
> From: Nathan J Berg <bergnj at PLU.EDU>
> Subject: Re: [EE]: Keyboard decoder chip
>
> I am currently trying to do the exact same thing. I am trying to use my 16f84 to push scan codes into my PS/2 port when I bring a pin low via a push button. I am going to be hard coding the value of the keys in the code to keep it simple.
>
> I have tried to find example code, but came up empty. Thus I am trying to write the algorithm in LET PIC basic. If anyone has a better way I am also very interested. Thank you!
>
> Nathan
I've written a full serial AT/PS2 keyboard serial terminal program for the 16F84. The full source code in PIC assembly language is available at: http://www.dontronics.com/zip/newterm.zip Don't know if that's exactly what you want, but it's worth a look. Lionel...
IEEE1394, also called "FireWire"
"1394 right now is specified to run up to 400Mbits per second and 800Mbits in the future. It will be plug and cable backwards compatible. "
Often used to transfer movies from digital camcorders machine_vision.html#digital_cameras ... machine_vision.html#IEEE-1394_digital_camera
See also FIXME serial ADCs and DACs.
RS-232 to PC keyboard (often called "wedges" -- the most common versions allow you to plug a barcode reader FIXME barcode (RS-232) and a standard PC keyboard into the wedge, and then you plug the wedge into the keyboard port of a PC.)
Other protocol converters:
Often used to transfer still images from digital cameras machine_vision.html#digital_cameras . [FIXME: move information to a wiki ?]
"USB1.1 Integrated Circuits & Development Boards" http://beyondlogic.org/usb/usbhard.htm by Craig Peacock 2005 has a long list of USB ICs and some development boards to get USB quickly working, including several USB-to-TTL-serial and USB-to-8-bit-parallel. ... A few of them claim "sustained data rate of up to 12 MB/s".
"Add USB in 10 minutes!" http://beyondlogic.org/usb/ftdi.htm for under $50 (?), serial modules up to 3000 kbaud (RS485), parallel modules up to 1 MByte/s.
[FIXME: I know I have more MIDI info scattered elsewhere ... move it here.]
Pin 1: XY1 (+5 V)
Pin 4: Ground
Pin 5: Ground
Pin 9: XY2 (+5 V)
Pin 12: MIDI TXD (transmit) (computer -> midi)
Pin 15: MIDI RXD (midi -> computer)
SCRAMNet (Shared Common RAM Network) looks interesting.
MIL-STD 1553 is a military avionics bus.
From: "Cees" <cakoolen at dds.nl> Subject: rs485 coding Delphi 3 Date: 26 Jan 2000 00:00:00 GMT Organization: News Service (http://www.news-service.com/) Newsgroups: comp.arch.embedded ... I am making a program to communicate with an RS-485 bus. The problem seems to be that I have to disable the transmitter and enable the receiver at the right moment. With the component I am using I can't see when the tx register is empty, so I can't do that. Please help...! Cees
From: jan axelson <jan at lvr.com> Subject: Re: rs485 coding Delphi 3 Date: 26 Jan 2000 00:00:00 GMT Organization: @Home Network Newsgroups: comp.arch.embedded
"Frederic Chaxel" <chaxel at cran.u-nancy.fr> wrote:
>I don't know Delphi, but if you can talk with the Windows device driver (CreateFile, WriteFile, ReadFile, CloseHandle), you must set fRtsControl = RTS_CONTROL_TOGGLE in the DCB structure which is passed to SetCommState. Then RTS will automatically be put high and low when bytes have to be sent via the UART. This signal is generally used by simple RS-232 to RS-485 converters in order to get the line or to put it in HiZ.
Unfortunately, this doesn't work under W95 (knowledge base article Q140030). There are various alternatives, some requiring supporting hardware:
1. Keep the receiver enabled and read back what you sent to find out when it's OK to disable the transmitter (by controlling the line in software).
2. Use various hardware schemes to control the transmit-enable automatically.
3. Use a delay timer to estimate the needed time, with a margin of error, and control the line in software.
I have many links to RS-485 information at: http://www.lvr.com/serport.htm Jan Axelson http://www.lvr.com jan at lvr.com
[Option #1 is highly recommended by article ____ [FIXME] in _Embedded Systems_ by Nigel]
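Option #3 above amounts to computing how long the queued bytes will occupy the wire and waiting that long (plus a margin) before tristating the RS-485 driver. A minimal sketch of that timing calculation -- the 10 bits/byte figure assumes 8N1 framing, and the 10% margin is an arbitrary choice of mine, not something from the posts above:

```python
def rs485_tx_hold_time(byte_count, baud_rate, bits_per_byte=10, margin=0.1):
    """Seconds to keep the RS-485 driver enabled after queueing
    byte_count bytes.  bits_per_byte=10 assumes 8N1 framing
    (1 start + 8 data + 1 stop bit); margin adds a safety fraction."""
    wire_time = byte_count * bits_per_byte / baud_rate
    return wire_time * (1.0 + margin)
```

For example, 16 bytes at 9600 baud occupy 16 * 10 / 9600, roughly 16.7 ms, on the wire, so holding the driver enabled for about 18 ms before disabling it leaves a little slack for interrupt latency.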
A product from that company that uses their recommended interface:
"485hub1: RS485 Active Hub (5 port) ... * 5 signal buffered party line data ports * Industrial grade pluggable wiring block connectors * Handles all NRZ protocols (Protocol independent)" http://airborn.com.au/photo1/ab192.html based on a 8051-style microcontroller ... uses a Littelfuse SP720 voltage clamp ...
Rather than using fuses, DAV
From: bobgardner at aol.com (BobGardner) Subject: Re: RS-232 Autobaud Date: 26 Nov 1999 00:00:00 GMT Newsgroups: comp.arch.embedded
>Does anyone know an efficient way to implement a simple autobaud algorithm over serial port?
Make a table of what a carriage return looks like at all the baud rates you want to recognize.
From: tony <Fuzzy_Wombat at excite.com> Subject: Re: RS-232 Autobaud Date: 27 Nov 1999 00:00:00 GMT Newsgroups: comp.arch.embedded ... Easiest thing, if your micro lets you, is to turn the UART off (make it into a normal port pin) and actually _time_ the bit lengths. Tony
From: jan axelson <jan at lvr.com> Subject: Re: RS-232 Autobaud Date: 29 Nov 1999 00:00:00 GMT Newsgroups: comp.arch.embedded
1. One end repeatedly sends a character, which may be anything from Chr(0) through Chr(127), at the desired bit rate. (The MSB, which is the last bit sent, must be 0.) The other node tries to detect the character, beginning at its highest bit rate. If the receiver's bit rate is too high, it detects more than one Start bit for each character sent. When the receiver detects the correct character and no others, without framing errors, it has the correct bit rate and can send a code to acknowledge.
2. If the receiver can measure the received bit widths, the sender can send a predetermined character and the receiver can measure the widths of the signals and calculate the bit width from there. This is the method used by the 8052-Basic chip, which looks for a carriage return.
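The second method above boils down to: time one bit of a known character (a carriage return's start bit is exactly one bit wide), then snap the implied rate to the nearest standard baud rate. A hedged illustration -- the candidate-rate table and the function name are my own choices, and real firmware would measure the pulse with a capture timer rather than receive it as a float:

```python
STANDARD_BAUDS = (1200, 2400, 4800, 9600, 19200, 38400, 57600, 115200)

def baud_from_bit_width(bit_seconds, candidates=STANDARD_BAUDS):
    """Given the measured width of one bit (e.g. the start bit of a
    received carriage return), return the closest standard baud rate."""
    measured = 1.0 / bit_seconds
    return min(candidates, key=lambda b: abs(b - measured))
```

A measured start bit of about 104 microseconds maps to 9600 baud; measurement error of a few percent still lands on the right entry because adjacent standard rates differ by a factor of 2.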
some links I stumbled over on a quick search of Google for Barker code info.
``The 11-chip Barker code (+ + + - - - + - - + -) specified by the IEEE 802.11 standard
I. Bar-David, ``A Method and Apparatus for Spread Spectrum Code Pulse Position Modulation,'' U.S. Patent applied for August 1994 to be issued early 1997.
I. Bar-David, and R. Krishnamoorthy, ``A Spread Spectrum Code Pulse Position Modulated receiver Having Delay Spread Compensation,'' U.S. Patent applied for 1994-11 to be issued early 1997.
and mentions these bit sequences (linear PN codes): Gold codes, small set of Kasami, large set of Kasami, Barker codes, Golay complementary, maximum-length sequence.
mentions the importance of rapid synchronization. (similar to clock recovery).
2. A receiver is matched to a single 7-baud binary Barker code (+ + + - - + -).
(a) What will be the output of the filter if the input consists of two contiguous ("touching") Barker codes, each of length 7, assuming no Doppler shift?
(b) Repeat (a) for the case when the two codes are separated by exactly one baud length and the second is inverted in sign; i.e., the input sequence is (+ + + - - + - 0 - - - + + - +). The receiver decoder (filter) remains the same as for (a). You should notice that some of the range sidelobes disappear in the second case. These sorts of probing sequences are used in ionospheric remote probing experiments at Arecibo and other large "incoherent scatter" radars.
3. Because of its very slow rotation rate, it is possible to map essentially the entire surface of Venus using the delay-Doppler technique. Mars, on the other hand, rotates as fast as Earth does, with a Martian "day" equal to 1.03 Earth days, and so the planet is "overspread", meaning that one cannot simultaneously avoid range and frequency aliasing over the whole surface of the planet. It is possible to map a portion of the surface fairly well, however. Suppose we decide to ignore the range aliasing for the leading portion of the return (from points near the subradar point), because the echo strength falls off rapidly with delay. In other words, when we are receiving echoes from two (or more) range "rings" on the planet simultaneously, the echo from the nearest ring will dominate, and the "clutter" from the farther ring (or rings) will not cause too much degradation of the signal.
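The reason Barker codes make good sync markers is their autocorrelation: a full-length peak at zero lag, and sidelobes of magnitude at most 1 everywhere else. A quick check of the 7-chip code from problem 2, in plain Python (no libraries; the function name is mine):

```python
def aperiodic_autocorrelation(code, lag):
    """Sum of code[i] * code[i + lag] over the overlapping region."""
    return sum(code[i] * code[i + lag] for i in range(len(code) - lag))

BARKER7 = (+1, +1, +1, -1, -1, +1, -1)

# Peak of 7 at lag 0; every nonzero lag gives a sidelobe of 0 or -1,
# so a matched filter sliding along a noisy stream produces one
# unmistakable spike where the code lines up.
```

The same property holds for the 11-chip code quoted above from IEEE 802.11, which is why a receiver can find frame boundaries in a continuous chip stream.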
Think serial communication is serious ? Hah.
"Connectix (Virtual PC), Casio (Cassiopeia), and Microsoft (Windows CE) all told me it couldn't be done, but I just synchronized my HPC (a Casio Cassiopeia Windows CE machine) with a Power Macintosh running Virtual PC. "
The serial cable that comes with the Cassiopeia has a 16-pin connector on one end, and a DB9 (female) connector on the other. You will need to add to it:
- 1. A serial com port adapter, DB9 (Male) to DB25 (Female) ($7.99 at Fry's).
- 2a. A Hayes Modem cable (NOT a null modem cable), DB25 (Male) to Mini-DIN 8 (Male) ($6.95 at Fry's).
- 2b. Alternatively, the cable provided with the Casio QV cameras which they supply to enable connection of their camera cable to a Mac serial port worked fine for Adel Malek, MD, PhD (thanks, Adel), and can be ordered from Casio. It makes for a very short simple adapter (only about 5-6 inches long).
NOTE: A printer cable doesn't work -- not enough of the pins are connected. And neither does the adapter cable that comes with the Palm Pilot Mac Pac (Michael went back and forth between it and the configuration above several times, and the adapter always failed). "Yeah, but it's DB9 to mini-DIN 8 -- it should work!". Go ahead; it's your time to waste. If anyone finds a large-supplier source for a DB9 to Mini-DIN 8 connector that correctly does the same thing as 1 and 2 together, PLEASE let me know so I can update this page.
"Windows CE Serial Port FAQ" http://www.cewindows.net/wce/serial.htm by Chris De Herrera
From: <cmpethic at ibeam.intel.com> (Chris Pethick) Newsgroups: comp.sys.mac.comm,comp.sys.mac.programmer.misc Subject: Re: PowerMac serial port Date: 26 Jan 1995 23:25:15 GMT Organization: Intel Corporation > My question: can the PowerMac actually handle the speeds I am looking for, > and if so, how? Are there utilities for this, or will I have to write my > own device driver to reach these speeds? The PowerMac serial port can indeed handle the speeds you are talking about. However, the only way I know to get those speeds is to provide an external clock signal on the DTR line of the serial port in question. There is a serial driver control call which enables externally clocked mode. This call is documented in one of the Tech notes. In order to get your desired 64000 char/sec you will need to clock the port at about 640KHz (or more). The PowerMac serial ports are capable of handling several megabaud in this fashion. Chris Pethick
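The arithmetic in that post is worth making explicit: with 8N1 framing each character costs 10 bit times, so 64000 chars/sec needs 640 kbit/s, and a 1x external clock must run at least that fast. A trivial sketch (the function name is mine; the 1x clock factor matches the post, while 16x-clocked UARTs would need 16 times more):

```python
def external_clock_hz(chars_per_sec, bits_per_char=10, clock_factor=1):
    """Minimum external clock for a serial port clocked at
    clock_factor times the bit rate (1x here, per the post above).
    bits_per_char=10 assumes 8N1 framing."""
    return chars_per_sec * bits_per_char * clock_factor
```

So 64000 chars/sec works out to a 640 kHz clock, matching the figure Chris Pethick quotes.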
From: "Peter Zechmeister" <zechm002 at gold.tc.umn.edu> Subject: Re: Proposal for a Home Automation protocol using PIC 16C84, RS485, and Linux. Date: 04 Sep 1999 00:00:00 GMT Organization: University of Minnesota, Twin Cities Campus Newsgroups: comp.home.automation,comp.arch.embedded Check out "S.N.A.P - Scaleable Node Address Protocol", it's a protocol for networked PICs etc. that can be connected to PCs, and is mapped onto TCP/IP, so ...... dream on http://www.hth.com/snap/ -- Peter Zechmeister - <zechm002 at gold.tc.umn.edu> - A University of Minnesota Alumnus
From: Dmitri Katchalov <dmitrik at my-deja.com> Subject: Re: RS232 without flow control? Date: 27 Jan 2000 00:00:00 GMT Newsgroups: comp.arch.embedded,sci.electronics.basics,sci.electronics.design ... > > So am I crazy in not incorporating any flow control (RTS/CTS, DTR/DSR)? > > Well, it depends on the data format! > > If you miss a character in a decimal number, it is not the > problem that you have NO data, but wrong data. How would you > distinguish missing decimals? The same applies to multibyte > binary data. Easy. You'll get FIFO buffer overrun bit set in your UART. You will need a unique char sequence to identify the start of transmission so the receiver can re-synchronise after buffer overrun. If you're sending ASCII <CR> seems like a good choice. Dmitri Sent via Deja.com http://www.deja.com/ Before you buy.
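Dmitri's suggestion -- use a unique start/end-of-message character so the receiver can resynchronize after a buffer overrun -- can be sketched as a byte-at-a-time framer. Here <CR> (0x0D) terminates each message, and anything received before the first complete delimiter after a reset is simply discarded (the class and method names are made up for illustration):

```python
class CrFramer:
    """Collect bytes into <CR>-terminated messages; after a reset
    (power-up or overrun), bytes up to the next <CR> are discarded."""
    def __init__(self):
        self.buf = bytearray()
        self.synced = False   # not synced until we see our first <CR>

    def feed(self, data):
        """Feed received bytes; return any complete messages."""
        messages = []
        for byte in data:
            if byte == 0x0D:           # carriage return: end of message
                if self.synced:
                    messages.append(bytes(self.buf))
                self.buf.clear()
                self.synced = True     # next bytes start a fresh message
            else:
                self.buf.append(byte)
        return messages
```

On detecting a UART overrun, firmware would clear `synced` (and the hardware FIFO) and the framer falls back into discard mode until the next delimiter, which is exactly the resynchronization behavior described above.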
That standard is IEEE-1394, also known as FireWire or i-Link.'' -- Eric Smith 2000-02-16
DAV: this is obviously far more scalable than most wireless networks ... http://www.cs.hut.fi/Research/Dynamics/ Is this related to http://www.cs.pdx.edu/research/SMN/ ?
In MANET, every host is also an IP router for the other MANET hosts. This enables users to access the wired network using other users' mobile hosts as relay stations beyond the direct wireless link-level (radio) range of the destination host. Also, with the MANET approach, the hosts are not tied to one link-level technology, and their IP addresses are not restricted to one subnet. With this technology, a cost-efficient broadband access to the Internet can be provided. However, the technology is not yet ready; issues like security and distribution of costs must be addressed.
Figure two shows you the six most popular IBM to LaserWriter cable lashups. The six combinations arise because you can have a DB-25 or DB-9 connector on the host, and on the LaserWriter either a DB-25 using RS-232-C or (depending upon your LaserWriter model) a DB-9 or a Mini DIN-8 connector using the RS-423 serial interface standard.
CEA R7.3 (Consumer Electronics Association); HomePlug Alliance; HomePNA (Home Phoneline Networking Alliance); X10; CEBus; LonWorks;
and some wireless (RF and infrared) media protocols. [FIXME: machine vision ?]
The design fits 26 letters of the alphabet, the * and #, 10 numbers, three punctuation keys, a space bar, shift and delete key into an area no larger than one-third of a business card.
... Digit Wireless founder David Levy ... Fastap ...
Digit Wireless has also developed a version for Japan that allows the keyboard to represent the 120 characters of the country's languages.
Mr Levy said it reduced the number of taps needed to form Japanese characters from eight to two.
"Mount Rainier enables native OS support of data storage on CD-RW. This makes the technology far easier to use and allows the replacement of the floppy. This is done by having defect management in the drive, by making the drive 2k addressable, by using background formatting, and by standardizing both command set and physical layout. The new standard is promoted by Compaq, Microsoft, Philips, and Sony and is supported by over 40 industry leaders." -- http://www.mt-rainier.org/
The GestureWrist, developed by Jun Rekimoto of Sony's Computer Science Laboratory in Tokyo, Japan, uses sensors embedded into a normal watch strap. These track a wearer's arm movements and the opening and closing of their hand, relaying this information to a computer kept somewhere on their person.
The result is that, instead of relying on a computer mouse, the wearer can move a pointer around a computer screen and click on icons using only arm and hand movements.
Date: Thu, 19 Dec 2002 09:52:49 +0000 To: pci-sig at znyx.com From: Paul Walker <paul at walker.demon.co.uk> Reply-To: Paul Walker <paul at 4Links.co.uk> Subject: Re: What's your bus?
Alan
A common feature of the ones you mention is that they are not buses; they use switch fabrics.
So does SpaceWire, a derivative from IEEE 1355, that is simpler and more flexible than any of those you mention. It was way ahead of its time, but the plethora of new standards, all of which follow it and all of which are more complicated, just show that its time will come.
You can reach most of the information about SpaceWire and IEEE 1355 from our web site: www.4Links.co.uk
Best regards
Paul Walker
...
Alan Deikman <Alan.Deikman at znyx.com> writes >I'd like to hear back from anyone with an opinion on this. What do you >think of as the ultimate bus after PCI and why? > >1. Hypertransport >2. RapidIO >3. Star Fabric >4. PCI Express >5. Infiniband > >Any others that will be players? I get asked this sort of question all the >time and I need some new material. :) > >Alan Deikman >ZNYX Networks, Inc. > > -- Paul Walker CEO, 4Links Limited, Chair of the 1355 Association www.4Links.co.uk www.1355.org <paul at 4Links.co.uk> 4Links Limited --- Boards, chips, IP and consultancy ... for Links P O Box 816, Bletchley Park phone +44 1908 64 2001 Milton Keynes MK3 6ZP, UK fax +44 1908 64 2011
Is there a standard for connecting the TX, RX, GND of RS-232 to the 4 pins of a RJ-11 connector ? Paul Campbell says "I wired the GND to the yellow line, TXD to the black line and RXD to the red line." http://www.taniwha.com/~paul/fc/ass2.0.html
Every "record" is tagged with a (local) time. Every time it is *modified*, the time is reset to (local) "now". (A "record" could be an entire file, or it could be a single row in a database).
Each "record" is tagged with a name. It might be nice if this name is a globally unique number. (Perhaps the ethernet number of the machine it was created on, plus some sequence number to distinguish all the records ever created on that machine). (Sequence number can be the same as what the clock of the machine where it was created said at the instant it was created -- which may be *different* from the "time" tag, because of both (a) local time doesn't align with the time at the machine where it was created, and (b) the record may have been modified.)
Every machine keeps track of everyone it has ever synchronized with, and exactly when that happened (local time).
During synchronization (while synchronizing):
* Check to see that "local time" on this machine and "neighbor's time" on other machine seem reasonable -- At minimum, "local now" should be *after* "local last-time synchronized with that neighbor" on both machines. Perhaps it would also be nice for each side to find the *time* since the last synchronization using local data, and make sure both side agree on roughly the same elapsed time. Perhaps go even further and make sure "now" is roughly the same on both machines (if they are both supposed to be synchronized to GMT).
The very *simplest* thing to do is simply send *all* my data over. While receiving *all* my neighbor's data, I should see:
* "old, unchanged data, the same on both sides": The copy I send over has a date *before* the last time I exchanged data with that machine. The copy he sends back has a date *before* the last time he exchanged data with me. (If we're both synced to GMT, both copies have the *same* date). Doesn't really need to be sent. Already the same on both sides.
* "new items created on my side": The copy I send over has a date *after* the last time I exchanged data with that machine. He never sends me a record with that name.
* "new items created on his side": He sends over a copy with a (remote) date *after* the last (remote) time I exchanged data with that machine. I don't have any records with exactly the same name. I store it with some arbitrary local time *before* "now", perhaps (now - 1 tick), perhaps some approximation of the (GMT) time it was created.
* "items that were deleted on my side": He has an old copy dated *before* the last time I exchanged data with that machine. I don't have any records with exactly the same name. Delete record on both sides.
* "items that were deleted on his side": I have an old record dated *before* the last time I exchanged data with that machine. He doesn't have any such record. Delete record on both sides.
* "items that were modified only on my side": I have a record with a modification date *after* the last time I exchanged data with that machine. He may have a record with a (remote) date *before* the last (remote) time I exchanged data with that machine. Replace his obsolete record with my newer record.
* "items that were modified only on his side": similar.
* "other -- conflict": something odd happened. Keep both copies of the record; try to get the user to merge them into one new (freshly-modified) version. (What exactly is the situation(s) here ?)
Will this really work if we have *3* or more machines trying to synchronize with each other (pairs at a time), where any record can be modified on any machine ?
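The cases above can be condensed into one small decision function per record name. A hedged sketch of the two-machine logic -- the function name and the timestamp representation are invented for illustration, a common timebase is assumed (unlike the local-clock scheme above), and the clock-skew and three-machine subtleties are ignored:

```python
def classify(mine, theirs, last_sync):
    """Classify one record name during a two-machine sync.
    mine/theirs: modification time on each side, or None if absent.
    last_sync: time of the previous synchronization between the pair."""
    if mine is None and theirs is None:
        return "no record"
    if theirs is None:
        # I have it, he doesn't: either I created it since last sync,
        # or he deleted his copy of an old record.
        return "created here" if mine > last_sync else "deleted there"
    if mine is None:
        return "created there" if theirs > last_sync else "deleted here"
    if mine <= last_sync and theirs <= last_sync:
        return "unchanged"            # old data, same on both sides
    if theirs <= last_sync:
        return "modified here"        # push my newer copy
    if mine <= last_sync:
        return "modified there"       # take his newer copy
    return "conflict"                 # both modified: ask user to merge
```

Note how "created here" and "deleted there" are distinguished purely by comparing the record's timestamp against the last-sync time, which is exactly why both machines must remember when they last synchronized with each other.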
Chris Harrison has developed Skinput, a way in which your skin can become a touch screen device or your fingers buttons on an MP3 controller. ... ... in the Human-Computer Interaction Institute at Carnegie Mellon University, Harrison says ... The team has created its own bio-acoustic sensing array that is worn on the arm ... Harrison explains that when a finger taps the body, bone densities, soft tissues, joint proximity, etc, affect the sound this motion makes. The software he has created recognizes these different acoustic patterns and interprets them as function commands. ...
This page started in 1997 and has backlinks
David Cary
Return to index // end http://david.carybros.com/html/serialportdocs.html