Freeman Craig Presson, <dhr@iname.com>, writes:
> $ ps -ef
> halfinn 0 10834 Jun 08 0:01 univgen
> root 6 5838 Jun 13 0:06 telnetd
> nanogrl 0 5838 Jun 25 0:00 -host8.wsfa.com: nanogrl: aminomodel -c
> nanogrl 6 5838 16:10:53 0:00 -host8.wsfa.com: nanogrl: vworld
> fcp 8 5838 Jun 22 0:00 -fc1-49.netup.cl: gnuchessv200.3
> halfinn 6 36456 Jun 13 0:00 -univgen
> fcp 6 5838 Jun 23 0:00 -159.235.8.45: mediatron -ch Remedial-physics
> fcp 8 5838 Jun 23 0:00 -159.235.8.45: mediatron -ch Vorgy
> root 6 5838 11:39:39 0:00 telnetd
> nanogrl 4 5838 Jun 24 0:00 -host9.wsfa.com: mediatron -ch Vorgy
> fcp 2 5838 Jun 23 0:00 -194.224.244.49: make -k univgen.cpp
> anders++ 2 5838 Jun 22 0:00 -cox.com: anders++: STOR univ3.2
I love it! I want to know what this Vorgy thing is that nanogrl is running though...
> You said that right at the end -- our upload host will be the ultimate
> PERSONAL computer; we'll be gravely concerned with its security and
> reliability. We'll also want all the raw power we can get. Processors will be
> cheap, there will be processors everywhere, maybe one per
> neuron/synapse in the neural net part, and way more than we need to run
> the rest of it (assuming a hybrid machine, part NN and part symbolic).
Even where we do have enough resources for an optimally secure OS, the main question remains whether philosophical considerations of how consciousness works will constrain the architectures we adopt. I have a story (which I've told before) about an interesting architecture which illustrates this point.
Years ago I worked in the parallel computing business, making hypercube supercomputers. I was in charge of the OS. One of the groups we were working with was at JPL, and they had their own OS design, which they called Time Warp.
Time Warp ran on a parallel processor that was designed to simulate systems with mostly local interactions but occasional distant effects. I think their contract was something related to Star Wars missile defense.
Parallel processor systems work well with local interactions, but when there is a need for global effects they slow way down. After each step of the calculation, every processor has to stop and wait for any messages to come in from distant processors in the network. Then they can go on to the next step. Most of the time there aren't any such messages, but they have to wait anyway. This ends up running very inefficiently, and it gets worse as the network grows.
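In code, the conservative scheme looks roughly like this (just a toy C++ sketch of the lockstep idea, not anything from the actual machines): each worker does its local work for a step and then sits at a barrier until everyone else has finished that step too, whether or not any remote message actually showed up.

    // Toy sketch of conservative lockstep synchronization (illustration only).
    // Every worker must reach the barrier before any of them may start the
    // next step, so they all pay for the wait even when no message arrived.
    // Needs C++20 for <barrier>.
    #include <barrier>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        const int workers = 4, steps = 5;
        std::barrier sync(workers);          // all must arrive before any proceeds
        std::vector<std::thread> pool;
        for (int id = 0; id < workers; ++id)
            pool.emplace_back([&, id] {
                for (int t = 0; t < steps; ++t) {
                    // ... local work for step t (usually no remote messages) ...
                    std::printf("worker %d finished step %d\n", id, t);
                    sync.arrive_and_wait();  // idle here even when nothing arrived
                }
            });
        for (auto& th : pool) th.join();
    }

The cost of that arrive_and_wait is what grows with the size of the network: the whole machine runs at the pace of its slowest processor, every single step.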
The idea of Time Warp was that the processors wouldn't wait. They used a technique called "optimistic execution with rollback": each processor proceeds with its calculations on the assumption that no messages will arrive from distant processors. That assumption is usually right, so they run very quickly.
The problem, of course, is that when such a message does arrive, it is too late: the processor has already gone ahead and calculated what would happen assuming no such message existed.
For example, suppose the processor has calculated up to time step 2108, and here comes a message stamped with time step 2102. We were supposed to handle it then but we have gone too far. What we do is to roll back the processor state to the previous checkpoint, which may have been, say, 2100. From that point the processor can go forward to 2102 and then handle the incoming message, and go on from there. The earlier run from 2102 to 2108 is discarded and has no effect on the rest of the simulation.
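Roughly, in code (again just a toy C++ sketch of the idea, with a single counter standing in for the processor state, nothing from the real Time Warp): keep periodic checkpoints, run ahead optimistically, and when a straggler stamped 2102 shows up after you've reached 2108, restore the 2100 checkpoint and replay forward from there. A real implementation also has to cancel any messages the discarded run already sent out, which Time Warp did with anti-messages.

    // Sketch of optimistic execution with rollback for one logical process
    // (illustration only, not Time Warp itself).
    #include <cstdio>
    #include <map>

    struct Process {
        long state = 0;
        long now = 0;                        // current simulation time
        std::map<long, long> checkpoints;    // time -> saved state

        void step(long t) {                  // one optimistic step at time t
            state += t;                      // stand-in for real work
            now = t;
            if (t % 100 == 0) checkpoints[t] = state;   // periodic checkpoint
        }

        void handle_straggler(long t) {      // message stamped earlier than `now`
            auto cp = checkpoints.upper_bound(t);       // first checkpoint after t
            --cp;                                       // last checkpoint at or before t
            state = cp->second;                         // roll back the state
            for (long u = cp->first + 1; u <= t; ++u)   // replay forward to t
                step(u);
            // ... now apply the late message and continue from here;
            // a real system would also cancel messages sent by the
            // discarded run (anti-messages) ...
        }
    };

    int main() {
        Process p;
        for (long t = 1; t <= 2108; ++t) p.step(t);   // ran ahead to 2108
        p.handle_straggler(2102);                     // late message stamped 2102
        std::printf("rolled back, resumed at t=%ld\n", p.now);
    }

The bet is that rollbacks are rare enough that the time wasted redoing 2100-2102 is much less than the time every processor would otherwise waste waiting at every single step.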