From: Robert J. Bradbury (bradbury@www.aeiveos.com)
Date: Wed Sep 08 1999 - 16:40:46 MDT
On Wed, 8 Sep 1999, Jeff Davis wrote:
> In his reply to den Otter,
>
> I wrote:
>
> > I'm not sure that this is the "consensus". If we got the exponential
> > growth of nanoassembly tomorrow, we would have *neither* (a) the
> > designs to build stuff, nor (b) AI to populate the nanocomputers.
>
> This seems to suggest that you need AI to populate the nanocomputers,
> and I'm not sure that this is the case.
Sorry for any confusion; I did not mean to suggest that you need AI in
any way to control nanoassemblers or nanobots. However, unless our
engineering design programs get very "smart", AI certainly would be
useful if we want to be able to say (in the voice of Scotty), "Computer,
design me a yacht".
I think our engineering programs are going to get to the point where
they could design, at the nano-level, all standard mechanically
engineered "parts" (gears, screws, wheels, robotic arms, etc.). That is
why I suggested earlier that the SETI@home approach would be a great use
for idle computers over the next 10 years, if you could come up with
some heuristics that could say: yes, this randomly thrown-together
collection of atoms does in fact appear to be a molecular "part".
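To make that concrete, here is a rough sketch (in Python, purely for
illustration) of what one volunteer machine's work unit might look like.
The candidate generator and the scoring heuristic are stand-ins I made
up for this note, not anything from a real molecular design package:

    import random

    ATOM_TYPES = ["C", "H", "N", "O", "Si", "S"]

    def random_candidate(n_atoms=50):
        # Throw together a random collection of atoms with 3-D positions.
        return [(random.choice(ATOM_TYPES),
                 random.uniform(0, 5), random.uniform(0, 5), random.uniform(0, 5))
                for _ in range(n_atoms)]

    def looks_like_a_part(candidate):
        # Stand-in heuristic: a real one would check bonding, strain,
        # rigidity, symmetry, etc.  Here, "mostly carbon" passes.
        carbon_fraction = sum(1 for atom in candidate if atom[0] == "C") / len(candidate)
        return carbon_fraction > 0.4

    def work_unit(n_candidates=10000):
        # One volunteer machine's batch: keep candidates worth a closer look.
        return [c for c in (random_candidate() for _ in range(n_candidates))
                if looks_like_a_part(c)]

    print(len(work_unit(1000)), "candidate 'parts' flagged for detailed simulation")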
AI has one other very useful application, which was probably what I
was thinking of when I made the statement -- facilitating the
overlap or merging of minds. It may be possible for the mind
of a "universal translator" to interpolate between two minds, as
translators currently do, but the problem with merging minds
is that you have to deal with *entire* minds, and I don't think
a single mind can do that. Now, *maybe* you can partition the
problem into pieces and have dozens of minds handle the mapping
and layout (as you do with semiconductors or yachts now), but my
gut says that will have a very non-holistic feel to it because
the translator minds lack simultaneous communication about what
each of them is translating. I think the only way an
overlapped/merged mind will be constructed is to have
a meta-mind (an AI?) that encompasses both minds and handles
simultaneous mappings/translations for all the inter-mind
concepts and channels. I suspect that if you ran the
uploaded minds at sub-real-time and the multi-translator
minds at real-time, they could accomplish the equivalent
of a meta-AI-translator mind. But that is really a guess.
> Re: nanobot hierarchical control systems
> you might have a much simpler control system than is suggested by
> the sometime perception of ai.
Yes, it is definitely much simpler. Controlling an assembler arm
does not require a sophisticated ability to make tradeoff decisions,
or even more complex forms of intelligence. Design, however, does.
> Re: hierarchy producing intelligence
> "complexity" of intelligence only to discover later that it is arises
> from not-so-awsomely-complex substructures in a multi-level hierarchy
> whose activities are coordinated by an elegant (in an appreciative sense
> "simple") somewhat-modifiable feedback system?
I suspect this is closer to the truth than many people would like to
admit. If you look at the work done with expert systems, it turns
out to be little more than a complex (very complex!) decision tree.
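As a toy illustration of how un-mysterious that can be, the "expert"
part of an expert system boils down to something like the following
(the rules here are invented just to show the shape):

    def classify(observation):
        # A tiny, fully explicit decision tree standing in for a huge one.
        if observation["is_cell"]:
            if observation["membrane_intact"]:
                return "map and move on"
            return "flag for repair"
        if observation["is_debris"]:
            return "queue for disposal"
        return "report unknown object"

    print(classify({"is_cell": True, "membrane_intact": False, "is_debris": False}))
    # -> flag for repair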
> How "i" does the control system *really* have to be?
Doesn't. All it has to do is meet the design parameters.
>
> The control system question is something I puzzle over.
> Where will it surprise us by being simple?
The control system for something as straightforward as a nanobot
network doing thermographic sensing or cell mapping in your body
is fairly simple -- probably not much more complex than the control
system on something like the Mars Surveyor (but much smaller).
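Something like the following loop (hypothetical functions and
thresholds, chosen only to show how little intelligence is involved)
would cover the thermographic-sensing case:

    import random

    BASELINE_K = 310.15      # ~37 C core body temperature
    THRESHOLD_K = 0.5        # only report deviations larger than this

    def simulated_sensor():
        # Stand-in for the nanobot's temperature sensor.
        return BASELINE_K + random.gauss(0, 0.3)

    def control_loop(read_sensor, report, steps=1000):
        # Sense, compare against a baseline, report anomalies.  Nothing smarter.
        for _ in range(steps):
            temp = read_sensor()
            if abs(temp - BASELINE_K) > THRESHOLD_K:
                report(temp - BASELINE_K)

    control_loop(simulated_sensor, lambda delta: print("anomaly: %+.2f K" % delta))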
> Where will it surprise us by being complex?
It will get complex when you want a real-time nanobot system that
can detect and correct things "bigger" than itself, e.g. the
onset of "negative" (undesired) thoughts.
It may also get complex in situations such as accidents, where
the nanobots would have to respond quickly, with potentially
incomplete information and a lack of coordinated "intelligence".
I could imagine your self-contained compu-net would have to make
decisions similar to those physicians have to make in triage
situations. For example, you are unconscious in the desert
following a freak crash of your aircar due to a nearby gamma
ray burst knocking out its computers. Do your nanobots launch
an all-out effort to construct a radio transmitter and use
all of their remaining power to call for help, or do they
settle in, slowly consuming your body for fuel, organ by
organ, keeping the essential functions going in the hope
that a search team will manage to locate you?
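Stripped down to a caricature, that triage decision is just an
expected-value comparison made with whatever crude estimates the
nanobots have on hand. All of the numbers below are invented:

    def expected_survival(p_signal_heard, hours_sustainable, p_found_per_hour):
        # Option A: burn the remaining power on a distress transmitter.
        transmit = p_signal_heard
        # Option B: conserve, consume tissue for fuel, and wait for the search team.
        conserve = 1 - (1 - p_found_per_hour) ** hours_sustainable
        return ("transmit", transmit) if transmit > conserve else ("conserve", conserve)

    print(expected_survival(0.30, 48, 0.02))   # -> ('conserve', ~0.62)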
> I hope others will help out with this.
> Robert of course has an unfair advantage in that he gets to sneak a
> peek at Nanomedicine ahead of the rest of us. Luckily, he's on our team.
The next time I get a chapter like Chapter 8 (Navigation) to review,
I'll be sure to suggest that there are other people who would really
enjoy the honor of doing some of the review work... ;-)
>
> By the way, speaking of nanocomputers and nanobot control systems: I
> spoke, briefly, to Eric Drexler and he mentioned that the speed of
> nanocomputers is slow relative to the speed of nanobots.
This is true. Computers are very fast now and will only get
somewhat faster (maybe 2-3 orders of magnitude). The difference
in motion times of something like an arm or leg on something of
"bacteria" size compared with "human" size is *much* greater.
This is in part because nanobots can apply much greater power
densities at the strength limits of the materials. It may also
involve a reduction in the inertia of the manipulators
(though I'm going to have to think about this some more).
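A back-of-the-envelope way to see the scaling, assuming (very
roughly) that manipulator tip speeds near the material strength
limits are comparable at both scales, so that cycle time simply
tracks arm length:

    ARM_LENGTH_HUMAN_M = 1.0       # ~1 m industrial robot arm
    ARM_LENGTH_NANO_M = 100e-9     # ~100 nm nanobot manipulator
    TIP_SPEED_M_S = 1.0            # assumed comparable at both scales

    human_cycle_s = ARM_LENGTH_HUMAN_M / TIP_SPEED_M_S   # ~1 s per stroke
    nano_cycle_s = ARM_LENGTH_NANO_M / TIP_SPEED_M_S     # ~100 ns per stroke
    print("ratio of motion frequencies: %.0e" % (human_cycle_s / nano_cycle_s))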
> That is, compared for example to the speed of macro computers relative
> to the macro machines that they control. I asked him at Extro4 if this
> was "a problem" and he said it has to be dealt with, but that it can be.
Interesting comment. The computers described in Nanosystems
and Nanomedicine are fairly simple (4-bit or 386/486-equivalent
systems, if I recall correctly, in various places), which of course
is fine if you have a limited number of things that each processor
must attend to. It is important to realize that nanocomputers
on nanobots are severely space- and power-constrained, so ultimately
the control systems (or the functions being done) are going to
have to be simple. Complexity will arise, though; whether it comes
from "hive-type" cooperation and/or from directive-issuing
"thought" centers that manage the underlings isn't clear at
this point.
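If I had to guess at the shape of it, it would be something like the
toy sketch below: underlings running tiny fixed programs, with a larger
coordinating node handing out one-word directives (everything here is
made up for illustration):

    DIRECTIVES = {"MAP", "REPAIR", "IDLE"}

    class Underling:
        # A nanobot running a tiny fixed program; it just obeys.
        def __init__(self):
            self.directive = "IDLE"
        def receive(self, directive):
            if directive in DIRECTIVES:
                self.directive = directive
        def tick(self):
            return "doing " + self.directive

    class Center:
        # The directive-issuing "thought" center managing the underlings.
        def __init__(self, swarm):
            self.swarm = swarm
        def issue(self, directive):
            for bot in self.swarm:
                bot.receive(directive)

    swarm = [Underling() for _ in range(4)]
    Center(swarm).issue("MAP")
    print([bot.tick() for bot in swarm])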
I believe it is clear however, that nanobots operating
in vivo will not be a libertarian society. :-)
Robert