From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Tue Jun 25 2002 - 08:46:09 MDT
On Tue, 25 Jun 2002, Christofer Bullsmith wrote:
> A question for the 'extropians'. (Apologies if this discussion has been had
> -- please direct me to it).
The archives are probably hip deep with them...
One starting point is Moravec's "Bush-tree" process for neural replacement
in "Mind Children". One approach is a "tightly" coupled mind-machine interface
enabled through synaptic nanobots, a whole-brain fiber optic net and a data
port (a la The Matrix). Kurzweil has indicated that he thinks we can do
the link wirelessly, but I'm skeptical. Keeping the power requirements
within the heat dissipation capacity of the brain will be challenging.
As more and more of your mind "grows" out-of-brain, the survival of the
brain becomes less and less important. I generally refer to this
as "outloading" or "offloading" to differentiate it from forms of "uploading"
like recreating your information content from fine-resolution scans performed
on a cryonically preserved brain (a concept that tends to make Damien crazy --
"because it's not 'me'").
> And so we have found immortality.
Not so fast there. As I pointed out at Extro-3, you still have the problem
of your local hazard function. Until you are a distributed replicated
intelligence, you can still die from "local" hazards (things like gamma-ray
bursts if you are uploaded into a Matrioshka Brain). You have the ultimate
problem of the hazard function of the Universe itself (the protons may
eventually decay). It isn't clear whether the Dyson/Freese-Kinney approaches
give us an escape allowing us to sneak past that sticky problem.
The best we can predict with relative certainty is that if you can trump
your local hazard function (the larger the distances you distribute your
evolved "mind" over, the lower your hazard function is likely to become),
then you probably get trillions of years of existence.
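A quick toy calculation (all numbers invented, just to make the point
concrete): if each locale carries an independent hazard rate, the lineage
only dies when *every* copy dies, so distribution beats the local hazard
function very quickly:

# Toy sketch: survival of a replicated mind under independent local
# hazards. Assumes a constant hazard rate h per locale (events/year),
# so one copy survives T years with probability exp(-h*T); the lineage
# is lost only if all N copies are lost. (Getting out to trillions of
# years requires continual re-replication, which this ignores.)

import math

def lineage_survival(h, T, N):
    """P(at least one of N independent copies survives T years)."""
    p_single = math.exp(-h * T)
    return 1.0 - (1.0 - p_single) ** N

# Invented numbers: one sterilizing event per gigayear per locale.
for N in (1, 10, 100):
    print(N, lineage_survival(h=1e-9, T=1e9, N=N))
# N=1: ~0.37; N=10: ~0.99; N=100: ~1 -- distribution trumps the hazard.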
> Now, I think these words are being used in different ways in this forum, but
> bear with me. Humans can be emulated (their input-output pattern reproduced,
> say by a digital computer), or simulated (modelled beyond the input-output
> level, perhaps given a digestion and so on, to whatever level and for
> whatever purpose one has in mind).
Actually, it depends on the level of the simulation. A molecular-dynamics,
atomic-level simulation of the human brain probably cannot be run in
"real time" even by a Matrioshka Brain, though "real time" becomes
a much more nebulous concept once uploaded because you can vary the
clock rate/time slice.
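Rough arithmetic (every number here is an assumed order of magnitude on my
part, not a measurement): ~10^26 atoms in a brain, femtosecond MD timesteps,
and a Matrioshka Brain in the 10^42 ops/sec range still comes up short:

# Back-of-envelope sketch; all values are assumptions for illustration.
atoms          = 1e26    # ~1.4 kg of mostly water
timestep       = 1e-15   # ~1 femtosecond per MD step
flops_per_atom = 1e3     # assumed force-evaluation cost per atom per step

steps_per_sim_sec = 1.0 / timestep                              # 1e15
flops_per_sim_sec = atoms * steps_per_sim_sec * flops_per_atom  # ~1e44

mbrain_ops_sec = 1e42    # rough Matrioshka Brain capacity estimate
print("slowdown: ~%.0fx" % (flops_per_sim_sec / mbrain_ops_sec))
# ~100x slower than real time under these assumptions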
> Neither being emulated (by my friend's kid, nor by a roll of toilet paper
> and a diligent clerk) nor simulated (by a media double or an android)
> constitutes duplication of 'me'.
If it's an exact information duplicate of you, it *is* you. That is why
copies have to remain in stasis (or be synchronized to each other in
real time). As soon as the information content diverges it ceases to
be you.
> So far as I can see, uploading is no route to immortality. I might play a
> crucial causal role in the creation of a machine intelligence, but I die
> anyway.
No route to immortality for the reasons cited above. But either approach
I've described does seem to provide a fairly radical extension to the
continuity of one's "identity" beyond the limits currently believed by
most of humanity.
> Now, I've been trying to figure out why you all feel differently. Maybe if
> the upload scanning is invasive and results in destruction of the body, we
> intuitively fasten onto the emulation as the 'best candidate survivor', in
> the sense of 'I am survived by Hal, the new Me'. In some brain damage or
> memory loss cases, where questions of 'same person/different person?' become
> difficult to answer, it might be appropriate to use the same kind of
> language.
I like to think of it as the system reboot your brain goes through when
you wake up each day. A more severe case (e.g. when the copying process
isn't perfect) might be more like recovering from amnesia or a coma.
Some people seem to have their "identity" much more wrapped up in their
body image, the face they see in the mirror in the morning, etc. (though
of course these things could be simulated for an uploaded mind).
> But this intuition can be bent any which way. An upgrade of technology to
> nondestructive scanning may leave you alive and well, watching a machine
> intelligence launch itself ('yourself?') into space. Even the information
> obtained by destructive scanning could presumably be used more than once --
> which resulting machine intelligence is 'you'?
They all start out as "you". A better analogy might be "faxing" rather than
xeroxing. The copies are all a little different from the original as soon
as they start executing. The longer they run the more they are likely to
diverge.
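A toy illustration of how fast that happens (just an analogy, not a brain
model): give two bit-identical states inputs differing by one part in 10^15
and run them through any chaotic update rule:

# Two "copies" start identical except for 1e-15 of fax noise; a chaotic
# update rule (the logistic map) amplifies the difference to order 1
# within ~100 steps.
def step(x):
    return 3.9 * x * (1.0 - x)   # logistic map, chaotic regime

a = 0.5
b = 0.5 + 1e-15                  # the divergence seed
for i in range(1, 101):
    a, b = step(a), step(b)
    if i % 25 == 0:
        print(i, abs(a - b))
# the gap grows from ~1e-15 to ~1: the copies no longer agree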
> So far as I can see, you've
> died, though emulations of you survive you. Intuitively, linguistically,
> legally, philosophically, and biologically a much easier position.
In the outloading process, you have never "died" per se because there
has never been any "full stop" of the brain's internal thought processes.
There is just a gradual evolution of the mind you were into something
with much greater capabilities. You may consider yourself to have "died"
when your body ceases to function (or gets offed in some extreme sporting
accident) but your highly evolved mind should easily survive this event.
(Consider it to be along the lines of losing a tooth to your current self.)
> What do you expect? A splitting of consciousness (what would this be)? The
> beginning of a new one (in this case, you die and your child goes to the
> stars)?
Well, multiple personality disorders seem to suggest the wetware can
support multiple "selves". I vote to send all the good parts of
myself to the stars and leave all the bad parts behind on Earth to
get crisped when the sun becomes a red giant. The best way to think
of it, I think, is self-directed self-evolution. Think of it like
continually upgrading your computer. After a while none of the
hardware is the same and much of the software is different.
But you still use it to do many of the things you have always done.
> I feel a certain academic interest in uploading myself, but would
> deem destructive scanning a form of suicide, and the process otherwise to
> have no impact on my life expectancy. (Medicinal, biological, mineral, or
> computational *enhancements* I feel a much more personal interest in,
> though.)
Read the Freitas-Phoenix Vasculoid paper or Nanomedicine Volume I if you
are really ambitious. Then imagine similar capabilities applied to
brain-resident nanohardware. The outloading process I describe above
follows naturally from that.
As long as there is continuity of self-identity, differentiating between
non-destructive and destructive uploading is attempting to split a mighty
fine hair.
> As an aside, my worries are I think made worse by the easy talk of programs
> and implementation and moving humans (in the process becoming post-) to new
> architectures, etc. For a start, I'm not my brain, for all that my point of
> view sits in about the same place. My memory uses my muscles, I can't
> remember my own phone number without my right hand and a keypad, my brain is
> privileged but not the whole story by any means.
But it's the major player -- nobody suggests that adaptation won't be
required following an upload. Think of it like some of the neural remapping
that has to be done following a cochlear implant, or some of the new eye
replacements, spine-to-computer-to-spine "jumpers" to deal with severed spinal
cords, etc.
> The hardware
> embodies the program -- it's an embarrassment for the old Minsky crowd that
> even a digital computer (of the physically possible variety) has to get so
> low-down and dirty as to actually simulate at the node level rather than
> just running a few lines to emulate the input-output behaviour.
Actually not. If William Calvin's proposals for how "ideas" are stored
and intermingle in the brain are correct, then it easily explains why
existing computer models haven't produced creative "intelligence".
Deep Blue is an example of getting the input-output very similar using
an entirely different model to generate the "intelligent" behavior.
If Calvin's hypothesis is correct, getting machine "intelligence"
is going to take some derivative of genetic programming algorithms
and methods.
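For flavor, the core Darwinian loop is tiny -- here is a toy selection
sketch (my toy, loosely in the spirit of Calvin's cloned-and-competing
cortical patterns, not his actual model):

# A population of candidate "ideas" (bit strings) gets cloned, mutated
# and selected until one matches an arbitrary target pattern.
import random

TARGET = [1] * 20                       # stand-in for a "good idea"
def fitness(g):
    return sum(a == b for a, b in zip(g, TARGET))

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == 20:
        break
    survivors = pop[:10]                # the fittest ideas get cloned
    pop = [[bit ^ (random.random() < 0.02)   # copying with rare mutation
            for bit in random.choice(survivors)]
           for _ in range(50)]
print("generations:", gen, "best fitness:", fitness(pop[0]))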
> In the absence of a hardware/software divide, pulling out just the software
> sounds difficult;
It's the neural interconnections and the strengths of the synaptic connections.
Nanobots at the synapses or 5-10 nm resolution frozen slice scanning should
be able to collect this information. What is going to be difficult is
developing high-bandwidth inter-upload communications so when your network
thinks "blue" it maps onto something approximately equivalent in my network.
> and once you're talking not about a functionally identical implementation
> of the software (as one can with software) but rather a
> 'functionally-similar-in-respect-of-x-and-y' emulation
> (the best one can do, I'm afraid),
But that's what we are already! Each letter I type on the keyboard
is creating a new "me". I am a different me from who I was yesterday,
and he was different from who I was the day before. So long as
you have a continuity of identity, I don't think it matters.
> talking about a physical system rather than just the information
> processing capability of the system, uploading is starting to
> sound downright unattractive.
For those who are "unenlightened" and whose self-view contains
significant "attachment" to their current physical instantiation,
you may be right.
Perhaps we should pass a law that uploading is only allowed
for those who can answer the question "What is the sound of
one hand clapping?" (it's an old Zen koan).
> Besides -- me, I'm a physical system, not an information processing
> capability.
Raspberries. You *think* you're a physical system -- where do
you think that thought came from?
The Extro-List -- where half-baked ideas get put through the meat
grinder and turned into veggie burgers. (no offense intended... :-))
Robert