From: Richard Steven Hack (richardhack@pcmagic.net)
Date: Sat Mar 09 2002 - 18:51:22 MST
At 12:21 PM 3/9/02 -0800, you wrote:
>On Sat, 9 Mar 2002, Richard Steven Hack wrote:
>
> > Once again, we're assuming the speed of light is an absolute limit.
>
>It would seem useful to believe for most purposes of discussion
>that the current physical laws remain in effect for the future
>of the universe.
Your purpose of discussion perhaps - my purpose may vary...
> If they don't, then the universe becomes very
>unpredictable -- it might be a much "nicer" place, or a much
>more difficult environment in which to survive. There is simply
>no way of knowing.
Correct - my point, I believe.
> If one wants to start a thread stating
>some arbitrary collection of physical laws, e.g. FTL travel, wormholes,
>etc., and then postulate how uploads should behave in such an environment,
>then feel free to do so. (I'll note that the new Star Trek prehistory
>has included some of those elements, so it isn't completely boring.)
Doesn't interest me - I was merely responding to assumptions by other parties.
> > That may be true given current scientific knowledge, but that is not
> > the same as being an absolute.
> >
>For the purposes of list discussion, I think it's reasonable
>to use something like Michael Shermer's skeptic scale. On that scale
>I'd put the probability of nanotechnology-based uploading in my natural
>lifetime at about a 3 and for the younger list members at 1.5-2.
>I'd then put FTL travel in the next century at an 8 and wormholes
>of any significant use anytime in the future at a 9.
>
>So for "planning" purposes it seems reasonable to assume that you
>do hit some physical limits at optimizing the locally available
>matter sometime over the next thousand years or so.
Given that I do not know where I will be as a Transhuman in a thousand
years, and given the likelihood that my brain at that point will be quite
capable of doing such an analysis in far less time than I can spare for it
now, I see no reason to worry about physical limits at this time, thank you.
> > A Transhuman has no need to replicate.
>
>Only if you can absolutely guarantee that you trump the galactic hazard
>function. Accidents happen to transhumans; they just take longer.
>As I discussed at Extro3, only a distributed, replicated intelligence
>has a good chance of surviving indefinitely. Today, given my perspective
>on how much more expanded I expect minds to become, I might argue that
>a distributed mind with redundant intelligence support systems
>might be more accurate.
Ah, yes, Robert Ettinger's hypothesis, I assume. If you exist at a single
point in space/time, you can't be immortal; if you are distributed, you can be.
I'm not convinced. Worse, I have a metaphysical problem with the notion of
"distribution" - namely, that I know of no technology that could accomplish it
AND preserve the identity and continuity of the entity in question.
> > Now, it is possible that it may
> > turn out that replication is a good move for some other reason, but
> > replication is not identity (barring some tech that enables it to be
> > identity), and therefore replication offers no survival advantage to an entity
> > bounded by space-time.
>
>If one can design and build a point-source singularity weapon
>(Shermer Skeptic Scale: 5), then it may be very difficult to detect one zooming
>in from the Oort cloud at high velocity. You may not have the advance
>notice or the time to do a backup. So if you "confine" yourself to
>a single dense location, you still retain a non-zero hazard function.
>If, on the other hand, you distribute yourself over the solar system,
>it's difficult to imagine what could cause a total system failure, so
>the hazard function is much lower.
As noted above, the concept of "distribution" seems to be hand-waving, since
we know of no technology (at this point) that can do it while preserving the
identity and continuity of the entity involved. If you can do it, of course,
then you would be correct and the approach would be attractive to me.
"Uploading" as it has been described to me does not meet my criteria.
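
For what it's worth, the arithmetic behind the "lower hazard function" claim
is simple enough to sketch (illustration only - the per-year risk figure is
made up, and it assumes the copies fail independently, which real locations
would only approximate):

# Illustration only: hypothetical, independent per-year catastrophe risks.
# If one dense location is totally lost with probability p in a year,
# then N independently failing copies are ALL lost with probability p**N.

def total_loss_probability(p_single, n_copies):
    """Chance that every copy is destroyed, assuming independent failures."""
    return p_single ** n_copies

p = 1e-6  # made-up per-year risk of losing a single location
for n in (1, 2, 5):
    print(n, "copies -> total-loss probability per year:", total_loss_probability(p, n))

Whether identity and continuity survive that kind of distribution is, of
course, exactly the question I am raising.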
> *But*, you do think much more
>slowly due to the propagation delays. If there is an "economy"
>(in a traditional sense -- where the entities "value" something
>that isn't universally available and will pay matter or energy
>to get it), then the most distributed entities with the lowest
>hazard functions are perhaps also the "poorest" (because they
>probably produce the least quantity of novel information).
Ah, I may be missing the point. Why would the distributed entities produce
the least novel information? Mike Lorrey seems to believe that you distribute
yourself precisely to increase your rate of experience; you seem to be
saying the opposite.
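
To put rough numbers on those propagation delays (my own back-of-the-envelope
figures; nothing here beyond the speed of light and typical solar-system
distances):

# Rough illustration: one-way, light-speed signal delay across solar-system distances.
C = 299792458.0        # speed of light, m/s
AU = 1.495978707e11    # one astronomical unit, m

for label, au in [("Earth-Moon scale (~0.0026 AU)", 0.0026),
                  ("1 AU (Earth-Sun)", 1.0),
                  ("Neptune's orbit (~30 AU)", 30.0)]:
    seconds = au * AU / C
    print(label, "->", round(seconds, 1), "s one-way (", round(seconds / 60, 1), "min )")

A mind spread across tens of AU would spend minutes to hours just passing a
thought from one part of itself to another, which is presumably what is meant
by thinking "much more slowly."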
> > As I say above, replication is not the issue. It is not clear that
> > Transhumans need be competitive - that is *human* thinking (and low-grade
> > human thinking at that).
>
>Initially, clearly no. But if you allow yourself to make copies
>that have autonomy, or you allow minds (natural or artificial) to
>self-evolve to a state where they decide to make copies, and you
>don't impose a solar-system-wide ban on copying (which may be quite
>difficult to enforce, given the feasibility of designing a self-evolving
>seed without copying constraints), then you have a limited window
>before you are once again up against the limits. Not only do you
>have to outlaw unlimited copying, you have to outlaw unlimited
>mind expansion as well. Can't have one mind getting 99% of the
>available matter & energy and the rest getting 1%, can we?
But I still see no evidence - other than your notion of distribution to
avoid space/time catastrophe - that replication would be considered
desirable by a posthuman entity. The "society" notion advanced by others
- that posthumans would replicate because this would increase the utility
of all - may or may not be true. We don't know what Transhuman
intelligence will be like - they may desire or need society; they may not.
On the other hand, it could well be that Darwinian competition continues IF
in fact there are hard limits and the entities come up against them in a
reasonably short time (by their lights). If that is the case, then the
Highlander motto, "There can be only One", may well prove true. Perhaps we
haven't been contacted by higher intelligence because HE doesn't give a
damn... In that case, Doctor Doom starts to look like a better role model
all the time...
>Fortunately, our observational abilities may have increased to the
>point where we might be able to discover how others may have solved
>the problem, and so have some working examples of good strategies
>to use ourselves. At least I hope that will be the case.
>
>Robert
Agreed.
Richard Steven Hack
richardhack@pcmagic.net