From: Jeff Bone (jbone@jump.net)
Date: Sat Dec 08 2001 - 14:15:34 MST
"Eliezer S. Yudkowsky" wrote:
> > Bottom line, in the limit: you cannot. Extinction of the "individual"
> > --- even a distributed, omnipotent ubermind --- is 100% certain at some
> > future point, if for no other reason than the entropic progress of the
> > universe.
>
> Don't you think we're too young, as a species, to be making judgements
> about that?
No --- that's the difference between faith and science. It's not a judgement;
it's a prediction from a model whose predictions are consistent with
observation and measurement. However, I *did* get a bit sloppy in my framing;
the assumptions which make the above statement empirically true are: open
universe, single universe (at least with respect to our ability to be
somewhere), mostly-constant physics across spacetime, inability to do
engineering with spacetime or fine-tune certain things, etc. Entropy seems to
be very fundamentally tied up in how spacetime works. While there's a
considerable amount of refinement to be done in the various theories, all signs
point to a unified theory of quantum gravity that preserves the second law of
thermodynamics (2LT). Also, the best-guess measurement of the cosmological
constant seems to indicate that we have an open universe, which, given all the
rest, yields heat death as an eventual certainty. If that's wrong, then we have a closed universe and a
whole new set of problems and opportunities.
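(Very loosely, in symbols --- nothing beyond the textbook statement, given the
assumptions above:

\frac{dS_{\mathrm{universe}}}{dt} \ge 0, \qquad \lim_{t\to\infty} S = S_{\max} \;\Longrightarrow\; F = U - TS \to F_{\min}

Once free energy bottoms out, there's no gradient left to run computation ---
or an "individual" --- against.)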
> I do think there's a possibility that, in the long run, the
> probability of extinction approaches unity - for both individual and
> species, albeit with different time constants. I think this simply
> because forever is such a very, very long time. I don't think that what
> zaps us will be the second law of thermodynamics, the heat death of the
> Universe, the Big Crunch, et cetera, because these all seem like the kind
> of dooms that keep getting revised every time our model of the laws of
> physics changes.
While the Big Crunch is a pretty specific model of a physical phenomenon, 2LT
seems much deeper and more abstract than that --- kind of like Gödel, Turing,
etc. It's that fundamental and deep a concept --- and the greatest long-term
risk we can predict, given certain assumptions and certain things that are
observably true about our universe.
> It seems pretty likely to me that we can outlast 10^31
> years. Living so long that it has to be expressed in Knuth notation is a
> separate issue. Our current universe may let us get away with big
> exponents, but it just doesn't seem to be really suited to doing things
> that require Knuth notation.
I agree. We'll have to build another one, if that's possible.
> I don't think the laws of physics have settled down yet.
IMO, there's still refinement to do, but we're starting to converge on
something that's a reasonably accurate (yields predictions consistent with
observed reality) and yet general (works at all scales in all contexts) set of
base laws. Note too that these won't be "the" laws of physics, but rather "a"
set of laws of physics. We can never prove (it's an epistemological
impossibility) that any given set of laws, no matter how accurate the
predictions it yields, is the best or even the only model of how the world
works. (Neither can we prove the converse, except through the scientific
method, i.e. by creating alternative models that yield better predictions.)
> I admit of the
> possibility that the limits which appear under current physical law are
> absolute, even though most of them have highly speculative and
> controversial workarounds scattered through the physics literature. I'm
> not trying to avoid confronting the possibility; I'm just saying that the
> real probability we need to worry about is probably less than 30%, and
> that the foregoing statement could easily wind up looking completely
> ridiculous ("How could any sane being assign that a probability of more
> than 1%?") in a few years.
Interesting. How did you get 30%?
> And here, of course, is where the real disagreement lies.
>
> Under what circumstances is a Sysop Scenario necessary and desirable? It
> is not necessary to protect individuals from the environment,
[disagree, but checkpoint]
> ...there is still the possibility of the
> violation of sentient rights;
Here is the actual heart of the disagreement. I've been struggling for the
last several years to put together a consistent and generally useful system of
"axiomatic ethics / morals" such as could be used by a hypothetical perfectly
rational intelligence, or even irrational intelligences that can act as
impartial arbiters. It's been an abysmal failure, and my best "guess" (it's
not yet rigorous enough to call it a hypothesis) is that the failure results
from the concept of "rights."
IMO, any system of "rights" in practice actually results in unresolvable
inconsistencies and paradoxes. It seems to me that only a system of
negotiated, consensual "rights" results in a workable, consistent system that
optimally balances competing self-interests among the participants --- and the
inevitable sacrifice involved is the imaginary security that we all cling to in
our notions of civilization. The basic problem with the notion of rights and
of universally-applicable legal systems (axiomatized expressions of the checks
and balances of rights) is --- my guess --- a kind of incompleteness principle,
with the same tradeoff between completeness and consistency that Gödel found in
formal systems: make the axioms cover everything and somewhere they contradict
each other.
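(To make "unresolvable" concrete, here's a toy sketch in Python. The "rights"
and the scarce resource are made up; the point is only that absolute,
non-negotiated rights can jointly admit no consistent outcome:

# Toy model: treat "rights" as absolute predicates every outcome must satisfy.
# With one indivisible resource and two unconditional claims on it, no outcome
# satisfies the whole system -- the axioms are inconsistent in practice.
from itertools import product

rights = {
    "A's unconditional right to the resource": lambda o: o[0],
    "B's unconditional right to the resource": lambda o: o[1],
    "scarcity (physics, not a right)":         lambda o: not (o[0] and o[1]),
}

outcomes = list(product([True, False], repeat=2))   # (A gets it, B gets it)
consistent = [o for o in outcomes if all(r(o) for r in rights.values())]
print(consistent)   # [] -- nothing honors every "right"; something has to give

You can negotiate your way out of that table; you can't axiomatize your way out
of it.)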
"Rights" are a very anthropomorphic, spooky manifestation of a kind of
"faith." They aren't subject to any kind of empirical testing at all ---
indeed, experimental enquiry suggests that they don't exist. (A person's
"right" to life doesn't keep another from violating it, nor does it guarantee
that the violator suffers any consequences.)
It may be that the optimal system for allowing independent actors to achieve
optimal balance of competing self-interests is not a system of axiomatized
rights coupled with protective and punitive measures (a "legal" system, or a
Sysop) but rather a kind of metalegal framework that enables efficient
negotiation and exchange of consensual, contractual agreements. (If I ever
manage to get this whole thing whipped into shape, the ongoing book project is
entitled "Axiomatic Ethics and Moral Machines." :-)
> I think that ruling out the possibility of an unimaginable number of
> sentient entities being deprived of citizenship rights, and/or the
> possibility of species extinction due to inter-entity warfare, would both
> be sufficient cause for intelligent substrate, if intelligent substrate
> were the best means to accomplish those ends.
I would mostly agree, modulo concern over what is meant by "rights" --- but I'm
not sure that it's safe (even given the notion of "Friendly") to assume either.
> Whether we are all DOOMED in the long run seems to me like an orthogonal
> issue.
It isn't really orthogonal; it's possible that the choices we make now --- the
"angle of attack" with which we enter the Singularity --- may prune the
eventual possibility tree for us in undesirable ways. I don't think this is a
reason to futilely attempt to avoid the Singularity; I just think it should
give us pause to consider outcomes.
Example: let's assume for a moment that the universe is closed, not open.
Let's further assume that Tipler's wild scenario --- perfectly harnessing the
energy of a collapsing universe and using that to control the manner of the
collapse, allowing an oscillating state and producing exponential "substrate"
--- is plausible. Then the best course of action for a civilization would be
to maximize both propagation and the engineering capability to pull that off
when the time comes. This very long-term goal, aimed at maximizing the
longevity of the civ's interests, may in fact conflict with the concept of
individual rights and
volition. Hence, putting in place a Power that favors one may prevent the
other. That's a tradeoff that needs to be considered in any scenario of
ascendancy.
> Eliminate negative events if possible.
But "negative" has many dimensions, and most of those are subjective...
jb