From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Fri Oct 01 2004 - 06:43:13 MDT
Nick Bostrom wrote:
> On a vaguely related note, I've had a half-written essay lying around
> since 2001 on how some "upward" evolutionary trajectories could lead to
> dystopian outcomes, which I've now finally completed:
> http://www.nickbostrom.com/fut/evolution.html.
Thanks for making that available, Nick; it was insightful and well-written,
and covered a large part of what I was trying to convey in substantially
greater detail. I look forward to your presentation at the ExtroBritannia
event next weekend; hopefully I will be able to attend.
Jeff Medina wrote:
>> "This is on top of the already known issues with qualia and the
>> illusion of free will; both are results of specific (adaptive)
>> flaws in human introspective capability
>
> Nothing of value is gained in engineering qualia out of the mind.
> The blind are not more evolutionary fit or competitive than the sighted,
> and neither are the color-blind in relation to those with color-vision.
This is incorrect. I characterise qualia and subjectivity in general as
flaws because they are both irrational (don't map onto a consistent
Bayesian prior) and based on lack of information (poor human
introspective capability). Rational utilitarians will always achieve
their goals more efficiently than irrational agents with ad-hoc decision
functions over the long run, and complete introspection is actually a
tremendously powerful ability (it's about a third of the basis for seed
AI and the Singularity in the first place). Restricting the design of an
intelligent agent to be based on what humans would characterise as
subjective experience will make it substantially, possibly massively,
less efficient than a normative design with the same computing resources.
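To make the long-run efficiency claim concrete, here is a deliberately
trivial sketch (in Python, with invented payoffs) of an expected-utility
maximiser versus an agent running an ad-hoc 'prefer certainty' rule; over
repeated choices the consistent agent accumulates more utility. Nothing
about minds is being modelled here, it's just the decision-theoretic point.

# Toy illustration (not a model of minds): over repeated choices, an agent
# that maximises expected utility under a consistent prior accumulates more
# utility than one using an ad-hoc rule (here, a crude certainty bias).
# All numbers are arbitrary assumptions chosen for the example.
import random

LOTTERIES = {
    "safe":  [(1.0, 50)],               # guaranteed 50 units
    "risky": [(0.8, 100), (0.2, 0)],    # expected value = 80 units
}

def expected_value(lottery):
    return sum(p * payoff for p, payoff in lottery)

def eu_agent(options):
    # Pick the option with the highest expected value.
    return max(options, key=lambda name: expected_value(LOTTERIES[name]))

def adhoc_agent(options):
    # Ad-hoc rule: always prefer a certain payoff if one exists.
    for name in options:
        if len(LOTTERIES[name]) == 1:
            return name
    return random.choice(list(options))

def sample(lottery):
    r, cumulative = random.random(), 0.0
    for p, payoff in lottery:
        cumulative += p
        if r <= cumulative:
            return payoff
    return lottery[-1][1]

def run(agent, rounds=10000):
    return sum(sample(LOTTERIES[agent(("safe", "risky"))]) for _ in range(rounds))

random.seed(0)
print("expected-utility agent:", run(eu_agent))
print("ad-hoc agent:          ", run(adhoc_agent))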
> The only examples I can imagine that might lead someone to think
> qualia were a bad thing are ones which mention a negative result of a
> phenomenal experience, such as cognitive processing severely hampered
> by pain, loss of efficiency resulting from an inexplicable desire to
> stare at shiny objects, and the like.
That's lousy cognitive design /on top of/ the restrictions imposed by
subjectivity in the first place. My above statement holds even for
renormalised transhumans; the brain is a horribly computationally
inefficient and broken design (evolution is not your friend). I take
comfort in the fact that there is massive, possibly indefinite, scope
for improvement without removing subjectivity (even if it will never
be as efficient as normative reasoning).
> But this sort of example misappropriates responsibility to qualia
> for the unwanted result, when the problem is the cognitive
> association between a particular phenomenal experience and a
> particular negative behavior or loss of ability.
That statement is a Gordian knot of philosophical and cognitive
confusion. Briefly, you can't reengineer associations without changing
the qualia involved; the associations are a large part of the
definition. Nor are the consequences of such engineering reliably
constrained to forward propagation, at least not on the current
amalgamation of nasty hacks we call the human brain. That's not a
reason not to do it; it's just a reason to be absolutely sure what
you're doing before you start self-improving. Regardless, messing about
with qualia isn't the same as removing qualia, and no matter how much
you rearrange them, the latter will always be more efficient and effective.
Right now I don't know if merely taking qualia out of the primary
(but not secondary-introspective) reasoning path counts as 'removing';
it's hard to envision perfect rationalists at the best of times, never
mind attempting to verify whether hybrid reasoners have subjectivity.
> Aside: It remains to be seen whether it is even theoretically possible
> to remove phenomenal consciousness from a sufficiently intelligent,
> metacognitive being.
It's possible (if silly), but you're looking at this the wrong way.
Perfect rationalists have /more/ introspective capability and are thus
more aware of what's going on in their own minds than human-like reasoners
are. We're not talking about stripping out bits of reflectivity here;
normative reasoners simply don't have the complicated set of mental
blocks, biases and arbitrary priors that constitutes human subjectivity
(sensory and introspective qualia, the latter including a whole bundle
of crazy and often irrational but adaptive self-imaging hacks that
create the global sensation of a unified volitional self). EvPsych built
us (from inappropriate parts) to play social games, not solve logic
problems, and it made a hell of a mess in the process. However, one of the
side-effects was something we consider uniquely valuable, the conscious
volitional self, so don't be too quick to strip that stuff out and
re-extrapolate yourself from Eliezer Yudkowsky's pocket calculator.
> Some form of panpsychism may yet be true, meaning qualia may be an
> intrinsic property of all existence, even though it's only recognizable
> by a certain class of mind-like, reflective structure.
I used to be that open-minded once, but then I learned that philosophy
is just confused psychology and cosmology. Repeat after me: 'qualia are
not ontologically fundamental' (or rather, no more ontologically
fundamental than any other class of causal process).
> Moving beyond "the illusion of free will" requires no engineering on
> the part of transhumans. Current humans who have realized that the
> folk concept of free will (which most closely resembles what is known
> in the philosophical literature as the libertarian view of the will)
> is a necessary impossibility (*) demonstrate all that is needed to
> overcome the illusion is rational reflection.
There is a huge difference between knowing declaratively that free will
is illusory and having that fact (or rather the absence of declarations
to the contrary) redundantly and pervasively hardwired into your
cognitive architecture. I don't know if there are alternative
interpretations of free will that make more physical sense but can still
play the same role in building a human-like subjectivity; I hope so, but
even if there are that would only solve part of the problem.
> because, roughly, if the universe is deterministic, we cannot have
> libertarian free will, our choices being determined by physical law
> and "beyond our control" (according to the folk concept), and if the
> universe is indeterministic, we cannot have libertarian free will,
> because undetermined choices are not determined by our goals,
> preferences, or will.
Congratulations. Now try thinking about global renormalisation of
probability amplitudes across timeless phase space and free will as
implicit in the shape of your self-reference class's distribution
across universes as derived from the root cause for anything existing
at all (yeah, I'm still working on that one too :).
> Now even assuming we could and did get rid of both qualia and the
> illusion of free will, would this really threaten our moral and legal
> foundations?
Legal systems are a red herring as they will be either nonexistent or
changed beyond recognition after a Singularity anyway, and legality is
functionally irrelevant to the desirability of post-Singularity
outcomes. Speculation on the legal consequences of transhumanism and
AI is an SL3- topic.
> And qualia? Would you lose your moral sensibilities if you could no
> longer hear or see? Neither would I.
Slow down. If you strip the qualia out of a human being, empathy no
longer works (abstract cognitive simulation of external agents is still
an option, given enough computing power, but the link between their
subjectivity and yours has been cut). This is a /huge/ issue for
practical morality, and since almost all our more abstract morality is
derived from practical morality, it's a major issue full stop. If
we're going to discriminate between goal-seeking agents on any kind of
qualitative grounds (if not, you have the same rights as a toaster with
the same processing power) then we need to define what it is about
the agent's cognitive architecture that makes them volitional and thus
morally valuable. For CFAI-era FAI we'd have to define it very, perhaps
impossibly, specifically; CV may allow a working definition at startup
time.
> There is no reason to think sentient beings are the only objects that
> matter to other sentient beings, and the question dissolves on this
> realization.
No, it doesn't. Morality exists because goal-seeking agents can have
conflicting goals. Its purpose is to determine whose will takes priority
whenever a disagreement occurs. Physics provides a simple baseline:
survival of the fittest, whichever process has more relevant resources
wins. Morality imposes a different decision function through external
mechanisms or additional internal goals in order to achieve a more
globally desirable outcome. It's possible to construct morality just
from game theory, but humans generally want to see more human-specific
preference complexity in moral systems they have to live within. The
problem is that our preferences are defined against features of our
cognitive architecture that are locally adaptive but generally
irrational.
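For the 'morality from game theory' point, the standard toy example is the
iterated prisoner's dilemma: agents that bind themselves to a cooperative
decision rule end up with a more globally desirable outcome than agents
following the unconstrained 'defect' baseline. A quick sketch with the usual
textbook payoffs (nothing here is specific to the argument above):

# A standard toy, not a theory of morality: tit-for-tat players do better
# against each other than pure defectors do, illustrating how imposing a
# different decision function can improve the global outcome.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print("defect vs defect:          ", play(always_defect, always_defect))
print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat))
print("tit-for-tat vs defect:     ", play(tit_for_tat, always_defect))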
Peter McCluskey wrote:
> I'm having trouble figuring out why we should worry about this kind of
> problem. We have goal-systems which are sometimes inconsistent, and when
> we need to choose between conflicting goals, we figure out which goal is
> least important and discard it as an obsolete sub-goal.
We're not that rational; we're not expected-utility optimisers. Besides,
subjectivity isn't about goal conflict, it's about the global shape and
causal connectivity of our cognitive process. Removing subjectivity isn't
an explicit goal-system change; it's porting the goal system to a new
implementation architecture which doesn't have all of the implicit goals
and inconsistencies of the old one. I used to think it was possible to
abstract out all of our implicit goals and preserve them through such a
transition, but because subjectivity is irrational it can't be abstracted
to a consistent goal system that doesn't simply embed all the complexity
of the original process.
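A trivial illustration of why an irrational preference structure can't be
abstracted into a consistent goal system: cyclic pairwise preferences admit
no utility function at all, so any faithful port has to carry the original
structure along. A toy check with invented preferences:

# Cyclic (intransitive) pairwise preferences can't be compressed into a
# clean utility function; exhaustively trying every strict ranking shows
# that none respects all the pairwise preferences.
from itertools import permutations

PREFERS = [("A", "B"), ("B", "C"), ("C", "A")]   # cyclic, hence irrational

def consistent_utilities_exist(prefers):
    items = sorted({x for pair in prefers for x in pair})
    # If no strict ranking respects all pairwise preferences, no real-valued
    # utility function over these items can either.
    for ranking in permutations(items):
        utility = {item: -rank for rank, item in enumerate(ranking)}
        if all(utility[a] > utility[b] for a, b in prefers):
            return True
    return False

print("consistent utility function exists:", consistent_utilities_exist(PREFERS))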
>> The CV question could be glibly summarised as 'is there a likely
>> incremental self-improvement path from me to a paperclip optimiser?'.
>
> Or were you suggesting that we should be upset with the end result if
> fully informed people would decide to follow that path?
'Fully-informed' isn't as simple as it sounds. To be fully informed about
issues too complex for you to work out for yourself, something more
intelligent than you has to inform you. If they're too complex for you even
to comprehend, something more intelligent than you has to take your
preferences and apply them to generate an opinion on the complex issue. CV
works by extrapolating
you forward to get the more intelligent entity that advises the earlier
versions. The problem is that the extrapolation might snuff out something
you would consider important if you knew about it in a way that the
non-extrapolated version of you can't recognise and the extrapolated version
of you doesn't report. My conclusion is that to avoid this the extrapolated
volitional entities in CV need to interact with the (Power-class) AGI running
the process as an effective 'neutral party' to detect these sorts of
discontinuities, but that further complicates the problem of not letting the
knowledge that the CV process is a simulation affect the result.
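To make the 'neutral party' idea slightly more concrete, here is a
deliberately crude sketch, with made-up preference names and numbers and no
claim about how a real CV implementation would work: compare what the
original self rates as important against what a hypothetical extrapolation
step still reports, and flag anything silently dropped.

ORIGINAL = {"family": 0.9, "art": 0.6, "privacy": 0.7, "novelty": 0.3}

def extrapolate(preferences, drift):
    # Stand-in for the opaque extrapolation step: shift each weight and
    # drop anything that falls below a reporting threshold.
    extrapolated = {}
    for name, weight in preferences.items():
        new_weight = weight + drift.get(name, 0.0)
        if new_weight >= 0.2:
            extrapolated[name] = new_weight
    return extrapolated

def discontinuities(original, extrapolated, importance=0.5):
    # Flag preferences the original self considered important that the
    # extrapolated self no longer reports at all.
    return [name for name, weight in original.items()
            if weight >= importance and name not in extrapolated]

later_self = extrapolate(ORIGINAL, drift={"privacy": -0.6, "novelty": 0.4})
print("flag for the 'neutral party':", discontinuities(ORIGINAL, later_self))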
Marc Geddes wrote:
> For instance the concept of 'wetness' is an emergent property which
> cannot be *explained* in terms of the individual behaviour of
> hydrogen and oxygen atoms.
This is correct in that you also need a definition of the human cognitive
architecture to explain the human sensation of wetness.
> The *explanation* of 'wetness' does NOT NEED to be *derived* from
> physics.
Yes, it does. If wetness wasn't physically implementable, it would not
exist.
> But a causal description of something is NOT the same thing as an actual
> *understanding* of that something.
A full causal description constitutes complete understanding of any
process, assuming it is defined all the way down to primitive physical/
mathematical operators, though such a description does not necessarily
include knowledge of relationships to other, similar processes possible
under those physics that we might include as part of our concept of
understanding the process.
The world will not end in a dramatic fight between seed AIs(1), with pithy
quotes and bullet time. If we fail there may be a brief moment of terror,
or more likely confusion, but that is all. I concede that pithy quotes
and bullet time are fun and sell novels; I hope that readers will also be
motivated to think about the real issues in the process.
(1) Unless of course we get a critical failure on our Friendliness roll,
in which case I expect to see Tokyo getting levelled by giant tentacle monsters.
Randall Randall wrote:
>> It's not just the design, it's the debugging. Computers you can tile.
>> Of course there'll also be a lag between delivery of nanocomputers
>> and when an UFAI pops out. I merely point out the additional problem.
>
> One of my assumptions is that generic optimizers are difficult
> enough that some sort of genetic algorithm will be required to
> produce the first one. I realize we differ on this, since you
> believe you have a solution that doesn't require GA.
Generic optimisers are moderately hard to write. An inductive bias
comprehensive enough to allow takeoff is extremely hard to write for current
hardware and progressively less difficult to write as the available
hardware improves. You can use genetic algorithms to trade gigaflops for
programmer time and understanding, at a ratio determined by your
understanding of directed evolution theory. Anyone trying to build an AGI
on purpose will probably code the engine and a bit of the bias and use DE
to fill in the rest. Doing so will hopelessly break any Friendliness system
they have, which is probably already hopelessly broken given that they
think that using DE on an AGI is a good idea in the first place. But
anyway, the net result is that silly amounts of computing power (i.e.
nanocomputers) decrease the level of both programmer time and understanding
required to produce UFAI, eventually to the point where the world could be
wiped out at any moment by hopeful fools messing about with GAs in their
basement (or university offices).
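For the trade-off between gigaflops and programmer understanding, here is a
minimal directed-evolution sketch; the target and all parameters are
arbitrary, and the fitness function stands in for whatever behaviour the
builders can test but not design directly. More compute (population size,
generations) substitutes for understanding of the solution.

import random

TARGET = [0.2, -0.7, 1.3, 0.05, -1.1]   # the 'bias' the programmers can't derive

def fitness(candidate):
    # Higher is better; here, closeness to the unknown target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, scale=0.1):
    return [c + random.gauss(0, scale) for c in candidate]

def evolve(population_size=200, generations=300):
    population = [[random.uniform(-2, 2) for _ in TARGET]
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:population_size // 5]      # keep the top 20%
        population = parents + [mutate(random.choice(parents))
                                for _ in range(population_size - len(parents))]
    return max(population, key=fitness)

random.seed(0)
best = evolve()
print("evolved:", [round(x, 2) for x in best])
print("fitness:", round(fitness(best), 4))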
> Since we appear to live in an STL world, I prefer MNT first.
FTL looks a lot more possible if you have the technology and resources to
create exotic configurations of matter, not to mention perform experiments
of near-arbitrary size and complexity and the intelligence to advance the
theory rapidly. Thus while FTL may or may not be possible, if it is, a UFAI
is much more likely to develop it after you leave than you are at any point.
Eliezer Yudkowsky wrote:
> If you have any hope of creating an FAI on board your fleeing vessel, the
> future of almost any UFAI that doesn't slip out of the universe entirely
> (and those might not present a danger in the first place) is more secure
> if it kills you than if it lets you flee.
Or even a UFAI; the paperclip optimiser does not want to compete with a
wingnut optimiser. Given the amount of computing power required to run a
nanotech spacecraft, it seems highly unlikely that the probability of the
escapees building one could get that low, so running is almost certainly
futile (a Power with the resources of at least the solar system /will/
find you and render you harmless, probably by the 'zap with an exawatt
graser' approach). Ha, an amusing pastime for transhumans: compete against
your friends to design a seed AI that can turn a (lossily) simulated
universe into your favourite desktop implement before they can turn it
into theirs.
> Don't try this at home, it won't work even if you do everything right.
(Don't try this at your neighbourhood AGI project either.)
> The problem word is "objective". There's a very deep problem here, a
> place where the mind processes the world in such a way as to create the
> appearance of an impossible question.
We're roughly equivalent to a seed AI trying to fix a bug in its source code
that it has been programmed (by evolution in our case) not to see. There is
clearly an inconsistency here, but we can't see it directly.
Jeff Albright wrote:
> We can influence the direction, but not the destination of our path
> according to our "moral" choices. We can do this by applying increasing
> awareness of ourselves, our environment, and the principles that
> describe growth, but it's open-ended, not something amenable to
> extrapolation and control.
The behavioural recommendation is a meaningless generality; the conclusion
is simply wrong. The absolute tractability of extrapolation is unknown but
dependent on simplifying assumptions, which may be enforced by a Power with
the means to do so. The enforcement of simplifying assumptions such as
'all sentient life is wiped out' is an example of 'control' that we'd like
to avoid. The 'let's just create a Singularity and see what happens'
sentiment is deprecated; those people likely to influence the fate of
humanity must create a consensus preference function for possible futures
and then a tractable means to positively verify our choices.
> The next step in the journey will indeed involve the increasing
> awareness of the multi-vectored "collective volition" of humanity, but
> in the context of what works, focusing on effective principles to
> actualize our collective vision and drive toward an inherently
> unknowable future.
I agree with this part insofar as I think that Eliezer's model of CV as
a black box the programmers don't mess about with (for ethical reasons) is
likely to be unworkable; I suspect the AI running the process will need a
lot of trial runs and programmer feedback which will unavoidably entail a
peek at the spectrum of possible futures (though certainty thresholds for
implementation should all be bootstrappable out of CV itself).
Sebastian Hagen wrote:
> The best answer I can give is 'whatever has objective moral relevance'.
> Unfortunately I don't know what exactly qualifies for that, so currently
> the active subgoal is to get more intelligence applied to the task of
> finding out.
Currently the only thing I can imagine you mean is working out all the
possible ways that the physics we're embedded in (or possibly, all logically
consistent physics) could have produced intelligent agents and generalising
across their goal systems. The only preference function for realities built
into physics is their actual probability amplitude; expecting there to be a
notion of desire for goal-seeking agents built into the structure of the
universe itself is simply a layer confusion (excepting the possibility that
our universe was designed by an intelligent agent; we could be someone else's
simulation or alpha-point computing experiment, but that just backs up the
problem to a wider scope).
> Should there be in fact nothing with objective moral relevance, what I do
> is by definition morally irrelevant, so I don't have to consider this
> possibility in calculating the expected utility of my actions.
This class of goal-seeking agent (the class you would be if you actually
meant that) will probably be considered a time bomb by normative reasoners.
Regardless, all the people I know of who are actually developing AGI and
FAI do have subjective morality built into their goal systems, so it's not
a terribly relevant statement.
> Considering cosmic timescales it seems highly unlikely that the two
> civilizations would reach superintelligence at roughly the same time...
> since one of them likely has a lot more time to establish an
> infrastructure and perform research before the SI-complexes encounter
> each other, lack of efficiency caused by preferring eudaemonic agents
> may well be completely irrelevant to the outcome.
As you point out the problem isn't competition between alien Powers, it's
competition between agents within a single civilisation (2). Even with a
Sysop that prevents sentients doing nasty things to each other, there will
still be meaningful forms of competition unless the general intelligences
with subjectivity are isolated from those that lack it (an upload/flesher
divide could be seen as a very primitive version of such a partitioning).
(2) Assuming we avoid the 'all souls sucked up into a swirly red sphere in
low earth orbit' critical failure scenario. :)
> "All-Work-And-No-Fun"-mind may well be among the first things I'd do
> after uploading, so some of my possible future selves would likely be
> penalized by an implementation of your suggestions. My opinion on those
> suggestions is therefore probably biased...
>
> I don't think that what I would (intuitively) call 'consciousness' is
> by definition eudaemonic, but since I don't have any clear ideas about
> the concept that's a moot point.
Argh. Firstly, intuition is worse than useless for dealing with these
concepts. Secondly, you shouldn't even be considering self-modification
if you don't have a clue about the basic issues involved. Without that
you might as well adopt the principle of indifference regarding what you
might do, possibly modified by the opinion of people who have put serious
thought into it. "All-Work-And-No-Fun" can be interpreted two ways; an
Egan-style outlook that merely makes leisure undesirable (incidentally
what else do you care about so much that you want to use your mind to
achieve?) and actually removing subjectivity. The former is a goal system
change; the latter is a reasoning architecture change (as opposed to a
simple substrate change that doesn't affect the agent's decision
function). Your apparent cheerful willingness to turn yourself into a
paperclip optimiser, dropping us to game-theoretic morality that
ultimately fills the universe with selfish replicators, is /exactly/ what
I was concerned about. Without CV it's a justification for the desirability
of Bostrom's singleton (Sysop); with CV it's a reason to make sure that
you can't do this (in the CV simulation and reality) without everyone who
might be affected understanding and approving of the consequences of allowing
non-eudaemonic reasoners to assert a fundamental claim to resources.
> Why? I don't understand why the activities mentioned before the quoted part
> ("humor, love, game-playing, art, sex, dancing, social conversation,
> philosophy, literature, scientific discovery, food and drink, friendship,
> parenting, sport") are relevant for the value of human life.
What else does human life (as distinguished from, say, a theorem-proving
program running on your PC) consist of?
> From the perspective of an agent, that is striving to be non-eudaemonic,
> (me) the proposed implementation looks like something that could destroy a
> lot of efficiency at problem-solving.
From the perspective of the majority of humanity, why should we care? Being
less efficient just means that things take longer. Assuming we've dealt with
the competition problem, what's the hurry?
> Why do we have the right to declare absence of moral significance without
> even being capable of understanding either the society described, or
> understanding objective morality (if there is anything like that)? The
> society may be decidedly unhuman, but this alone is imho not any
> justification of declaring it morally insignificant.
Singularity strategy is not something where political correctness is relevant
(or politics in general; several Singularitarians, including myself, have
already had our former libertarian ideals gutted by the realities of posthuman
existence, and arguably any remaining long-term libertarians just don't
understand the issues yet). These possible futures are in competition; we're
already trying to drag the probability distribution away from extinction
scenarios, and to do that we also need to decide where we want to drag it
to. Allowing the current will of humanity to shape the probability
distribution of possible racial futures is just as important as allowing
your current volition to shape the probability distribution of your possible
future selves.
Aleksei Riikonen wrote:
> So let's strive to build something FAIish and find out ;)
That's the other siren song, at least for us implementers; the desire to
try it and see. However, impatience is no excuse for unnecessarily
endangering extant billions and subjunctive quadrillions of lives;
Goertzel-style (attempted) experimental recklessness remains unforgivable.
* Michael Wilson
http://www.sl4.org/bin/wiki.pl?Starglider