From: Samantha Atkins (samantha@objectent.com)
Date: Fri Sep 29 2000 - 19:19:23 MDT
"Eliezer S. Yudkowsky" wrote:
>
> Samantha Atkins wrote:
> >
> > You are right that human goals are not uniformly friendly to human
> > beings. But I would tend to agree with the POV that an intelligence
> > built on or linking human intelligence and automating their interaction,
> > sharing of knowledge, finding each other and so on would be more likely
> > to at least majorly empathize with and understand human beings.
>
> Samantha, you have some ideas about empathy that are flat wrong. I really
> don't know how else to say it. You grew up on a human planet and you got some
> weird ideas.
Hey, so did you. What makes you think that your supposedly different
ideas about empathy are more correct when they have not even been tested
in the laboratory of real entities interacting?
>
> The *programmer* has to think like the AI to empathize with the AI. The
> converse isn't true. Different cognitive architectures.
>
There is no way to "think like the AI" when, in very large part, we do
not yet know how to build such an AI, much less how it will think. You
have some theories about this, but there is nothing preventing them
from being quite far from how what you propose will actually think.
> > Why would an Earthweb use only humans as intelligent components? The
> > web could have a lot of non-human agents and logic processors and other
> > specialized gear. Some decisions, particularly high speed ones and ones
> > requiring major logic crunching, might increasingly not be made
> > explicitly by human beings.
>
> Then either you have an independent superintelligence, or you have a process
> built from human components. No autonomic process, even one requiring "major
> logic crunching", qualifies as "intelligence" for these purposes. A thought
> requires a unified high-bandwidth brain in order to exist. You cannot have a
> thought spread across multiple brains, not if those brains are separated by
> the barriers of speech.
>
You have a process that includes human components for those things that
humans are better at than the increasingly autonomous and powerful
software agents and AIs. I disagree that these AI and agent components
do not qualify as intelligent. They are not fully human-level and not
self-conscious, yes, but that is not essential to intelligence as a
general term. These non-human components of the process can have
unified minds and trains of thought, just somewhat more limited ones
and not self-conscious. Are you defining all thought as requiring
self-awareness?
If a group of humans cooperates on a major project, do they have a
unified mind even though all aspects of the problem are not literally
held in some group overmind? Do they need that unification, or an
overmind, for thought to be present?
> Remember, the default rule for "folk cognitive science" is that you see only
> thoughts and the interactions of thoughts. We don't have built-in perceptions
> for the neural source code, the sensory modalities, or the contents of the
> concept level. And if all you see of the thoughts is the verbal traceback,
> then you might think that the Earthweb was thinking. But three-quarters of
> the complexity of thought is in the underlying substrate, and that substrate
> can't emerge accidentally - it doesn't show up even in "major logic
> crunching".
>
Actually, we do have some ability to introspect at the concept level,
and we have managed to ferret out some of how all of these levels work.
We can't see them directly at this time, but that can (and given enough
time will) change. I doubt the AI would choose to see these levels very
often, or even necessarily have or give itself this ability initially.
Some of the low-level processes simply do not require the substantial
overhead imposed by self-awareness. The AI can examine these things in
more detail if so designed (or if it wishes to modify itself to do so).
Thinking does not require being self-aware within all of these
substrates. Or do you wish to propose that humans cannot think?
> There is no "full confidence" here. Period.
>
> That said, the Earthweb can't reach superintelligence. If the Earthweb
> *could* reach superintelligence then I would seriously have a harder time
> visualizing the Earthweb-process than I would with seed AI. Just because the
> Earthweb has human components doesn't make the system behavior automatically
> understandable.
>
Agreed.
>
> > Frankly I don't know how you can with a straight face
> > say that a superintelligence is solid when all you have is a lot of
> > theory and your own basically good intentions and optimism. That isn't
> > very solid in the development world I inhabit.
>
> Wrong meaning of the term "solid", sorry for using something ambiguous. Not
> "solid" as in "easy to develop". (We're assuming that it's been developed and
> discussing the status afterwards.) "Solid" as in "internally stable".
> Superintelligence is one of the solid attractors for a planetary technological
> civilization. The other solid attractor is completely destroying all the life
> on the planet (if a single bacterium is left, it can evolve again, so that's
> not a stable state).
>
So you are claiming that only those two are solid attractors. That is
a claim, but it is not well supported, and I disagree. At the least, I
claim that an SI is not inherently less likely to destroy us than not
having an early SI would be.
> Now, is the Earthweb solid? In that million-year sense?
>
Your million-year stability of the SI is a fantasy based on a lot of
unproven assumptions. Again, you claim it is utterly stable. But
without proof, or at least much better reasoning and discussion, the
claim is worthless and not a motivation for rushing to build the SI.
> > I hear you but I'm not quite ready to give up on the human race as a
> > bad job
>
> Do you understand that your sentimentality, backed up with sufficient power,
> could easily kill you and could as easily kill off the human species?
>
So could assuming that the SI is our only hope and turning away from
what can be done even without an SI. This also is not a real argument.
>
>
> Try everything, I agree. I happen to think that the AI is the most important
> thing and the most likely to win. I'm not afraid to prioritize my eggs.
>
Fair enough. But I don't get the insistence that you are right and all
other options are really more likely to get us all killed.
> > So instead you think a small huddle of really bright people will solve
> > or make all the problems of the ages moot by creating in effect a
> > Super-being, or at least its Seed. How is this different from the old
> > "my nerds are smarter than your nerds and we will win" sort of
> > mentality?
>
> You're asking the wrong question. The question is whether we can win.
> Mentalities are ever-flexible and can be altered; if we need a particular
> mentality to win, that issue isn't entirely decoupled from strategy but it
> doesn't dictate strategy either.
>
Yes, I asked the wrong question. I meant to ask why you believe such
an audacious and urgent goal is best served by having a small huddle of
brilliant people work on it in relative isolation (versus open source,
sharing of results, larger efforts, etc.).
> > By what right will you withhold behind closed doors all the
> > tech leading up to your success that could be used in so many fruitful
> > ways on the outside? If you at least stayed open you would enable the
> > tech to be used on other attempts to make the coming transitions
> > smoother. What makes you think that you and your nerds are so good that
> > you will see everything and don't need the eyeballs of other nerds
> > outside your cabal at all? This, imho, is much more arrogant than
> > creating the Seed itself.
>
> The first reason I turned away from open source is that I started thinking
> about the ways that the interim development stages of seed AI could be
> misused. Not "misused", actually, so much as the miscellaneous economic
> fallout. If we had to live on this planet for fifty years I'd say to heck
> with it, humanity will adjust - but as it is, that whole scenario can be
> avoided, especially since I'm no longer sure an AI industry would help that
> much on building the nonindustrial AI.
>
The economic fallout will occur in any case. AI is being used in all
sorts of ways today that will lead to that inescapably. I doubt pieces
of the SI project will add much to that trend.
Do you think that you are qualified to decide for all of us what
technology should and should not be available? If so, please acquaint
those of us who are understandably skeptical with your credentials. Do
the people who work with you on this goal get a say, or is it a
condition of being on the team that they let you decide these things?
An AI industry is a different beast from open-sourcing results. You
will not have a monopoly on AI by any stretch of the imagination.
Opening the work will allow oversight and input, and the creation of
other applications that might well be critical both to the viability of
the project and to our long-term well-being.
> So far, there are few enough people who demonstrate that they have understood
> the pattern of "Coding a Transhuman AI". I have yet to witness a single
> person go on and extend the pattern to areas I have not yet covered. I'm
> sorry, and I dearly wish it was otherwise, but that's the way things are.
The pattern of the AI seed is not the largest problem. Reaching
agreement with the full range of your surrounding opinions and
positions is. Your statement of sorrow and implied grading of people
are both irrelevant and derail the discussion. Anything that remotely
comes off like "those who can understand already do and the rest are
hopeless" is a position you should probably avoid even the appearance
of holding. Understanding what you propose and agreeing with you are
quite different things.
- samantha