Re: "Cybernetic Totalism?"

From: Bryan Moss (bryan.moss@btinternet.com)
Date: Thu Oct 12 2000 - 11:21:31 MDT


Anders Sandberg wrote:

> http://www.edge.org/3rd_culture/lanier/lanier_index.html
>
> I found some of the points rather tonic, but the overall
> impression wasn't that high.
>
> What he is aiming at is disproving or undermining the
> *unconsidered* ideology of technological eschatology. To
> some extent I agree with him - there is an awful amount of
> badly thought out fluff on this [...]

This is my interpretation of his argument:

    Autonomy has no technological benefit. That is, we
cannot *use* autonomous devices. But the approaches we are
currently taking towards the creation of autonomous devices
are technological. For example, *evolving* a brain is not
science--it provides no understanding--and, if the goal is
human equivalency, it's of no technological benefit either.
(Imagine an automobile manufacturer that has not only fully
automated its production and design process but also
produces only passenger-less self-driving vehicles.) Given
that AI has no apparent merit either as science or
technology, there must be another reason for adopting it as
a goal, and that reason is the quasi-religious "Cybernetic
Totalism".

In many ways I think he's right. Moravec and de Garis seem
to see themselves as Agents of Evolution; but the
inevitability of "mind children" is, in Lanier's terms, an
"intellectual mistake". These are, after all, *mind*
children. Of course, if we're going to have children they
might as well be mind children; in essence, it is the fact
that our children--biological or otherwise--are *not*
inevitable that defines our current situation. We will soon
be able to
choose what we preserve of ourselves, rather than accepting
our genes as our lineage. I think Lanier's mistake, like
that of so many critics of technology, is the failure to
recognise that technology does not create new problems; it
merely magnifies existing ones. In the case of AI, it's that
old favourite "what are we and what are we doing here?" You
can't
question the purpose of fully autonomous systems without
also questioning the purpose of our own society.

The other area Lanier attacks is AI as interface. I agree
with him wholeheartedly here: 'intelligent software'
isn't as smart an idea as it sounds. Unfortunately, current
measures of interface efficiency rate a fully automated
process as perfectly efficient; as Lanier points out,
however, automating a process isn't always in the best
interest of the user. I am unaware of a way to measure the
effectiveness of automation. I'm also dubious of the merits
of 'agents' in user interfaces (it would be simple to test
the idea that another human presents the best interface:
just get a skilled computer operator to act as the agent).
The point of Lanier's (and my) digression is that interface
is another often-used justification for the goal of AI, and
an equally questionable one. I also agree with Lanier's
argument
that the promise of AI causes today's lacklustre software.
(It's not just a matter of annoying paperclips either;
people are sequential and difficult to navigate, and
computers are increasingly adopting these traits at the
expense of exploiting the spatial skills of their users.
Design principles are routinely ignored because programmers
anthropomorphise their programs. The slow uptake of
object-oriented code and distributed processing may also
have its roots in the anthropomorphic beginnings of
computing, although complexity is probably the main
culprit.)

It may also be that Lanier is using AI's questionable
application as a user interface to challenge the idea that
AI could become integral to society rather than simply be
used to automate facets of society into a kind of
disconnectedness (as with my example of the automobile
manufacturer). If we want AI to form a part of society and
do not simply accept AI as our mind children and "hand over
the reigns" we have to find a niche in society that involves
interaction rather than automated isolation. By questioning
this niche, Lanier strengthens his argument.

> [...] we should see how we can polish up transhumanist
> thinking in order not to fall into the traps he describes.

I think Lanier makes some good points that are difficult to
find in what is essentially a very confused essay. The main
thing we should take away from this is the questionable
nature of AI as a goal, not because it is necessarily a bad
goal but because, for me, it illuminates a bigger problem.
After all, what is society but a fully autonomous system?
And what external purpose does that system serve? For me
Lanier's essay was an affirmation of my own doubts about
transhumanism. Without a purpose we cannot architect our
future; we need to discover the precise things we wish to
preserve about ourselves and our society, and only then can
we go forward. To my mind it is not enough to say "I want
to live forever"; "I" is simply shorthand, and I want to
know what it is about me that I should preserve and why I
should preserve it. I think these problems run deep enough
that
we'll need more than polish.

BM


