Re: How can we deal with risks?

From: Holger Wagner (Holger.Wagner@lrz-muenchen.de)
Date: Sat Nov 01 1997 - 16:39:17 MST


> Holger Wagner <Holger.Wagner@lrz.uni-muenchen.de> writes(that's me :-)):
>
> > 1) Today, humans are by no means perfect. They have a certain idea of
> > what they do and what the consequences are, but it happens quite often
> > that something "unpredictable" happens. If you apply changes to the
> > ecological system, can you really predict what consequences this will have
> > in the long run - and would you take the responsibility?

Anders Sandberg <asa@nada.kth.se> replied:
> We cannot predict the long-term consequences of our
> actions. Period.

Is that a proven fact? I mean: will it NEVER be possible, even with
superintelligence?

Maybe short-term predictions will be sufficient as we gain more control
over nature. What about simulating ecological, economic, social, etc.
systems? Today, we may not have enough computing power (and probably not
the right theories and algorithms, either), but wouldn't this be a very
valuable field for research?
I came across this when I learned about different theories and models
(please correct me if those terms don't make any sense to you) for mass
communication. There were simple models that were easy to understand but
that could only be applied to simple problems, and there were
complicated models which were hard to understand but which could explain
a lot more. I believe one could think of models that don't make sense to
humans because our brains simply can't handle them - but computers can.

My idea was to give the computer a program that can deal with as much
data as available, start a simulation and see how similar the results in
the simulation are to those in reality. That'll give you a basis to
improve.
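Just to make the idea concrete, here's a minimal sketch of that loop (the
toy model and all names are my own illustration, nothing from the actual
discussion): run a simulation, measure how far its output is from observed
data, and use that error to pick a better model.

```python
# Toy illustration of simulate -> compare -> improve.
# The "model" is logistic population growth; "reality" is synthetic data
# generated with known parameters, so we can check the loop actually works.

def simulate(growth_rate, capacity, steps=50, pop=10.0):
    """Run the model forward and return the population trajectory."""
    traj = []
    for _ in range(steps):
        pop += growth_rate * pop * (1 - pop / capacity)
        traj.append(pop)
    return traj

def error(simulated, observed):
    """Mean squared difference between simulation and observations."""
    return sum((s - o) ** 2 for s, o in zip(simulated, observed)) / len(observed)

# Pretend these are real-world measurements (here: generated with known params).
observed = simulate(growth_rate=0.3, capacity=1000.0)

# Crude "improvement" step: search a grid of candidate parameters
# and keep the ones whose simulation matches reality best.
best = min(
    ((r / 100, c) for r in range(10, 51, 5) for c in (500.0, 1000.0, 2000.0)),
    key=lambda params: error(simulate(params[0], params[1]), observed),
)
print(best)  # -> (0.3, 1000.0), the "true" parameters recovered
```

In a real application the grid search would of course be replaced by
something smarter (and the model would be far more complicated), but the
basic compare-to-reality feedback loop is the same.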

Since we're probably soon (or already) gonna deal with technology that
*might* mess up great parts of our environment, simulations for certain
steps (including worst-case scenarios) could save us a lot of trouble
(trouble sounds like quite a bizarre word in that context). One thing I
have in mind is genetically manipulated insects, or even worse - bacteria -
which accidentally (or intentionally) get into the environment. This could
cause (currently) uncontrollable chain reactions.

> > Possible solution: I assume that most scientists are very intelligent,
>
> As a scientist, I'm flattered, but unfortunately this isn't very true
> (in fact it is a remnant of the old romantic idea that artists and
> scientists are set apart; in reality, scientists are fairly ordinary
> people in general).

I guess I expressed myself pretty humbly...
I didn't mean to generalize it like that.

[...]
> > 2) Today, humans are by no means perfect. While I trust scientists to
> > have at least a vague idea of what they're doing, I do not trust people
> > in general.
>
> Thanks again, but I think you should nuance your position a bit
> more.

You're right - you misunderstood me.

> I think one can trust all people (with a few exceptions) to some
> extent; being a scientist doesn't per se make you more trustworthy,
> just as being a government official doesn't per se make you less
> trustworthy.

Trusting people is an interesting subject of its own - I should have
stressed the "to have at least a vague idea of what they're doing". In
fact, a simple worker might know a lot better what he's doing than a
scientist working on some complicated matter.
The problem is when people who don't understand innovations (and their
possible risks) try to gain whatever advantage they can from them.

> > Solution: Educate people accordingly. (easy to say - but I don't believe
> > it's possible until I see world-wide results).
>
> This is actually the solution to many other problems, like poverty and
> the spread of some bad memes.

I've been thinking about the "education" problem today. I believe that
education is the ONLY thing I'd be willing to pay taxes for. Good
education should be freely available to all human beings. I don't care
if a "government" or a private company takes care of that, but it has to
be done much better than it is today!

[...]
> > But what if there comes a day where we have to face an insane fool who
> > has the technology to wipe out all life on this planet? Someone who just
> > doesn't understand or just doesn't care about the responsibility?
>
> This is a real problem, since even if we solve the problems you
> discuss, there may always be somebody who is deranged or clumsy enough
> to mess things up. Even without sociopaths and zealots there will be
> accidents and people who do dangerous things for good reasons.
>
> The solution is likely to empower people in general, so that it will
> be hard for dictators, fanatics and error to overcome
> all. Unfortunately this only works if the technologies are not of the
> kind that the first use wins (this was the cause of our major
> nanotechnology debate a few months back; was nanotech inherently such
> a technology, or could even deliberate nasty nanodevices be
> contained?). In this case dispersal seems to be the only strategy.

I missed that one... maybe I should look it up in the archives?

> > o________________/ Trevor Goodchild in "Aeon Flux" |
>
> "That which does not kill us makes us stranger" :-)

Trevor: "You're skating the edge" Aeon: "I am the edge" :-))

later,
Holger

-- 
o---------------------------------------------------------------o
| "That's the funny thing about memories, isn't it? We are not  |
| what we remember of ourselves, we are what people say we are. |
| They project upon us their convictions  -  we are nothing but |
| blank screens." ______________________________________________o
o________________/        Trevor Goodchild in "Aeon Flux"       |
                 \__ mailto:Holger.Wagner@lrz.uni-muenchen.de __|


This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:45:05 MST