Re: Submolecular nanotech [WAS: Goals]

From: Anders Sandberg (asa@nada.kth.se)
Date: Mon May 24 1999 - 12:22:07 MDT


"Raymond G. Van De Walker" <rgvandewalker@juno.com> writes:

> Anders Sandberg (asa@nada.kth.se) said:
> >The problem seems to be that it is impossible to test very complex
> >systems for all possible contingencies, and this will likely cause
> >trouble when designing ufog. How do you convince the customers it is
> >perfectly safe?
>
> I program medical and avionic systems, and the general criteria are
> pretty straightforward. You test the thing for designed behavior, and
> then you test it for environmental benignity (e.g. operating-room
> equipment has saline solution poured on it, steel rods poked into
> open orifices, and various shorts on the power plug).

Sounds reasonable. But medical and avionics systems have to deal with
fairly well defined environments; the number of things that might be
thrown at ufog in an ordinary home (just imagine what the toddlers do)
is astronomical. Hmm, that actually suggests a ufog problem: getting
foglets into liquids where they shouldn't be - how can we guarantee
that none of the fog gets into our food?
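(To put "astronomical" in perspective, a toy calculation in Python -
the figures are invented for illustration, only the combinatorics is
the point:

    # Model the home as n independent yes/no conditions (wet floor,
    # toddler present, power surge, pet hair, ...). Exhaustive testing
    # then needs 2**n runs.
    n = 40
    cases = 2 ** n                  # ~1.1e12 combinations
    seconds = cases * 1e-3          # even at 1 ms per simulated test
    print(f"{cases:.2e} cases, {seconds / 86400 / 365:.0f} years")
    # -> 1.10e+12 cases, 35 years

and real household conditions aren't even binary.)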

> >You get the same problem with AI: what testing would be required
> >before an AI program was allowed to completely run a nuclear power
> >plant?
> Well, speaking as a professional safety engineer, I think this would
> be an easy argument. Just test the program with the same simulation
> used to train the human operator. If it does ok, then it's ok.
>
> However, most regulatory environments require that no single fault
> should be able to induce a harmful failure (this is simple common
> sense, really). Therefore, one might have a much easier time with
> certification if there were a second AI, with a different design, to
> second-guess or cooperate with the first. This makes the system far
> less prone to fail the first time an AI makes a mistake. And it
> _will_, right?

To err is human? :-)

Yes, a double system would be reasonable. Most likely human
supervisors will be kept on too, so that there is somebody to
personally blame for everything :-)
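Here is a minimal sketch of that double system in Python, just to fix
ideas - the names and the 350-degree limit are made up, but the
structure is the standard two-out-of-two vote:

    # Two independently designed channels compute the same decision; a
    # voter acts only when they agree, and otherwise falls back to a
    # safe state. No single faulty channel can trigger a harmful action.

    def vote(channel_a, channel_b, reading, safe_action="scram"):
        a = channel_a(reading)
        b = channel_b(reading)
        return a if a == b else safe_action  # disagreement -> fail safe

    # Hypothetical channels: two differently written overtemperature checks.
    check_1 = lambda t: "run" if t < 350.0 else "scram"
    check_2 = lambda t: "scram" if t >= 350.0 else "run"

    print(vote(check_1, check_2, 340.0))   # -> run
    print(vote(check_1, check_2, 360.0))   # -> scram

The catch, of course, is that the voter itself becomes a single point
of failure (which is why voters are kept as simple as possible), and
that a 2oo2 vote trades harmful failures for spurious shutdowns -
which is one more reason the human supervisors stay in the loop.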

-----------------------------------------------------------------------
Anders Sandberg Towards Ascension!
asa@nada.kth.se http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y


