From: Eugen Leitl (eugen@leitl.org)
Date: Thu Jun 13 2002 - 04:19:02 MDT
Discussion is not over, but on hold. (The matrix has me).
On Wed, 12 Jun 2002, Smigrodzki, Rafal wrote:
> While Eugen's idea feels much more congenial, I am afraid Eliezer is
> right: it might be extremely difficult if not impossible to slow down
> SAI-oriented research while at the same time developing a practical
> method of uploading. This means that an unfriendly SAI is likely to
> appear long before our uploaded vanguard is up to the task of
> protecting us, and the best solution is an escape forward by
> purposefully building an FAI.
My risk probability estimate is different.
> Eugene, could you try to describe what practical measures might need
> to be used to halt SAI research reliably enough and long enough to
> give us the uploading capacity? How many policemen, soldiers, and what
> kind of laws would be needed?
I don't have time for a proper analysis, but two major factors are
outlawing insecure systems (by making vendors liable) and breaking up
present monopolies. This will make global networking more secure, reducing
resources available for bootstrap. Infrastructure for realtime traffic
analysis and hard network partitioning (sandboxing, watchdog cleansing,
strong cryptographic authentication, separate shutdown circuitry) needs to
be created. Research in open literature and hardware purchases need to be
tracked. Competent AI researchers need to be tracked. Dangerous research
should be conducted under strict rules of containment. And scooby dooby
doo.
None of the above guarantees anything, but it does reduce the risks with
tolerable side effects (some of them are even beneficial).
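To make the watchdog/authentication point slightly less handwavy, here is a toy
sketch in Python. The shared key handling, the per-host rate ceiling, and all
names are illustrative assumptions of mine, not a worked-out design:

```python
import hmac
import hashlib
import time
from collections import defaultdict
from typing import Optional

SHARED_KEY = b"replace-with-provisioned-secret"   # assumption: pre-shared key per node
RATE_LIMIT_BYTES_PER_SEC = 10_000_000             # assumption: arbitrary per-host ceiling


def sign_shutdown(command: bytes, key: bytes = SHARED_KEY) -> str:
    """Authenticate a shutdown command so the separate shutdown
    circuitry only honors messages from the legitimate controller."""
    return hmac.new(key, command, hashlib.sha256).hexdigest()


def verify_shutdown(command: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Constant-time check of the HMAC tag before acting on a command."""
    expected = hmac.new(key, command, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)


class TrafficWatchdog:
    """Toy realtime traffic monitor: flag hosts whose byte rate over a
    sliding one-second window exceeds the configured ceiling."""

    def __init__(self, limit: int = RATE_LIMIT_BYTES_PER_SEC):
        self.limit = limit
        self.window = defaultdict(list)   # host -> [(timestamp, bytes), ...]

    def observe(self, host: str, nbytes: int, now: Optional[float] = None) -> bool:
        """Record one observation; return True if the host should be flagged."""
        now = time.time() if now is None else now
        self.window[host].append((now, nbytes))
        # keep only samples from the last second
        self.window[host] = [(t, b) for t, b in self.window[host] if now - t <= 1.0]
        return sum(b for _, b in self.window[host]) > self.limit


if __name__ == "__main__":
    cmd = b"shutdown segment-42"
    tag = sign_shutdown(cmd)
    assert verify_shutdown(cmd, tag)

    wd = TrafficWatchdog(limit=1000)
    print(wd.observe("10.0.0.5", 600))   # False: 600 bytes in the window
    print(wd.observe("10.0.0.5", 600))   # True: 1200 bytes in the last second
```

The point of the sketch is only that these pieces (authenticated shutdown,
per-host rate flagging) are mundane engineering, not exotica; the hard part is
deployment and policy, not the code.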
If you think a loner can pull it off on a chunk of computronium while
leaving no traces, you're being wildly optimistic. The risk is about the
same as that of a truly successful (gigadeaths-scale) lone bioterrorist.