Spike, we think about it every day; believe me, it's kinda constantly sitting
there in the back of my head. And I'm not even involved in the day-to-day
work! The quick answer is that we plan to make progress on the "safety" area
as we go... right now this is all in the conceptual (or pre-conceptual?)
phase, and you will likely see at least one equally conceptual "safety"
paper produced relatively soon. As we begin to get a firmer grasp on the
technology, we will be able to reduce the risks further and further, and
produce more substantive documents describing this. It is not currently in
the plans to have an "off switch", although that might become more feasible
in the future. Our main intention is to reduce all possible risks before
hitting the "on switch", kinda like how Foresight wants to reduce all the
risks of nanotech before the public gets hold of it... except that they have
no real control over how things turn out, while we do, since we at least have
perfect control over the initial conditions and can run simulations to see
how things might progress from there.
P.S. We aren't the only ones out there... recently we came across a
competing AI project that we rate as having a significant (greater than zero)
chance of "waking up"... and it is set for completion circa 2003 at the latest.
I really think it would be good if there were an equivalent of Foresight for
the AI area... Foresight is still so focused on nanotech that they don't
see the chance to expand into a more general Foresight organization covering
all of Bill Joy's worries.
Spike Jones wrote:
>
> Brian Atkins wrote:
>
> > Mainly just in case Eugene or den Otter show up? :-) Don't get me started
> > on our bunker and small arsenal of weapons...
>
> Do allow me to make one semi-serious point among the mirth
> and jocularity. Whenever the nanotech enthusiasts speak publicly,
> they point out the dangers of the technology, and spend some time
> describing possible safety measures, such as Ralph's broadcast
> architecture. Every time I hear Drexler or Merkle speak, they
> mention safeguards.
>
> What is Singularity Institute's counterpart to that? Are yall thinking
> safety? Is there some kind of technology which would allow yall
> to hit the off switch if something goes awry? You know I'm a big
> fan of the HWoMBeP and all of yall, but if yall are rushing headlong
> into unknown and dangerous technology without appropriate safety
> measures (if such a term even applies) then I understand some of
> the criticism yall are getting. spike
--
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.singinst.org/