From: James Higgins (jameshiggins@earthlink.net)
Date: Fri Jun 28 2002 - 21:13:24 MDT
At 06:32 PM 6/28/2002 -0600, Ben Goertzel wrote:
>Novamente does not yet have a goal system at all; this will be implemented,
>at my best guess, perhaps at the very end of 2002 or the start of 2003.
>Currently we are just testing various cognitive and perceptual mechanisms,
>and not yet experimenting with autonomous goal-directed behavior.
Boy, you really don't have much chance of a hard takeoff yet.
>A failsafe mechanism has two parts
>
>1) a basic mechanism for halting the system and alerting appropriate people
>when a "rapid rate of intelligence increase" is noted
>
>2) a mechanism for detecting a rapid rate of intelligence increase
>
>1 is easy; 2 is hard ... there are obvious things one can do, but since
>we've never dealt with this kind of event before, it's entirely possible
>that a "deceptive intelligence increase" could come upon us. Measuring
>general intelligence is tricky.
I agree with this. But you could start with a failsafe that has somewhat of a
hair trigger. Put in numerous types of heuristics that detect rapid or
substantial change in any part of the system, or in the quality or frequency
of its output, and in such a case have it pause the system and notify the
staff (see the rough sketch below). Tune the heuristics over time to produce
fewer unnecessary pauses. But at any point it would be better to pause too
often than to miss pausing at a critical juncture.
It would be far from perfect, obviously. But it would be a good starting
point, and better than having nothing.
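Something like this rough sketch is the kind of thing I have in mind (Python;
every metric name, threshold and the pause/notify hook here is just a
placeholder of mine, not anything from Novamente):

# A minimal sketch of a hair-trigger failsafe monitor.  All metric names,
# thresholds and the pause/notify hook are hypothetical illustrations.
from collections import deque
from statistics import mean

class TakeoffTripwire:
    """Watch a few coarse metrics and pause on any rapid or substantial
    change, erring on the side of false alarms."""

    def __init__(self, window=50, max_relative_jump=0.25):
        self.window = window                        # samples kept per metric
        self.max_relative_jump = max_relative_jump  # +25% over baseline trips it
        self.history = {}                           # metric name -> recent samples

    def record(self, metric, value):
        """Record a new sample; return True if the tripwire fires."""
        samples = self.history.setdefault(metric, deque(maxlen=self.window))
        if len(samples) >= 10:                      # need some baseline first
            baseline = mean(samples)
            if baseline > 0 and (value - baseline) / baseline > self.max_relative_jump:
                self.pause_and_notify(metric, baseline, value)
                return True
        samples.append(value)
        return False

    def pause_and_notify(self, metric, baseline, value):
        # Part 1 of your mechanism -- halting -- is the easy bit.  A real
        # version would suspend the cognitive processes and page the staff.
        print(f"PAUSED: {metric} jumped from ~{baseline:.2f} to {value:.2f}")

# Example: feed it a stream of "outputs per hour" readings.
tripwire = TakeoffTripwire()
for rate in [100, 102, 98, 101, 99, 103, 100, 97, 102, 101, 140]:
    if tripwire.record("outputs_per_hour", rate):
        break

Tuning it to produce fewer false alarms would then just mean adjusting the
window and the jump threshold for each metric.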
Based on your current state of progress I wouldn't say this is required
ASAP, but you should at least be planning it. And I'd suggest having it
implemented and tested before the goal system goes live.
> > If anything even close to this looks
> > likely you better be getting opinions of hundreds or thousands of
> > relevant
> > experts. Or I'll come kick yer ass. ;) Seriously.
>
>Seriously -- this would be a tough situation.
>What if one of these thousands of relevant experts decides the system is so
>dangerous that they have to destroy it -- and me? What if they have allies
>with the means to do so?
Yeah, well, I wasn't serious about literally consulting thousands of
experts. But in my opinion you should consult a great many if you believe
that Friendliness can't be sufficiently implemented. And you should consult
a fair number before kicking off any Singularity shot.
If one person thought the system and you should be destroyed, I'd most
likely disregard it - unless, that is, they were able to start convincing
others to switch their vote, at which point you'd have to seriously
reconsider it (except for the destroying YOU part - that is insane). I
don't believe you need complete consensus to proceed.
> > What is the trade-off point between risk and time?
>
>My own judgment would be, in your scenario, to spend 3 more years
>engineering to lower the risk to 3%.
>
>However, I would probably judge NOT to spend 3 more years engineering to
>lower the risk to 3.9% from 4%.
Well, considering how imprecise it is to discuss this risk as percentages, I
imagine 0.1% is actually below the margin of error. So, while I still
don't like even a 0.1% chance of failure (at all), I can see your point.
>These are really just intuitive judgments though -- to make them rigorous
>would require estimating too many hard-to-estimate factors.
Yes.
>I don't think we're ever going to be able to estimate such things with that
>degree of precision. I think the decisions will be more like a 1% risk
>versus a 5% risk versus a 15% risk, say. And this sort of decision will be
>easier to make...
Very true.
> > What if another team was further ahead on this other design than yours?
>
>It depends on the situation. Of course, egoistic considerations of priority
>are not a concern. But there's no point in delaying the Novamente-induced
>Singularity by 3 years to reduce risk from 4% to 3%, if in the interim some
>other AI team is going to induce a Singularity with a 33.456% risk...
Excellent answer. The best course of action would be to stop the team with
the 33% risk of failure (at any cost, I'd say, given that number). But if
they could not be stopped, I'd endorse starting a less risky Singularity as
an alternative.
>In fact neither Eliezer nor I wishes to *force* immortality on anyone, via
>uploading or medication or anything else.
Yeah, I know. That was just a convenient example of differing morality.
>Interestingly, in many conversations over the years I have found that more
>women want to die after their natural lifespan has ended, whereas more men
>are psyched about eternal life. I'm not sure if this anecdotal observation would hold
>up statistically, but if so, it's interesting. Adding some meat to the idea
>that women are more connected to Nature, I guess... ;)
I've had similar indications to yours, it seems. Though, thankfully, my
wife is open to the idea of immortality. Actually, let me hold off on that
"thankfully" for a thousand years or so. No one knows what a 1,000+ year
marriage would be like yet. ;)
James Higgins
P.S. Can anyone explain why on earth I can spell heuristics correctly but
not manner?