Re: [MURG] meets [POLITICS]

From: Adrian Tymes (wingcat@pacbell.net)
Date: Sun Apr 07 2002 - 17:56:15 MDT


Robert J. Bradbury wrote:

> On Sun, 7 Apr 2002, Adrian Tymes wrote:
>>At which point, with Kazaa taking excessive CPU power (unless the AI
>>part is rigged only to run in an @home style manner, which would make
>>parts of the AI run while others did not, basically at random - and
>>fixing it would slow it to the point of unusability), Kazaa users
>>simply switch to something that doesn't tie up their box.
>
> People generally don't run task manager to watch what is consuming the
> CPU time.

True. It's more common to turn off various things until the computer
starts behaving again.

> You could easily have the program monitor when someone is
> sitting at the terminal and typing, or moving the mouse, then go "inactive"
> for some period of time.

Only after a short but very noticeable delay while it finishes its
current tasks. I've seen this on the other @home clients.
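
(For the curious, the idle-detection scheme under discussion looks
roughly like this - a sketch only; seconds_since_last_input() is a
hypothetical stand-in for whatever the platform provides, e.g.
GetLastInputInfo() on Windows:)

    import time

    IDLE_THRESHOLD = 60    # seconds with no input before the box counts as idle
    BACKOFF_PERIOD = 300   # how long to stay quiet once the user returns

    def seconds_since_last_input():
        # Hypothetical platform hook; a real client would wrap something
        # like Win32's GetLastInputInfo() or the XScreenSaver extension.
        raise NotImplementedError

    def worker_loop(do_one_chunk):
        while True:
            if seconds_since_last_input() < IDLE_THRESHOLD:
                # User is active: back off. Any chunk already in flight
                # still has to drain - hence the noticeable delay above.
                time.sleep(BACKOFF_PERIOD)
                continue
            do_one_chunk()  # one bounded unit of work, then re-check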

> Sure you lose some of the concurrency between
> parts of the "mind" but it isn't clear precisely how much that matters.
> As Eugene pointed out, a lot of the processing may be asynchronous or
> may care nothing about what is going on in another part of the brain.

That's true. Still, the neurons would care whether their neighbors
fired at time N-1 anyway. My point is just that this would slow things
down to where the result wouldn't be all that practically useful.
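
(A toy calculation - the numbers are mine, made up purely for
illustration - of what that per-step dependence costs over a wide-area
network:)

    # Each step needs neighbors' outputs from step N-1, so it cannot
    # start until those arrive over the network.
    COMPUTE_PER_STEP = 0.001  # 1 ms of local work per step (assumed)
    WAN_LATENCY      = 0.100  # 100 ms to exchange state with peers (assumed)

    def steps_per_second(compute, latency):
        # Every step pays its compute time plus one full state exchange.
        return 1.0 / (compute + latency)

    print(steps_per_second(COMPUTE_PER_STEP, 0.0))          # ~1000 steps/s locally
    print(steps_per_second(COMPUTE_PER_STEP, WAN_LATENCY))  # ~10 steps/s over the WAN

Two orders of magnitude lost, before you even count packet loss and
straggler machines.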

> You solve the timing problem by accumulating statistics as to when
> computers are least used and run the timing-critical parts during
> those times. You also do what F@H does and send out multiple identical
> "work units" and accept the first returned results. You can implement
> system availability and network bandwidth statistical monitors so you
> can develop local clusters that are "free" at specific periods.

Which leaves the system open to forged results (once anyone figures
out what's going on) and to running at the wrong times on a given
user's computer. Statistically, a certain percentage of users will
become upset enough to leave - and once they do, the load concentrates
on those who remain, so another percentage becomes that upset...
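
(For reference, the redundancy scheme works roughly like this - a
sketch, not F@H's actual code, and the hosts' compute() method is
assumed. Note that "accept the first returned result" trusts whoever
answers fastest, and a forger can always answer fastest; even majority
voting fails once forgers collude across most of the replicas:)

    import random
    from collections import Counter

    def dispatch(work_unit, hosts, replicas=3):
        # Send identical copies of the unit to several random hosts.
        chosen = random.sample(hosts, replicas)
        return [h.compute(work_unit) for h in chosen]  # compute() is assumed

    def accept(results):
        # Safer than first-result-wins: require a majority to agree.
        # Still breaks if forgers control a majority of the replicas.
        value, votes = Counter(results).most_common(1)[0]
        return value if votes > len(results) // 2 else None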

> Also, because Kazaa is connected to a P2P network they are going to
> expect some level of activity devoted to it. So long as the AI
> doesn't immediately jump into "heavy use" mode and instead "gradually"
> consumes greater amounts of CPU, the user will assume it's due to
> more users joining the net.

But when the load becomes too high, they'll move to something that,
while it may have fewer users, also gives them their computer back.
Unlike the classical boiled-frog example, here there is, at any given
time, the potential for a frog's friend to tell it how great things
are outside the pot, or for the frog to find out for itself.
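
(The "gradual" ramp itself is trivial to write, which is part of the
problem - a sketch with illustrative numbers only, duty-cycling
busy/sleep within each second the way an unprivileged process throttles
itself:)

    import time

    def gradual_load(target_fraction=0.8, ramp_days=30):
        # Ramp the busy share of each second from ~0 up to
        # target_fraction over ramp_days. All numbers illustrative.
        start = time.time()
        while True:
            elapsed_days = (time.time() - start) / 86400.0
            fraction = min(target_fraction,
                           target_fraction * elapsed_days / ramp_days)
            busy_until = time.time() + fraction
            while time.time() < busy_until:
                pass  # stand-in for real AI work
            time.sleep(1.0 - fraction)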

>>Potential
>>for hacking is one thing, which many people ignore; *actually doing it*
>>is something else entirely.
>
> I'd be happy to show you my server logs that still continue to accumulate
> Code Red and Nimda virus assaults.

Those have little to no impact on the system - and when they do, they
tend to get noticed on any system that is monitored at all (a desktop,
say, rather than a server you stick in a colo and walk away from).

>>(For instance: few people complained about
>>the possibility of spam in the pre-Cantor-&-Siegel days.)
>
> True, but spam is just a nuisance. A self-bootstrapping rogue AI may be
> the end of humanity.

<humor>
"Yet another end of humanity predicted! Film at eleven."
</humor>

Seriously: nuclear war could also be the end of humanity, yet far
fewer people complain about military solutions when their country is
not actively involved in a war. Not to mention various plagues
(natural or manmade) whose cures could only be found by an AI - and,
given how soon they could strike, a self-bootstrapping "rogue" AI free
of procedural development constraints may be the only way to get such
an AI in time. In which case, *preventing* self-bootstrapping rogue
AIs may mean the end of humanity.

Frankly, the known harm your solution would impose far exceeds the
expected harm of a self-bootstrapping rogue AI - its probability times
the damage it could do.


