Jeff Davis wrote:
>
> Ah, but if the proto-Transcendee has limited hardware resources to "run"
> on, then it will be inherently limited. Optimized self-evolutionary
> enhancement capability and optimized code-designing-and-writing capability
> will run into this limit. Every system has a size limit. Whatever amount
> of hardware is the minimum amount necessary to support the first-generation
> pre-enhanced AI will also be the maximum amount of hardware available to
> the optimally-enhanced n'th-generation "Transcendee". The jump from minimum
> efficiency to near optimal may be substantial, but how can it be unbounded?
My current guess is that a first-stage seed AI is so massively inefficient that the hardware it runs on should easily be sufficient for transhuman intelligence, once the code itself is being written by only-slightly-less-than-transhuman intelligence. I don't know about superintelligence, but transhumanity should be sufficient.
See http://pobox.com/~sentience/sing_analysis.html, the part about seed AI trajectories.
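To give the shape of that answer a concrete form, here's a toy model - strictly my own illustration for this post, with the growth rule and every number invented for the example rather than taken from that page. Hardware is fixed at H; the seed's code exploits only a tiny fraction e of it; each generation of self-rewriting closes a share of the remaining inefficiency, and that share scales with current capability:

    # Toy model: positive feedback in self-enhancement, capped by hardware.
    H = 1000.0   # fixed hardware capacity, arbitrary units
    e = 0.001    # the seed AI's code exploits 0.1% of that capacity

    for generation in range(1, 15):
        capability = H * e                 # what the current code wrings out of H
        e += (1.0 - e) * (capability / H)  # better code rewrites itself better still
        print(f"gen {generation:2d}: capability = {H * e:7.1f} of {H:.0f}")

The jump is enormous - three orders of magnitude here - but bounded: capability converges on H and never passes it. So my answer to "how can it be unbounded?" is that it needn't be; the bounded jump on fixed hardware is already enough.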
> So the AI development should be controllable (dare I say "simply"?) by the
> rather conventional approach: experimenting with and coming to understand
> the correlation between the size and quality of "the jump", the particular
> version of AI seed programming, and the hardware size and architecture.
I really don't see people running multiple seed-AI experiments. The very last seed AIs, as in the last programs humanity ever writes, are likely to be too expensive. By the time they would have become cheap...
> > Unfortunately, this is technically impossible. If you can't even get a
> > program to understand what year it is, how do you expect complete control
> > without an SI to do the controlling?
>
> This is one of the problems. If you have to give it self-control, then you
> contain it and communicate with it. If it says what you want to hear, then
> you proceed. If not you tweak the code till it does. This way you develop
> a controllable (perhaps "reliable" would be a better term) "personality".
> Then you give it more hardware to work with, while watching for any signs of
> "attitude".
I'm beginning to understand human cognition, and I can see - dimly - the level on which we're all deterministic mechanisms. A transhuman - I don't even say "SI" - would simply tell us the set of inputs that would result in the behavioral output they wanted. I can't do that yet, or anything remotely like it, but I have hopes of learning a few parlor tricks someday.
> Then they will ask--no, they will require--you to do the impossible. (Which
> of course is the greatest challenge, and--as the saying goes--takes a
> little longer.) All the really juicy bargains come with equally juicy
> strings attached.
Coding a transhuman AI is as close to impossibility as I care to flirt with. Anyone who wants to attach strings to a job like that can take their strings and shove them up the ass of whatever COBOL drone they con into taking the job. *I* am not going to lose *any* sleep worrying about their device attaining a level of performance higher than a pocket calculator's.
> When your passion faces off against your principles, then it will be your
> turn to choose.
I'm not presenting this as a principled thing. You *cannot* design a controllable self-modifying system, and you *cannot* make a non-self-modifying AI work. It's a design fact.
> Ideally, yes. A controllable what-it-does which does what it does better
> than a machine with a will is best. If, however, a machine with a will
> would be inherently better (which I warmly believe), then that's what they
> will pursue, along with the means to control it. More layers of
> containment and a firm grip on the plug.
I say again: "Control" of a self-modifying AI is even harder than controlling a human. And non-self-modifying AI will be dumb as a brick; it's the capacity for self-enhancement, and positive feedback in self-enhancement, that replaces the millions of years of evolutionary optimization that holds the human mind together.
> > Run an open-source project via anonymous PGP between participating
> > programmers.
>
> I'd really like to see that happen. However, just as the powers that be
> would flat out not let you build a nuke or a lethal virus (except under
> contract to them and under conditions of strictest oversight), they're not
> likely to sit idly by while you and your pals cobble together your own pet
> SI. (I saw the "anonymous PGP" part. Since you know you need it, you know
> why you need it. Can you carry it off covertly? No slip-ups? That's a
> tough one.)
The question wouldn't be preventing every slip; the question would be organizing a project that didn't care about slips, or even about FBI agents masquerading as members. The code would have to be self-checking and modular to the extent of being able to function with sabotaged components, and the project itself would have to be carried out with complete anonymity on the part of all participants. The project would have to be mirrored on each individual's hard drive, and kept in sync independently; the AI itself would have to run distributed, over untrusted communications and with untrusted software modules. That adds another design problem to the AI, but it's a design problem that might be good for the project.
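To make "self-checking and modular" slightly more concrete, here is the weakest possible sketch of the untrusted-modules half, in Python; every name and digest in it is invented for illustration, and nothing here is a worked-out protocol. Each node accepts a mirrored module only if its hash matches a digest list distributed out-of-band in signed announcements, and simply skips a sabotaged replica:

    import hashlib

    # Digests each node trusts for each module, distributed out-of-band
    # (say, inside PGP-signed announcements). Purely hypothetical values;
    # this one is just sha256(b"test").
    TRUSTED_DIGESTS = {
        "planner": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def verify(name, blob):
        """Accept a module only if its SHA-256 digest is on the signed list."""
        return TRUSTED_DIGESTS.get(name) == hashlib.sha256(blob).hexdigest()

    def load_module(name, replicas):
        """Try each mirrored replica in turn; a sabotaged copy is skipped."""
        for blob in replicas:
            if verify(name, blob):
                return blob
        raise RuntimeError("no intact replica of %r found" % name)

    # An FBI-flavored replica first, an honest one second:
    print(load_module("planner", [b"sabotaged", b"test"]) == b"test")  # True

The interesting design problem starts where this sketch stops: the AI has to keep functioning when *no* intact replica of some module can be found.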
> Just the same, I say "Go for it!" I suspect that a "good" AI may be the
> only feasible defense against a "bad" one.
Not really. If I'm right, it should be fairly hard to design a "bad" AI that's a serious threat - AIs with weirdwired motivations shouldn't be able to get to the point of independence, though they might be used as awful weapons. Actually, AIs with weirdwired motivations aren't *intrinsically* incapable of mildly transhuman intelligence - just superintelligence. But the kind of COBOL drones who would design AIs with weirdwired motivations have basically no hope of even making much of a weapon.
--              sentience@pobox.com          Eliezer S. Yudkowsky
          http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way