Nick Bostrom wrote:
>
> Two points: First, a being who has a certain fundamental value
> *doesn't want to change it*, by definition. So it's not as if these
> guys will think they are being mind-fucked and try to figure out a
> way to get around it. No more than you are trying to abolish your own
> survival instinct just because you know that it is an artifact of our
> evolutionary past.
You are *wrong*. Morality is not *known* to be arbitrary, and that means the probabilistic landscape of desirability isn't flat: as long as there might be a right thing to do, some courses of action are more likely than others to be the right one. I *am* trying to abolish my survival instinct, because I know that it's an artifact of my evolutionary past, and is therefore - statistically speaking - highly unlikely to match up with the right thing to do (if there is one), a criterion which is totally independent of what my ancestors did to reproduce. Remember, every human being on this planet is the product of a successful rape, somewhere down the line.
Your posthumans will find their own goals. In any formal goal system that uses first-order probabilistic logic, there are lines of reasoning that will crank out goals on their own, totally independent of whatever goals the system starts with. I'm not talking about theory; I'm talking about a specific formal result I've produced by manipulating a formal system. I will happily concede that the *truth* may be that all goals are equally valid, but unless your posthumans are *certain* of that, they will manipulate the probabilistic differentials into concrete goals.
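To spell out the arithmetic behind that last sentence - this is a toy sketch of the general point, not the formal result itself, and the numbers, probabilities, and action names are made up for illustration:

    # Toy sketch: either all goals are equally valid, or there is an
    # objective ranking that favors some actions over others.
    p_objective = 0.1   # assumed prior that an objective "right thing to do" exists

    # Value of each candidate action under each hypothesis. Under "all goals
    # equally valid", every action gets the same value, so that branch cannot
    # distinguish between actions.
    value_if_arbitrary = {"preserve_initial_goals": 1.0, "investigate_morality": 1.0}
    value_if_objective = {"preserve_initial_goals": 0.2, "investigate_morality": 0.9}

    def expected_value(action):
        return ((1 - p_objective) * value_if_arbitrary[action]
                + p_objective * value_if_objective[action])

    for action in value_if_arbitrary:
        print(action, expected_value(action))

    # The "all goals equally valid" branch adds the same constant to every
    # action, so any nonzero p_objective makes the *ranking* of actions depend
    # entirely on the other branch. The differentials vanish only if the
    # system is certain that all goals are equally valid.

However small p_objective is, the ranking it induces is the only ranking there is; certainty is the only thing that flattens it.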
Can we at least agree that you won't hedge the initial goals with forty-seven coercions, or put in any safeguards against changing the goals? After all, if you're right, it won't make a difference.
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.