Not everyone believes that "Coding a Transhuman AI" is the correct course of action. I'm certainly unsure about it, even after reading your argument many times.
I've got a big problem with your interim goal system. You claim that "some goals have value" and "all goals have zero value" form a P&~P pair. This is not the case. Consider the argument from agent-relativity: "some goals have value to the agent and not to anyone else." In other words, "some goals have subjective value."
It is philosophically suspect of you to claim that the only possibilities are "objective meaning" and "no meaning at all." My subjective goals are very real goals, but they have value only to me, not to you.
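To make the objection explicit, here is one way to formalize your dichotomy, in my own notation (the predicate V is mine, not anything from your document): read V(g, a) as "goal g has value to agent a." Then:

    Objective value:    \exists g \,\forall a \; V(g,a)
    Nihilism:           \forall g \,\forall a \; \neg V(g,a)
    Agent-relativity:   \exists g \,\exists a \; ( V(g,a) \wedge \neg \forall b \, V(g,b) )

The negation of the first line says only that no goal has value to every agent; it does not entail the second line. Agent-relativity is a consistent third position, and it is exactly what your P&~P framing rules out by assumption.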
If subjective meaning is true, then it does NOT drop out of the equation the way the claim that "all goals have zero value" does. It means that there are some very specific things that I should do, and some other specific things that you should do, and they may or may not agree with what a transhuman AI should do. Indeed, agent-relativity may mean that two intelligent, rational, moral persons will be correct in being in direct conflict with one another.
I suspect that I will correctly disagree with a transhuman AI, and that the AI in such a situation will win unless I'm already some kind of power in my own right. If agent-relativity is correct, then I am correct in not supporting your goal of coding up the Singularity.
Here I'll raise the tried-and-true "what if it tries to break me down for spare parts" argument, from a different perspective. Consider an outcome in which I am broken down and used by the AI for spare parts. The AI may give this outcome a positive value; I may give it a negative value. If subjective meaning is true, we are both correct, which means that I should not support the creation of an AI that would do such a thing.
Even if you don't believe that this particular scenario is likely, we can imagine many others. The point is that if subjective meaning is true, then it is not true that we should build an AI whose interim goal is to figure out what the objective meaning of life is.
That's it. Subjective meaning doesn't drop out of the equation, and it provides different answers than objective meaning does. Factor that in, and I'll buy your AI argument.
-Dan
-IF THE END DOESN'T JUSTIFY THE MEANS- -THEN WHAT DOES-