<<But seriously, when you make a segue of that magnitude, please change the subject line. Or else try to maintain relevance to the topic; i.e. "And I think this demonstrates a disdain for all human beings that is the direct result of being an AI researcher, which is why I don't trust an AI Transcend.">>
Yes! :-) And that is my point! I have yet to meet a serious 'hard' AI researcher who is not lacking in either ethics or social skills. So how can you possibly ask me to trust the 'sons-a-bitches' to create a benevolent AI? The assholes can barely behave themselves in the most controlled social settings, and you want me to place my bets and my life in the hands of this lot? Gimme a break!
Eli, you want to create an SI before nanotech destroys everything. People like Den Otter, Max More and myself want to IA ourselves to singularity before the SIs destroy us! What makes you think you can close the gap between assembler and SI any sooner than we can close the gap between SI and uploading? You said it yourself - we are fools to attempt to beat a 2020 CRNS technology with a 2040 CRNS technology. But how are you any less the fool to try to beat a 2010 CRNS tech with a 2020 CRNS tech? And I'm using your CRNS estimates! :-)
Paul Hughes