"Bryan Moss" <bryan.moss@dial.pipex.com> writes:
> If you make a high-level abstraction (that is, a piece of code that
> recreates the exact input and output) of a neuron do you preserve subjective
> experience? If you make a high-level abstraction of the entire brain do you
> preserve subjective experience? What is identity? How much is different?
> Even if you do think uploading is possible you're still faced with hundreds
> of currently unanswerable questions.
Exactly. But the experiments leading up to uploading, and uploading itself, will answer many of these questions. The final answer will of course come when you upload yourself and a certain digital mind begins to experience whatever it experiences.
> > A claim like 'no mind can be smarter than a bright human' makes about as
> > much sense as 'any mass of gold more than ten miles in diameter will
> > spontaneously combust'. Neither claim can actually be tested, but it
> > would be amazing if either one were true.
I have the impression that you are not seriously proposing any upper
limit on intelligence.
>
> Why do we have such small brains? To me it suggests that the level of
> complexity achievable is *very* close to the achieved level.
But large brains are not impossible; just look at the blue whale (whose cerebellum, interestingly enough, appears to be human-sized while the rest of the brain is huge). Lots of neurons isn't the same thing as intelligence.
> Since we're talking about plausible future scenarios it might be fun, being
> in the midst of millennium fever, to come up with some. No dates or
> predictions, just how you think the next few major technologies will
> pan-out. How about it? (And fifty years from now, when we're all six
> centimetres tall and living in habitat domes on the moon, we can have a good
> laugh at them.)
My guess is that the Big Things of the next 50 years will be nanotechnology / biotechnology (the two areas will likely merge rather than remain distinct), the understanding of the brain and the resulting neurotechnologies, and finally the Theory of Complexity. No. 1 feels fairly straightforward; the others are riskier speculation.
Nanotech would simply emerge and cause a lot of revolutionary changes; it is too broad, too useful, too normal in some sense (making and manipulating things isn't something utterly *new*). Expect quarrels over the applications and fears once the nanosystems start to become truly powerful, but in the end it is the same question that has plagued the last two centuries: how to distribute the wealth and means of production, and to what ends.
Neurotech is going to be the controversial thing. What does "human" mean when you can alter it? Not just enhancing memory or changing sexual preferences, but adding *new* structures to the brain, such as something like episodic memory for muscle movements. Lots of issues here, and biotech will contribute its own problems.
Complexity is the subtlest part. If my guess/feeling is right, Kauffman and the others are right and there are "laws of complex systems". It will be an invisible revolution: first in the sciences, then in technology (such as the above areas) and slowly also in human thinking. Remember that the Newtonian view took several centuries to spread, and that most people still haven't digested evolution, quantum mechanics and Gödel. Complexity would likely be even more profound.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y