Robin Hanson wrote:
>
> It occurs to me that a common theme in many dreams of the future is an
> unusual degree of autarky, or independence.
That seems right, although sf stories usually dramatize the "what could go wrong" aspect of this. I think of "interstellar ark" stories in particular. IIRC, Heinlein's story "Universe" had a two-headed mutant barbarian from a spaceship's drive core trying to invade the farming communities in the shielded areas (or something like that).
>
> I find it hard to escape wondering if these dreams of autarky are not
> mostly the result of humans not yet coming fully to terms with our new
> interdependence. Biology created largely autonomous creatures, and only
> recently discovered a world-wide division of labor in us. So our cells
> are largely autonomous manufacturing plants, our minds are general and
> broadly capable, and we picture our ideal political unit and future home
> to be the self-sufficient small tribe of our evolutionary heritage.
> (Such tribes had much to fear from strong dependence on neighboring
> tribes.)
Don't forget that it's just plain simpler to look at a complicated global situation by ignoring interdependencies most of the time, so as to focus on whatever the local problems are. You are doing this yourself when you think of organisms as "autonomous" while ignoring the ever-present demands of ecology and of making a living in those biological niches! Admittedly, though, the long-distance trading relationships of humans are a really new form of interdependence. Ordinary animals don't deliberately give up one thing so that they can get something else from other animals halfway around the globe.
Anyway, as examples of autarky dreams, you mention strong AI, nanotech, space colonies, and:
> LOCAL SINGULARITY: This imagines one small group suddenly grows big enough
> to take over everything. Our familiar world economy grows as a unit, with
> innovations and advances in each part of the world depending on advances
> made in all other parts of the world. The problem of designing smaller
> chips, for example, keeps getting harder, but a richer world can afford to
> spend more and more solving this problem. A local singularity, in contrast,
> would have a small group continue to make dramatic advances by substantially
> improving their own ability to make more advances. The abilities of such a
> group might quickly grow so large that they can essentially take over
> everything.
At that point, the coup-master either destroys the free-trade economy, or more likely does his best to control more and more of it. Basically, an isolated super-tech advance is just one more way for someone to try this sort of thing.
Is this something to worry about, that the world's most powerful countries might get tech-couped, and just roll over the rest of the globe in some strange new empire? This seems to be the center of your concerns, but I'm not sure that your comments do anything to prove that it's really impossible! Scary stuff, this, and if the existing "Powers That Be" find some reason to get seriously concerned about it, maybe *they'll* evolve into something as bad as a coup-master would have been! I take it your own emphasis on the likelihood of economic interdependence is meant as a way to forestall any scary tech-coup scenarios?
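By the way, to see what "substantially improving their own ability to make more advances" amounts to numerically, here's a throwaway Python sketch. The equations and every parameter in it are my own inventions for illustration, not anything from Robin's post. It contrasts plain proportional growth (the world economy compounding as a unit) with growth whose rate feeds back on itself:

    # Toy model: world economy grows in proportion to itself (dW/dt = k*W),
    # while a "local singularity" group's growth rate feeds back on its own
    # ability to grow (dL/dt = k*L**1.5). All numbers here are arbitrary.
    dt = 0.01      # simulation time step
    k = 0.5        # shared baseline growth rate (made up)
    world = 1.0    # world economy, compounding as a unit
    local = 1.0    # self-improving local group
    t = 0.0
    while t < 10.0 and local < 1e6:
        world += dt * k * world          # exponential: steady compounding
        local += dt * k * local ** 1.5   # super-exponential: blows up in finite time
        t += dt
    print("at t = %.2f: world = %.1f, local = %.1f" % (t, world, local))

The only point of the toy is that a feedback exponent above 1 is what makes "quickly grow so large" mathematically possible; whether any real group could sustain that kind of feedback is the whole question.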
>
> So manufacturing plants may slowly get smaller and better, without a
> sudden "assembler" revolution. Local space may stay un-colonized until
> we can cheaply send lots of mass up there. Software may slowly get
> smarter and be much smarter than people long before anyone bothers to
> make a single module that can pass a Turing test. And distant stars may
> not get colonized for a long long time.
Fortunately, the step-by-step nature of technological advance *does* tend to argue against the likelihood of an easy or thorough global coup. Actually, to be a bit more precise about it, surely the halting, trial-and-error process of technology argues against a global coup carried out solely or even primarily through tech advance! Here I go with a recollected science-fictional reference again; does anyone recall an Arthur Clarke short story where the high-tech military cruisers got all beat up by the relatively low-tech guys because the bugs hadn't been worked out yet? Inasmuch as AI would have to be a *practical* development, I'd tend to agree that it isn't some sort of "world takeover magic".
As to comments about manufacturing plants and space colonies, aren't these really quite separate issues? Here's an analogy. Suppose someone a hundred years ago had gotten quite upset about the prospect that wireless telegraphy, or radio, just might allow some empire to expand its armies farther than ever before, maybe even taking over the world! If you think about it, there must have been an actual day in history when the first army-issue radio transmitters appeared in the hands of field commanders; *that*, I suppose, would be the "Radio Singularity" -- which we've already passed, sometime early in our current, twentieth century! Suppose that this same concerned person had predicted that radio was too difficult, that its adoption couldn't happen for a long time because it would ruin the interdependence of the world! That would surely have been a wrong way of looking at things, wouldn't it?
In sum, concerns about technology-based coups don't seem to me a good reason for judging one way or the other whether any particular technology will be readily put into use. For nano-factories, what are their likely mechanics, and will some kinds be much easier to debug than others? For space colonies, do the nano-factories provide for economical lunar mines and resource use, so that people can actually build the things? Maybe political philosophy really does get in the way of thinking about the future, but why not at least take the prospects for breakthroughs into account as best we can?
David Blenkinsop <blenl@sk.sympatico.ca>
"We're going to do the same thing we do every day, Pinky, TRY TO TAKE OVER THE WORLD . . . "
I always wanted an excuse to use that line for a sig :^)