I suspect that an entity driven by values that were trivial to
translate into micro-level descriptions of the value-optimal physical
state would quickly succumb to competition from the strategies of
entities with more complex values and more unpredictable outcomes.
An entity with more complex values would not find it trivial to do
calculations of optimality, and would likely refrain from destroying
computing hardware for fear of inadvertently introducing a suboptimal
overall state, particularly since market interactions allow one to
freely exploit as much computing power as one can pay for (and I'm
including human attention and cognition in here as well... many of
us on the list are professionals paid for our personal computing
time).
> That depends on what the values are. But whatever they are, we can
> be pretty sure they are more likely to be made real by a
> superintelligence who holds those values than by one who doesn't.
> (Unless the values intrinsically involve, say, respect for
> independent individuals or such ethical stuff.) The superintelligence
> realizes this and decides to junk the other computers in the
> universe, if it can, since they are in the way when optimising the
> SI's values.
I don't know. I see existing hardware as a resource to be used,
not as raw feedstock atoms. People fantasize a lot about ab-initio
scenarios, but ab-initio stuff on a large scale is usually much
less economical than bootstrapping from real, existing computational
resources. Current economic theory (I still favor the Austrian
school) looks abstract, general, *and* solid enough to me that I
doubt superintelligences are going to be able to evade its dictates.
-- Eric Watt Forste ++ arkuat@pigdog.org ++ expectation foils perception -pcd