From: Tony Belding (tlbelding@htcomp.net)
Date: Mon May 04 1998 - 23:05:27 MDT
On 29-Apr-98, Dan Fabulich wrote:
>NOTE: Upon rereading this post, I've come to the conclusion that we're
>talking about slightly different things. So I'll try to clarify wherever
>possible.
I think you are probably right about that.
>Under my definition, "non-sentient AI" is oxymoronic. My understanding of
>what we were talking about were robots which would only do what they were
>programmed to do.
Hm... To me, if it has to be "programmed" in the conventional sense, then
it's not AI. Programming an AI should be a process of simply telling it what
you want, then the AI figures out how to do it.
To me, the main thing that distinguishes sentience is the source of the
robot's motivations. A sentient robot, like any worker, ultimately works for
its own benefit. A non-sentient robot works for its owner.
>It is profoundly unlikely (read: pigs will
>fly) that the RATIO of different types of goods robots and humans
>could create in a day would be exactly equal, across the board for every
>type of good. (It's not even the same among individual humans!) So humans
>will certainly have comparative advantage in some goods.
Eh? I don't see anything certain about that. I agree that the ratio will
vary among different types of goods and services, but I assume that the
advantage will practically always be in favor of the robots over old-fashioned
flesh-and-blood humans.
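To spell out what Dan's ratio argument amounts to, here is a toy sketch (all numbers invented for illustration): even if robots beat humans at *both* goods in absolute terms, unequal ratios leave humans a comparative advantage in one of them.

```python
# Toy illustration of the comparative-advantage claim.
# Units producible per day; figures are made up.
robot = {"widgets": 10, "code": 20}
human = {"widgets": 2,  "code": 1}

# Opportunity cost of one widget, measured in code forgone:
robot_cost = robot["code"] / robot["widgets"]   # 2.0 units of code per widget
human_cost = human["code"] / human["widgets"]   # 0.5 units of code per widget

# Humans give up less code per widget, so widgets are the good in which
# humans hold the comparative advantage, despite being slower at everything.
assert human_cost < robot_cost
```

Of course, comparative advantage only tells you what humans should specialize in, not whether the resulting wage is worth anything; that is really where we disagree.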
>Well, yes. So, here's my case for population growth, taken mostly from
>Julian Simon's "The Ultimate Resource 2."
>1) Axiom. Promoting the greatest happiness for the greatest number is an
>important moral good.
That is a very questionable axiom. I would contend that we want to promote
the greatest good for the greatest /fraction/ of people. In other words, if
you have 20 million people and 90% are happy, that's a better situation than
six billion people of which 20% are happy.
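The arithmetic behind that comparison, using the same made-up numbers, shows how the two moral metrics pull in opposite directions:

```python
# Toy comparison of "greatest number" vs. "greatest fraction".
# Populations and happiness rates are the illustrative figures above.

def happy_count(population, happy_fraction):
    return population * happy_fraction

small = (20_000_000, 0.90)      # 20 million people, 90% happy
large = (6_000_000_000, 0.20)   # six billion people, 20% happy

# By "greatest fraction", the small society wins (0.90 > 0.20).
# By "greatest number", the large one wins: 1.2 billion happy people
# versus 18 million, even though four in five of them are unhappy.
print(happy_count(*small))
print(happy_count(*large))
```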
If you take a moral (and therefore arbitrary) position that a large
population is better than a smaller one, with other things being equal, then
you are naturally going to want the creation of more people -- including
sentient robots. But if you follow that to its logical conclusion, you are
eventually going to run into hard limits: energy supplies, waste heat, raw
matter supplies. This is a serious concern, unless you can repeal the laws of
physics.
Another point is that there may be other moral imperatives to consider, which
are more important than happiness. Happiness is usually rooted in mere
biology, and IMHO is a shallow goal for human beings. But that is another
discussion entirely. It is probably outside the realm of economics.
>5) Therefore. If resources do not run out, population growth will result
>in greater wealth per capita.
Ah! But resources /do/ eventually run out. Note also that the existence of
AI can skew this assumption. Population growth may create an increase in
wealth per capita, but creation of AI workers can create a /greater/
increase, especially since your AI robotic workers don't count as "capita"!
If our goal is the greatest happiness for the greatest /fraction/ of the
population, the quickest way to achieve that is by adding non-consumer robots
to the economy. They don't expand the population, they don't consume or
increase overall demand, they only produce.
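A toy version of that per-capita arithmetic (all numbers invented):

```python
# Robot workers add to output but not to "capita"; extra humans add to both.

def wealth_per_capita(human_output, robot_output, humans):
    return (human_output + robot_output) / humans

base        = wealth_per_capita(100.0,  0.0, 100)  # 1.0 per person
with_robots = wealth_per_capita(100.0, 50.0, 100)  # 1.5: output up, capita flat
# Fifty extra humans producing the same 50 units raise the denominator too:
with_people = wealth_per_capita(150.0,  0.0, 150)  # back to 1.0 per person
```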
>When a resource becomes scarce, its price increases. As a result of this,
>fewer people buy the good, noting that it is not worth its price. As the
>price rises, we observe that the resource is conserved, by virtue of the
>fact that fewer people consider the good worth its price, so fewer people
>are prepared to use it, waste it, etc.
Yes. And this also implies that the quality of life of those people is
reduced. It is true that we have good economic mechanisms for dealing with
scarcity -- and no wonder! That's what any economy is based upon. Still,
that doesn't mean scarcity is good for the economy. For analogy, just imagine
you have a hill-climbing motorcycle. It's designed specially for climbing
hills, and it's *good* at it. That doesn't mean it will perform better every
time you pit it against a steeper hill!
>Note that we have to assume that people must work in order to get more
>money, if this process is to succeed. Otherwise, people will not bother to
>seek cheaper alternatives.
Say what? I don't follow you. You seem to think people who get their money
by other means than working are inclined to waste it recklessly. Of course,
a few people actually do that, but then so do some people who work for their
money. Go figure.
>When Julian Simon was writing, he argued that resources were actually
>infinite, and that those who were arguing for finite "scarce" resources
>were wrong. This may not be true, as written. However, it IS true that
>the amount of resources which are available to us is determined primarily
>by the technology which we develop, not by the remaining quantity of any
>particular resource.
I don't take such a rosy view. First of all, there is a limit to the
technology that can be developed. All technology is based on the laws of
science and our understanding of them. When we fully understand those laws
and have exploited them in every way that is practical, there will be no more
technology to develop. Then we will have access to all the resources that can
be accessed.
Advancing technology is something that only happens during very strange
circumstances. Right now we are in the middle of a technological revolution,
a brief period of great change and upheaval. Before this revolution began,
about five thousand years ago, people lived a simple hunter-gatherer
existence. They had lived that way for tens of thousands of years in a
*stable* condition. After our technological revolution (which we call
"civilization") is finished, we will reach another condition of stability in
which there will be little change over many thousands of years. That stable
condition will exist at an equilibrium point defined by both our final level
of technology and our total available "land" resources.
>are pretty certain about. I don't anticipate any perpetual motion machines
>in humanity's future. However, technology is pretty sneaky, and engineers
>are a crafty bunch. So while we may not be able to get around this
>particular limit, there are other mechanisms, other means of extracting
>energy, which may do the trick for us.
I don't like to bet our future on miracle technologies that aren't based upon
any known laws of science.
>I do not know if there is an ultimate limit to what technology can do for
>us. I suspect that there might be; but then, much of the evidence for
>ultimate limits which we have seen in the past has eventually been overcome.
>overcome.
In my experience, few of those "ultimate limits" were based on hard science
and hard numbers. It may be misleading to draw lessons from those failed
predictions.
>6) Observed. Resources will not run out if we do not run out of useful
>technology.
That depends on how you define resources. Imagine some oil is trapped deep
in an underground shale deposit, so that it cannot be extracted economically.
If a new technology allows you to get it, does that give us a new resource,
or does it merely allow us to *deplete* a resource that we always had?
Most new technology falls into this category: it allows us *access* to more
and more resources. But the hard limit still remains: there is only so much
oil locked in the earth. Likewise, our solar system only contains a certain
amount of raw matter, and the sun only produces a relatively fixed energy
output.
>Under this understanding, yes. However, I was answering a different point
>when I composed this. First, I was asked how the poor would keep their
>jobs in the face of automation.
It is a serious concern.
>I answered that so long as the robots were sentient, more jobs would be
>needed, so we could expect that humans would keep their jobs.
IMHO, that could be a cure worse than the disease. It's make-work! It
reminds me of a silly SF novel I read several years ago. Some crackpot
tinkerer had invented cheap fusion power, and it created such abundance that
the economy was inverted: people had to *consume* for a living. Every day the
drudgery of having to consume your quota of goods and services! Finally some
genius (though he was actually drunk at the time) figured out a solution. He
programmed his robots to consume! At first he was accused of *wasting* goods
and services (why did they care?), but then he showed that he had programmed
the robots to *enjoy* consuming these things. So, the problem was solved:
the robots could produce goods and services, and they could consume the
excess, so humans were able to simply take what they wanted.
It was absurd, of course. The common-sense solution would have been to cut
back on production. (The author weaseled out of that by hinting it was
impossible to cut back production, but without ever explaining why.) What
you are proposing is a variation on the same story. You're talking about
creating more workers to produce more goods, and also consume them, thus
creating a demand for more workers, so the cycle continues. It seems so
pointless!
Of course, that is somewhat how our economy has been *forced* to operate
until now, since only living, breathing human beings are able to do much
work. It's part of the human condition. But that's exactly what I want to
FIX, not embrace as a model for the future. We want to move beyond the
human condition, not enshrine it.
>Now, I must agree that increased non-sentient automation would lead to
>decreased prices and increased abundance, as you say. However, during the
>transition period, wages and prices might not be as flexible as we might
>like. If wages fell faster than the price level, or if wages couldn't fall
>with the price level (due to unions, minimum wage laws, etc.) then what
>we'd find is people starving in an age of plenty.
Yes. Marx vindicated. Of course, there must be some kind of economic
adjustment to deal with this problem. I just don't think you've hit upon it.
>Having sentient robots would ease this transition a lot, because they would
>create jobs in the process, rather than just filling them.
Are they really easing the transition, or are they perpetuating the
problem? This reminds me of the battle over free market reforms in Russia.
If you attempt economic shock therapy, the pain and dislocation can be
tremendous. On the other hand, if you try to "ease the transition" you may
create an economic legacy that will drag down the whole system for decades to
come.
>And finally, in the spirit of the post above, who is to say that the
>happiness of a sentient robot is not as great a moral goal as improving our
>own happiness?
Those sentient robots don't exist yet. I'm all in favor of increasing the
happiness of people who exist, but I don't see the point of creating more
people only for the purpose of making them happy. An unhappy person is a
problem to be solved. A happy person is a minor problem, since he presumably
could be made even happier than he is now. A non-existent person is a
non-problem.
>Presuming these robots would continue the human trend of
>producing more than they consumed, (a trend from which I see no reason a
>sentient robot would break,) adding sentient robots to our population would
>have the same effect, if not an amplified effect, as increasing our
>population would.
There you have it. You feel increasing our population is a good thing. I
don't. Look, maybe I haven't stated this clearly enough: I feel that
uncontrolled population growth is the ONLY serious problem facing our
civilization in the long run. PEOPLE are replicators with an exponential
growth rate, and they will eventually over-run any fixed resources. Creating
sentient robots is like throwing gasoline on a fire. RE: Malthus
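The underlying arithmetic is just compound growth against a fixed pool; here is a quick sketch (growth rate and pool sizes are arbitrary illustrations):

```python
# An exponentially growing population exhausts ANY fixed resource pool,
# and enlarging the pool only buys logarithmic time.

def years_to_exhaust(resource_units, consumption_per_capita,
                     population, growth_rate):
    """Years until cumulative consumption exceeds the fixed pool."""
    consumed = 0.0
    years = 0
    while consumed < resource_units:
        consumed += population * consumption_per_capita
        population *= (1 + growth_rate)
        years += 1
    return years

# At 2% annual growth, a MILLION-fold larger resource pool only buys
# roughly log(1e6)/log(1.02), i.e. about 700, extra years:
print(years_to_exhaust(1e12, 1.0, 1e6, 0.02))
print(years_to_exhaust(1e18, 1.0, 1e6, 0.02))
```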
>Well, it goes like this: as Lorrey will tell you, "There Ain't No Such
>Thing As A Free Lunch." Even if we do decrease the price level to
>something absurdly low, we should still expect to make micropayments in
>order to receive our daily bread.
That depends on the accounting cost of recording and tabulating those
micro-payments. Somebody once calculated that phone companies could make more
money by charging a flat monthly fee for long-distance service. Their billing
costs would be reduced tremendously, and just think of all the business they
would get! But of course, the phone companies didn't listen.
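The break-even arithmetic here is simple (all figures hypothetical):

```python
# If per-transaction billing overhead exceeds the payment itself,
# metered micropayments lose money and flat-rate pricing wins.

def net_revenue_metered(transactions, avg_payment, billing_cost_per_item):
    return transactions * (avg_payment - billing_cost_per_item)

def net_revenue_flat(subscribers, monthly_fee, billing_cost_per_bill):
    return subscribers * (monthly_fee - billing_cost_per_bill)

# A $0.02 micropayment carrying $0.05 of metering/billing overhead
# loses money on every single transaction:
print(net_revenue_metered(1_000_000, 0.02, 0.05))   # negative
# One flat bill per subscriber amortizes the overhead across the month:
print(net_revenue_flat(10_000, 20.00, 0.50))
```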
>Alternately, engines like advertising
>could take over, but advertising applies only to people who COULD buy other
>goods; if you're not working, then you have no income, and so advertising
>to you would be pointless.
Excuse me? "...if you're not working, then you have no income..."
There are certainly other ways to get income, rather than by working! The
whole heart of my economic concept is that most people who are workers today
should become capitalists: sellers of capital instead of sellers of labor
services.
>So I put forward that work must happen in order to keep things running.
>Maybe very LITTLE work needs to happen (maybe people could be paid a living
>salary for a tiny service, like some of the work http://www.distributed.net
>is doing), but work must happen, because it takes a little work in order to
>feed you; even if it's just a very little amount of work.
Well, there's no point in quibbling over whether work will be abolished
completely or merely reduced to a trivial role. I don't honestly expect work
to disappear totally. Nothing changes that fast, that completely. Even if it
makes no economic sense, there is tradition and nostalgia to deal with. But I
don't see how labor can continue to be the hub around which our economy
rotates.
Let me ask: Assuming we have AI robots, why must there be work in order to
feed me? If we have robots, they can raise food, they can process it, bring
it to me, and even cook it for me. I can pay for the food using income from
my investments. Where is the work?
>If wages and prices were fully flexible, then I'd probably just agree with
>you, argue that increased automation will decrease the compensation for
>labor but at the same time make that compensation exponentially more
>valuable, resulting in lots of people living on low wages and getting rich
>in the process. I have made that argument here before.
NO WAGES. You want lots of people living on NO WAGES, but getting their money
from other sources of income.
>>From a purely economic standpoint, my definition of sentience is very easy:
>>a sentient being is one who /wants/ things. A consumer.
>
>How is this different from an AI that "pretends" to want things? ;)
The difference is that you can tell your non-sentient AI to *stop* pretending
to want things, and it will do so.
>>Maybe art shouldn't even be done for economic reasons! Perhaps art should
>>be the province of non-workers who make their living by other means, so they
>>can indulge their creative urges without worrying about the bottom line.
>
>Whoever is feeding the artist has to worry about the bottom line.
Not if the artist is getting his income from some other source.
Thought experiment: Imagine you are H. Ross Perot. Imagine that your many
investments earn you well over a million dollars PER DAY. You decide to spend
your time composing music, because you like it. There is no reason for you to
care whether your songs will sell or not. They aren't going to put bread on
your table: it's simply not a concern.
>>There are certainly other ways they can do that, rather than by working.
>
>Such as? Some economists DEFINE the creation of wealth as economic "work."
The only definition of work that I care about is "the sale of labor
services". There are other economic resources you can sell, besides labor.
To wit: land, capital, and entrepreneurial ability. I'm particularly
interested in people selling capital, since capital is what's displacing labor
in the marketplace.
>What will they invest without wealth? Where will they get the wealth if
>they're not working?
They'll just have to get help from somebody. That might mean public
assistance, or it might mean some other way. Every society through history
has faced the problem of what to do with people who are unable (if only
temporarily) to contribute to the economy. It's a problem that predates
civilization: even Neanderthals cared for their infirm. I don't have any
magical answers to that ancient problem. But every society seems to come up
with some way to muddle through it.
Today we have Social Security. I personally don't like it. It's
redistribution of wealth by brute force, blatant socialist social
engineering that offends my libertarian sensibilities. Still, it seems to be
a popular program, and it has been going for decades. If nothing else works,
the coming economic dislocations can be addressed with heavy-handed government
intervention: tax the wealthy and pay "seed money" to the poor. That's
exactly what it will come down to, unless somebody works out a better plan.
Unpleasant as such a system might seem, I think it would be preferable to
creating make-work and a population explosion of sentient robots.
>You think managing finite resources is fun?
It's not the sale of labor services. Besides, a lot of folks seem to think
managing resources is fun. Witness all the computer games with such a theme:
Civilization, The Settlers, Sim City, etc.
-- Tony Belding http://hamilton.htcomp.net/tbelding/
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:49:03 MST