[p2p-research] Singularity > Climate Change > Peak Oil > Financial Crisis
Ryan
rlanham1963 at gmail.com
Tue May 25 03:24:40 CEST 2010
That same argument keeps being rewritten over and over again...
Sent to you by Ryan via Google Reader: Singularity > Climate Change >
Peak Oil > Financial Crisis via Early Warning by Stuart Staniford on
5/20/10
While lying awake late at night worrying about what kind of world my
children will inherit, I find it helpful to come up with schemas for
the most obvious and inevitable of the large societal problems. It
makes them seem slightly more manageable to place them in order of
importance, or time. Further, being clear on what are the biggest and
most important problems is an essential prerequisite to thinking about
solutions: these problems all interact, and solutions to the smaller of
them may not be radical enough to address the larger of them.
In this post, I would like to argue for the above ordering of problems.
I mean the '>' symbol in two senses: "A > B" meaning both "The main
impact of A will fall later in time than the main impact of B", and
also "A is a more serious and fundamental threat to humanity than B".
While a full explication of the arguments would occupy a number of
books, today you are going to have to make do with a single measly blog
post, albeit longer than usual.
Peak Oil > Financial Crisis
I think it's indisputable that we are in the middle of the financial
crisis right now. We had the main private sector crisis in 2008, now we
are finding out which sovereign governments are going to end up needing
to default, and perhaps that will induce a second wave of private
sector defaults. By contrast, I think the arguments for a very
near-term peak in oil supply have started to look quite weak. For one
thing, work on the expansion of Iraqi oil supply continues apace; and
even if Iraqi production couldn't expand much, it's likely the 2008
peak of oil supply will be exceeded.
However, I've never been able to get too terribly excited about the
financial crisis as a massive long-term threat to humanity. At the end
of the day, debt is not a physical quantity. The fact that humanity,
collectively, has written too many debt instruments means that we have
been too optimistic about the future, and created more promises than
can actually be serviced. However, that doesn't create any fundamental
physical constraint on our activities: it just means that the excessive
promises need to be renegotiated to be more in line with our actual
future capabilities. This process will be painful and difficult in the
short term, but I can't see how it poses any fundamental difficulty to
the continued operation of civilization. There have been financial
crises and sovereign debt defaults for many centuries, and we have
survived them: we will very likely survive this one too.
By contrast, peak oil, when it does come, represents a significant
physical constraint, and will require a large scale transformation of a
number of important infrastructure elements, one way or another, over
the course of a few decades. It's also an unprecedented situation -
nobody has written papers about the many previous oil peaks over the
centuries, because there haven't been any. Since infrastructures like
transportation are critical to
the operation of society, if we were to blow the handling of peak oil,
it could be quite dangerous.
Thus I argue that peak oil will probably come later than the financial
crisis, and that it is ultimately more important and threatening.
Climate Change > Peak Oil
At the same time, I don't believe peak oil by itself is really a mortal
threat, unless we seriously mishandle it. Fundamentally, our society is
so wealthy and creates such a large economic surplus relative to our
actual bodily needs that we have tremendous amounts of waste and
potential for conservation. We can perfectly well get to work, or the
grocery store, on scooters instead of in SUVs if we have to, or
electric hybrid bicycles if it comes to that. We can take fewer foreign
vacations, and we can video-conference instead of flying off on
business trips. We may not want to, we may bitch and moan, but it's
certainly possible to do these things rather than experience the end of
the world. Fundamentally, what the post-peak period requires is that we
get a few percent more efficient in our oil usage each year. We've
already proven our ability to do that in the late seventies and early
eighties.
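To put rough numbers on that claim, here is a minimal sketch of how the
required gains compound. The 3%/year post-peak decline rate is purely an
assumed illustration, not a forecast:

# Assumption: oil supply declines 3%/year after the peak, so oil
# efficiency (activity per barrel) must improve ~3%/year to compensate.
decline_rate = 0.03

for years in (10, 20, 30):
    remaining = (1 - decline_rate) ** years
    print(f"after {years} years: {remaining:.0%} of peak supply left,"
          f" so ~{1 - remaining:.0%} less oil per unit of activity")

After ten years that is roughly a quarter less oil per unit of economic
activity, and after twenty years roughly half - large, but of the same
order as the conservation response of the late seventies.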
So the real threat of peak oil is that we'll refuse to accept what
needs to be done, and instead start fighting resource wars and the like
(see Cheney, Dick, Iraq 2003). Converting ever-larger fractions of our
food supply to fuel is also a spectacularly dumb approach, and one we
can choose not to take. The issue is thus more
social/political/cultural than it is technological.
However, the great thing about peak oil, especially compared to climate
change, is that when it occurs (and even in the run-up as we saw in
2005-2007) it will call attention to itself in an impossible-to-ignore
way. When you have not quite enough oil, the price goes through the
roof, and everyone immediately realizes that there is a major
not-quite-enough oil problem, and begins thinking in terms of smaller
cars, hybrid cars, skipping the family vacation this year, etc. The
price feedback is immediate and relentless until enough conservation
has been achieved for however much oil is available this particular
year. And the same thing will be true all the way down - if at some
point we need to transition from gas scooters to electric scooters,
there will be $40/gallon gas pointing out to us the immediate urgency of
that step.
By contrast, climate change lacks the immediate personal feedback. We
all pump the greenhouse gases up into the air, and there is no
immediate consequence to us - no equivalent to paying a high price at
the fuel pump. Instead, the gases gradually accumulate and the climate
slowly warms, and the oceans slowly warm, and over the decades, the
floods, the deadly heatwaves, the hurricanes, the forest fires, the
disease outbreaks, the coastal cliff collapses, the species extinctions
and bizarre ecological catastrophes all get more and more common and
more and more serious. But because both the climate system and the
major infrastructure in the economy have decades and decades of lag
built in, by the time the problem is so incontrovertible that everyone
can agree on its true urgency, it will be very late, and we will
already be locked into an enormous amount of damage. Thus ultimately, I
think climate change poses the more serious threat because it doesn't
map so naturally to the way human incentive structures work.
And by the same token, I think the bulk of the climate change problem
will occur later than peak oil. I think it's pretty much inconceivable
that peak oil will occur later than around 2030, but climate change
will just really be getting going then, and will gradually turn into a
more and more furious rolling catastrophe, with the worst impacts in
the second half of this century, and getting more and more severe until
we finally get the situation under control (or succumb to it).
Singularity > Climate Change
But still, as enormous as the transformation required by climate change
is, I don't think it's the biggest monster under the bed. True, most of
our industrial society has been powered by fossil fuels for a couple of
centuries, and it's all we've ever known as individuals. But perfectly
good high-net-energy alternatives exist in modern wind, solar, and
nuclear power, and there's no in-principle reason we can't transition
to them, provided we make a serious all-out effort.
And the good thing is, although it's the largest such transition we've
ever attempted, it's fundamentally a kind of challenge that we in
Western Civilization recognize. We've created a problem, and the
solution requires technological innovation. We need better renewables,
smarter grids, more efficient electric cars. This stuff is fun for
techies to think about, and Silicon Valley is already on the job with
massive investment in clean-tech in the last five years. A defining
feature of western civilization is that we value innovation highly. The
patent was invented and became widespread in medieval Europe; with one
minor exception for the Greek city-state of Sybaris, there is no
evidence of earlier civilizations using patents. Braudel, in comparing
medieval European civilization to other world civilizations of the
time, points out that Europe was the only place with a clear concept of
fashion. Other civilizations simply didn't have the concept that it was
valuable to dress or decorate in a new and different way than your
forebears. So while other civilizations have certainly created many
innovations, none has institutionalized and glorified the innovation
process in the way that ours has. And to this day, across the
contemporary political spectrum, innovation is valued and seen as the
way to solve all our problems.
However, the problem that we are gradually innovating more and more of
our fellow citizens out of jobs, and thus out of any meaningful stake
in our society, is not one that obviously lends itself to innovation.
In that case, innovation looks to me like the problem, not the solution.
I have not yet talked much about the singularity on this blog, so let
me briefly recap the main ideas of the techno-optimist computer
scientists who have mostly thought about this, and then give my own
take.
The basic idea was first formulated by computer scientist and science
fiction author Vernor Vinge, but in recent years the most visible
exponent has been computer scientist and inventor Ray Kurzweil. I was
first introduced to these ideas via the latter's book The Singularity
is Near, which I take to be still a reasonably current exposition of
Kurzweil's thinking.
The argument goes like this:
- Throughout pre-history and history, the rate of evolution has
accelerated. Physical evolution invents new forms slowly; once humanity
came on the scene, prehistoric cultural evolution proceeded faster;
writing made things faster still; and now a globally integrated,
Internet-connected society innovates faster and faster. The rate of
change is now breathtaking.
- In particular, computers are getting faster and more capable. The
speed of physical computation is increasing exponentially according to
Moore's law, under which (roughly stated) the amount of computation
that can be done by one chip doubles every two years. There are no
major physical limitations that will prevent Moore's law from
continuing for decades into the future.
- Computers are now slowly getting smarter and smarter as we solve
particular problems one at a time. Eg, the voice recognition problem is
slowly coming under control, face recognition is starting to be a
commercial proposition, driving a vehicle is now at the
pretty-decent-research-prototype stage, etc.
- By the mid 2030s, available computers produced each year will exceed
the information processing capacity of the entire human population, and
by the mid 2040s it will exceed it by a factor of a billion.
- With ever-increasing progress in neuroscience, we will by then be
able to reverse-engineer the human brain and use computers to create
general human-quality intelligence, except that it will be able to run
far faster than the basic version biology has managed to produce to
date.
- Once machine intelligence surpasses human intelligence by a
sufficient margin, it will continue to accelerate the rate of
intellectual and technical progress at rates massively higher than at
present, and this acceleration will continue at ever greater rates. The
name singularity comes from the idea of a comparison with a black hole
where there is an event horizon beyond which one cannot see. Once
ever-accelerating machine intelligence takes over, things will happen
so fast and furiously that we today have no idea what will happen and
cannot predict the consequences.
- Nonetheless, you shouldn't worry about any of this, because we will
be able to use all this amazing technology to augment and transform
human consciousness. We will be able to reverse engineer how we work,
modify and improve on it, download ourselves into new/better/synthetic
bodies, cure world hunger, etc, etc, etc... So life will be better
overall.

Obviously, in compressing an entire long book into seven bullet points,
I am leaving out a variety of nuances and supporting lines of evidence,
which readers are welcome to explore on their own, or we can discuss
further on this blog in future. My overall reaction to the book was
first to feel incredibly depressed for a few weeks, and then to spend a
number of months of research gradually sorting out which parts I
believed and which I didn't. Herewith my take:
- (Accelerating technical progress). I think, with some caveats, this
is broadly qualitatively true. Our society does innovate faster than,
say, the Ubaid proto-civilization, and in turn the Ubaid were
undoubtedly innovating faster than Homo erectus. However, you can, and
Kurzweil does, overstate the issue. Probably the best metric of the
overall rate of technical progress in the industrial era is growth in
gross world product, and that has been a few percent a year throughout
that time - it is not dramatically accelerating recently, though
industrial growth is faster than pre-industrial growth.
- (Moore's Law). The historical rate of progress in computer speed is
incontrovertible, and the general technical consensus is that there is
no near-term barrier to its continuing (eg see this book for a summary
of the issues). I don't see a reasonable basis for questioning this,
though of course future events are always somewhat uncertain.
- (Computers getting smarter). This is incontrovertibly true. While the
speed with which this would happen was overstated in the early years of
AI, it is the case that with each passing decade more and more things
traditionally requiring human expertise can now be done by computers
(chess, voice recognition, driving, etc).
- (Available computer power will exceed human information processing
capability in 25 years or so). As a statement of raw computing power,
this seems plausible - a back-of-envelope check appears below - though
of course the further we try to extrapolate Moore's law, the less
certain the result.
- (Ability to reverse engineer human intelligence and produce a
synthetic equivalent also within about 25 years). Here I think Kurzweil
gets onto thinner and thinner ice. At the moment, we computer
scientists have no real idea how to produce anything like the
generality and flexibility of human reasoning, so it's going to take
major conceptual breakthroughs before we are able to do so. So I think
the time estimates here get much less certain than he indicates - raw
computation power alone doesn't make for a general flexible
intelligence, and fundamental breakthroughs are hard/impossible to
predict in advance. At the same time, it does seem reasonable to
project that the present general direction will continue, with more and
more tasks becoming automatable; and as neuroscience, cognitive
science, and computer science all advance, it seems likely there will
be major progress in these areas. However, I think the timing is much
harder to project.
- (Further acceleration and singularity). The further out we go, the
more speculative it all gets. However, I don't dispute the general
proposition that machine intelligence comparable to or better than
human intelligence will be a complete game-changer and things will
never be the same again and we can't predict what the world would look
like afterwards. That much seems elementary.
- (This will be good for the average human). Here I found Kurzweil's
reasoning very fantastical... In particular, I think he seriously
neglects the fact that medical progress is far slower than computer
progress, and there are excellent, unavoidable reasons for this: new
medical technologies require massive clinical trials to establish their
safety and efficacy, while new generations of chips and computers
don't. Therefore, progress on silicon intelligence will continue to be
much faster than progress on biological intelligence. This might not be
so good for many of the biological types...

The choice of "singularity" as a metaphor for all this is interesting,
I think. Falling into a black hole would not normally be considered a
pleasant or desirable prospect, but this was the metaphor picked by
people who think this will be a good thing.
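Since the raw-computing-power comparison is the most checkable claim in
the list above, here is a minimal back-of-envelope sketch of it. Every
number is a round assumption for illustration (the brain figure is
Kurzweil's own estimate; the 2010 shipped-capacity baseline is
invented), not a measurement:

import math

# Back-of-envelope check of the "computers exceed humanity" bullet.
# All numbers are round ASSUMPTIONS for illustration, not measurements.
BRAIN_OPS  = 1e16   # ops/sec for one human brain (Kurzweil's estimate)
POPULATION = 7e9    # people, circa 2010
OPS_2010   = 1e20   # computing capacity shipped per year, 2010 (invented)

humanity = BRAIN_OPS * POPULATION   # ~7e25 ops/sec, all brains combined

def year_reached(target, doubling_years):
    # Year when yearly-shipped capacity first reaches `target` ops/sec,
    # growing exponentially from OPS_2010 with a fixed doubling time.
    return 2010 + math.log2(target / OPS_2010) * doubling_years

for dbl in (2.0, 1.0):  # textbook Moore's law vs. Kurzweil's faster rate
    print(f"doubling every {dbl:.0f} yr: "
          f"parity ~{year_reached(humanity, dbl):.0f}, "
          f"billion-fold ~{year_reached(humanity * 1e9, dbl):.0f}")

Notice that no fixed doubling time reproduces both of Kurzweil's dates
(mid-2030s parity, mid-2040s billion-fold excess); his projection
requires the doubling time itself to keep shrinking, which is exactly
the part of the extrapolation that is most speculative.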
I think the key to analyzing the goodness, or otherwise, of these
trends is to look at why and how technologies get adopted. I've spent
the last decade working in technology startups building and selling new
computer technologies, so I think I have a decent understanding of
this. There are basically two main selling points for any new
technology:
1) This is whizz-bang cool and will enhance your social status to own.
and then, either:
2a) This will enhance your productivity and allow you to do more/better
of whatever it is you do in less time (for a utilitarian or business
proposition).
2b) This will be fun to do (for a consumer/entertainment proposition).
Usually point 1) and either 2a) or 2b) are required to close the sale.
Technologies do not get adopted because they generally improve human
welfare; they get adopted because they work on these points for
specific sets of customers. Thus corporate voice-recognition phone
systems did not come around because they improve all our lives, but
because they allow companies to handle more customers with fewer staff,
thereby providing a return on investment to the project. If in the
future we get automated vehicle-driving systems, the reasons will have
nothing to do with overall human welfare, but with the specific
profitability equations of trucking firms or taxi firms.
Likewise, new consumer technologies do not get deployed to the benefit
of poor people, they get deployed to the benefit of people who can and
will pay for them (aka customers).
And this will continue to be the case in the future.
So in my view, the dominant social effect of all this
ever-accelerating-integrated-globalized-world stuff is to create a
divide: if you are sufficiently smart/skilled/creative to stay employed
and take advantage of all the new toys, life gets better and more fun
and you feel more powerful and enabled. But if you are not sufficiently
smart/skilled/creative (or you just decide the game is fucked), then
you are out of a job and now you can't afford most of the fun stuff and
not only that, you live in the midst of a society that you know doesn't
have any use for you and doesn't value you.
And unfortunately, we have this:
[chart - image scrubbed from this text-only archive]
and this:
[chart - image scrubbed from this text-only archive]
So the fraction of people below the divide is growing steadily. Slowly,
but steadily. In fact, if you look at the trend, it's pretty close to
linear. Here is the employment/population ratio for men 25-54 again,
with a linear and quadratic fit. They barely differ:
[chart - image scrubbed from this text-only archive]
Given the way technology is adopted by businesses, one of the effects
of a singularity that I would expect is that the employment-population
ratio would fall to zero. Machine intelligence, if equally good, will
always displace human intelligence from the workplace because it
doesn't insist on going home at night, doesn't have any human rights,
and can be replicated in arbitrary quantities (meaning it will cost
much less) and be customized to the task at hand. If we extend the
employment/population ratio out to 2100 and extrapolate the trends, we
get this:
[chart - image scrubbed from this text-only archive]
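For anyone who wants to reproduce that extrapolation, here is a minimal
sketch of the method. The data points below are invented placeholders
for the actual employment/population series (the chart itself was an
image, scrubbed from this archive); only the technique - fit linear and
quadratic trends and extend them forward - is what the chart shows:

import numpy as np

# HYPOTHETICAL data standing in for the employment/population ratio of
# men 25-54; the values merely sketch a slow, roughly linear decline.
years = np.array([1970, 1980, 1990, 2000, 2010])
ratio = np.array([0.95, 0.91, 0.89, 0.89, 0.81])   # invented numbers

# Fit linear and quadratic trends, as in the chart above.
lin  = np.polyfit(years, ratio, 1)
quad = np.polyfit(years, ratio, 2)

for y in (2050, 2100):
    print(f"{y}: linear {np.polyval(lin, y):.2f}, "
          f"quadratic {np.polyval(quad, y):.2f}")

The point is that the two fits barely differ over the fitted range yet
can diverge substantially by 2100, so neither extrapolation deserves
much confidence on its own.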
The orange circle in that chart shows Kurzweil's approximate
singularity date, together with my contention that it will be signalled
economically by an employment/population ratio of zero. I don't find it
plausible that that
curve is going to bend over that far, but maybe. I wouldn't necessarily
have a great deal of confidence in a linear or quadratic extension
either - it's hard to believe that another few decades of Moore's law
won't have some pretty massive effects on the economy, and on
employment in particular.
My best guess is that a singularity of some kind is probably coming,
but slower than folks like Kurzweil think. However, just as the
gravitational tidal forces from a black hole tear you apart some time
before you reach the singularity, I think the tidal forces of
increasing automation are already at work and will become gradually
more stressful over the coming century.
Finally, I note that there are complex interplays between these various
threats.
For example, consider the effects of peak oil in the context of
increasing automation. Reversalist thinkers like the Archdruid or Jim
Kunstler believe that one effect of peak oil will be to cause the
replacement of machines by human labor again, on the theory that with
less energy we'll need more human labor, just as we did before the
fossil-fuel era. However, this certainly isn't evident in the data to
date, and it's not at all obvious that it follows. Consider, for
example, the decision whether to lay off the security guards and
replace them with security robots. This is a
decision that companies will take individually based on the economics
and quality of the security robots, but presumably over time, these
will improve and there will be more robots and fewer humans doing
physical security, and the employment-population ratio will drift a
little lower.
However, consider the implications for oil demand. An employed security
guard will be driving to work in his (or her!) pickup truck, and if you
pay him enough, he may even fly off for a yearly vacation, or to his
daughter's wedding on the other side of the country. If he's not
employed, he will probably use far less oil in those ways, and the
robot that replaces him will run on a little electricity and no oil.
So it's not at all clear that less oil implies less automation.
Similarly, addressing climate change by switching to renewable power
doesn't prevent continued automation either, because machines can be
fundamentally more efficient than humans at converting sunlight to work.
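To see why, compare the two sunlight-to-work chains with rough,
order-of-magnitude efficiencies (the specific percentages below are
assumptions, not precise measurements):

# Rough, illustrative efficiencies only - order-of-magnitude assumptions.
# Biological chain: sunlight -> crops -> food -> human muscle work.
photosynthesis = 0.01   # typical field crops capture ~1% of sunlight
muscle         = 0.20   # human muscle turns ~20% of food energy into work
bio_chain = photosynthesis * muscle                  # ~0.2%

# Machine chain: sunlight -> photovoltaics -> electric motor work.
solar_panel = 0.15      # circa-2010 commercial PV module
motor       = 0.85      # electric motor
machine_chain = solar_panel * motor                  # ~13%

print(f"biological chain: {bio_chain:.1%}")
print(f"machine chain:    {machine_chain:.1%}")
print(f"machines ahead by ~{machine_chain / bio_chain:.0f}x")

Even if each of those numbers is off by a factor of two, the gap is
wide enough that renewable power does nothing to blunt the economic
logic of automation.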
And so I think that, in the longer term, the threat to most of the nine
or ten billion of us there will soon be is that the economy simply
won't need us in order to operate. And if we address the financial
crisis, solve peak oil, and fix climate change with ever-increasing
amounts of cleverness and innovation, we will probably simply hasten
that day.