I haven't read the paper, but my interpretation of the abstract is that
instead of just extrapolating average lifespans, they looked at the
mortality rate at each age and extrapolated that. So they looked at
the mortality rate for, say, 70-year-olds (i.e. what fraction of
70-year-olds die in a year) for 1950, 1960, 1970, and so on, and
extrapolated the trend into the 21st century. They do this for each
age bracket.
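As a rough illustration of that step (not the authors' actual method,
and with made-up numbers), here is a minimal sketch: fit a log-linear
trend to the historical rate for one age bracket and read it off at a
future year.

import math

years = [1950, 1960, 1970, 1980, 1990, 2000]

# Hypothetical annual mortality rates for 70-year-olds (fraction of
# 70-year-olds dying per year), invented to show a steady decline.
rates_age_70 = [0.050, 0.045, 0.041, 0.037, 0.033, 0.030]

def extrapolate_rate(years, rates, target_year):
    # Least-squares fit of log(rate) against calendar year, then
    # evaluate the fitted line at target_year.
    logs = [math.log(r) for r in rates]
    n = len(years)
    mean_y = sum(years) / n
    mean_l = sum(logs) / n
    slope = (sum((y - mean_y) * (l - mean_l) for y, l in zip(years, logs))
             / sum((y - mean_y) ** 2 for y in years))
    intercept = mean_l - slope * mean_y
    return math.exp(intercept + slope * target_year)

print(extrapolate_rate(years, rates_age_70, 2070))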
The result is an extrapolated mortality profile for any year in the
future. They can then use this to calculate the "expected lifetime" for
that year by the usual convention of applying that single year's
mortality rates across every age.
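A minimal sketch of that convention, again with an invented mortality
profile: walk a hypothetical cohort through the profile and add up the
years lived.

def period_life_expectancy(mortality_rate_at_age):
    # mortality_rate_at_age[a] is the fraction of a-year-olds who die
    # during the year; walk a cohort of size 1.0 through the profile.
    surviving = 1.0
    expected = 0.0
    for q in mortality_rate_at_age:
        # Survivors live the full year; those who die get roughly half.
        expected += surviving * (1 - q) + surviving * q * 0.5
        surviving *= (1 - q)
    return expected

# Crude invented profile: mortality rising about 9% per year of age.
profile = [min(0.0005 * 1.09 ** age, 1.0) for age in range(120)]
print(period_life_expectancy(profile))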
Apparently this technique produces a smaller expected lifespan than
extrapolating the average lifespans themselves, but with less uncertainty.
They also noted that their predictions are for longer lifetimes than
the official government predictions, but it's not clear what method the
government uses.
Their result of a 10-year increase over some 70 years sounds pretty low.
I posted some of the historical data a few weeks ago showing about 0.1 to
0.3 years of lifespan improvement per year; over 70 years that works out
to roughly 7 to 21 years, so their result is at the low end of that
range. And of course if we see some technological
revolutions that completely change our understanding of aging, things
could change dramatically.
One thing they could do with this technique (it's not clear whether they
are doing this) is to extrapolate the mortality rates forward and answer
the question: what is the expected lifetime of a person who is X years
old today? They would use the mortality rate for X-year-olds in 2000, for
(X+10)-year-olds in 2010, for (X+20)-year-olds in 2020, and so on.
This would be a more meaningful number in terms of an actual prediction
for how long people can expect to live, again assuming steady growth.
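Here is a sketch of that cohort-style calculation, assuming one already
has some extrapolated rate as a function of age and calendar year (the
rate function below is invented purely for illustration):

def cohort_life_expectancy(current_age, start_year, rate_fn, max_age=130):
    # Follow one person forward in time, using the mortality rate for
    # their age in the calendar year they actually reach that age.
    surviving = 1.0
    expected = float(current_age)   # years already lived
    for age in range(current_age, max_age):
        year = start_year + (age - current_age)
        q = min(rate_fn(age, year), 1.0)
        expected += surviving * (1 - q) + surviving * q * 0.5
        surviving *= (1 - q)
    return expected

# Invented rate function: rises with age, falls 1% per calendar year
# to mimic steady progress against mortality.
def rate_fn(age, year):
    return 0.0005 * 1.09 ** age * 0.99 ** (year - 2000)

print(cohort_life_expectancy(30, 2000, rate_fn))

Under steady improvement this gives a longer answer than the single-year
figure, since the cohort benefits from future declines in mortality.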
Hal