From: Samantha Atkins (samantha@objectent.com)
Date: Sat Mar 30 2002 - 02:54:54 MST
Dan Clemmensen wrote:
> A short nuclear war, maybe. long endless wars tend to force certain
> technologies. Eugene listed his requirements for halting the
> singularity: They include police-state-level control of all software
> development. I don't think that this is even feasible any more.
>
Probably not feasible in the sense of enforceable, but when has
that ever stopped the US government from trying anyhow? The War
on Drugs, the War on Poverty, the War on Terror - all ultimately
infeasible. Sometimes I am paranoid enough to foresee compilers
and debuggers requiring registration, much as guns do today.
>>> In my opinion, the correct extropian evaluation uses a very sharp
>>> discount rate, because we know that technology will advance much faster
>>> than most analysts believe. Therefore, we should favor energy generation
>>
>> We actually "know" no such thing. Look how easy it was to stop a lot
>> of high-tech startups in their tracks with the one-two punch of the
>> tech slump and 9/11 and the additional blow of Bushite nonsense.
>> Technology is not just technology, it involves people, politics and
>> business.
>
> Please specify how the dot.coms were contributing to the advancement of
> technology? The meltdown affected mostly dot.coms and certain
As mentioned in another post, I am not referring only to the
dot.coms. They were not the cause of the economic meltdown, and
they were certainly not the only businesses drastically curtailed
or ruined. Today it is still extremely difficult to get
investment money for any technology venture, whether seed capital
or continued financing. VCs and investors are skittish across the
board.
> telecommunications sectors. The dot.coms were mostly worthless. The
> telecomms that failed were all spending VC money, with five or ten
Those were simply the largest technological segments affected;
most high technology was hit to some degree. The dot.coms were
not as worthless as they are often portrayed. Many quite strong
and worthwhile companies died along with the rest or are barely
surviving, and quite a few had interesting technological edges
that could have made a difference in what was available for the
rest of us. The dot.coms got caught up in the hype, but I don't
believe they created it, or that the crash was altogether or even
mainly a healthy thing. The telecomms were not just spending VC
money, nor were some of the "dot.coms".
> companies for each new niche product. The consolidation when more than
> half of these companies failed did not affect technological progress of
> the other half except perhaps positively as the design teams
> consolidated. I worked for IPOptical until we ran out of cash: now I
Do you live in the same software world that I work in? It is
damn cold out here.
> work for HyperChip. IPOptical was a bit behind and was depending on
> outside hardware that the chip makers failed to deliver (overambitious.)
> Hyperchip built their own hardware but could not hire enough programmers
> because of the insane bubble. The bubble burst, HyperChip hired a bunch
> of us, and we'll deliver a massively scalable IP router this year.
>
I wish them luck. I know other similar companies and projects
with good solid people, technology and management that ate it.
>
>>> but is a direct consequence in my rational analysis that the singularity
>>> is highly likely to occur before 2020.
>>
>> I do not think this is highly likely any more. The world is a good
>> deal different than it was a couple of years ago and not altogether
>> for the good of such a prediction. I believe Singularity is possible
>> by 2020 but not so likely. If it does come before 2020 I would expect
>> it to come from nanotech advances first rather than AI. I don't
>> believe we have much of a workable plan for producing a SI that soon yet.
>
> We have no plan. Well, some folks have some plans, but my analysis does
> not depend on any particular plan. Rather, I feel that the increasing
> amount of computational horsepower means that the brilliance needed in
> the software design decreases over time. When a clever designer finally
That is an interesting idea, but after more than two decades in
software I really don't believe it. The radical increase in
computational horsepower has not produced a proportional increase
in the quality, capabilities or performance of software. If
anything, the added horsepower makes bad software designs and
tools more obvious, as the bloat and the spectacular blowouts are
magnified and yet more often accepted as "acceptable".
Self-programming systems may well be achieved, but not without
some well-designed seeds being built, along with excellent tools
to monitor and tweak the beginnings of the process. In the
meantime, conventional (non-self-programming) software has seen
no significant advance in programmer tools and abilities in over
a decade. I don't know how we are to develop good augmentation
wares without serious advances in programming practice,
human-computer interaction and other areas that are not exactly
accelerating at this time.
> does create the SI, we will see that in retrospect a truly brilliant
> design would have worked on hardware available several years earlier and
> that it is likely that an SI is achievable on today's available
> computing power.
>
At today's level I am not convinced we have human programming
tools and techniques sufficient to build the seed of an SI. I am
not even convinced we have enough to address the conventional
programming backlog. A lot of software people are out of work
even though the huge needs have neither decreased nor been
addressed since the tech meltdown. Many of us who are employed
are fighting to keep our companies afloat with far too few
remaining staff, and find ourselves with little time for
innovation.
>
>>
>>> This does not mean that I'm
>>> waiting for the tooth fairy to bring about the singularity. It does mean
>>> that I'm no longer interested in the power debate.
>>
>> Why? Because the SI (supposedly) will figure it all out for us as
>> soon as it kicks in? This seems very dangerous to me.
>>
>
> The SI will either be benign or not. If benign, It will indeed "figure
> it all out." But an SI may very well be human-computer collaboration,
> or may incorporate humans in some way.
>
I think that is likely for the short term, though I would not
expect it to reach SI levels until the humans have uploaded.
> What part of this seems dangerous? I think a human race without SI is
> increasingly dangerous. We've somehow managed to avoid nuclear war
> so far. but nanotech and some forms of biotech look threats that may
> be beyond practical human control. The lesson of 9/11 is that society
> makes increasing power (in the physics sense of the word) available for
> any determined individual to use. This will accelerate.
>
It seems dangerous to me to postpone dealing with problems today
because they will supposedly be dealt with better in some
hypothetical future state. I have no argument with the claim that
the human race plus SI is much more survivable and an immensely
improved state. My only beef is with relying too heavily on the
SI to solve things BEFORE it exists, or leaning too far toward
assuming SI within x years. I certainly find myself doing that
sometimes.
- samantha
This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:13:08 MST