From: Robin Hanson (rhanson@gmu.edu)
Date: Sat Nov 10 2007 - 13:40:34 MST
From a decision-theory perspective, the odds of AGI would have to be incredibly small to justify the current low level of Friendly AI funding.
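To make the decision-theory point concrete, here is a minimal break-even sketch in Python. Every number in it is an illustrative assumption of mine, not a figure from this post:

    # Break-even sketch for the funding argument. All numbers are
    # illustrative assumptions, not figures from the original post.

    p_agi = 0.01       # assumed probability of AGI arriving this century
    stakes = 1e12      # assumed value (arbitrary units) of Friendly AI
                       # going right rather than wrong, given AGI arrives
    funding = 1e6      # assumed current annual Friendly AI funding

    expected_stakes = p_agi * stakes
    print(f"expected value at stake: {expected_stakes:,.0f}")
    print(f"current funding:         {funding:,.0f}")

    # How small would P(AGI) have to be before the current funding
    # level looks proportionate to the expected stakes?
    break_even_p = funding / stakes
    print(f"break-even P(AGI):       {break_even_p:.2e}")

With these assumed inputs, the funding level looks adequate only if P(AGI) is below one in a million; the qualitative point survives large changes to the specific numbers.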
You've probably heard the common arguments for AGI; at this point the discussion is mostly about debunking the counter-arguments.
1. It's already been pointed out that the track record of human invention matching or outdoing evolution, when "compactness" is not a criterion, is very good. You've heard the flight analogy; allegedly many experts were surprised by the Wright Brothers.
2. When human invention does match or exceed evolution, the crossover is usually sudden.
3. Adjust for overconfidence bias: if an expert claims 95% confidence that AGI won't happen, the true probability is likely lower than that, unless the estimate comes from a larger, well-calibrated model (which it doesn't). Calibration studies routinely find that claims made at 95% confidence are correct less often than 95% of the time.
4. Some people's algorithm seems to be, "if it hasn't happened in the last X years, then surely it won't happen in the next X years." This is a *terrible* algorithm; a toy demonstration follows this list. ...
5. Another poor algorithm: "If someone predicts X will happen in 50 years, and it doesn't happen, then that means it will surely never happen." ...
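To see why rule #4 fails, here is a toy sketch. The flight dates are historical, but applying the rule to them is my own illustration:

    def wont_happen_in_next_x_years(happened_in_last_x_years: bool) -> bool:
        """The criticized rule: 'if it hasn't happened in the last X
        years, then surely it won't happen in the next X years.'"""
        return not happened_in_last_x_years

    # Apply the rule at the start of 1903 with X = 50: heavier-than-air
    # powered flight had not happened in 1853-1902, so the rule predicts
    # no flight before 1953.
    prediction = wont_happen_in_next_x_years(happened_in_last_x_years=False)
    print("No flight before 1953?", prediction)  # True -- and the Wright
                                                 # Brothers flew in Dec 1903.

Rule #5 fails the same way: an event arriving later than a specific forecast is evidence about timing, not proof of impossibility.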