<head>
<script>
	// Send the chosen expert's distribution parameters up to the parent
	// frame, which updates the on-screen distribution widget for question Q6.
	function loadDistribution(name, mean, stdDev) {
		var args = {
			"caption": "Loaded our interpretation of " + name + "'s probability distribution.",
			"Q6.mean": mean,
			"Q6.stdDev": stdDev
		};
		top.loadData(args);
	}
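
	/*
	 * A minimal sketch of what the parent frame's loadData might do. This is
	 * an assumption for illustration only; the real implementation lives in
	 * the top-level page, not in this file, and exampleLoadData is a
	 * hypothetical name. Presumably loadData shows the caption and copies
	 * each "Question.field" entry into the matching form input.
	 */
	function exampleLoadData(args) {
		for (var key in args) {
			if (key === "caption") {
				// Tell the user which expert's distribution was loaded.
				alert(args[key]);
			} else {
				// e.g. "Q6.mean" maps to the input element named "Q6.mean".
				var field = document.getElementsByName(key)[0];
				if (field) {
					field.value = args[key];
				}
			}
		}
	}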
</script>
</head>
<body>
<P CLASS="western" STYLE="margin-bottom: 0in">Now that we have your
probability distribution for when neuromorphic AI will be created,
the next step is to pick a probability distribution for the creation
of non-neuromorphic human-level AI &mdash; that is, human-level AI
designed either by implementing a theory of intelligence that works
or by reverse-engineering the brain at some broad level, rather than
by just directly copying the brain in as much detail as possible and
running it as software. Once again, assume no major disruptions to business as usual.</P>
<UL>
	<LI><P CLASS="western" STYLE="margin-bottom: 0in">
	<B>Claim:</B> &quot;Am
	I disappointed by the amount of progress in cognitive science and AI
	in the past 30 years or so? Not at all. To the contrary, I would
	have been extremely upset if we had come anywhere close to reaching
	human intelligence &mdash; it would have made me fear that our minds and
	souls were not deep. Reaching the goal of AI in just a few decades
	would have made me dramatically lose respect for humanity, and I
	certainly don't want (and never wanted) that to happen. Do I still
	believe it will happen someday? I can't say for sure, but I suppose
	it will eventually, yes. I wouldn't want to be around then, though.
	Indeed, I am very glad that we still have a very very long ways to
	go in our quest for AI.&quot;<BR>
	<B>Implication:</B> Human-level AI is not
	likely to be developed in the next 60 years.
	<input type="button" onclick="loadDistribution('Hofstadter', 2.17, 0.1);" value="Load distribution"><BR>
	<B>Source:</B> Hofstadter, Douglas R. &quot;An Interview with Douglas R.
	Hofstadter, following 'I am a Strange Loop'&quot; Tal Cohen's
	Bookshelf. 11 June 2008. Retrieved 9 Aug. 2008
	&lt;<FONT COLOR="#000080"><U><A TARGET="_blank" CLASS="western" HREF="http://tal.forum2.org/hofstadter_interview">http://tal.forum2.org/hofstadter_interview</A></U></FONT>&gt;.</P>
	<LI><P CLASS="western" STYLE="margin-bottom: 0in">
	<B>Claim:
	</B>&quot;Fundamental conceptual advances are required to
	reach human level AI. Maybe we'll have it in five years, maybe it
	will take 500 years, although I doubt it will take that
	long.&quot;<BR>
	<B>Implication: </B>Because uncertainty is so great,
	we should have wide confidence bounds, but not too wide.
	<input type="button" onclick="loadDistribution('McCarthy', 2.30, 0.2);" value="Load distribution"</input><BR>
	<B>Source: </B>McCarthy, John. (2002). &quot;Forrest Sawyer, John McCarthy
	respond to Ray Kurzweil; Kurzweil answers McCarthy&quot;. The
	Reality Club: Ray Kurzweil: The Singularity.
	&lt;<U><A TARGET="_blank" CLASS="western" HREF="http://www.edge.org/discourse/singularity.html">http://www.edge.org/discourse/singularity.html</A></U>&gt;
		</P>
	<LI><P CLASS="western" STYLE="margin-bottom: 0in">
	<B>Claim:</B>
	&quot;Fully intelligent robots before 2050&quot;<BR>
	<B>Implication:</B>
	The probability of non-neuromorphic human-level AI builds up until 2050.
	<input type="button" onclick="loadDistribution('Moravec', 1.954, 0.05);" value="Load distribution"><BR>
	<B>Source:</B> Moravec, Hans.
	<U><A TARGET="_blank" CLASS="western" HREF="http://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187">Mind Children: The Future of Robot and Human Intelligence</A></U>.
	Cambridge, MA: Harvard UP, 1990.</P>
</UL>
<UL>
	<LI><P CLASS="western" STYLE="margin-bottom: 0in">
	<B>Claim:</B>
	Extrapolations from the visual computation of the retina and Deep
	Blue/Kasparov put human brain capacity at around 100 TFLOPS (10<SUP><FONT SIZE=2 STYLE="font-size: 9pt">14</FONT></SUP>
	floating-point operations per second), about ten times slower than the
	fastest supercomputer in 2008, IBM's Roadrunner, and about 400 times
	faster than a PlayStation 3 (these ratios are unpacked below).<BR>
	<B>Implication:</B> We would already
	be able to functionally simulate the human brain on today's
	supercomputers, if we had the necessary knowledge.
	<input type="button" onclick="loadDistribution('Moravec', 1.845, 0.05);" value="Load distribution"</input><BR>
	<B>Source:</B>
	Moravec, Hans. &quot;When will computer hardware match the human
	brain?&quot;<BR><I>Journal of Evolution and Technology</I> 1 (1998):
	1-12. &lt;<FONT COLOR="#000080"><U><A TARGET="_blank" CLASS="western" HREF="http://www.transhumanist.com/volume1/moravec.htm">http://www.transhumanist.com/volume1/moravec.htm</A></U></FONT>&gt;.
		</P>
</UL>
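<P CLASS="western" STYLE="margin-bottom: 0in">To unpack those ratios (approximate
peak figures implied by the claim, not exact benchmarks): Roadrunner ran at
roughly 10<SUP>15</SUP> FLOPS, ten times the estimated 10<SUP>14</SUP>, while a
PlayStation 3 manages roughly 2.5 &times; 10<SUP>11</SUP> FLOPS, about 1/400th
of 10<SUP>14</SUP>.</P>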
<UL>
	<LI><P CLASS="western" STYLE="margin-bottom: 0in">
	<B>Claim:</B>
	&quot;This paper outlines the
	case for believing that we will have superhuman artificial
	intelligence within the first third of the next century. I
	would all-things-considered assign less than a 50% probability to
	superintelligence being developed by 2033. I do think there is great
	uncertainty about whether and when it might happen, and that one
	should take seriously the possibility that it might happen by then,
	because of the kinds of consideration outlined in this
	paper.&quot;<BR>
	<B>Implication:</B> There is a substantial but less
	than 50% chance that human-level AGI will be developed by
	2033.
	<input type="button" onclick="loadDistribution('Bostrom', 2.0, 0.2);" value="Load distribution"><BR>
	<B>Source:</B> Bostrom, Nick. &quot;How long before
	superintelligence?&quot; <I>International Journal of Futures Studies</I>
	2 (1998): 1-13. Retrieved 8 Aug. 2008
	&lt;<U><A TARGET="_blank" CLASS="western" HREF="http://www.nickbostrom.com/superintelligence.html">http://www.nickbostrom.com/superintelligence.html</A></U>&gt;.</P>
	<LI><P CLASS="western">
	<B>Claim:</B> It is easier to build AI based on understanding the general
	principles of the brain (such as how the neocortex works) than by
	painstakingly copying all of its detail.<BR>
	<B>Implication:</B> Non-neuromorphic AI will come before neuromorphic AI.
	<input type="button" onclick="loadDistribution('Hawkins', 2.0, 0.1);" value="Load distribution"</input><BR>
	<B>Source:</B> Hawkins, Jeff. (2004). <U><A TARGET="_blank" CLASS="western" HREF="http://www.amazon.com/Intelligence-Jeff-Hawkins/dp/0805078533/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1242196282&amp;sr=1-1">On
	Intelligence</A></U>. Times Books.</P>
	<LI><P CLASS="western" STYLE="margin-bottom: 0in">
		<B>Claim: </B>In
	the same way that it was easier to build an airplane (an abstract
	implementation of flight) than an artificial bird (a precise copy of
	nature's prior implementation of flight), it will be easier to build AI
	based on a theory of intelligence (non-neuromorphic) than by copying all
	the complexity of the brain (neuromorphic).<BR>
		<B>Implication: </B>Whenever
	neuromorphic AI arrives, non-neuromorphic AI is likely to be
	completed first.<BR> 
		<B>Source: </B>Various.
	</P>
</UL>
<!--<P CLASS="western" STYLE="margin-bottom: 0in">Next we take a look at
the influence of other relevant factors on the probability that AI
actually gets built this century. Essentially, we look at one main
factor that would <B>prevent</B> human-level AI -- <B>large global
catastrophes</B> -- and another factor that would <B>encourage</B>
human-level AI, successful <B>human intelligence enhancement</B>.
</P>-->
</body>