From: ps udoname (ps.udoname@gmail.com)
Date: Fri Sep 08 2006 - 12:56:11 MDT
Hey everyone,
I hear there has been turmoil on this list recently, with people getting
kicked off for criticising FAI. That makes this an interesting time for me
to join, especially as I think FAI might be a bad way to approach things, for
the following reasons (I realise they have probably been raised before):
1) I think Roger Penrose's idea that the mind somehow relies on quantum
effects could be correct, and if it is, that would make AI on a classical
computer impractical.
2) Assuming AI on a classical computer is possible, do you actually know
what ethics to give the AI? Utilitarianism would end with humanity being
turned into computronium, not in the sense of being uploaded but in the sense
of being replaced with something happier.
3) Assuming you have the ethical theory sorted, it seems to me (not that I
know much about this) that programming seed AI might be quite easy with
brute force, but programming FAI is incredibly hard.
4) Even if FAI is the best idea, why SingInst and not university AI
research?
Instead, I think brain-computer interfacing might be a better idea. AI
attempts should not give the AI direct control over anything, and the AI
should be asked how to bootstrap humans. A big off button would also be a
good idea.
I could say more about myself, but I might as well do the questionnaire.
> 1) Age:
20ish (this way the post will be accurate for years)
> 2) Sex:
male
> 3) Highest education attained:
> o High School
> o Bachelors
> o Masters
> o Doctorate
> 4) Occupation:
> 5) Subject of formal work/study:
> 6) Subject of informal/independent work/study (if applicable):
I'm doing a maths degree.
> -Programming-
>
> 7) a) Computer programming experience/ability
Useless. I can't debug.
> -SL4-
>
> 8) Gateway to SL4 (e.g. technical, social, ethical/political):
technical
> 9) When do you expect the "singularity" to occur? Pick a specific
> year, if your expectation is a range of years then respond with the
> average of the range (or, the year you think is most probable)
2030 - could vary a lot
> 10) How hard a take-off do you "expect"? Please characterise the
> duration between clearly-below-human AI and
> clearly-above-current-human intelligence.
Clearly below human in all respects?
Once there is superhuman AI and nanotech, days at most until everyone is
uploaded.
> 11) What do you expect the greatest existential threat will be before
> the singularity?
Biowarfare, perhaps.
> 12) How often do you contribute to the SL4 list?
> 0 (pure lurker) - 5 (reply to every topic several times).
> 13) Ever donated materially to SIAI?
Newbie.
> -Misc-
>
> 14) Where are you on the political compass?
> (If you are unsure there is a questionnaire here:
> http://politicalcompass.org/questionnaire )
> Y-axis (Libertarian=-2 Centre=0 Authoritarian=2) ?
> X-axis (Left=-2 Centre=0 Right=2) ?
Y-axis: -2
X-axis: -1 to 2
> 15) Are you an avid sci-fi fan?
yes
> 16) Vegan / Vegetarian?
no
> 17) Do you have significant Ashkenazi genetic heritage?
Why is this important?
> Thanks for completing the questionnaire. Please remember to send the
> results to joel.pitt@gmail.com and not to the SL4 list.
Oops.