From: Jake Witmer (jake.alfg@yahoo.com)
Date: Tue Jun 21 2011 - 04:15:10 MDT
I address your points in [bracketed text below].
Jake Witmer
312.730.4037
Skype: jake.witmer
Y!chat: jake.alfg
"The most dangerous man to any government is the man who is able to think things
out for himself, without regard to the prevailing superstitions and taboos. Almost
inevitably, he comes to the conclusion that the government he lives under is
dishonest, insane, and intolerable." -H. L. Mencken
"Let an ultraintelligent machine be defined as a machine that can far surpass all the
intellectual activities of any man however clever. Since the design of machines is one
of these intellectual activities, an ultraintelligent machine could design even better
machines; there would then unquestionably be an 'intelligence explosion,' and the
intelligence of man would be left far behind. Thus the first ultraintelligent machine is
the last invention that man need ever make."
-I. J. Good
--- On Tue, 6/21/11, DataPacRat <datapacrat@gmail.com> wrote:
From: DataPacRat <datapacrat@gmail.com>
Subject: [sl4] Friendly AIs vs Friendly Humans
To: sl4@sl4.org
Date: Tuesday, June 21, 2011, 6:36 AM
Since this list isn't officially closed down /quite/ yet, I'm hoping
to take advantage of the remaining readers' insights to help me find
the answer to a certain question - or, at least, help me find where
the answer already is.
My understanding of the Friendly AI problem is, roughly, that AIs
could have all sorts of goal systems, many of which are rather
unhealthy for humanity as we know it; and, due to the potential for
rapid self-improvement, once any AI exists, it is highly likely to
rapidly gain the power required to implement its goals whether we want
it to or not. Thus certain people are trying to develop the parameters
for a Friendly AI, one that will allow us humans to continue doing our
own things (or some approximation thereof), or at least trying to avoid
the development of an Unfriendly AI.
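[A toy illustration of the "rapid self-improvement" loop you describe, in Python (my own sketch, not anything from the FAI literature; the recurrence and the growth exponent k are made-up assumptions). The point it shows: if capability feeds back into the rate of capability gain, a trajectory with k > 1 runs away, which is the "intelligence explosion" of Good's quote above, while k <= 1 does not.

# Crude self-improvement loop: iterate c_{t+1} = c_t + c_t**k.
# k is a made-up parameter for how strongly current capability
# accelerates further improvement.
def capability_trajectory(c0, k, steps):
    traj = [c0]
    for _ in range(steps):
        c = traj[-1]
        traj.append(c + c ** k)
    return traj

for k in (0.5, 1.0, 1.5):
    print("k=%s:" % k,
          [round(c, 1) for c in capability_trajectory(1.0, k, 10)])

For k = 0.5 growth is slow and polynomial; for k = 1.5 the numbers explode within a handful of iterations. Nothing about real AI hinges on this toy model; it only makes the feedback-loop argument concrete.]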
From what I've overheard, one of the biggest difficulties with FAI is
that there are a wide variety of possible forms of AI, making it
difficult to determine what it would take to ensure Friendliness for
any potential AI design. [I think this is an accurate view. On this point, I strongly recommend the book "On Intelligence" by Jeff Hawkins, if you have not already read it.]
Could anyone here suggest any references on a much narrower subset of
this problem: if we limit the form of AI designs being considered to
human-like minds (possibly including actual emulations of human
minds), is it possible to solve the FAI problem for that subset - or,
put another way, instead of preventing Unfriendly AIs and allowing
only Friendly AIs, is it possible to avoid "Unfriendly Humans" and
encourage "Friendly Humans"?
[The basic goal of any legal system is to do this. The failure of a legal system (such as the United States legal system) produces increasing levels of human-to-human parasitism, and thus suffering. Extreme examples of completely bad (amoral) legal systems were the Communist systems of the USSR and China. --Without protection of our bodies (including all subsets of them, down to the atomic level) we have absolutely no freedom. This is difficult for the religious to comprehend, and that lack of understanding leaves them open to attack by political sophistry and bigotry. (Perhaps AGI or strong AI will be immune to these failures of human thought, in the same way many libertarians have become immune to them --by education. Also, machines will likely need to be educated only once, unlike humans, who do not have stable, reproducible copies of their neocortical hierarchies.)]
If so, do such methods offer any insight
into the generalized FAI problem?
[I think so. The proper legal system of the USA is the jury-based system. By inserting randomly selected evaluators of suffering with the power to veto punishment, our otherwise sociopathic legal system acts with compassion. As the American jury-based legal system has been incrementally circumvented and destroyed (licensing of lawyers, 1832; voir dire "jury selection," 1850; elimination of proper jury instruction, 1895; elimination of Fourth Amendment defenses due to drug prohibition, 1910; silencing of constitutional defense arguments, 1960s), the system and the actors within it have acted in increasingly sociopathic ways. For a detailed account of the incremental destruction of the jury, please check out http://www.fija.org and http://www.isil.org. Also very good in showing how limits on the power to punish reduce the potential for harm is http://www.hawaii.edu/powerkills ]
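[A toy Monte Carlo sketch of the veto point above (mine; every rate in it is invented purely for illustration): if each of twelve randomly drawn jurors independently recognizes and vetoes an unjust punishment with some probability, the chance that an unjust prosecution survives all twelve falls off geometrically. That is the mechanical sense in which random evaluators with veto power limit the power to punish.

import random

# Fraction of unjust prosecutions that still end in punishment when any
# single juror's veto blocks the case.  All parameters are made up.
def surviving_unjust_fraction(n_cases, p_unjust, p_veto, jurors, seed=0):
    rng = random.Random(seed)
    unjust = punished = 0
    for _ in range(n_cases):
        if rng.random() < p_unjust:  # this prosecution is unjust
            unjust += 1
            vetoed = any(rng.random() < p_veto for _ in range(jurors))
            if not vetoed:
                punished += 1
    return punished / max(unjust, 1)

print(surviving_unjust_fraction(100000, p_unjust=0.2, p_veto=0.3, jurors=12))
# prints roughly 0.7**12, i.e. about 1.4% of unjust cases still punished

With a 30% per-juror veto chance, only about 1.4% of unjust prosecutions survive a twelve-person panel; remove the panel and 100% survive. Again, a caricature, not a model of real courts.]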
If not, does that imply that there
is no general FAI solution?
[There is no certain FAI solution, since overcoming tyranny requires conflict. If a system is a "friendly" helper of tyranny, it is "unfriendly." If a system is an antagonist to tyranny, it is hostile to most present humans and benevolent to most future (uncorrupted, well-educated) humans. Implicit in this understanding is that most humans are about as moral as the system of government they live under, with outliers being more or less moral than that system. In the USA, only small-l "libertarians" approach morality. --All others vote to violate their own chosen morality while inside the polling place, but otherwise pretend to "love thy neighbor as thyself." A quick look at the U.S. legal system (legislative, prison, and court systems) proves that virtually no one in the USA loves their neighbor as themselves.]
And, most importantly, how many false assumptions are behind these
questions, and how can I best learn to correct them?
[Thinking about the issues, debating them, and asking questions of knowledgeable message boards is a good idea. It's more rational than the approach many people take. Reading overviews of AGI-related material is a good idea, as is investigating current projects whose goal is AGI.
Please understand that I've simply put in my $.02 here. I am in no way "qualified" to speak about this subject, except as someone who cares about it and is self-educated about it. I hold no degree in computer science, much less an advanced one. But I think I have a pretty honest bullshit filter, and that is something most people --even ones with advanced computer science degrees-- completely lack.]
Thank you for your time,
-- DataPacRat
lu .iacu'i ma krinu lo du'u .ei mi krici la'e di'u li'u traji lo ka vajni fo lo preti
(Lojban: "'What justifies that I ought to believe this?' is the most important of questions.")