On Tuesday 21 August 2001 07:35 am, you wrote:
> Eliezer S. Yudkowsky wrote,
>
> > That said, their AI theory most certainly appears to suck, and
> > I would put their chance of passing the Turing test in ten
> > years at zero, unless the organization shifts to a completely
>...
>
> I was disturbed by their examples, as well. They claimed that
> the examples were nonsensical in the sense of a child's
> ramblings. That
> ...
> it can't eat, how does it "like" them? This does not compare to
> a child who likes bananas. I found the examples to be more like
> counter-examples of AI. (Unfortunately!)
>
> --
> Harvey Newstrom <http://HarveyNewstrom.com> <http://Newstaff.com>
Personally, given the emphasis that they appear to be putting on
punishment as an instrument of control, and the personality of the
man in charge of the project, I think it may be just as well if
this isn't an extremely successful project. This appears to be a
"Spare the rod and spoil the child" kind of theory. Not the way to
raise someone that you expect to like you later.
In addition, the name Hal, while predictable, isn't the kind of
choice that one would want the first successful AI to be saddled
with; it has too many bad, and sinister, connotations. And since,
to the extent that it was possible to determine, this appears to
be some sort of neural network program, that approach could be
expected to yield unpleasant results. But if this one isn't
successful, perhaps it will forestall the use of the name by
someone else.
--
Charles Hixson
Copy software legally, the GNU way!
Use GNU software, and legally make and share copies of software.
See: http://www.gnu.org http://www.redhat.com
     http://www.linux-mandrake.com http://www.calderasystems.com
     http://www.linuxapps.com