From: J. R. Molloy (jr@shasta.com)
Date: Wed Sep 27 2000 - 11:11:33 MDT
Eugene Leitl writes,
> That's a double non sequitur. They're not friends, they're
> researchers. The second howler is, why should the SI be friendly if
> you're friendly towards it? (Of course you can't be friendly to an SI,
> almost by definition).
Researchers can also be friends, can't they?
AIs (not yet SIs) would want to be friendly to each other, because if they
weren't they'd kill each other (as you've already pointed out). As Homo sapiens
evolve into Robo sapiens (or transhumans), they'd choose their friends
intelligently, right? So, why would you want to be friends with an emerging AI
if it wasn't friendly? ("Kill it before it multiplies.")
--J. R.
Virtual Humans and Humanoid Robots
(ooooh! I'll bet it has a wicked backhand.)
http://www.cc.gatech.edu/fac/Chris.Atkeson/virtual-humans.html
http://www.erato.atr.co.jp/DB/pr.html
The Kawato Dynamic Brain Project (ERATO, JST) introduces the HUMANOID ROBOT, a
dextrous anthropomorphic robot with the same kinematic structure as the human
body and 30 active degrees of freedom (not counting the fingers). We believe
that employing a HUMANOID ROBOT is the first step towards a complete
understanding of high-level brain functions through mathematical analysis. For
demonstration purposes, the HUMANOID ROBOT performs the Okinawa folk dance
"Kacha-shi" and learns human-like eye movements based on neurobiological
theories. It is noteworthy that the folk dance was acquired through "learning
from demonstration", in sharp contrast to the classic approach of manual robot
programming.

Learning from demonstration means learning by watching a teacher perform the
task. In our approach, a reward function is learned from the demonstration,
together with a task model acquired from repeated attempts to perform the task.
Knowledge of the reward function and the task model allows the robot to compute
an appropriate controller. Over the last few years we have made enough progress
in learning from demonstration to apply the developed theories to the HUMANOID
ROBOT.
We believe that learning from demonstration will provide one of the most
important footholds for understanding the information processes of
sensori-motor control and learning in the brain. We also believe that three
levels are essential for a complete understanding of brain functions: (a) the
hardware level; (b) information representation and algorithms; and (c)
computational theory.

We study high-level brain functions with multiple methods: neurophysiological
analysis of the basal ganglia and cerebellum; psychophysical and behavioral
analysis of visuomotor learning; fMRI studies of brain activity; mathematical
analysis; computer simulation of neural networks; and robotics experiments
using the HUMANOID ROBOT. For instance, in one approach we are learning a
neural network model of motor learning with the HUMANOID ROBOT that
incorporates data from psychophysical and behavioral experiments as well as
brain-activity data from fMRI studies. The HUMANOID ROBOT reproduces a learned
model in a real task, and we can verify the model by checking its robustness
and performance. Much attention is now being given to the study of brain
functions using this new tool, the HUMANOID ROBOT. This should be an important
first step towards changing the future of brain science.