http://www.cnn.com/2000/TECH/space/12/27/part.four/
The Real HAL: Artificial Intelligence in Space
December 29, 2000
Web posted at: 7:54 a.m. EST (1254 GMT)
"To err is human, to forgive divine."
-- Alexander Pope
"I'm sorry, Dave, I'm afraid I can't do that."
-- HAL 9000, 2001: A Space Odyssey
(CNN) -- When we first meet HAL in "2001," it seems as if there's nothing this computer can't do. HAL steers the Discovery spacecraft toward Jupiter, maintains the life support systems, plays a wicked game of chess, has opinions about sketches drawn by one of the astronauts, and deftly handles questions from a BBC interviewer. Later, we'll discover that HAL can even read lips.
Soon we wonder whether HAL is more like a human than a machine. Is its artificial intelligence really artificial? Even one of the astronauts admits he's not sure whether HAL's emotional responses to the interview questions are genuine or just part of the programming.
HAL's downfall begins when he exhibits one of the basic qualities of being human. He screws up. The astronauts begin to doubt his abilities and the plot takes its well-known conspiratorial and murderous twists.
At the beginning of the year 2001, we have no computers with the impressive abilities of HAL. But we do have Deep Space 1. This half-ton spacecraft was launched in October of 1998 for the purpose of testing new space technologies, including artificial intelligence.
"This is the first time that any spacecraft has truly relied on artificial intelligence," says Dr. Marc Rayman, chief mission engineer for Deep Space 1 at NASA's Jet Propulsion Laboratory. "The whole point of Deep Space 1 is to test advanced technologies that are very risky, too risky for science missions to rely on. Deep Space 1 takes the risks so that future missions don't have to."
In some ways, Deep Space 1 is a lot like HAL. The Autonomous Navigation system steers the craft by comparing its position to well-known stars and asteroids. It can think for itself by using software called a Remote Agent. This system is given a set of goals by ground controllers, but it decides how to carry them out. The Beacon Monitor provides feedback to the Earth, with messages ranging from "everything's fine" to "I need help now!"
"All three of these (systems) are able to make decisions on their own. And that's really the key," says Rayman. "With the autonomy, we transfer responsibility from humans on the ground onto the spacecraft."
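The Beacon Monitor idea above can be sketched in a few lines: rather than downlinking full telemetry, the spacecraft reduces its state to one of a few simple "tones" that ground controllers can check cheaply. This is a hypothetical illustration; the tone names echo the article's "everything's fine" to "I need help now!" range, but the telemetry channels and thresholds are invented, not the actual Deep Space 1 values.

```python
# Beacon-monitor-style summarizer (illustrative only): map raw telemetry
# to the most severe applicable status tone.

TONES = ["nominal", "interesting", "important", "urgent"]

def beacon_tone(telemetry):
    """Return the most severe tone triggered by the readings."""
    severity = 0  # index into TONES; start at "nominal"
    if telemetry.get("battery_volts", 28.0) < 24.0:
        severity = max(severity, 3)   # power problem: "I need help now!"
    if telemetry.get("thruster_temp_c", 20.0) > 80.0:
        severity = max(severity, 2)   # needs attention soon
    if telemetry.get("star_tracker_resets", 0) > 0:
        severity = max(severity, 1)   # worth a look, not critical
    return TONES[severity]

print(beacon_tone({}))                       # nominal
print(beacon_tone({"battery_volts": 22.5}))  # urgent
```

The point of the design is bandwidth: a one-word tone is enough for Earth to decide whether a full, expensive telemetry session is needed.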
Of the three technologies, the Remote Agent is the one that's truest to the definition of artificial intelligence. Rayman says controllers first tested the software by giving it a series of activities to execute over several days. What happened after that was not what they anticipated.
"We were quite surprised," he says, "because it wasn't the plan we expected it to put together. It didn't reproduce what we expected on Earth because it had a different set of conditions on board. And of course that's what artificial intelligence is all about."
From there the tests only became harder. Controllers gave the Remote Agent four simulated failures. "In one case it tried to switch a device off and it wasn't able to," Rayman says. "And so the first thing it tried to do was switch it off again.
"If you think about it, that in itself is a pretty impressive response. But, it wasn't able to switch it off and so it had to formulate a new plan that accounted for the fact that this device was on even when it wasn't supposed to be on."
Rayman says this kind of independent thinking becomes more important as spacecraft venture farther out into the solar system. A probe on the far side of a planet wouldn't be able to communicate with controllers and might need to make sound choices on its own. A mission to Pluto would be so far away that waiting for instructions would take too much time. A spacecraft landing on a comet may have to make decisions so rapidly that it couldn't stop to consult with humans.
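The light-speed arithmetic behind that argument is easy to check. Using rough average Earth-to-target distances (the figures below are approximate; the point is the scale, not the precision), the round-trip signal delay grows from minutes to most of a day:

```python
# Round-trip radio delay = 2 * distance / speed of light.
# Distances are rough averages, for scale only.

C_KM_S = 299_792.458  # speed of light in km/s

def round_trip_minutes(distance_km):
    return 2 * distance_km / C_KM_S / 60

for name, km in [("Mars (average)", 225e6),
                 ("Jupiter (average)", 778e6),
                 ("Pluto (average)", 5.9e9)]:
    print(f"{name}: ~{round_trip_minutes(km):.0f} minutes")
```

At Pluto the round trip is roughly eleven hours, so a spacecraft asking "what should I do?" would wait the better part of a day for an answer.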
But what about HAL? Is it possible that someday the computer will decide that the humans are no longer necessary?
"At some point it's possible that an artificial intelligence system on board a future spacecraft could decide that it knows even better than what its human instructors told it and it could completely change the plans," says Rayman.
"Now, if you've built a really good artificial intelligence system, maybe it's making the right decision. On Deep Space 1, it made some decisions different from what we expected, but they were the correct decisions because it had more information than we had.
"At some point you have to develop confidence in it. And I suppose it's like people with children who at some point you have to say, I'm sending my children out into the world. I've taught them the best I can and now it's up to them to make the right decisions."
This child is a long, long way from home. Deep Space 1 is currently 217 million miles (350 million kilometers) away on the other side of the Sun. If all goes well, it will pay a visit to the comet Borrelly by September of 2001.