Eliezer S. Yudkowsky wrote:
> (Let's at least keep the SUBJECT lines clean enough that a 12-year old
> can read this without Mom noticing anything wrong, okay?)
Point taken.
> I disagree. A cat is a cat, even a cat with speech recognition,
> personality software, and domestic skills. (WHAT an idea! Probably
> more profitable than the ACs, if someone pulled it off.) In fact, the
> level of intelligence is more analogous to that of a spider.
For a minimally functional device this may be true. I suspect, however, that more intelligence will be required to make something people will really be happy with.
> The really ironic thing in most discussions of Artificial Intelligence
> is that the pundits treat emotions as a great mystery, and talk about
> AIs with human intelligence who still don't understand emotions. What
> rot! Dogs have emotions. Rats have emotions. Lizards have emotions.
> Emotions are easy. We'll have emotional AIs long before we can
> duplicate the simplest rational thought. We could make them
> right now, if anyone wanted them.
Granted - emotions aren't trivial, but they don't look all that hard either.
> How much actual intelligence does this require? Answer: None. There
> is no component here which is capable of original thought. At most, the
> sophistication would equal that of the Structure Mapping Engine or Cyc;
> it would fall short of Copycat or EURISKO.
> The reflexes are undoubtedly quite complex, but while the present
> ambient "mode" of reflexes might be linked to both senses and speech
> command acceptance, the complexity of the reflexes themselves
> would not be integrated with the personality front.
> But this is really over-complicating
> things up to the "lizard" level; the "spider" level of discrete,
> non-reduceable modes should be quite sufficient.
For a really primitive AC, this approach might work. However, I don't think anyone is going to be satisfied with the results. The problem is that if the AI doesn't understand what it is doing, at least on a mechanical level, you're going to have serious problems getting any real flexibility out of it.
What you really want is an AC that can comprehend spoken English well enough to carry out instructions in several unrelated problem domains. This NLP system needs to be integrated with a sophisticated motor system, equally sophisticated sensory processing, a personality simulator, and an AI with enough intelligence to translate "clean the place up a bit" into a series of physical actions. It needs a world-state model, so it can figure out what it is doing, and it needs at least a primitive method of modeling the internal mental states of humans (so it can tell when you're not in the mood). Now, this still isn't a sentient being, but it's a starting point that could easily lead to one with the right enhancements.
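The component list above can be sketched as a toy control loop. This is purely illustrative (all class and method names are hypothetical, and the "NLP" here is a keyword stub), but it shows how the world-state model and the model of the user's mental state feed into translating a vague instruction into concrete actions:

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """Tracks the state of the household as the AC perceives it."""
    facts: dict = field(default_factory=dict)

    def update(self, percepts: dict) -> None:
        self.facts.update(percepts)

@dataclass
class UserModel:
    """Primitive model of the human's mental state (e.g. mood)."""
    mood: str = "neutral"

class DomesticAC:
    def __init__(self) -> None:
        self.world = WorldModel()
        self.user = UserModel()

    def interpret(self, utterance: str) -> list:
        """Stand-in for the NLP layer: map an instruction to subgoals."""
        if "clean" in utterance.lower():
            # Ground a vague goal in the world model: tidy whatever is messy.
            return ["tidy " + room for room, state in self.world.facts.items()
                    if state == "messy"]
        return []

    def act(self, utterance: str) -> list:
        if self.user.mood == "irritable":
            return ["wait quietly"]       # defer when the human isn't in the mood
        return self.interpret(utterance)  # otherwise, plan from the instruction

ac = DomesticAC()
ac.world.update({"kitchen": "messy", "study": "tidy"})
print(ac.act("clean the place up a bit"))  # ['tidy kitchen']
```

The point of the sketch is that even this trivial version only works because the instruction is interpreted *against* a world model and filtered through a user model; a lookup table of canned responses has nowhere to hang either.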
Fundamentally, I think that for projects of this scale the simplest way to get intelligent behavior is to actually give the system some intelligence. Trying to get the same result from a mountain of domain knowledge plus simple rule-based processing will take far more effort and give far more limited results. Given the competitive demands of the market, the high-IQ approach is likely to win out.
As a side note, I would expect most of this software to be commercially available by 2020. Certainly the motor & sensory systems will be commonplace, and there is a strong demand for NLP. Simple domestic robots would seem to be much simpler than an AC, so the integration of these basic technologies should be well underway by the time anyone can seriously contemplate building ACs.
Billy Brown, MCSE+I
bbrown@conemsco.com