From: Billy Brown (bbrown@conemsco.com)
Date: Fri Apr 09 1999 - 09:32:58 MDT
Lyle Burkhead wrote:
> Apparently you missed that post.
Yes, I did. My apologies.
> There are three questions at issue here. In decreasing order of
> importance, they are:
>
> 1. Whether my method of argument is valid and useful.
I would say that it is. It is only one of several techniques needed for
making predictions, but it is a very useful tool for pruning back wild
speculation.
> 2. Whether my conclusions are true.
Some of them are. Others seem dubious to me, and a few seem completely
wrong - although I may just be misinterpreting you.
> 3. Who I am, whether I know anything about computer science or anything
> else, whether I did a good job of applying the principle of calibration,
> etc.
>
> Question #3 is of no relevance. Maybe I'm an idiot. That has nothing to do
> with it. As I said on the home page of geniebusters, Eric Drexler is not
> the issue here. Neither am I.
Agreed.
> Question #2 is more important. Is someone going to design the entire
> "universal assembler," in advance, in such a way that once set in motion
> the whole thing suddenly comes into existence with explosive force? Are
> robots going to build skyscrapers for free? Is there going to be an event
> that is incommensurable with everything else? or not?
Obviously not - or at least, not if it is being done by humans. I would
expect something more like the computer revolution in that case. Getting a
really sudden change requires you to make very strong assumptions about the
progress of AI, which would be rather speculative at this point.
> > Ah. So you do still read the list. I wondered.
>
> You wondered? In other words, you were on the list two years ago?
No, I just read the archives.
> However, this afternoon I find myself writing yet another
> post to the list, the fourth one this week, in spite of my repeated
> efforts to stop. I may have to join some kind of 12-step program for
> postaholics.
> Maybe a Higher Power will help me do what I can't do myself.
We're all addicts here. :-)
> > Your assertion that any software capable of replacing a human
> > would demand a salary
>
> I'm not asserting this for software capable of replacing *a* human, just
> for software that replaces humans who make unsupervised decisions
> requiring human judgment.
"Artificial Intelligence: The science of making computers do things that, if
they were done by humans, would require intelligence."
I think there is a lot more potential for AI here than you seem to expect.
However, this is a complex topic that deserves a real discussion, so I'll
save it for a later post.
> That section isn't supposed to be read in isolation. It is part of the
> argument leading up to Exercise 5 in Section 7, and it also has to be read
> in conjunction with sections 12 and 13. The question is whether the
> Assembler Breakthrough is going to happen. To get a handle on that
> question, I'm trying to establish the level of complexity involved in "a
> system that can make anything, including copies of itself." If a
> nanosystem can make anything, in the sense required for the Assembler
> Breakthrough,
> then it will amount to the same thing as an industrial economy.
Once again, this is long enough that it really needs its own post. I'll get
to it as soon as I can.
Billy Brown, MCSE+I
bbrown@conemsco.com