Michael Nielsen wrote:
>
>
> I may as well state one of my main interests in this: whether we'll ever
> have enough computational power to set up a good "breeding ground"
> for AIs -- an artificial environment optimized to produce artificial
> intelligence by means of selective pressure. Using proctonumerology, I'd
> guess a figure of about 10^40 operations ought to be enough.
>
I haven't a clue. What does Moravec say, and should we believe him? Are
these operations per second?
> > > On Fri, 26 Jun 1998, Dan Clemmensen wrote:
Nope, I made it up as I went along, mentally designing the system on the
fly and probably messing it up severely. I now strongly suspect that between
the two of us we are thinking about three different things: depth, with O(log N)
complexity; the number of gates needed, which is O(N); and energy dissipation,
which will depend strongly on the implementation, with O(N) required for minimized
depth (i.e., a depth of 2, the fastest algorithm) and O(log N) required for a slow
but energy-efficient approach with one address bit per clock.
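To make the three quantities concrete, here is a rough Python sketch (mine, and
order-of-magnitude only; "energy" below is just a count of switched nodes per
access, a crude stand-in for a real dissipation model) for a binary decoder tree
addressing N words with log2(N) address bits:

    import math

    def decoder_costs(n_words):
        n_bits = math.ceil(math.log2(n_words))
        depth = n_bits              # levels in the decode tree: O(log N)
        demuxes = n_words - 1       # 1-to-2 demux elements in the tree: O(N)
        fast_energy = n_words       # flat, minimum-depth decode: O(N) per access
        slow_energy = n_bits        # one address bit per clock: O(log N) per access
        return depth, demuxes, fast_energy, slow_energy

    for n in (2**10, 2**20, 2**30):
        print(n, decoder_costs(n))

The point is only that the three curves diverge badly as N grows, which is
presumably why we keep talking past each other.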
> > Addressing in a nanomechanical system occurs only as part of a read or write
> > operation. When no I/O is occurring, no energy is dissipated. Dissipation per
> > access depends on the size of the memory, O(log N) for random access,
> > which is negligible and is mitigated further by caching.
>
> I don't know how the proposed nanomechanical schemes work. In
> commonly used electronic schemes, I believe that the depth is O(log N) for
> random access, but the number of operations is O(N log N), at least in the
> schemes I'm familiar with. Are you absolutely sure that the number of
> operations in a nanomechanical addressing system is O(log N)?
>
> Okay. I am, I suppose, trying to see how far we can push the 3d
> architecture idea at this point. We already do it to some extent -- I am
> told that 20 layers is not that uncommon in a chip -- but I would like to
>
Sure, it's more fun.
>
> > By contrast, you raise the issue of error correction.
> > However, even very powerful ECC schemes require less than doubling the amount
> > of volume needed to store a word.
>
> That's not really true. For fault-tolerant computation the best known
> overhead, so far as I know, scales polylogarithmically with the size of the
> computation, with some (fairly large) constant factor in front of the
> first term in the polylog factor.
>
Here we are again shifting back and forth between ECC for storage and ECC for
computation. I was addressing storage, wherein we are not using reversible
logic. For storage, as I recall, the Hamming distance goes up rapidly with a modest
increase in ECC bits. In conjunction with caching, the bulk-memory words
subject to ECC will be long. The number of ECC bits goes up as O(log N) with
word length, and linearly (?) with the number of bits in error that can be corrected.
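As a rough check on the O(log N) claim: a single-error-correcting Hamming code
needs r check bits for m data bits whenever 2^r >= m + r + 1, so (sketch below,
my numbers, not taken from any particular memory design):

    def hamming_parity_bits(m):
        # check bits for a single-error-correcting Hamming code over m data bits
        r = 1
        while 2**r < m + r + 1:
            r += 1
        return r

    for m in (8, 64, 256, 1024, 4096):
        print(m, hamming_parity_bits(m))   # e.g. 64 -> 7, 4096 -> 13

So a 64-bit word needs 7 check bits and a 4096-bit cache line only 13, which is
why long bulk-memory words keep the storage-ECC overhead well under a factor of
two. Correcting t errors with a BCH-style code costs, as I recall, roughly t
times that many check bits, hence the "linearly (?)" above.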
> The error correction schemes you are talking about will buy you a little,
> but ultimately, fixing one-bit errors goes only a tiny way toward fixing the
> error problem. Much more powerful error correction techniques are
> needed to really solve the error problem, unless, as in today's
> computers, you have an incredibly low fundamental error rate. Even
> then, there's a real question of how many operations you want to do. If
> you "only" want to do 10^{17} operations, then error rates of about
> 10^{-17} are fine -- which is why today's computers don't use error
> correction. Presumably, for AI we would like to do far more operations
> than that -- on the order of 10^{30} does not seem unreasonable.
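Just to put numbers on that: with a per-operation error probability p and K
total operations, the expected number of errors is K*p and the chance of a
completely clean run is roughly exp(-K*p). A quick sketch, using the figures
you quote:

    import math

    def failure_stats(ops, p_err):
        expected_errors = ops * p_err
        p_clean_run = math.exp(-expected_errors)
        return expected_errors, p_clean_run

    print(failure_stats(1e17, 1e-17))   # ~1 expected error: marginal without ECC
    print(failure_stats(1e30, 1e-17))   # ~1e13 expected errors: hopeless without ECC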
>
> Assuming a fundamental error rate of about 10^{-5} for reversible
> computation, that implies a heavy error-correction overhead. Calculating how
> much error-correction overhead would be required would take quite a while,
> but it's safe to say that most of the work going
> on in the computer would actually be error correction, not computation.
>
You are now discussing error correction during logic operations rather than
storage, and you are correct as far as I know. Brute-force correction for
logic has been done by "voting", where three identical circuits are implemented
together with comparison logic. If the comparison logic is about the same complexity
as the original circuit, the entire ECC scheme is four times as complex, and this
only defends against a one-bit error (sort of). A five-way vote defends against
two errors by roughly doubling the complexity again.
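For concreteness, the three-way vote amounts to a bitwise majority of the
replica outputs (a toy model, not a circuit design):

    def majority3(a, b, c):
        # bitwise majority of three equal-width replica outputs;
        # masks any fault confined to a single replica
        return (a & b) | (a & c) | (b & c)

    # one corrupted replica (the middle argument) is out-voted:
    print(bin(majority3(0b1011, 0b0011, 0b1011)))   # -> 0b1011

A five-way vote is the same idea with at-least-three-of-five agreement, which is
what buys protection against two faulty replicas.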
> If I can find the time, I'll try to look up some of the papers analyzing
> errors in reversible computation. As I recall, there were a few in the
> early 80s, which I've never read.
There was at least one paper delivered at the 1997 Foresight Conference that deals with reversible computation:
http://WWW.FORESIGHT.ORG/Conferences/MNT05/Abstracts/Frakabst.html