From: Wei Dai (weidai@eskimo.com)
Date: Thu Jan 17 2002 - 12:36:41 MST
On Wed, Jan 16, 2002 at 05:03:01PM -0800, Robert J. Bradbury wrote:
> I'm stating that it's impossible to build a "maximally productive"
> intelligent agent with stable long-term goals. Actually it isn't
> the "goals" that are relevant -- it's the means that are allowed
> to achieve them.
>
> To allow maximal productivity you have to allow for "self-evolution".
> That enables strategies that say "the matter and
> energy controlled by one's source is forfeit".
Maybe I'm missing some crucial context, but why can't you preprogram the
agent with the optimal strategies for constructing an M-Brain and a
transceiver, and then copy yourself over as soon as that's done?
> Stars have very large gravitational wells. It takes much more
> energy (and therefore a longer time) to extract material from
> such locations.
Are you assuming that we're only using the star's own energy output,
without generating additional energy from the extracted material? A kg of
material at the surface of a sun-like star has about -2e11 J of
gravitational potential energy, i.e. it costs roughly 2e11 J to lift it
out of the well, but that same kg can yield on the order of 3e14 J through
nuclear fusion, over a thousand times the lift cost. So it seems quite
feasible to disassemble a star quickly using its own material as fusion
fuel.
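
As a rough sanity check on those figures, here's a quick back-of-the-envelope
calculation. The solar mass and radius are textbook values; the ~70% hydrogen
mass fraction and the 0.7% mass-to-energy conversion for hydrogen fusion are
my own assumed inputs, so treat the result as order-of-magnitude only:

    # Back-of-the-envelope: lift cost vs. fusion yield per kg of solar material
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_sun = 1.989e30     # solar mass, kg
    R_sun = 6.96e8       # solar radius, m
    c = 3.0e8            # speed of light, m/s

    # Energy to lift 1 kg from the solar surface out of the gravity well
    lift_per_kg = G * M_sun / R_sun                   # ~1.9e11 J

    # Energy released by fusing the hydrogen in 1 kg of solar material
    h_fraction = 0.7     # assumed hydrogen mass fraction
    fusion_eff = 0.007   # ~0.7% of fused mass converted to energy
    fusion_per_kg = h_fraction * fusion_eff * c**2    # ~4e14 J

    print(f"lift cost:    {lift_per_kg:.1e} J/kg")
    print(f"fusion yield: {fusion_per_kg:.1e} J/kg")
    print(f"ratio:        {fusion_per_kg / lift_per_kg:.0f}x")

With these assumptions the fusion yield comes out a few times 1e14 J/kg,
consistent with the 3e14 J figure above, and a few thousand times the energy
needed to lift the material out of the well.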