From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat May 18 2002 - 07:47:32 MDT

Eugen Leitl wrote:
>
> On Sat, 18 May 2002, Wei Dai wrote:
>
> > But you're welcome to discuss any of the other ones, or your own
> > scenario if you don't like mine. We could talk about how a Sysop
> > should allocate resources among its "users", or if all SI's were to
>
> I recommend you do a back of the envelope as to resources required to run
> the Sysop (even short-scale decisions require sampling a lot of agent
> trajectories with very detailed agent models), and the constraints it
> imposes on its serfs.

Show me the numbers. You have repeatedly declined to provide any
justification whatsoever for your assertion that "even short-scale
decisions require sampling a lot of agent trajectories". Now you are
claiming the ability to do quantitative modeling of superintelligent
resource utilization?
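
Purely for illustration, here is a minimal sketch of the shape such a
back-of-the-envelope estimate would have to take. Every constant in it is a
hypothetical placeholder, not a claim; supplying defensible values is
exactly the burden you have not met:

    # Skeleton of a back-of-the-envelope Sysop resource estimate.
    # Every constant is a hypothetical placeholder; the point of contention
    # is precisely that no one has supplied defensible values for them.
    agents = 1e9                # agents the Sysop must model (placeholder)
    samples_per_decision = 1e4  # trajectory samples per short-scale decision (placeholder)
    steps_per_trajectory = 1e3  # simulated steps per sampled trajectory (placeholder)
    ops_per_step = 1e12         # ops per step of a "very detailed agent model" (placeholder)

    ops_per_decision = samples_per_decision * steps_per_trajectory * ops_per_step
    total_ops = agents * ops_per_decision
    print(f"Cost of one decision for one agent: {ops_per_decision:.1e} ops")
    print(f"Cost across all agents: {total_ops:.1e} ops")

Multiply placeholders together and you get an impressive-looking number;
that is not the same as a justified one.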

You clearly do not understand even the basic rules of volition-based
Friendliness or the Sysop Scenario, both of which are ordinary futuristic
scenarios, much less the principles of Friendly AI.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence