Re: Long term hazard functions

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Dec 13 2002 - 18:17:46 MST


Robert J. Bradbury wrote:
>
> That (potentially) is our hazard function. (As I mentioned
> in a previous post, supernovas should not be ignored either,
> though they may be less frequent.)
>
> So the questions in my mind become: should we fix this, and how
> do we do so? (Or do we punt and say the "singularity" will take
> care of everything?)

No, but you might punt and say: "Given that the payoff of any mental
efforts invested will be post-Singularity, we may as well wait until after
the Singularity, when mental effort of vastly higher quality will be much
cheaper, unless exploring these issues is likely to lead to important
concepts which can be reused for pre-Singularity issues, or unless there
are issues that affect pre-Singularity strategy."

Pers'nally, I'm a believer in curiosity. Trying to grok post-Singularity
issues often *does* lead to critically important reusable concepts, and it
is not fully predictable in advance which avenues of curiosity will prove
productive or unproductive.
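
(For concreteness on the hazard-function framing above: under the
simplest model, sterilization-class events arrive at a constant rate,
so the probability of surviving to time t decays exponentially. The
sketch below is a toy illustration only; the constant-rate assumption,
the function name, and the 1-in-100-million-year rate are all
hypothetical, not estimates of actual gamma-ray-burst or supernova
frequency.)

    # Toy survival curve under a constant (Poisson) hazard rate.
    # The rate used is purely illustrative, not an estimated frequency.
    import math

    def survival_probability(rate_per_year: float, years: float) -> float:
        """P(no event by time t) under a constant hazard: exp(-rate * t)."""
        return math.exp(-rate_per_year * years)

    # Hypothetical 1-in-100-million-year sterilization event.
    rate = 1e-8
    for t in (1e6, 1e8, 1e9):
        print(f"{t:.0e} years: P(survival) = {survival_probability(rate, t):.3g}")

Over a million years the survival probability stays near 1; over a
billion years it falls to roughly exp(-10), about 5e-5, which is why
such hazards matter only on very long time horizons.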

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
