From: Daniel Fabulich (daniel.fabulich@yale.edu)
Date: Sun May 31 1998 - 14:12:13 MDT
On Sun, 31 May 1998, Michael Nielsen wrote:
> On Sat, 30 May 1998, Daniel Fabulich wrote:
>
> > On Sat, 30 May 1998, Michael Nielsen wrote:
>
> I notice that you did not respond to the rest of my post.
Right, well, you said:
----
I'd put it differently: it acknowledges that any non-trivial system of
thought requires certain founding assumptions. Using these founding
assumptions is not an irrational procedure, unless you wish to declare
all systems of thought irrational. In the case of transhumanists, the
founding assumptions, or basic values, are such that life extension is a
rational goal. Other systems of thought do not necessarily imply that.
----

Obviously I do not wish to declare all axiomatic systems of thought
irrational (though, interestingly enough, this is a central idea in
pan-critical rationalism). Instead, I asserted that a value-ethic is
rational if and only if it is consistent; i.e., it does not advocate
actions which contradict its stated values.

> I agree with your entire post, excepting part of the first paragraph. The
> post is not really related to my original comment, though. Recall that the
> comment I made was a caveat, which I do not seriously expect to apply in
> many instances. For example, it was not intended to apply to Hayflick. I
> was merely pointing out that it is possible to hold a consistent
> philosophy which leads one to conclude that immortality is a bad thing.

And it was my attempt, in showing that choosing to die necessarily
contradicts other values, to show that this is not the case, and that one
must be inconsistent in order to conclude that immortality is a bad thing.

> As a counter-example to your first paragraph, I offer euthanasia, which I
> believe is sometimes justified. This is not a counter-example to your
> conclusions, incidentally, but to your assumptions: this is a situation
> in which your life may have no use to you in bringing about the things
> you value.
I see euthanasia as not rational for the one whose suicide is being
assisted, but possibly rational for the one assisting; again, so long as
it does not contradict the assistant's value system (which need not
incorporate OTHER living beings in order to remain consistent, but must
at least include oneself), the assistant is acting rationally. If
immortality is possible, however, it seems difficult to rationally
justify suicide in any form: so what if you've got an "incurable"
disease? With cryonic suspension you could have immortality, and your
painful disease will be cured and you will go on to live a long, healthy,
and prosperous life.
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:49:09 MST