From: Byrne Hobart (bhobart@gmail.com)
Date: Sat Apr 26 2008 - 21:04:17 MDT
On Sat, Apr 26, 2008 at 6:41 PM, Tim Freeman <tim@fungible.com> wrote:
> >If we have the right to make our future choices, then we can have the
> >right to make choice A in exchange for something that we can use to
> >acquire CDs. So if you have the right to choose whether or not to work,
> >you can choose to work and get money in exchange. If you deny this, then
> >you're denying that your future choices are yours to make, which seems
> >radical.
>
> I'm sorry, I can't distinguish that from word salad. My nine-month-old
> son makes choices and has no conception of property rights. So a
> world without property rights is surely compatible with making
> choices. I don't know what it means to make a future choice; all I
> can do is make a choice right now. What does it mean for a right to
> be "yours" to make if we are trying to figure out exactly what
> property rights mean? I can't see any traction for trying to make
> sense of what you said.
Property rights are an abstraction that describes how people behave, so the
fact that your son doesn't understand them won't make them go away.
'Future choice' refers to a choice in the future. For example, if you had no
property rights now, and someone offered you $5 to mow their lawn, you could
get that $5 of property by committing to a particular action. If you're
incapable of making commitments regarding the future, it would be hard to
have property rights, though.
I have already argued that property rights are an abstraction covering
certain kinds of agreements. If you and I agree that you have the right to
use, move, sell, or destroy a particular object, we can claim that you 'own'
it as the word 'own' is presently used. This 'ownership' is really the
ownership of specific rights regarding the object, and those rights are just
collections of implicit agreements mediated by a judicial system -- they
recognize your 'ownership' by punishing people who attempt to use, move,
sell, or destroy the object without your consent. So it all compiles down to
mutual agreements; the abstraction just makes it easy to talk about. If you
want, we can go ahead and use the non-abstract version: instead of saying
that I 'own' this computer, I can try to list all the people who are not
allowed to type on it, not allowed to move it to their homes, not allowed to
incinerate it, etc. But that sounds like a clumsy way to do things.
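For concreteness, here is a minimal sketch (in Python, with purely illustrative names) of that non-abstract version: 'ownership' as nothing more than a table of per-object rights that everyone has agreed to respect, enforceable only because violations are sanctioned:

```python
# Illustrative sketch only: 'ownership' modeled as a bundle of rights
# backed by mutual agreement, not as a primitive property of objects.

RIGHTS = {"use", "move", "sell", "destroy"}

class Ownership:
    """An object is 'owned' when one party holds every right over it and
    everyone else has (implicitly) agreed not to exercise them."""
    def __init__(self, obj, owner):
        self.obj = obj
        # Map each right to the set of parties allowed to exercise it.
        self.holders = {right: {owner} for right in RIGHTS}

    def may(self, party, right):
        return party in self.holders.get(right, set())

    def grant(self, granter, grantee, right):
        # Rights transfer only by agreement of a current holder.
        if not self.may(granter, right):
            raise PermissionError(f"{granter} cannot grant '{right}'")
        self.holders[right].add(grantee)

computer = Ownership("laptop", "byrne")
print(computer.may("byrne", "use"))   # True
print(computer.may("tim", "move"))    # False
computer.grant("byrne", "tim", "use")
print(computer.may("tim", "use"))     # True
```

The clumsy list-everyone-who-may-not-touch-it version is just the complement of `holders`; the abstraction 'own' compresses it.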
Of course, if you don't believe in such agreements, we can't have property
rights as I think of them. But it seems like a pretty elementary part of
human behavior to make promises about the future. If we can't do that, we
can't have governments, and most forms of anarchism won't work, either. I
guess we'd be stuck with Stirner.
> In practice, hunger trumps all of those abstract concepts you're using
> there. I'm sure that even if I were completely incompetent to do
> anything but beg for food, I'd be doing that if I was hungry enough.
> At least 95% of everyone else is the same.
I'm sorry, but playing the "Examples are more concrete than principles" card
doesn't work. It is true that, when some people are hungry, they care more
about dealing with their hunger than adhering to their moral system or
maintaining the order of whatever society they exist in. That doesn't
justify their behavior. It's a diagnosis, not a defense. In any case, the
two outcomes I mentioned are valid: either someone can produce the value
that justifies their cost, in which case this is an execution problem that
should get easier with more advanced intelligences, *or* the person isn't
worth the expense of keeping them around, in which case -- like everyone
else who is eventually unable to sustain their own existence, even though
they could with more resources -- they die.
When you hear about someone being hungry, do you assume that they are a)
worth something, and thus that it is a tragedy that they can't go on living
and producing for their community, or b) that they are not worth the food
they eat, the air they breathe, the space they take up, etc., that it is thus
natural that they would expire, but that it is somehow not righteous to let
this happen? If a), you are introducing force and rights-violation to solve
a problem that can be solved peacefully without violating anyone's promises
or obligations. If b), I'm not sure you have an argument.
> Right. Specify what a common rights-protocol would look like, and
> this conversation will be worthwhile. Hutter's AIXI paper gives an
> example of using inductive inference to write a nontrivial algorithm;
> you might want to start there.
I'm starting the AIXI paper right now. But while I do that, I have to ask
what sort of model you're using for an intelligence if it cannot commit in
advance to preferring a given outcome or choice, especially in the context
of being rewarded or punished for choices. As long as you have entities
capable of both commitment and response to reward, and with preferences for
outcomes, you can end up with property rights as discussed above. In fact,
the ingredients might boil down to: a utility function, an ability to make
choices, and an ability to make decisions in the present that affect choices
in the future.
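A toy sketch of those three ingredients, with hypothetical names; the only point is that an agent whose present commitments constrain its future choices already has enough machinery to trade on:

```python
# Illustrative sketch: the three ingredients claimed sufficient for
# property rights -- a utility function, the ability to choose, and the
# ability to bind future choices. Not a real agent architecture.

def choose(options, utility):
    """Pick the option with the highest utility."""
    return max(options, key=utility)

class Agent:
    def __init__(self, utility):
        self.utility = utility
        self.commitments = set()  # actions this agent has promised to take

    def commit(self, action):
        # A decision in the present that constrains choices in the future.
        self.commitments.add(action)

    def act(self, options):
        # Committed actions are honored before utility is consulted.
        committed = [o for o in options if o in self.commitments]
        return committed[0] if committed else choose(options, self.utility)

# An agent that prefers leisure still mows the lawn once committed
# (say, in exchange for $5 it valued at the time of the promise):
agent = Agent(utility=lambda a: {"mow_lawn": 1, "relax": 5}[a])
print(agent.act(["mow_lawn", "relax"]))  # relax
agent.commit("mow_lawn")
print(agent.act(["mow_lawn", "relax"]))  # mow_lawn
```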
> It's obviously possible to contrive circumstances where this is the
> case. There are emergency situations where it's pretty commonly
> agreed that taking something and offering to pay something reasonable
> for it after the fact is a reasonable thing to do.
You are suddenly bringing up a whole different issue! This has been
discussed at length in a number of works on market-based law. I would
recommend David Friedman's *Law's Order* in particular as a good overview.
Essentially, that would be up to the rights-enforcing agents, but usually
these rights are enforced via a torts system. Perhaps when I do $10 of
damages, I can make good on the harm I caused and the surprise and
inconvenience by paying your lawyer/enforcement fees, plus thrice the damage
I caused. Ending up paying $1500 for stealing a meal I had to steal in order
to stay alive is a pretty good deal, frankly. Would the AI allow it?
Probably: even if I had to go to prison for that theft, (expected lifespan -
prison term) > (lifespan given no more food).
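The restitution arithmetic above as a sketch; note that the $1,470 in enforcement fees is an assumption I'm inventing here purely so the illustrative numbers reproduce the $1500 figure:

```python
# Back-of-envelope version of the torts rule sketched above: the thief
# makes good by paying enforcement fees plus a multiple of the direct
# damage. All figures are illustrative assumptions, not real prices.

def restitution(damages, enforcement_fees, multiplier=3):
    """Total owed: fees plus `multiplier` times the damage caused."""
    return enforcement_fees + multiplier * damages

# A $10 stolen meal with assumed $1,470 in lawyer/enforcement fees:
print(restitution(damages=10, enforcement_fees=1470))  # 1500
```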
People will probably prefer a rights-enforcing organization that makes
allowances for such extraordinary circumstances. In that case, your right to
the meal would not quite be outright ownership -- in financial parlance (the
easiest, since they've been formalizing this stuff for hundreds of years)
you have ownership, but are short a call option exercisable by someone in
extremely desperate circumstances, who transfers a large debt to you in
exchange for immediate food. Once you think of it that way, it's a lot
easier not to get worked up about it -- yeah, the AI might not care too much
that you miss a meal and have to order in, but only because in the contract
you signed, you agreed that a large fine was more than enough to make up for
this inconvenience.
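A rough payoff sketch of that short-call framing, with illustrative figures; 'exercising' means the desperate party takes the meal and a contractually fixed debt is transferred to the owner:

```python
# Illustrative sketch of ownership-minus-a-call-option: you own the meal,
# but someone in sufficiently desperate circumstances may 'exercise' --
# take it and owe you a fixed fine. Figures are assumptions, not doctrine.

def settle(meal_value, fine, exercised):
    """Return (owner_net, taker_net) in dollar terms after settlement."""
    if not exercised:
        return (0, 0)  # option unexercised; ownership undisturbed
    # Owner loses the meal but receives the fine (a transferred debt);
    # the taker gains the meal but owes the fine.
    return (fine - meal_value, meal_value - fine)

print(settle(meal_value=10, fine=1500, exercised=True))   # (1490, -1490)
print(settle(meal_value=10, fine=1500, exercised=False))  # (0, 0)
```

On these (assumed) numbers the owner comes out far ahead in dollar terms, which is the sense in which the contract treats the fine as more than making up for the inconvenience.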
A world that worked that way might sound a little more brutal, but that's
only because it is a lot more honest. Organizations that guarantee property
rights, rather than 'human rights', don't fall into the trap of having
conflicting supergoals, and of continuing to make an investment in the wrong
things when their marginal utility approaches zero. If you're going by vague
ephemera, whereby 'hunger' out of context (a strung-out drug addict who
spent his last $10 on a dimebag?) trumps 'wealth' out of context (someone
providing jobs -- and thus meals -- for hundreds of people?), you end up
making exactly those bad marginal investments.
I would be really interested in how you would formalize your views. What
principles do you start with to derive not-property-rights from the very
small set of pretty fundamental attributes I argued would automatically lead
to such rights?
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT