[p2p-research] Drone hacking

Tere Vadén tere.vaden at uta.fi
Thu Dec 24 11:30:17 CET 2009


J. Andrew Rogers wrote:
> There is no strong theoretical reason to believe they are genuinely
> random, so Occam's Razor would favor the "deterministic but not
> measurably so in this universe" hypothesis.  

No? What about Bohr contra EPR? At least Einstein found that argument 
sound, even though he tried to find holes in it for years. In fact, some 
people have claimed that Bohr's argument for complementarity is one of 
the most beautiful & precise arguments in the history of natural 
science. Some of those people are mathematicians.

If you mean that randomness was forced upon physicists by empirical 
results, and that there were no pre-existing strong theoretical 
arguments, then I would agree (maybe). But after the fact, after the 
experiments like black-body radiation, double-slit interference etc., 
several rigorous theoretical arguments have appeared, of which 
Bohr's and Heisenberg's are only the best known. (The funny thing is 
that Heisenberg was trying to devise a counter-argument, which turned 
out to be an argument for it, after all.)

> From the perspective of
> physics the consequences are indistinguishable; it does not make a
> difference one way or the other.

Oh but it does. The whole point of Bohr's argument is that quantum 
randomness is of a different kind than classical ("deterministic but not 
measurably so in this universe") randomness. Quantum randomness is not 
an effect of sensitive dependence on initial conditions, but a physical "brute" 
fact. Therefore whatever your interpretation of quantum mechanics (the 
mathematics of it), you are bound to have something non-classical in 
nature (complementarity, incompleteness, uncertainty, many-worlds, or 
what have you). One place where the difference appears is, for instance, 
the question of whether quantum particles have a unique path from source 
to detector in the double-slit experiment. In the classical "deterministic 
but not measurably so in this universe" view the path does exist, but in 
the non-classical view such a path does not exist.
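
To spell the difference out in the standard textbook notation (just a 
sketch of the usual two-slit argument, with psi_1 and psi_2 standing for 
the amplitudes for passage through slit 1 and slit 2): if the particle 
really took one definite path or the other, the probabilities of the two 
alternatives would simply add,

   P_classical(x) = P_1(x) + P_2(x),

whereas what QM predicts (and what is observed) is the squared sum of 
the amplitudes,

   P_quantum(x) = |psi_1(x) + psi_2(x)|^2
                = P_1(x) + P_2(x) + 2 Re[psi_1*(x) psi_2(x)],

and the interference term at the end is exactly what cannot be recovered 
from mere ignorance about which path was taken.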

Occam's razor cuts both ways. QM is probably quantitatively the most 
precise theory we have, and also one of the best corroborated. So why 
not say that the apparent deterministic nature of classical theories is, 
in fact, a result of the idealizations and approximations in those 
theories, and that the simple explanation is that nature is random?

> 
> I think "it's random" is the preferred popular view in physics because
> it does not require further explanation and generates the same result
> for all practical purposes as a more nuanced non-local determinism
> model. Not terribly important either way at the moment.


The Copenhagen interpretation & Bohr's complementarity are such further 
explanations. Again, you are probably right that most working physicists 
don't care one way or another, but that does not mean that randomness 
has not been fully embedded in the theory.

What non-local deterministic model do you have in mind? Bohm's?

> 
> 
>> It is also possible
>> that there are quantum phenomena that are relevant for the functioning of
>> the human nervous system (e.g., the retina reacts to a single photon;
>> exocytosis relies on quantum tunneling, etc.). If these quantum phenomena
>> have some cognitive/experiential relevance (and there is very little reason
>> to say that they don't, if we already accept that the nervous system and
>> cognition are somehow coupled), then there is a very natural way that
>> genuine randomness may be a part of human cognition/experience, and also
>> behaviour.
> 
> 
> The determinism is externally measured, so even if there were quantum
> effects in cognition it would not materially change predictability. 

The determinism is externally *produced*, by introducing idealizations & 
approximations.

> To the extent there *is* unpredictability, it can be explained by a
> classical non-quantum machine, so quantum explanations don't buy much.

No, it cannot, if the unpredictability is of the non-classical kind. 
This is pretty much the reason why QM exists in the first place.

> 
> Quantum computers are equivalent to conventional silicon, they just do
> certain operations much faster. To an outside observer measuring
> determinism, they will look the same.

Again, the measurement is performed in a way that makes sure that 
quantum effects are idealized away & canceled out (otherwise we would 
not get them to run the algorithms we want). Small wonder, then, that 
they look the same.

> 
> At the end of the day the human brain may be a quantum device, but for
> the purposes of behavioral predictability it is indistinguishable from
> a classical computing device. It is almost an orthogonal argument.

No, it is not. Like you pointed out at the beginning, one crucial 
question is which way Occam's razor is supposed to cut. If one wanted 
to argue that the razor should cut away classical determinism, one 
supporting observation could be that the behavioral predictability you 
speak of does not exist, other than as an idealization. So the claim of 
full deterministic predictability is exactly the kind of unnecessary 
complication that Occam is talking about. We can do science, simpler 
science, without that claim.


