From stapp@thsrv.lbl.gov Thu Oct 23 10:54:08 2003
Date: Thu, 23 Oct 2003 09:51:03 -0700 (PDT)
From: stapp@thsrv.lbl.gov
Reply-To: hpstapp@lbl.gov
To: "Balaguer, Mark"
Subject: RE: your work
On Wed, 22 Oct 2003, Balaguer, Mark wrote:
> Dear Henry,
>
> Thanks for the paper, and yes, I would like a copy of the other one, if it's
> not too much trouble. My address is:
>
> Mark Balaguer
> Department of Philosophy
> California State University, Los Angeles
> 5151 State University Dr.
> Los Angeles, CA 90032
>
I've just sent it off.
> What I'm interested in is your response to Tegmark. I haven't yet looked at
> the paper you sent me in your email, but one response that I thought of is
> this: Tegmark's argument, if cogent, suggests that there can't be neural
> indeterminacies based on macro-level superpositions that collapse due to
> neural processes. But your view doesn't involve macro-level superpositions;
> it involves micro-level superpositions (of presynaptic calcium ions). So
> even if Tegmark's argument is sound, it's irrelevant to your view. Does
> this sound right to you?
>
Yes, the effect certainly does not involve macro superpositions.
But is it correct to say that it involves microsuperpositions?
Even if decoherence effects did extend down to the individual ion level, that
would mean that the density matrix [S(t)] would be reduced to close to
diagonal, but not exactly diagonal, in the coordinate-space basis.
The Schr. Eqn. of motion would, however, move these diagonal
elements around roughly in accord with the classical equations of motion:
i.e., any "bump" in the probability distribution on the diagonal would
move around in accordance with the classical equations of motion, at least
to first order. Thus classical physics would emerge, in the absence of
"observations." But the von Neumann eqns show that even in this
"classical" case the QZE would inhibit motions in the degree of freedom
corresponding to P: the system would be constrained to stay in the
subspace defined by P=1, or in any case to be inhibited from moving
out of that subspace. So the mechanism that I exploit does not really
involve even micro-superpositions, even though it is definitely a quantum
effect.
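[Editor's sketch: the inhibition described above can be illustrated in a two-state toy model. The Hamiltonian, projector P, and time scales below are illustrative assumptions, not the actual neural dynamics under discussion.]

```python
import numpy as np

# Toy quantum Zeno effect (QZE): a Hamiltonian that would rotate the
# state out of the subspace P = 1, inhibited by frequent projective
# "observations" of the von Neumann Process-1 form
#   S -> P S P + (1-P) S (1-P).
H = np.array([[0.0, 1.0], [1.0, 0.0]])   # drives transitions out of P = 1
P = np.array([[1.0, 0.0], [0.0, 0.0]])   # projector onto the P = 1 subspace

def evolve(rho, t, steps):
    """Unitary evolution for total time t, interrupted by `steps`
    equally spaced Process-1 reductions with respect to P."""
    w, v = np.linalg.eigh(H)
    U = v @ np.diag(np.exp(-1j * w * (t / steps))) @ v.conj().T
    Q = np.eye(2) - P
    for _ in range(steps):
        rho = U @ rho @ U.conj().T        # Schroedinger evolution
        rho = P @ rho @ P + Q @ rho @ Q   # Process-1 reduction
    return rho

rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # start inside P = 1
free = evolve(rho0, t=np.pi / 2, steps=1)    # one observation at the end
zeno = evolve(rho0, t=np.pi / 2, steps=100)  # frequent observations

print(np.trace(P @ free).real)   # ~0: state has rotated out of P = 1
print(np.trace(P @ zeno).real)   # close to 1: QZE holds it in P = 1
```

With rare observation the state leaves the subspace entirely; with rapid repetition it is held near P = 1, which is the "blocking" of the otherwise-classical motion referred to above.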
Superposition has to do with interference effects; with delicate
effects of cancellation or reinforcement associated with phases. The QZE
associated with observation can, within a many-worlds framework, be seen
as an effect of the elimination of transition matrix elements between two
parts of a system described by S(t) due to the differing effects of
these parts on an "observing" system. One might therefore construe this
elimination of the transition matrix elements as a "superposition"
effect, in a many-worlds approach. But the von Neumann equations present
this effect on S(t) as the postulated effect of an actual collapse.
In any case, the decoherence effects considered by Tegmark are not
supposed to disrupt the classical equations of motion. Quite the contrary,
those decoherence effects are supposed to bring out the classical
behavior. But the QZE "blocks" or "inhibits" the classical motion by
setting to zero some transition matrix elements that Tegmark leaves
intact in order to produce the classical results.
Thus QZE operates at a finer level than Tegmark decoherence.
It operates even if the latter is carried to the extreme where free
particle (ionic) motions are reduced to classical motions. Even in
that extreme the QZE blocks the classical motion that Tegmark
decoherence preserves.
Of course, the "observation" is a very "macro" effect: it acts on
a combination of microvariables, so that its effect on any individual
microvariable is almost nil. We are talking about the effect of such a
macro observation on what can be a "mixture of essentially classically
describable particle (ionic) trajectories."
> Best,
> Mark Balaguer
> P.S. I have a paper on libertarian free will that's coming out in NOUS and
> that, I think, you would find interesting. In it, I turn the traditional
> wisdom regarding randomness and non-randomness on its head. As you know,
> enemies of libertarian free will have long argued that libertarianism is
> incoherent because indeterminacy entails randomness. A few libertarians
> (e.g., Kane) have tried to undermine this by arguing that indeterminacy is
> compatible with non-randomness, so that if our decisions are undetermined,
> they still COULD BE non-random. I argue for a much stronger claim, namely,
> that the relevant kind of indeterminacy actually ENTAILS the relevant kind
> of non-randomness, so that if our decisions are undetermined, then they ARE
> non-random. Thus, if my argument succeeds, then the question of whether
> libertarianism is true reduces to the question of whether our decisions (or
> actually, decisions of a certain kind, which I define in the paper) are
> undetermined. And this is a straightforward empirical question. If this
> sounds interesting to you, I could send you a copy.
>
Yes, please send your paper.
Henry
> -----Original Message-----
> From: stapp@thsrv.lbl.gov
> To: Balaguer, Mark
> Sent: 10/22/2003 7:32 PM
> Subject: Re: your work
>
> On Mon, 20 Oct 2003, Balaguer, Mark wrote:
>
> > Dear Henry (if I may),
> >
> > I'm trying to get a hold of a paper of yours entitled "Decoherence and
> > Quantum Theory of Mind: Closing the Gap between Being and Knowing". I
> saw
> > it referenced and the reference sent me to your website, but I
> couldn't find
> > the paper there. Is there some easy way to access it?
> >
> > Thanks.
> > Best,
> > Mark Balaguer
> >
>
> I can send a hard copy, if you want it.
> The attached paper is similar.
>
> Henry
>
From stapp@thsrv.lbl.gov Fri Oct 24 10:38:36 2003
Date: Fri, 24 Oct 2003 10:34:50 -0700 (PDT)
From: stapp@thsrv.lbl.gov
Reply-To: hpstapp@lbl.gov
To: "Balaguer, Mark"
Subject: RE: your work
On Thu, 23 Oct 2003, Balaguer, Mark wrote:
> Hi, Henry. After reading your message earlier, I went back to your paper
> "Pragmatic Approach to Consciousness," where you say that your view involves
> a collapse of the wave function of presynaptic calcium ions. Now, you're
> saying that this doesn't involve a superposition. This confuses me
> (remember, I'm not a physicist) because I thought "collapse" just MEANT
> collapsing out of a superposition state into a non-superposition state,
> i.e., a state involving a unique value for the given observable. Am I
> missing something?
> --Mark
The important distinction here is between the ideas of "superposition"
and "mixture." In quantum theory there are two quite different ways of
combining two (or more) possibilities, "superposition" and "mixture."
In a *superposition* the difference in the "phase" of the two parts
is controlled by the experimental conditions. Then the two parts can
"interfere": the two contributions can combine so as to (partially or
totally) cancel if one observable is measured, but to yield "more than the
sum of the individual contributions taken alone" if some other observable
is measured, but such that the sum of the probabilities of any complete
set of observables adds to unity. In a *mixture* the information about the
phase difference (i.e., where the crests and troughs of the two parts are
situated relative to each other) has become lost. Then there is no
"interference": the two contributions add together "independently", namely
independently of the existence of the other part. This phase information
is lost if, for example, either part of the original system interacts
strongly with some other *probing* system, which can be a "measuring
device" which responds in two distinct ways depending on which part of
"the system being probed" it "sees/finds".
In this second case, there will be, by virtue of the automatic working of
the quantum laws, no interference: the two possible outcomes of
the (originally considered) measurement/observation on the (original)
system will be exactly (if the first measurement/observation is "good")
correlated with the distinctive state of the probing measuring device,
and even if that distinctive state of that probing/observing/measuring
device is not known there will be no interference between the two parts of
the probed (original) system: the probabilities for all observables
"pertaining solely to that originally considered system alone"
will be simply additive.
So, superpositions and mixtures are very different. But in principle
one must keep the (two) parts of a mixture, because if measurements
can be done later on the "full system" consisting of the original system
plus the probing device then the parts corresponding to the two parts of
the mixture can interfere.
All of these features follow automatically from the mathematical formulas,
and have been experimentally verified.
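[Editor's sketch: a minimal numerical illustration of the superposition/mixture distinction described above. The two-state system and the choice of observable are toy assumptions; both states assign probability 1/2 to each "position" basis state, but only the superposition shows interference.]

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)        # equal superposition with fixed phase

# Density matrices for the two ways of combining the possibilities:
superposition = np.outer(plus, plus)     # |+><+| : off-diagonal phases present
mixture = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)  # phases lost

# Probability of finding the system in |+>, a phase-sensitive observable:
def prob(rho):
    return (plus @ rho @ plus).real

print(prob(superposition))   # 1.0 -- full constructive interference
print(prob(mixture))         # 0.5 -- the two contributions add independently
```

Measured in the original basis the two states are indistinguishable; the phase-sensitive observable exposes the difference, which is the operational content of "interference" above.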
In the case of a brain, the interactions with "the environment" (which
acts as a probe) can reduce the state of some pertinent subsystem of
the brain (say the subsystem that holds/instantiates a template for
action) to a mixture (so that parts corresponding to appreciably
different coordinate values do not appreciably interfere). But the
equation of motion for the state S(t) (the density matrix) will still
be able to preserve to good approximation the essentially classical
equation of motion for the diagonal elements &lt;x|S(t)|x&gt;.
The state (density matrix) S(t) contains the pertinent information
about the effects of the environment (the matrix elements &lt;x|S(t)|x'&gt;
get severely damped for x appreciably different from x'), but the
equation of motion for the diagonal elements involves only the first
and second derivatives with respect to x, and hence can be left
pretty much unaffected by the environmental decoherence. Indeed, this
is exactly what tends to make the classical approximation very good: the
environmental decoherence tends to destroy all interference effects
involving appreciably nonzero values of x-x', while preserving to first
order the classical equations for the diagonal elements &lt;x|S(t)|x&gt;.
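[Editor's sketch of this damping pattern: off-diagonal elements of a density matrix are suppressed by a factor falling off with |x - x'|, while the diagonal is untouched. The Gaussian damping form and the decoherence length are illustrative assumptions.]

```python
import numpy as np

# A wavepacket on a coarse position grid, as a pure-state density matrix.
x = np.linspace(-1, 1, 50)
psi = np.exp(-x**2) + np.exp(-(x - 0.5)**2)
psi /= np.linalg.norm(psi)
S = np.outer(psi, psi)                    # <x|S|x'> = psi(x) psi(x')

# Environmental decoherence modeled as Gaussian damping of coherences:
ell = 0.1                                  # assumed decoherence length
damping = np.exp(-np.subtract.outer(x, x)**2 / (2 * ell**2))
S_decohered = S * damping                  # kills <x|S|x'> for |x-x'| >> ell

# The diagonal -- the quasi-classical probability distribution -- is
# exactly preserved, while distant coherences are essentially erased:
print(np.allclose(np.diag(S), np.diag(S_decohered)))     # True
print(abs(S_decohered[0, -1]) < 1e-10 * abs(S[0, -1]))   # True
```

This is the sense in which decoherence reduces the state toward a mixture in the position basis without selecting any single value of x.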
Thus the system is effectively reduced to a "mixture" by the interaction
with the environment, but this does not mean that &lt;x|S(t)|x&gt; is set
to zero for all values of x except for one. In the broader context
of the state of the whole universe (as opposed to the state S(t)
of some special part of the brain, defined by taking a trace over the
remaining variables) all of the values of x are still "present":
nothing has picked out some particular value of x as the really existing
one. In the von Neumann framework no reduction takes place except via
Process 1. And this Process 1 will not pick out some particular
value of x. It will take S(t) to P(E)S(t)P(E) + (1-P(E))S(t)(1-P(E)),
where P(E) projects onto the full part of the Hilbert space that
corresponds to experience E.
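[Editor's sketch of the Process-1 formula just stated, S(t) -> P(E)S(t)P(E) + (1-P(E))S(t)(1-P(E)), in a 4-dimensional toy Hilbert space. The split of the space into an "experience E" block and its complement is an assumed illustration.]

```python
import numpy as np

def process_1(S, P):
    """von Neumann Process 1: S -> P S P + (1-P) S (1-P)."""
    Q = np.eye(len(S)) - P
    return P @ S @ P + Q @ S @ Q

# A random 4x4 density matrix (positive, trace 1).
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
S = A @ A.conj().T
S /= np.trace(S).real

P_E = np.diag([1.0, 1.0, 0.0, 0.0])   # projector onto the "experience E" block
S_reduced = process_1(S, P_E)

print(np.isclose(np.trace(S_reduced).real, 1.0))   # trace preserved: True
print(np.allclose(S_reduced[:2, 2:], 0))           # E / not-E cross terms gone
print(np.allclose(S_reduced[:2, :2], S[:2, :2]))   # E block itself untouched
```

The reduction removes the matrix elements connecting the E and not-E subspaces but leaves each block intact, so in particular no single value of x within the E subspace is picked out.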
I hope this clarifies things. If my words above have failed to
communicate, then any quantum physicist (who has thought
much about quantum measurement) should be able to explain
to you what I am saying.
Best wishes,
Henry