*[Other parts in this series: 1, 2, 3, 4, 5, 6, 7, 8.]*

*I am firmly of the view…that all the sciences are compatible and that detailed links can be, and are being, forged between them. But of course the links are subtle… a mathematical aspect of theory reduction that I regard as central, but which cannot be captured by the purely verbal arguments commonly employed in philosophical discussions of reduction. My contention here will be that many difficulties associated with reduction arise because they involve singular limits…. What nonclassical phenomena emerge as h → 0? This sounds like nonsense, and indeed if the limit were not singular the answer would be: no such phenomena.* — Michael Berry

One of the great crimes against humanity occurs each year in introductory quantum mechanics courses when students are introduced to the $\hbar \to 0$ limit, sometimes decorated with words involving “the correspondence principle”. The problem isn’t with the content per se, but with the suggestion that this somehow gives a satisfying answer to why quantum mechanics looks like classical mechanics on large scales.

Sometimes this limit takes the form of a path integral, where the transition probability for a particle to move from position $x_i$ to $x_f$ in a time $T$ is

$$P(x_i \to x_f) = \left| \int \mathcal{D}[x(t)]\, e^{i S[x(t)]/\hbar} \right|^2 \tag{1}$$

where $\int \mathcal{D}[x(t)]$ is the integral over all paths from $x_i$ to $x_f$, and $S[x(t)] = \int_0^T L(x,\dot{x})\,\mathrm{d}t$ is the action for that path ($L$ being the Lagrangian corresponding to the Hamiltonian $H$). As $\hbar \to 0$, the exponent containing the action spins wildly and averages to zero for all paths not in the immediate vicinity of the classical paths that make the action stationary.
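This stationary-phase suppression is easy to see numerically. Here is a toy sketch of my own (not from the post): a one-dimensional oscillatory integral $\int g(x)\, e^{iS(x)/\hbar}\,\mathrm{d}x$ with $S(x) = x^2$, whose phase is stationary only at $x = 0$. A smooth weight $g$ placed at the stationary point contributes at order $\sqrt{\hbar}$, while the same weight placed away from it is suppressed essentially completely.

```python
import numpy as np

def oscillatory_integral(hbar, center):
    """Toy 'path sum': integrate g(x) * exp(i*S(x)/hbar) with S(x) = x**2,
    where g is a Gaussian weight centered at `center`.  The phase S is
    stationary only at x = 0."""
    x = np.linspace(-10, 10, 200001)
    g = np.exp(-(x - center) ** 2)
    return (g * np.exp(1j * x ** 2 / hbar)).sum() * (x[1] - x[0])

for hbar in [1.0, 0.1, 0.01]:
    near = abs(oscillatory_integral(hbar, center=0.0))  # weight at the stationary point
    far = abs(oscillatory_integral(hbar, center=4.0))   # weight away from it
    print(f"hbar={hbar:5.2f}  stationary={near:.3e}  non-stationary={far:.3e}")
```

The “stationary” column decays only like $\sqrt{\hbar}$, while the “non-stationary” column is exponentially suppressed: only the neighborhood of the classical path contributes.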

Other times this takes the form of Ehrenfest’s theorem, which shows that the expectation values of functions of position and momentum follow the classical equations of motion. For any operator $A$ without explicit time dependence, we take the expectation value of the Heisenberg equation of motion to get $\frac{\mathrm{d}}{\mathrm{d}t}\langle A \rangle = \frac{i}{\hbar}\langle [H,A] \rangle$. So we can derive^{a } the Hamiltonian equations of motion for the expectation values:

$$\frac{\mathrm{d}\langle X \rangle}{\mathrm{d}t} = \left\langle \frac{\partial H}{\partial P} \right\rangle, \qquad \frac{\mathrm{d}\langle P \rangle}{\mathrm{d}t} = -\left\langle \frac{\partial H}{\partial X} \right\rangle \tag{2}$$

This is often augmented with some hand-wavy discussion of sharply peaked wavepackets and higher derivatives of potentials, by which one argues that the wavefunctions of systems that are large compared to $\hbar$ should have simultaneously well-defined values for $X$ and $P$. (Or rather, that the $\hbar$-sized error from the uncertainty principle is small compared to the system.)
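Ehrenfest’s relations can be checked directly with truncated matrices; this is a sketch of my own (not from the post), using the harmonic oscillator in units $\hbar = m = \omega = 1$:

```python
import numpy as np

N = 20                                   # truncation dimension of the oscillator
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)         # annihilation operator: a|n> = sqrt(n)|n-1>
X = (a + a.T) / np.sqrt(2)               # position (units with hbar = m = omega = 1)
P = 1j * (a.T - a) / np.sqrt(2)          # momentum
H = (P @ P + X @ X) / 2                  # harmonic-oscillator Hamiltonian

# Heisenberg equations of motion, i[H, X]/hbar and i[H, P]/hbar:
dXdt = 1j * (H @ X - X @ H)              # should equal  dH/dP = P
dPdt = 1j * (H @ P - P @ H)              # should equal -dH/dX = -X

k = N - 2                                # drop entries polluted by the truncation
print(np.allclose(dXdt[:k, :k], P[:k, :k]))    # True
print(np.allclose(dPdt[:k, :k], -X[:k, :k]))   # True
```

Taking expectation values of both sides in any state supported away from the truncation edge gives exactly equation (2); the hand-waving only enters when one tries to replace $\langle \partial H/\partial P\rangle$ by $\partial H/\partial P$ evaluated at $\langle X\rangle, \langle P\rangle$ for an anharmonic $H$.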

This sort of reasoning is an atrocity committed against the mind, not because it’s not useful, but because people are often given the impression that this is a reasonably complete explanation for how classical mechanics arises out of quantum mechanics. Inevitably, the careful student experiences a healthy sense of confusion and dissatisfaction when learning this material, but they are encouraged to suppress these intuitions. Hey, von Neumann made these kinds of arguments, and he’s a hell of a lot smarter than you and me, right? Once they get in the habit of suppressing their BS-meter, things go bad quickly.

Unfortunately, even von Neumann is fallible. The proper way to teach the above material is to present it for what it is: it clearly points toward an important insight, but ultimately cannot explain the quantum-classical transition on its own.

Now, I can’t give you a complete understanding of the quantum-classical transition. If I could, I’d just write that down, post it to the arXiv, and go get a well-paying job in finance or something. But what I can do is point out what the heck is wrong with the above. And we *know* there has to be something wrong with the above; it’s just a simple limit, and yet all the crazy confusion with measurements and probabilities and Wigner’s friend is suddenly supposed to evaporate?

The most glaring problem is that the state spaces of classical and quantum mechanics are completely different, so you can’t have a simple limiting procedure unless you describe how you’re going to map one onto the other. Let’s compare with a much nicer case: the $c \to \infty$ limit in which special relativity reduces to Galilean kinematics. In this case we need to take a configuration space (points in $\mathbb{R}^3$), a set of trajectories (time-like worldlines in $\mathbb{R}^4$), and a set of group transformations (the Lorentz transformations), and map them to their limits as $c \to \infty$. This yields points in $\mathbb{R}^3$, time-monotonic worldlines in $\mathbb{R}^4$, and the Galilean transformations, respectively. The mapping is intuitive and unambiguous. If we want, we can add dynamics and get the limit of relativistic mechanics to Newtonian mechanics.

However, the quantum-to-classical case is vastly more complicated because the quantum state space (Hilbert space) is way bigger than classical configuration space (or phase space). Exponentially so! It might be hoped that the *accessible* state space of quantum mechanics becomes the same size as that of classical mechanics in the $\hbar \to 0$ limit. Shouldn’t sufficiently narrow wavepackets stay narrow?

Nope, that’s not good enough; see figure. Wavepackets get grossly distorted — producing macroscopic coherence — on a time scale given by the Lyapunov exponent of chaotic systems. (This applies to almost everything besides a harmonic oscillator.) Even an extremely macroscopic variable like the orientation of Hyperion, a moon of Saturn, fails to maintain a narrow wavepacket on human timescales. Of course, we only see Hyperion oriented in one, unpredictable way, so the quantum stochasticity^{b } has not been beaten back by taking $\hbar \to 0$. Any explanation of the quantum-classical limit worth its salt is going to need to handle this one-to-many nature of quantum time evolution.

We might not be able to fully understand the quantum-classical limit in this post, but we can at least say a few things about what sorts of state spaces we ought to be considering. The exponential size of the quantum state space (Hilbert space), and the fact that it’s not much smaller than the *probabilistic* extension to the space of density matrices, is a clue that maybe we shouldn’t be looking to limit toward classical configuration space or phase space. Rather, let’s look at classical *probability distributions* over phase space. This is consistent with the above observation that quantum mechanics remains stochastic even for vanishing $\hbar$, since we expect our classical probability distribution to gain entropy as time goes on (which couldn’t be captured by a limit involving just classical points in phase space).

In other words, the simple derivation of the $\hbar \to 0$ limit given in introductory quantum classes breaks down because of what is essentially an *anomaly*^{c }. The classical dynamics have a “symmetry” — determinism — that is not enjoyed by the quantum dynamics, even in the $\hbar \to 0$ limit.

So instead of trying to derive Hamilton’s equations,

$$\dot{x} = \frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{\partial H}{\partial x} \tag{3}$$

let’s look to Liouville’s equation,

$$\frac{\partial \rho}{\partial t} = \{H, \rho\}_{\mathrm{PB}} \tag{4}$$

where the Poisson bracket defines a binary operation on the space of functions over phase space:

$$\{f, g\}_{\mathrm{PB}} = \frac{\partial f}{\partial x}\frac{\partial g}{\partial p} - \frac{\partial f}{\partial p}\frac{\partial g}{\partial x} \tag{5}$$
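As a sanity check of equations (4) and (5), one can verify symbolically that transporting an initial distribution along the classical flow solves Liouville’s equation. A small sketch of my own, for the harmonic oscillator with $m = \omega = 1$:

```python
import sympy as sp

x, p, t = sp.symbols('x p t')
H = p**2 / 2 + x**2 / 2                          # harmonic oscillator, m = omega = 1

# Transport an initial distribution rho0 backwards along the classical flow
u = x * sp.cos(t) - p * sp.sin(t)
v = p * sp.cos(t) + x * sp.sin(t)
rho0 = lambda q, k: sp.exp(-(q - 1)**2 - k**2)   # any smooth initial distribution
rho = rho0(u, v)

poisson = lambda f, g: sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)
residual = sp.simplify(sp.diff(rho, t) - poisson(H, rho))
print(residual)   # 0, so rho(x, p, t) solves Liouville's equation
```

The same transported distribution collapses to Hamilton’s equations (3) when `rho0` is taken arbitrarily sharply peaked.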

(Liouville’s equation of course just reduces to Hamilton’s equations when the position and momentum of the particle are known with certainty, $\rho(x,p) = \delta(x - x_0)\,\delta(p - p_0)$.) Luckily, quantum mechanics comes equipped with a phase space representation, where the density matrix is replaced by a Wigner function $W(x,p)$ (previous posts: 1, 2) and the von Neumann equation is replaced with the Moyal equation

$$\frac{\partial W}{\partial t} = \{H, W\}_{\mathrm{MB}} \tag{6}$$

where the Moyal bracket is a different binary operation on the space of functions over phase space. I’ll just quote the definition from the Wikipedia page^{d },

$$\{f, g\}_{\mathrm{MB}} = \frac{2}{\hbar}\, f \sin\!\left(\frac{\hbar}{2}\left(\overleftarrow{\partial}_x \overrightarrow{\partial}_p - \overleftarrow{\partial}_p \overrightarrow{\partial}_x\right)\right) g \tag{7}$$

since I actually have embarrassingly little intuition for the Moyal bracket other than the fact that it clearly reduces to the Poisson bracket as $\hbar \to 0$. It’s the Moyal bracket (not the Poisson bracket, as erroneously envisioned by Dirac) that takes the role of the quantum commutator within the phase-space formulation; the Poisson bracket is only recovered in the limit $\hbar \to 0$.
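To get at least a little intuition, the sine-of-derivatives in (7) can be expanded term by term; for polynomials the series terminates and can be checked with a computer algebra system. A sketch of my own, implementing the power-series definition directly:

```python
import sympy as sp
from math import comb, factorial

x, p, hbar = sp.symbols('x p hbar')

def bidirectional(f, g, n):
    """f (d_x<- d_p->  -  d_p<- d_x->)^n g, expanded by the binomial theorem."""
    return sum(comb(n, k) * (-1) ** (n - k)
               * sp.diff(f, x, k, p, n - k)
               * sp.diff(g, p, k, x, n - k)
               for k in range(n + 1))

def moyal_bracket(f, g, max_order=5):
    """{f, g}_MB = (2/hbar) f sin((hbar/2) * bidirectional operator) g,
    truncated at max_order (exact for low-degree polynomials)."""
    total = 0
    for j in range(max_order):
        n = 2 * j + 1
        total += sp.Rational((-1) ** j, factorial(n)) * (hbar / 2) ** n * bidirectional(f, g, n)
    return sp.expand(2 / hbar * total)

poisson = lambda f, g: sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

print(moyal_bracket(x, p))        # 1, same as the Poisson bracket
print(moyal_bracket(x**3, p**3))  # 9 x^2 p^2 - (3/2) hbar^2: Poisson term plus O(hbar^2)
```

The leading term is always the Poisson bracket, and the first correction enters at order $\hbar^2$, so the two brackets agree as $\hbar \to 0$ (and agree exactly for quadratic observables).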

OK, now we are actually getting somewhere. Rather than have two theories that look completely foreign to each other — wavefunctions versus points, operators versus functions — quantum and classical mechanics are starting to look pretty similar, at least in terms of the basic mathematical objects. (Interpretation, measurement, and all that crud still must be dealt with, but we will not do so today.) The state space of both consists of functions over phase space, the dynamics of both are expressed in terms of a binary bracket operation with a Hamiltonian, and the Hamiltonians are now (essentially^{e }) the same in each theory!

But we aren’t done achieving even our modest goals, and the reason is this: although the state spaces of quantum and classical mechanics are both normalized functions over phase space, these state spaces are *not* exactly the same. In the classical case, $\rho(x,p)$ is always positive, but in the quantum case the Wigner function generally is not. Furthermore, this is not a restriction that goes away in the $\hbar \to 0$ limit. In this limit, the Wigner function for a coherent state (i.e., a Gaussian wavepacket) approaches a delta function (which can be interpreted as a classical probability distribution, as we expect), but the Wigner function for a *superposition* of two such states does not approach the sum of two delta functions. Rather, there are fine oscillations — sub-$\hbar$ structure — that become more extreme as $\hbar$ approaches zero (see figure). Some quantum states have sensible classical analogs, but some, like this grossly non-classical superposition, do not.
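The sub-$\hbar$ structure is easy to exhibit numerically. Below is a sketch of my own (with $\hbar = 1$ and unnormalized unit-width wavepackets) that evaluates the Wigner transform $W(x,p) = \frac{1}{\pi\hbar}\int \psi^*(x+y)\,\psi(x-y)\, e^{2ipy/\hbar}\,\mathrm{d}y$ along the line $x = 0$, midway between the two branches of a superposition:

```python
import numpy as np

hbar = 1.0
y = np.linspace(-12, 12, 4001)
dy = y[1] - y[0]

def wigner_at(psi, x, p):
    """W(x,p) = (1/pi hbar) * integral of psi*(x+y) psi(x-y) exp(2ipy/hbar) dy."""
    integrand = np.conj(psi(x + y)) * psi(x - y) * np.exp(2j * p * y / hbar)
    return (integrand.sum() * dy / (np.pi * hbar)).real

a = 3.0
packet = lambda x: np.exp(-x**2 / 2)                                # single wavepacket
cat = lambda x: np.exp(-(x - a)**2 / 2) + np.exp(-(x + a)**2 / 2)   # superposition

ps = np.linspace(-4, 4, 401)
w_packet = [wigner_at(packet, 0.0, p) for p in ps]
w_cat = [wigner_at(cat, 0.0, p) for p in ps]
print(min(w_packet))   # ~0: a Gaussian wavepacket has a positive Wigner function
print(min(w_cat))      # clearly negative: interference fringes between the branches
```

The single wavepacket gives a manifestly positive Gaussian, while the superposition shows fringes oscillating at frequency $2a/\hbar$ in $p$ — fringes that get *finer*, not weaker, as $\hbar \to 0$.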

However, this is exactly what we should expect. We already know that it should be possible, at least in principle, to create macroscopically quantum states, and we expect any derivation of a classical limit to break down here. So what is the missing ingredient that usually leads quantum mechanics to look classical for systems large compared to $\hbar$, but that is not a strict mathematical inevitability? *Decoherence*. Indeed, isolated systems will generally produce grossly non-classical states, as evidenced by negativity in their Wigner function, even when they are initialized with nice coherent wavepacket states. But this is almost always destroyed by including even minuscule interactions with an environment. In fact, for the simplest cases of ideal quantum Brownian motion one can show that *all* initial states become *exactly* positive in finite time!
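One can caricature this effect with a dephasing knob (a sketch of my own, not a real master-equation calculation): take the unnormalized density matrix $\rho = |L\rangle\langle L| + |R\rangle\langle R| + c\,(|L\rangle\langle R| + |R\rangle\langle L|)$ and let the environment shrink the coherence $c$ from 1 toward 0. The negativity of $W(x,p) = \frac{1}{\pi\hbar}\int \langle x+y|\rho|x-y\rangle\, e^{2ipy/\hbar}\,\mathrm{d}y$ disappears along with it.

```python
import numpy as np

hbar, a = 1.0, 3.0
y = np.linspace(-12, 12, 2001)
dy = y[1] - y[0]
psiL = lambda x: np.exp(-(x - a) ** 2 / 2)   # unit-width packets at x = +/- a
psiR = lambda x: np.exp(-(x + a) ** 2 / 2)

def wigner_min(c):
    """Minimum of W(x,p) for the (unnormalized) density matrix
    rho = |L><L| + |R><R| + c(|L><R| + |R><L|);  c = 1 is a coherent
    cat state, c = 0 a decohered classical mixture."""
    worst = np.inf
    for xv in np.linspace(-6, 6, 41):
        rho_slice = (psiL(xv + y) * psiL(xv - y) + psiR(xv + y) * psiR(xv - y)
                     + c * (psiL(xv + y) * psiR(xv - y) + psiR(xv + y) * psiL(xv - y)))
        for pv in np.linspace(-3, 3, 61):
            w = (rho_slice * np.exp(2j * pv * y / hbar)).sum().real * dy / (np.pi * hbar)
            worst = min(worst, w)
    return worst

for c in [1.0, 0.5, 0.0]:
    print(f"coherence c={c:.1f}:  min W = {wigner_min(c):+.4f}")
```

For $c = 0$ the Wigner function is just a sum of two positive Gaussians; in a real decoherence model (e.g., quantum Brownian motion) the environment drives $c$ toward zero dynamically.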

This suggests that we were right to dispense with the restriction to pure states when seeking a quantum-classical limit (motivated above by the observation that quantum mechanics is “intrinsically” stochastic, whatever that means). The decoherence that looks necessary to recover classical mechanics in the $\hbar \to 0$ limit of quantum mechanics can only be obtained if we allow for open systems, which necessarily means dealing with density matrices rather than just pure states.

OK, but isn’t it disappointing that understanding this limit, as opposed to the $c \to \infty$ limit of special relativity, appears to require a messy discussion of decoherence in particular systems with particular interactions? And we’re still confused about all that business with measurement and probabilities that mysteriously disappear in classical mechanics, right? And couldn’t we keep considering larger and larger systems until we got to the whole universe, prohibiting any reliance on open-system dynamics? Yes, yes, and most definitely yes.

**Edit** 2016-2-7: Philosopher of science Joshua Rosaler has a recent article making similar arguments in *Topoi*: ‘Formal’ Versus ‘Empirical’ Approaches to Quantum–Classical Reduction (PDF). A video of his talk on this at PI is available here. See also his reference to the work of Batterman, with more technical discussion of the oscillatory singular nature of the limit.

**Edited** 2016-12-12: (added Berry quote).

### Footnotes

(↵ returns to text)

- Here, assume for simplicity that we have a Hamiltonian operator that can be expressed as a power series in the operators $X$ and $P$, i.e., $H = \sum_{n,m} c_{nm} X^n P^m$. (This is possible for any Hamiltonian that corresponds to a classical analytic Hamiltonian function on phase space, although there may be many quantum Hamiltonian operators for each classical Hamiltonian function on account of operator ordering.) We can then calculate that $[X, H] = i\hbar\, \partial H/\partial P$, using $[X, P^m] = i\hbar\, m P^{m-1}$. Likewise, $[P, H] = -i\hbar\, \partial H/\partial X$. In particular, keep in mind that $\partial H/\partial P$ is just some power series in the operators $X$ and $P$.↵
- There is obviously a bunch of subtlety bundled up in the distinction between stochastic and deterministic for quantum mechanics. We’re just operating intuitively here and won’t solve these issues.↵
- Wikipedia has a neat, strictly classical example: “Perhaps the first known anomaly was the dissipative anomaly in turbulence: time-reversibility remains broken (and energy dissipation rate finite) at the limit of vanishing viscosity.”↵
- The notation here is $f \overleftarrow{\partial}_x \overrightarrow{\partial}_p\, g = (\partial_x f)(\partial_p g)$, with the sine (or any other analytic function) of the partial derivative operator being understood in terms of a power series.↵
- There are some deep issues here with operator ordering and the many-to-one nature of the map from quantum *dynamics* to classical dynamics. For quadratic Hamiltonians, the quantum and classical theories differ only by a constant offset, but for higher-order Hamiltonians things can be more complicated. We are putting these aside and just trying to make quantum *states* one-to-one with classical states.↵

Great essay. And I’m with you until the very last two sentences, but how do we know that there’s a biggest system that’s closed and pure? That’s another one of those pat implicit assumptions (like hbar->0) that one finds in many of the textbook discussions of quantum mechanics (“just go to the whole universe!”), but being closed and pure is a measure-zero idealization, and would certainly fly in the face of the fact that whenever we enlarge the scope of what we call our system, we generically find a system with vastly more degrees of freedom and whose density matrix is more mixed and whose dynamics is more open.

It’s quite possible that this “Russian doll” sequence of increasingly large (and increasingly open/mixed) system-scopes suddenly terminates once we’ve reached a system that’s sufficiently big, but that’s a big “if”, and lies totally outside the realm of empirical analysis. (How could we ever know?) And this isn’t just totally idle speculation — in generic models of cosmic inflation, for example, it’s not clear that there is a “closed universe” anywhere, and, even if there were, it’s not clear that we could get away with understanding it without a theory of quantum gravity, which may be considerably different from quantum theory as we know it. (Cosmic inflation could be wrong of course, but it’s certainly not logically obvious that it’s wrong — so what if it’s correct?)

If it turns out that there are just open systems out there, some approximately closed but never perfectly closed, others being very much open, then what happens to some of the interpretative models of quantum theory like Everett/DeWitt’s or de Broglie/Bohm’s? Do they survive a world consisting of just density matrices and no pure states or wave functions?

Fair enough, but here’s why it might not really matter: When the system you are considering is a spatial volume, then as you zoom out, the amount of entanglement it is generating with the outside (in bits per second, say) becomes small compared to its total volume. Since the “branching rate” (also bits per second, and probably equal to the Kolmogorov-Sinai entropy, more or less) should scale with the volume, it seems the boundary should matter less and less as you zoom out. It never goes completely away, but I don’t need complete isolation in order to argue that you need to be able to understand branching without reference to an external system. I just need to point out that the interactions on the boundary are insufficient.

I think both Everett (augmented with consistent histories and a set selection principle) and de Broglie/Bohm survive. Consistent histories works perfectly well with just a density matrix. (And I don’t really understand Everett without a preferred set of consistent histories, or something similar.) I can’t envision a problem with the de Broglie/Bohm using density matrices, but I haven’t checked.

Great post! :-)

I’m just passing by this blog by accident, so I hope I don’t make any etiquette mistakes, but…

I have a small comment regarding the above question. Namely, any (tentative) theory of quantum gravity implies that your whole spacetime (with all the matter inside) is a quantum system. Such a thing doesn’t have an environment basically by definition, regardless of any eternal inflation stuff etc. Thus the concept of measurement in QG runs into the standard “measurement problem”, because there is no decoherence to speak of (as far as I understand). Ditto for the quantum-to-classical transition.

Second, I’d appreciate any pointers to some serious description of decoherence in QFT. I don’t understand it as well as in QM, since in QFT the quantum system (imagine an EM-field) is not localized in any region of space, there is no obvious way to talk about its environment, and one should deal with the concept of density matrices constructed from objects that live in Fock space (as opposed to a Hilbert space). Maybe all this can be defined, but I’ve never seen it, and I don’t really understand what it would mean to trace out over an environment with a variable number of degrees of freedom.

I have a feeling that QM vs. QFT distinctions are being too neglected in the discussions of foundational questions like measurement, Q2C transition, etc. So as a start, I’d like to understand decoherence formalism in the QFT setup. Any nice review papers out there?

Thanks! :-)

Marko

> Namely, any (tentative) theory of quantum gravity implies that your whole spacetime (with all the matter inside) is a quantum system. Such a thing doesn’t have an environment basically by definition, regardless of any eternal inflation stuff etc.

You may know more than me, but I’d be very surprised if quantum gravity made it impossible to define *some* notion of system-environment. (Maybe it makes it a little more difficult to get a “nice” one in some circumstances, but we already know there’s probably no obvious, nice, eternal environment split, so this doesn’t seem to make things much worse.) Can’t you just trace out certain modes?

Usually the issue with quantum gravity is that there’s no nice time coordinate.

> Second, I’d appreciate any pointers to some serious description of decoherence in QFT.

For just fields, the canonical cite is

J. R. Anglin and W. H. Zurek. “Decoherence of quantum fields: Pointer states and predictability” Phys. Rev. D 53, 7327 (1996).

http://journals.aps.org/prd/abstract/10.1103/PhysRevD.53.7327

For everything to do with decoherence, QFT, and gravity, I suggest Kiefer, especially

C. Kiefer. “Decoherence in quantum electrodynamics and quantum gravity”, Phys. Rev. D 46, 1658 (1992)

http://journals.aps.org/prd/abstract/10.1103/PhysRevD.46.1658

Also, I highly recommend Kiefer’s contribution to this book, a standard reference:

Erich Joos, H. Dieter Zeh, Claus Kiefer, Domenico J. W. Giulini, Joachim Kupsch, Ion-Olimpiu Stamatescu.

“Decoherence and the Appearance of a Classical World in Quantum Theory” (2003)

http://www.amazon.com/Decoherence-Appearance-Classical-Quantum-Theory/dp/3540003908

> one should deal with the concept of density matrices constructed from objects that live in Fock space (as opposed to a Hilbert space). Maybe all this can be defined, but I’ve never seen it, and I don’t really understand what would it mean to trace-out over an environment with a variable number of degrees of freedom.

Well, usually what happens for second-quantization is that you treat the modes as systems, and those have density matrices. That’s fine mathematically, of course, but it then makes it harder to match up to our intuitive notion of a system. (In a first quantization picture, the “system” of a baseball is just the atoms that compose the ball, or perhaps their center of mass coordinate. But in a second quantization picture, “the baseball” is just an excitation moving on top of the modes, so it’s not something you can really trace out.)

Personally, I don’t think that the system-environment distinction is going to be fundamental. My best guess right now is that we’re instead going to rely on a microscopic structure (locality, probably) plus a notion of records (arXiv:1312.0331).

“As you zoom out the amount of entanglement it is generating with the outside (in bits per second, say) becomes small compared to its total volume”

Does this run into problems with the holographic principle at some point? IIUC, in some limit, the number of bits of information in a region scales with the area of the boundary, not the volume.

It definitely seems possible, but personally I would be pretty surprised if something like that were required for understanding classicality. The first thing I would check is whether you actually run up against those limits if you only look at “pretty big” systems (e.g., can we account for branching in terms of entanglement being generated by light crossing an imaginary bubble around the solar system?). I don’t know the answer.

If you augment Everett with Consistent Histories and Set Selection Principle, aren’t you adding a lot of additional structure that is not found in QM?

Right. So my basic claim is that this structure is *already* necessary for using QM to predict experimental outcomes, only instead of being clear math written down on paper, it’s a vague body of intuitions that have to be painfully and haphazardly drilled into physicists during their training. Every time we try to understand quantum systems that are conceptually more complicated (e.g., open systems, or cosmological systems that contain ourselves), we find that our previous intuitions break down. Then we have to thrash around with thought experiments until we form new intuitions that reach reflective equilibrium. I think we should realize that we’re doing this, and then we should find a mathematically precise principle that actually captures what those intuitions are gesturing at. This would allow us to talk about more extreme situations, like the branch collisions I conjecture happen during heat death.

In other words: I think QM without a consistent-set selection principle *hides* the additional structure that would count against it according to Occam’s razor. (More precisely, it hides the descriptive bits that count against it according to Solomonoff induction.) By dragging that out of our intuitions and into math, we have a chance to distill it to a simple principle.

I have a very long essay defending the necessity of set selection along these lines which I am almost ready to post. It’s directed at what I consider to be the best alternative: that “Copenhagen done right” (i.e., consistent histories) doesn’t require any more input in the same way that statistical mechanics doesn’t require a preferred coarse graining, as championed by Bob Griffiths.

Pingback: Back from Break | Not Even Wrong

Could it be that your problem with the correspondence principle comes about because you are trying to treat the wavefunction as the physical reality? If you keep to the classic Copenhagen view — observables are reality, wavefunctions are a predictive device — then showing that expectation values follow the classical equations of motion sounds like “mission accomplished”.

You could certainly try to move in that direction, but it’s still a totally incomplete argument (and, in my opinion, ultimately doomed). Reasons: (1) Observables are not the same thing as expectation values, and in fact the expectation value may not even be in the spectrum of the observable. (2) The expectation value changes continuously with time, while we know observables can’t sensibly have values at a continuum of times due to interference. A good hint at what is breaking comes from the fact that physical measurements determine only bases, not operators, because measurements of two non-degenerate commuting observables are physically equivalent processes.

http://journals.aps.org/pra/abstract/10.1103/PhysRevA.76.052110

If you wanted to rescue this, you’d start choosing preferred bases, not operators, and preferred time steps. And then you’d end up with consistent histories but still lacking a set selection principle.

This is a nice account, but the Wigner function is not especially strange classically. It certainly appears in classical signal processing whenever we use Fourier analysis. For the deterministic signal case, see, for example, “Time-Frequency Distributions—A Review”, Leon Cohen, Proc. IEEE 77, 941 (1989), http://dx.doi.org/10.1109/5.30749. As you point out, comparison should be of like with like, at a probabilistic level. To which end, hopefully not too abstractly: a state over a commutative algebra of observables equally generates a Hilbert space (by using the GNS construction), as does a state over a noncommutative algebra of observables; it is then a contingent matter whether we can construct physical systems that correspond to superpositions of vector states. One view of Planck’s constant is that it is a measure of a Lorentz-invariant analogue of thermal fluctuations in classical models as much as it is in quantum models; on which view setting Planck’s constant to zero is not relevant to the difference between classical and quantum. But this is as sadly incomplete for detail as any other account.

Something with the same mathematical form as the Wigner function can be defined for any complex-valued (or real-valued) function over R (or R^N). In quantum mechanics this is the wavefunction psi(x), and in the paper you link it is an arbitrary function over time, f(t), like a classical sound wave. That’s because the wavefunction–>Wigner-function operation is just a modified Fourier transform. So I don’t really get what you’re saying. (After all, the *normal* Fourier transform can also be defined for a sound wave, but that doesn’t make the momentum wavefunction any less quantum mechanical.)

I think we can say we’re more-or-less on the same page wrt Fourier transforms and the Wigner transform, so I think best to set that aside. For the Hilbert space aspect, the best I can offer as a blog comment, albeit not as good as I would like, is “Equivalence of the Klein-Gordon random field and the complex Klein-Gordon quantum field”, Peter Morgan, EPL, 87 (2009) 31002, http://iopscience.iop.org/0295-5075/87/3/31002/ (which is open access). That there is a relationship of some kind is perhaps sort-of clear, but what the relationship might be in detail is definitely unclear, particularly for interacting fields and even more so for spinor fields.

What comes out of the account in this EPL paper is perhaps just something like my last comment, that it is at least a little bit contentious from this point of view that setting Planck’s constant to (or towards) zero is relevant to the difference between classical and quantum. Re-reading your blog post, I see that it can be read as questioning whether the limit is a good way to approach the classical limit; if so, and since that’s the second spur that led me to comment, we can set that aside as well, and we can almost pretend that I didn’t.

After eq (2), is “some hand-wavy discussion” supposed to be a link? If so, it appears to be broken…

Yes, thanks much! Fixed.

There are at least two ways you could reasonably take ħ → 0. You could take ω → ∞ while keeping n and E = nħω constant, or you could take n → ∞ while keeping ω and E = nħω constant. Starting with a quantum theory of light, I’d expect to get geometric optics in the first limit and Maxwell’s equations in the second limit. It seems like you only talk about the first one. Does the second one work any better?

Great question. This involves two issues: (1) How are the dynamics modified as ħ → 0? and (2) what happens if you have indistinguishable particles?

The problem I raised in the post (negativity in the Wigner function) has to do with the ħ → 0 limit at a fixed time, so it appears to survive regardless of how you simultaneously modify the dynamics. (I guess it’s conceivable that you could set up the dynamics so that decoherence became arbitrarily fast as ħ → 0, which would sort of be a solution, but I don’t know how this would work and the photon certainly doesn’t do it.) So let’s look at the effect of indistinguishable particles in the two cases you mention.

For geometric optics with n=1, you just have a single (distinguishable) particle, so my post applies as you note. For fixed but finite n you need to keep track of symmetrization, but I can’t see how it will avoid the basic problem. You can always consider a fixed-n subspace, and you still have coherent superpositions (e.g., NOON states) with negative Wigner functions as ħ → 0.

The Maxwell equation case is interesting because now you’re limiting to a field rather than a particle. But then our classical parameters are just the field strengths at each location in space, rather than the position and momentum of a single particle. For finite ħ, a classical field is approximated by a coherent state of E/ħω photons (just like a classical point particle is approximated by a wavepacket whose Wigner function covers an ħ-sized patch of phase space). But it’s possible to create a superposition of two field configurations just like a superposition of two positions for a point particle. And the corresponding classical limit still has a grossly negative Wigner function and corresponds to no classical field configuration.

Could you add links to parts 1-3? Thanks.

Good idea, thanks.

Jess,

Something off-the-topic. What software did you use to draw the diagrams?

–Ajit

I use Powerpoint to draw simple diagrams like the first one, and Mathematica for actual plots like the Wigner function.

Hey Jess,

Wonderful post. Could you comment on why you picked the Wigner function and not the Husimi function? The Husimi (which is basically a smeared version of Wigner) has better properties in general and contains the same information. It does not have interference terms when ħ → 0, so no decoherence seems necessary if this function is put on a more fundamental level.

Just a quote from this article I found: (DOI 10.1140/epjd/e2010-00233-2)

“mapping Hermitian operators A to phase-space functions A(x, p), does not give the correct correspondence between classical and quantum operators. Among these circumstances one encounters an important operator, the square of the Hamiltonian, in which the Wigner function yields wrong results. Husimi distributions are free from such defects. Smoothing may thus, in some special situations, “improve” on Wigner’s descriptive capability.”

So maybe the importance you put on the interference terms in Wigner could just be an artifact of a definition? I don’t see why I would pick a certain function which has problems in the classical limit over a function which does not have this problem.

I’m not an expert on these functions myself though, just curious about your reasoning.

Cheers,

Jasper

Great question. I explained why I think the Wigner function is clearly special when compared to the Glauber-P or Husimi-Q functions in these earlier posts:

https://blog.jessriedel.com/2014/09/22/in-what-sense-is-the-wigner-function-a-quasiprobability-distribution/

https://blog.jessriedel.com/2014/04/01/wigner-function-fourier-transform-coordinate-rotation/

In particular, the Wigner function is just a Fourier transform of the density matrix in a rotated basis, while the P and Q functions necessarily require the introduction of a scale (i.e., the spatial width of the Gaussian kernel used for convolution). In fact, there exists generalized P and Q functions for different kernels. So the question is this: If there are *lots* of different phase space distributions we can use to represent the quantum state, why should we care that some have no good ħ → 0 limit if others do?

Intuitively, we know there has to be an issue with using something like the Husimi-Q function because two *orthogonal* quantum states (the |L>+|R> and |L>-|R> states, where |L> and |R> are individual wavepackets) smoothly approach the same mixed state ( |L〉〈L|+|R〉〈R| ), as ħ → 0. This is the many-to-one problem exemplified. You can see how this cheating works even more clearly if you move to Fourier space of the Wigner function, where the two pure states |L>+|R> and |L>-|R> differ appreciably only in the large-frequency region. The respective Husimi-Q functions are obtained by just erasing the differences by hand! Indeed, from the paper you cite: “The Husimi distribution washes out quantum interferences at the price of hiding important semiclassical structures.” :)
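(Aside: here’s a quick numerical illustration of that many-to-one collapse — my own sketch, with ħ = 1 and unit-width packets, computing the Husimi-Q values of |L>+|R> and |L>-|R> directly as squared overlaps with coherent states. The two states are exactly orthogonal, yet their Q functions are nearly identical, because the difference lives entirely in the washed-out interference terms.)

```python
import numpy as np

a = 3.0
x = np.linspace(-12, 12, 2001)
dx = x[1] - x[0]

def normalized(psi):
    return psi / np.sqrt((np.abs(psi) ** 2).sum() * dx)

L = np.exp(-(x - a) ** 2 / 2)              # wavepackets centered at x = +/- a
R = np.exp(-(x + a) ** 2 / 2)
plus = normalized(L + R)                   # |L> + |R>
minus = normalized(L - R)                  # |L> - |R>, orthogonal to |L> + |R>

def husimi(psi, x0, p0):
    """Q(x0,p0) = |<alpha|psi>|^2 / pi: overlap with a coherent state (hbar = 1)."""
    alpha = np.pi ** -0.25 * np.exp(-(x - x0) ** 2 / 2 + 1j * p0 * x)
    return abs((np.conj(alpha) * psi).sum() * dx) ** 2 / np.pi

grid = [(xv, pv) for xv in np.linspace(-6, 6, 25) for pv in np.linspace(-3, 3, 25)]
qp = np.array([husimi(plus, *g) for g in grid])
qm = np.array([husimi(minus, *g) for g in grid])

overlap = abs((np.conj(plus) * minus).sum() * dx)
print(f"|<+|->| = {overlap:.2e},  max|Q+ - Q-| / max Q+ = {np.max(np.abs(qp - qm)) / qp.max():.3f}")
```

(The two Q functions differ by only a few percent of their peak value here, and the discrepancy shrinks like e^{-a²/2} as the packets separate — even though <+|-> = 0 exactly.)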

Abstractly, the reason Wigner is special is that the *inner product* in the Wigner representation is *pointwise*, and this “respects” (in a way I wish I knew how to make precise) the limit we are taking toward a classical distribution. The squared inner product of the wavefunctions corresponding to two Wigner functions W1(x,p) and W2(x,p) is just the integral of W1(x,p)*W2(x,p) over phase space (up to a factor of 2πħ). Nothing like this exists in the Husimi-Q representation because those high-frequency modes are critical for determining the inner product.

With regard to the Wigner function for the squared Hamiltonian you mention, I think this is only an issue if you want to specify a mapping that enables the free interchange of classical equations (like certain statistical mechanics equations involving H^2) with quantum ones. This is an admirable but more ambitious program than just identifying the limit of the state space, and brings up the issues I mention in footnote ‘e’. For the details of what exactly can go wrong with H^2, your cite (Pennini and Plastino) references Reichl, who in turn references Barut:

http://journals.aps.org/pr/abstract/10.1103/PhysRev.108.565

Thanks for the explanation! I’ve got to think a bit more about it. ;)

I have to object: to use Husimi-Q hides and suppresses nothing, at least not more than is hidden by the usual rho = psi^* psi. This is because the Husimi density is simply this probability in the holomorphic representation, psi(p,q) = f(z) e^{-(1/2) z zbar}, which is a standard representation of the canonical commutation relations (with z and partial_z + zbar/2 as a^dagger and a), and thus it contains everything.

Moreover, it has a clear connection with measurement: measure p + p_0 and q - q_0 instead of p and q (these commute), with a second particle in the oscillator ground state. This defines the holomorphic subspace by a_0 psi = (partial_{zbar} + z/2) psi = 0. See Holevo, Probabilistic and Statistical Aspects of Quantum Theory, North-Holland, Amsterdam, 1982.

And what goes wrong with the Koopman-von Neumann embedding of classical Hamiltonian evolution as a unitary evolution in the Hilbert space over phase space, Hbar = i hbar (partial_p H_{,q} - partial_q H_{,p})? If it is in the article, I have no access to it.

As far as I can tell, you are arguing that because the Husimi-Q function contains all the information needed to mathematically reconstruct the quantum state up to a phase, it is sufficient for proving that the Q function approaches a classically evolving probability distribution. As I already mentioned, this is a weird argument, since there are many objects that can be used to mathematically reconstruct the quantum state, but they approach incompatible things!

The severe problem with your information-based argument is that it breaks down in the very ħ → 0 limit we care about. Despite your protests, the Q function quite literally *suppresses* the sub-ħ modes that describe coherence, in the precise sense that the Q function is obtained by convolving the Wigner function with a Gaussian kernel. Yes, all the information is mathematically still there for finite ħ, but this suppression becomes exact as ħ → 0. The information is fully destroyed, which is the origin of the many-to-one mapping problem.

Your second argument is illustrative of what’s wrong. Yes, if you assume a hypothetical measurement with respect to a particular overcomplete basis of wavepackets, which in particular requires the selection of a length scale, then you can show that those hypothetical measurement outcomes follow classical trajectories. But of course we know that, for any fixed measurement basis, many *distinct* quantum states will give the same measurement outcomes. So this is not a good argument to show that one formalism limits to another (unless you *explicitly* describe the many-to-one behavior).

Hm, sounds like a misunderstanding. I would not suggest doing the classical limit using the Husimi-Q itself, taken alone, but with the holomorphic representation of the canonical commutation relations, which is as good as any (since they are all isomorphic).

The connection between these two things is that the Husimi-Q is Q = psi^*(p,q) psi(p,q). Of course, you should get rid of the phase only if you are sure you no longer need it. As long as you use the wave functions psi(p,q), you have the full quantum theory. That something is lost if you restrict yourself to the density is something I concede, but there is no good reason to do this.

The next point is to define classical Hamiltonian evolution on the full space of wave functions, using Koopman-von Neumann: partial_t psi = -{psi, H} is a unitary evolution on the full space. And then consider the relation between the two by projection onto the holomorphic subspace psi(p,q) = f(z = p + iq) e^{-z z^*/2}. That’s a variant of antinormal operator ordering.
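A minimal numerical sketch of this KvN evolution (my own illustration, assuming the harmonic oscillator H = (p^2 + q^2)/2 in natural units, for which the Liouville flow is an exact rotation of phase space and partial_t psi = -{psi, H} can be solved by characteristics):

```python
import numpy as np

# KvN for H = (p^2 + q^2)/2: the Liouville flow rotates phase space, so
#   psi_t(q, p) = psi_0(q cos t - p sin t,  p cos t + q sin t).
# Checked below: the evolution is unitary (L2 norm conserved), and the
# density |psi|^2 is transported along the classical trajectory.

q = np.linspace(-8.0, 8.0, 321)
p = np.linspace(-8.0, 8.0, 321)
Q, P = np.meshgrid(q, p, indexing="ij")
dq, dp = q[1] - q[0], p[1] - p[0]

q0, p0, t = 2.0, 0.0, 0.7            # initial packet center and evolution time

def psi0(Q, P):
    # a Gaussian KvN wavefunction with an arbitrary illustrative phase
    return np.exp(-((Q - q0) ** 2 + (P - p0) ** 2) / 2) * np.exp(1j * 0.3 * Q)

# exact solution by characteristics (pull back along the inverse flow)
psi_t = psi0(Q * np.cos(t) - P * np.sin(t), P * np.cos(t) + Q * np.sin(t))

norm0 = np.sum(np.abs(psi0(Q, P)) ** 2) * dq * dp
normt = np.sum(np.abs(psi_t) ** 2) * dq * dp      # unitarity: equals norm0

rho = np.abs(psi_t) ** 2 / normt
q_mean = np.sum(Q * rho) * dq * dp                # ~ q0 cos t + p0 sin t
p_mean = np.sum(P * rho) * dq * dp                # ~ p0 cos t - q0 sin t

print(normt / norm0, q_mean, p_mean)
```

The norm is conserved (the evolution is unitary on the phase-space Hilbert space), and the density |psi|^2 is simply carried along the classical trajectory.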

So, you have the full classical theory and the full quantum theory in the same mathematical language, with the only difference between them being that the quantum playing ground is a subspace of the classical one; thus, classical operators have to be projected onto this subspace.

Of course, once you take hbar to 0, you do it in a way which sends Delta p and Delta q to 0 together. And in this limit the subspace comes closer and closer to the full space, because the coherent states, which are the delta functions of the holomorphic subspace, come closer and closer to delta functions on the whole classical space.

So, the limit becomes the limit of an embedding. The phase does, BTW, give information even about the classical solution, which is sometimes not contained in the density function itself (like about the probability flow in stable states). So, it is, of course, useful to consider the density, but not useful to throw away the phase, even in the classical limit.

BTW, I would recommend that you support MathJax: a script in the header, and one can write simple TeX code.

OK. In your first comment you said (emphasis mine)

> To use Husimi-Q hides and suppresses nothing,

> at least not more than is hidden by the usual (rho=psi^*psi). Because the Husimi density is simply this probability for the holomorphic representation…

and now you’re saying

> I would not suggest to do the classical limit using the Husimi-Q itself, taken alone. But with the holomorphic representation of the canonical commutation relations… The connection between these two things is that the Husimi-Q is Q = psi^*(p,q) psi(p,q). Of course, you should get rid of the phase only if you are sure you no longer need it.

So you have changed your complaint. Indeed, the holomorphic representation psi(p,q) *does* include the important coherence information that Husimi-Q suppresses, so it could plausibly work for a good ħ → 0 limit. I don’t know because I haven’t thought about it. In any case, throwing away a complex phase *as a function over phase space* is *very* different than merely throwing away an *overall* phase for the wavefunction, as you said in your first comment (“rho=psi^*psi”). And it is certainly not what Jasper was talking about when he originally commented.

> The next point is to define classical Hamiltonian evolution on the full space of wave functions, using Koopman-von Neumann.

Koopman-von Neumann mechanics is indeed a plausible route to a rigorous ħ → 0 limit, one that I have considered working with myself. But the key difficulty is contained in your remark:

> the only difference between them that the quantum playing ground is a subspace of the classical one,

Indeed, most quantum states do not map to classical states in the ħ → 0 limit, so in order to recover classical mechanics we need to show that our quantum states are *driven* into the classical subspace. This is a *dynamical* question, and it (almost assuredly) relies on decoherence. Which is the point of my post.

In other words: if you don’t explain why the non-classical states are dynamically eliminated, then you have the many-to-one mapping that I talk about.

> The phase does, BTW, give information even about the classical solution, which is sometimes not contained in the density function itself (like about the probability flow in stable states).

This is a very interesting observation, although I’m not sure it is necessary to keep that information.

> BTW, I would recommend you to support MathJax…

I don’t think MathJax plays well with the current version of the Disqus commenting system. Right now it’s not worth switching to a different commenting system to enable it. Let me know if I’m mistaken or if you can find a workaround.

So this was really a misunderstanding: writing down rho = psi^* psi, I already had in mind psi^*(p,q) psi(p,q), and a similar “not sure it is necessary to keep that information” attitude about the importance of the phase in the classical limit. But this was too short and left open the possibility of interpreting it as |psi><psi|, which throws away only the overall phase factor.

But some disagreement remains: the quantum states are a subspace of the classical states, namely those defined by holomorphic (modulo the e^{-z z^*/2}) wave functions; thus, every quantum state gives a classical state (in Koopman-von Neumann). The phase is classically unimportant because different phases never meet; it is only a flag carried by the point along its trajectory. The projection onto the quantum subspace leads to the necessity of adding different phases.

I cannot see any necessity for decoherence here. It is an important tool for identifying where superpositional effects are important and where one can ignore them, inside quantum theory. But IMHO it has no fundamental importance.

I find trying to understand your writing extremely laborious. I will try to explain how my post addresses (my best guess at the meaning of) your second paragraph, but I’m not going to be able to continue this conversation further.

We already know that wavepackets follow classical trajectories in the exact ħ = 0 theory. One representation of this theory is the restricted form of Koopman-von Neumann mechanics, where if you start with an initial mixture of ideal point-like wavepackets, then the state remains expressible as a mixture of point-like wavepackets for all time. In particular, the entropy of the probability distribution does not increase. No discussion of smoothing by a Gaussian kernel, e^{-z z^*/2}, is needed.

The problem is that for finite ħ, you can only have *near*-point-like wavepackets, and these quickly become distorted by chaotic dynamics into grossly quantum mechanical states on short time scales. The fact that you can declare a many-to-one mapping between these grossly quantum states and the classical ones (which is implemented by convolving with a Gaussian, a.k.a. by applying the Segal–Bargmann transform, a.k.a. by computing the Husimi-Q function) does not help the situation. Likewise, *enlarging* your “classical” space by fiat to include *all* possible allowed states in KvN mechanics is not the same thing as deriving classical mechanics, because these states do not exist classically!

Here is yet another way for you to see that there is an issue: classical mechanics (including KvN mechanics with initial states restricted to mixtures of point-like wavepackets) is time-symmetric, but the ħ → 0 limit of quantum mechanics is not. You can’t eliminate the stochasticity by taking the ħ → 0 limit because it takes place on a time scale set by the Lyapunov exponent, not a polynomial in ħ.

Seems I have to write all this down in more detail; anyway, this would be a good idea. So here simply the notes where I disagree:

1. The point of the large KvN space is to have the classical theory in a similar form. It is not a derivation, but it allows one to study the relation between the different theories.

2. This relation is one of embedding quantum states into classical states as a subspace. Classical evolution leaves this subspace, so quantum evolution can be obtained by restricting classical evolution to the quantum subspace.

3. The non-holomorphic states exist in classical theory, but not in quantum theory, because they are forbidden only by the uncertainty relations, which are quantum.

4. In my approach the quantum theory is obtained by projecting the classical evolution on the quantum subspace. This introduces some stochastic element.

5. But Lyapunov is innocent here, because the importance of Lyapunov comes (if I identify this correctly) from decoherence, which is (or at least seems to me to be) irrelevant, because it is about a different question.

If a more complete presentation is ready, I will put it on my website and put a link here.

> If a more complete presentation is ready, I will put it on my website and put a link here.

Great! Looking forward.

Many people, including myself, would point out that the limit “hbar to zero” contains decoherence. Indeed, every expression for the decoherence time goes to zero in this limit!

In other words, taking the limit “hbar to zero” increases the effects of decoherence enormously, so that all the coherent effects, such as negative Wigner functions, disappear instantaneously.

The limit “hbar to zero” must be used consistently, both on the microscopic system and on its coupling to the environment. When this is done, classical physics follows.
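To make the claimed scaling concrete, here is a toy sketch (my own illustration; the localization constant D and the superposition separation are hypothetical) using the decay law exp(-D (x-x')^2 t / hbar^2) of standard high-temperature localization models:

```python
import numpy as np

# Toy pure-dephasing model: the off-diagonal density matrix element decays as
#   rho(x, x', t) = rho(x, x', 0) * exp(-D * (x - x')**2 * t / hbar**2),
# so the decoherence time for a superposition of separation `sep` is
#   t_dec = hbar**2 / (D * sep**2),
# which vanishes quadratically as hbar -> 0.

D, sep = 1.0, 1.0          # hypothetical localization constant and separation

def coherence(t, hbar):
    """Relative size of the off-diagonal element at separation `sep`."""
    return np.exp(-D * sep**2 * t / hbar**2)

def t_dec(hbar):
    """Time for the coherence to fall to 1/e."""
    return hbar**2 / (D * sep**2)

for hbar in (1.0, 0.1, 0.01):
    print(hbar, t_dec(hbar), coherence(1.0, hbar))
```

The 1/e decoherence time shrinks quadratically with hbar, so any fixed coherent superposition dephases essentially instantly in the limit.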

> Indeed, every expression for the decoherence time goes to zero in this limit!

Exactly! The decoherence time typically goes to zero in many toy models (i.e., the decoherence rate goes to infinity). However, the “branching rate” — the rate at which entropy is generated — should not go to zero. This is because the steady creation of superpositions generates entropy when those superpositions decohere, even if that decoherence happens essentially instantly. Furthermore, this only becomes possible to talk about when you use mixed states and allow for open systems. This is the sense in which indeterminism is an anomaly.

This is all correct. But the main point remains: the limit hbar to zero leads, unambiguously, to classical mechanics. In your post you seem to suggest the opposite (if I understand correctly).

Classical mechanics is deterministic. My post claims that the ħ → 0 limit of quantum mechanics is not yet rigorously defined, but that when it is, it will be irreducibly nondeterministic because the branching rate remains finite as ħ → 0. Therefore, classical mechanics is not the ħ → 0 limit of quantum mechanics.

I like to think of the path integral ħ → 0 limit not as a regularization of some Dirac delta but as a regularization of delta’. This is because we are looking for an extremum, say V’(x) = 0, and then delta’ is the adequate distribution to extract V’.

I guess you’re talking about the distributional derivative for a Lagrangian (rather than Hamiltonian) formulation. Could you explain a little more about what you’re saying? Presumably you still expect indeterminism that doesn’t go away as ħ → 0, so I imagine you’d need something like branching Feynman paths.

Yes, I was thinking of the distributional derivative. It is very typical that when explaining ħ → 0 in the Feynman integral, teachers stress the similarity with the delta function, but really what we are looking for is an extremum of the Lagrangian, so its derivative makes more sense.

I remember I tried to formalise this 20 years ago at the start of my graduate years :-) Also, at that time I thought about some connection with the “Tangent Groupoid”, an idea from Connes to formalise simultaneously the tangent space of classical mechanics and the space of operators of quantum mechanics.
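A quick numerical check of this delta-prime intuition (my illustration; the Gaussian width eps is an arbitrary regulator): the derivative of a narrow normalized Gaussian, integrated against a test function f, extracts -f'(0) rather than f(0), which is the kind of object a stationarity condition like V'(x) = 0 calls for.

```python
import numpy as np

eps = 0.01                                  # regulator width (arbitrary)
x = np.linspace(-0.2, 0.2, 40001)
dx = x[1] - x[0]

g = np.exp(-x**2 / (2 * eps**2)) / (np.sqrt(2 * np.pi) * eps)  # smeared delta
gp = np.gradient(g, dx)                                        # smeared delta-prime

f = np.sin(3 * x) + 2.0                     # test function: f(0) = 2, f'(0) = 3

val_delta = np.sum(g * f) * dx              # approaches  f(0) = 2  as eps -> 0
val_deltap = np.sum(gp * f) * dx            # approaches -f'(0) = -3 as eps -> 0

print(val_delta, val_deltap)
```

So the plain delta only samples the value of f, while delta-prime sees its derivative, which is what a stationary-phase condition picks out.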

Yeah, my intuition is that you could take either route and get to the same place.

Dr. Riedel, I have an OT question:

As far as my (admittedly limited) understanding of Feynman path integrals goes: in the simplest case of just one free particle propagating with no interactions, they are a sort of integration over the Green’s function for the underlying partial differential equation (the Schrodinger equation, or Klein-Gordon equation, or something; at least, this is what they should be if you want equivalent behavior!).

Basically, it’s Huygens’ principle applied to the Schrodinger equation (or a relativistic variant).

However, whenever I see people using them (Feynman, or anyone else), no one uses any sort of attenuation factor for the propagation of a spherical wavefront. A delta function should spread out as a spherical wavefront with a 1/r^2 factor or something applied to the amplitude at a point a distance r away, but I only ever see something like

Psi(r2) = Int Psi(r1) exp(i p^2 t / (2 m hbar)) d^3 r1

or

Psi(r2) = Int Psi(r1) exp(i upsilon (r2 - r1)) d^3 r1

Why can you get away with this? Wouldn’t this distort your results? (It seems like you would have to renormalize the wavefunction at each new position or time, at least.)

“Thus, the mapping from quantum states to classical configurations would have to be many-to-one.”

Isn’t this basically a restatement of the EWG interpretation? There is no collapse of the state vector, so there’s no unique next state of a chaotic system.

EWG = Everett-Wheeler-Graham? (I had trouble even Googling that one.) I’m not familiar with the distinction between EWG and Many Worlds. If you’re asking whether my argument above is a restating of Many Worlds, the answer is no. If we had a rigorous ħ → 0 limit that was many-to-one, that would be compatible with an interpretation that the many classical configurations were mere possibilities, rather than “real”.

It looks like this article was unintentionally cut off. I want to read it!

Yes, thanks very much for catching this! Should be fixed now.