Even when two measurements are not space-like separated, they very often commute (to very high accuracy) simply because the coupling between them is very weak. Different detectors within ATLAS often correspond to commuting measurements for the same reason that time-like separated measurements made on different continents do. And even when they don’t, there’s an objective ordering guaranteed by their time-like separation.

& # 60 ;

only without any spaces. Took me a couple of tries on my first attempt.

Hope you don’t mind that I edited your comment to fix this for you 🙂

My questions came out of a current interest in digital (that is, binary, on a computer) records of experimental results. An analogue-to-digital conversion is effectively constructed as a set of projections [is the recorded value associated with the real-valued observable in the range , or more generally in a set (a set in the complex plane if is normal)? Answer = 0 or 1; lots of such questions give us, say, a 16-bit recorded result in computer memory; any such process is decidedly neither linear nor continuous, yet they’re routine, millions of times over, at CERN, say].
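As a toy illustration of that bracketed point (nothing like CERN’s actual electronics, just a sketch), a successive-approximation ADC really is a chain of 0-or-1 questions, each asking which half of the remaining interval the value lies in:

```python
def adc(value, lo=-1.0, hi=1.0, bits=16):
    """Quantize an analogue value in [lo, hi] by asking `bits`
    yes/no questions: is the value in the upper half of the
    remaining interval? Each answer is one recorded bit."""
    code = 0
    for _ in range(bits):
        mid = (lo + hi) / 2.0
        bit = int(value >= mid)   # the 0-or-1 answer to one question
        code = (code << 1) | bit
        if bit:
            lo = mid
        else:
            hi = mid
    return code  # an integer record in [0, 2**bits - 1]

print(adc(0.0))   # mid-scale input -> 32768
print(adc(-1.0))  # bottom of range -> 0
```

The map from `value` to `code` is a staircase: manifestly neither linear nor continuous, yet perfectly routine.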

If we have an apparatus that we say measures and as two bits, we might think of that as measuring , which is OK if , however if the eigenvalues of in general will not be . We could say that represents the ‘th eigenvalue of , or we could say that we have applied a second nonlinear ADC process to obtain our two bit record (or we might say something altogether different?). [Again at CERN, computer records are as often clearly of events that are at time-like separation and hence a priori do not commute.]

It has always seemed to me rather more interesting to know what we have measured than what the measurement results are. In theory-speak, that is to say that the eigenvectors of an operator give us more information than the eigenvalues, which is, I take it, a crude paraphrase of part of your blog post.

Perhaps I need <rant>…</rant> round all that? Answer any part of it that you find it useful to contemplate, but leave it alone otherwise.

Yeah, I’m very comfortable with the fact that the set of things we measure with an apparatus is not closed under addition, for the same reason that I’m fine that the sum of a position and a momentum is undefined. Operationally, two things that I can measure don’t have to have a well-defined sum. And mathematically, it shouldn’t bother us that the sum of two different bases of the same vector space isn’t defined.

The thing I’m trying to do is (1) identify the set of mathematical structures that correspond to what we can physically measure and (2) point out that the Hermitian operators are not — and are not in 1-to-1 correspondence with — that set. In particular, I didn’t claim that normal operators are measurable, I claimed that they are *just as measurable* as Hermitian operators. But maybe it’s better to simply emphasize that measurements should be identified with PVMs/POVMs.
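A minimal numerical sketch of that last point, with an arbitrarily chosen example operator: for a unitary (hence normal, non-Hermitian) operator, the spectral projectors form a PVM and the Born rule applies exactly as in the Hermitian case; only the outcome labels are complex rather than real.

```python
import numpy as np

# A normal but non-Hermitian operator: the unitary U = diag(1, i).
U = np.diag([1.0 + 0j, 1j])

# Its spectral decomposition yields a PVM: one projector per eigenvalue.
eigvals, eigvecs = np.linalg.eig(U)
projectors = [np.outer(v, v.conj()) for v in eigvecs.T]

# Born-rule outcome probabilities for a state -- the same recipe as for
# a Hermitian operator; the (complex) eigenvalues are just labels.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
probs = [float(np.real(psi.conj() @ P @ psi)) for P in projectors]
print(probs)  # [0.5, 0.5]
```

The projectors sum to the identity and are mutually orthogonal, which is all a PVM requires; nothing in the recipe cares that the eigenvalues are not real.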

<rant>Regarding undergraduates: I remember as an undergraduate being *utterly baffled* by the bizarre declaration that Hermitian operators corresponded to observables. (In some sense, it was the ultimate distraction since a decade later I am still following the path it sent me down!) Such an axiom had no counterpart in classical mechanics, and it was never justified. In my opinion, the people who managed to avoid this distraction were just learning to accept things without understanding them. This ability to suspend disbelief is great if you want to create good graduate student slaves for doing computations, but not so good for training the next generation of physicists. </rant>

The question of whether you can operationalize the notion of measuring the sum of two normal operators is an interesting one. I don’t know the answer. I haven’t even seen someone try to operationalize the sum of two *Hermitian* operators. If you have a device that can measure spin Z and spin X, there doesn’t seem to me to be any reason for the sum of those two to be measurable with that device. I interpret this as more evidence that “the set of things we can measure” should not be identified with the Hermitian operators.
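For concreteness, here is the spin-Z / spin-X point in numbers (a toy spin-1/2 check, with hbar set to 1): the eigenvalues of the operator sum are not sums of the individual measurement outcomes, so a device that records a Z outcome and an X outcome cannot simply add them.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1), each with eigenvalues +/- 1/2.
Sz = np.array([[0.5, 0.0], [0.0, -0.5]])
Sx = np.array([[0.0, 0.5], [0.5, 0.0]])

# Sums of individual outcomes would lie in {-1, 0, +1}, but the
# eigenvalues of Sz + Sx are +/- 1/sqrt(2), approximately +/- 0.7071.
eigs = np.linalg.eigvalsh(Sz + Sx)
print(np.round(eigs, 4))
```

So even with perfect Z and X measurement records in hand, nothing about them operationally constitutes a measurement of Sz + Sx.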

Indeed, it was only after I had been in graduate school for a while that I discovered a bit of the historical background behind the *mathematical* importance of Hermitian operators, especially as elucidated by von Neumann. The mistake wasn’t the emphasis on understanding these objects, but rather the terrible attempt to identify them with the physical process of measurement.

Secondly, if we measure the spectra of Hermitian operators , , and , comparison of the latter (taking various real values of ) with the first two gives us some information about the relative orientation of the eigenvectors of and of (and more so if we consider more Hermitian operators, though the computation looks daunting), so the eigenvalues are significant at least to that extent?
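A two-level toy example of that idea (operators chosen purely for illustration): two pairs of Hermitian operators with identical individual spectra but differently oriented eigenbases are indistinguishable from the individual spectra alone, yet the spectrum of the sum tells them apart.

```python
import numpy as np

# Two Hermitian operators, each with spectrum {+1, -1}.
A = np.diag([1.0, -1.0])
B_aligned = np.diag([1.0, -1.0])       # eigenbasis parallel to A's
B_rotated = np.array([[0.0, 1.0],      # same spectrum {+1, -1}, but
                      [1.0, 0.0]])     # eigenbasis rotated 45 degrees

# The spectra of A and B alone cannot distinguish the two cases, but
# the spectrum of A + lam*B depends on the relative orientation.
for lam in (0.5, 1.0, 2.0):
    s1 = np.linalg.eigvalsh(A + lam * B_aligned)
    s2 = np.linalg.eigvalsh(A + lam * B_rotated)
    print(lam, np.round(s1, 3), np.round(s2, 3))
```

At lam = 1, for instance, the aligned case gives eigenvalues {+2, -2} while the rotated case gives +/- sqrt(2), so the family of summed spectra does carry eigenbasis-orientation information.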

Though I agree that a focus on the spectra and the relative orientations of eigenbases of measurements would not be amiss, these two aspects taken together might perhaps be enough to make it distracting to introduce your “simple observations” to undergraduates?

Operationally, however, I’m curious whether I know how to measure just because I know how to measure both and ?

Note that taking the limit of an infinitely fine grid still requires a notion of phase-space points, which is problematic in quantum mechanics. And indeed I couldn’t find a definitive definition of a quantum KS entropy after a quick search, but see a discrete definition by Alicki and Fannes and (unnecessarily mathematical?) pessimistic discussion starting from the bottom of page 8 by Wehrl.

(Amazingly, there’s a PRL where someone seems to have taken the opposite approach of extending phase-space trajectories to arbitrarily small scales in quantum mechanics using Bohmian trajectories and then building up Lyapunov exponents from that.)

Regarding classical chaos: is it possible to define classical chaos without referring to phase-space trajectories? Asked differently, can one quantify classical chaos from the (time evolution of the) density distribution only?

If the answer to this question is yes, then what would go wrong if we simply replace density distribution by density matrix?

Making use of slightly more abstract mathematical concepts to truly understand stuff that you are normally taught only in a “practical” way is one of the most satisfying feelings I have while studying physics.

https://arxiv.org/abs/1003.1363

Bogoliubov pointed out that the naive Gibbs ensemble is problematic when spontaneous symmetry breaking (SSB) is present. The cure, known as “quasi-averaging” in the Russian literature, is to add an infinitesimal symmetry-breaking field. Berry’s essay makes me wonder how far one can push the analogy between SSB and the emergence of classical mechanics. Could the emergence of classical mechanics be understood as some sort of “symmetry breaking” among all possible quantum bases?

(Btw, if you include “[latexpage]” (without the quotes) in a comment, the latex will be rendered. I’ve edited your comment to add it.)

$\langle A \rangle = \lim_{\nu \to 0} \lim_{N \to \infty} \langle A \rangle_{\nu, N}$,

where $\nu$ is the external field that breaks the global symmetry, and $N$ is the number of degrees of freedom. Bogoliubov pointed out that the correct order is to first take the thermodynamic limit and then the zero-field limit. I wonder if one can make a similar statement about the emergence of classical mechanics. For instance,

$\lim_{g \to 0} \lim_{N \to \infty} \langle A \rangle_{g, N}$,

where $g$ is the coupling between the system and the instrument / observer / bath.

I find Weinberg much more palatable. At least he understands the big structure of the arguments at stake. By the way, Weinberg also devoted an entire chapter to the measurement problem in his textbook, Lectures on Quantum Mechanics.
