Non-Markovianity hinders Quantum Darwinism
Fernando Galve, Roberta Zambrini, and Sabrina Maniscalco

We investigate Quantum Darwinism and the emergence of a classical world from the quantum one in connection with the spectral properties of the environment. We use a microscopic model of quantum environment in which, by changing a simple system parameter, we can modify the information back flow from environment into the system, and therefore its non-Markovian character. We show that the presence of memory effects hinders the emergence of classical objective reality, linking these two apparently unrelated concepts via a unique dynamical feature related to decoherence factors.
Galve and collaborators recognize that the recent Nature Communications paper by Brandao et al. is not as universal as it is sometimes interpreted, because the records that are proved to exist can be trivial (carrying no information). So Galve et al. correctly emphasize that Darwinism is dependent on the particular dynamics found in our universe, and the effectiveness of record production is in principle an open question.
Their main model is a harmonic oscillator in an oscillator bath (with bilinear spatial couplings, as usual) and with a spectral density that is concentrated as a hump in some finite window. (See black line with grey shading in Fig 3.) They then vary the system’s frequency with respect to this window. Outside the window, the system and environment decouple and nothing happens. Inside the window, there is good production of records and Darwinism. At the edges of the window, there is non-Markovianity: information about the system leaks into the environment but then flows back into the system from time to time. They measure non-Markovianity by the periods during which the distinguishability between two evolved system states is increasing (rather than decreasing monotonically, as it must for divisible completely positive dynamics).
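The flavor of such a backflow measure can be sketched numerically. This is a toy pure-dephasing example with an assumed decoherence factor, not the paper’s oscillator-bath model: for dephasing, the trace distance between the evolved states |+⟩ and |−⟩ equals the magnitude of the decoherence factor r(t), so summing its increases gives a discretized BLP-style non-Markovianity measure.

```python
import numpy as np

def backflow_measure(r):
    """Sum of all increases of the distinguishability |r(t)| over time,
    a discretized BLP-style non-Markovianity measure: zero for monotonic
    decay, positive whenever information flows back into the system."""
    d = np.abs(r)              # trace distance between the two evolved states
    increments = np.diff(d)
    return increments[increments > 0].sum()

t = np.linspace(0.0, 20.0, 2001)
r_markovian = np.exp(-0.3 * t)             # plain exponential decay: no memory
r_revivals = np.exp(-0.1 * t) * np.cos(t)  # decay with recurrences (toy choice)

print(backflow_measure(r_markovian))  # 0.0: no information backflow
print(backflow_measure(r_revivals))   # positive: memory effects present
```

The oscillatory factor is just an illustrative stand-in for the recurrences that appear at the edges of the spectral window in their model.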
An Introduction to Quantum Error Correction
Daniel Gottesman

Quantum states are very delicate, so it is likely some sort of quantum error correction will be necessary to build reliable quantum computers. The theory of quantum error-correcting codes has some close ties to and some striking differences from the theory of classical error-correcting codes. Many quantum codes can be described in terms of the stabilizer of the codewords. The stabilizer is a finite Abelian group, and allows a straightforward characterization of the error-correcting properties of the code. The stabilizer formalism for quantum codes also illustrates the relationships to classical coding theory, particularly classical codes over GF(4), the finite field with four elements.
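The syndrome logic the abstract alludes to can be sketched for the smallest example, the three-qubit bit-flip code (my own toy illustration, not code from the paper). Its stabilizer group is generated by Z₁Z₂ and Z₂Z₃; each single-qubit X error anticommutes with a different subset of the generators, so measuring them identifies the error.

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron(*ops):
    return reduce(np.kron, ops)

# Stabilizer generators of the three-qubit bit-flip code: Z1 Z2 and Z2 Z3.
generators = [kron(Z, Z, I), kron(I, Z, Z)]

def syndrome(state):
    """Eigenvalue (+1 or -1) of each stabilizer generator on the given state."""
    return tuple(int(np.real(state @ g @ state)) for g in generators)

# Logical |0> is |000>; apply an X error on each qubit in turn.
zero = np.zeros(8)
zero[0] = 1.0
errors = {"none": kron(I, I, I), "X1": kron(X, I, I),
          "X2": kron(I, X, I), "X3": kron(I, I, X)}
for name, E in errors.items():
    print(name, syndrome(E @ zero))
# Each single-qubit error yields a distinct syndrome, so it can be corrected.
```

The same pattern, checking commutation with a small set of group generators instead of tracking 2^n amplitudes, is what makes the stabilizer formalism efficient.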
The density-matrix renormalization group in the age of matrix product states
Ulrich Schollwoeck

The density-matrix renormalization group method (DMRG) has established itself over the last decade as the leading method for the simulation of the statics and dynamics of one-dimensional strongly correlated quantum lattice systems. In the further development of the method, the realization that DMRG operates on a highly interesting class of quantum states, so-called matrix product states (MPS), has allowed a much deeper understanding of the inner structure of the DMRG method, its further potential and its limitations. In this paper, I want to give a detailed exposition of current DMRG thinking in the MPS language in order to make the advisable implementation of the family of DMRG algorithms in exclusively MPS terms transparent. I then move on to discuss some directions of potentially fruitful further algorithmic development: while DMRG is a very mature method by now, I still see potential for further improvements, as exemplified by a number of recently introduced algorithms.
Another excellent introduction, this time to matrix product states and the density-matrix renormalization group, albeit as part of a much larger review.
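The basic MPS fact, that any pure state can be decomposed into a chain of site tensors by successive SVDs, can be sketched in a few lines. This is the standard textbook construction rather than Schollwoeck's notation verbatim; function and variable names are mine.

```python
import numpy as np

def state_to_mps(psi, L, d=2):
    """Decompose a d**L amplitude vector into L site tensors of shape
    (chi_left, d, chi_right) via successive SVDs (left-canonical form)."""
    tensors = []
    rest = psi.reshape(1, -1)                  # (chi_left, remaining)
    for _ in range(L - 1):
        chi, dim = rest.shape
        rest = rest.reshape(chi * d, dim // d)
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        tensors.append(u.reshape(chi, d, -1))  # isometric site tensor
        rest = np.diag(s) @ vh                 # push the rest to the right
    tensors.append(rest.reshape(rest.shape[0], d, 1))
    return tensors

def mps_to_state(tensors):
    """Contract the site tensors back into a full amplitude vector."""
    out = tensors[0]
    for A in tensors[1:]:
        out = np.tensordot(out, A, axes=([-1], [0]))
    return out.reshape(-1)

# Check on a random 5-qubit state: with no truncation, the MPS is exact.
psi = np.random.default_rng(0).normal(size=2**5)
psi /= np.linalg.norm(psi)
mps = state_to_mps(psi, L=5)
print(np.allclose(mps_to_state(mps), psi))  # True
```

Truncating the singular values at each step is where DMRG-style compression enters; the sketch above keeps them all, so the reconstruction is exact.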
The Logic of the Past Hypothesis
David Wallace

I attempt to get as clear as possible on the chain of reasoning by which irreversible macrodynamics is derivable from time-reversible microphysics, and in particular to clarify just what kinds of assumptions about the initial state of the universe, and about the nature of the microdynamics, are needed in these derivations. I conclude that while a 'Past Hypothesis' about the early Universe does seem necessary to carry out such derivations, that Hypothesis is not correctly understood as a constraint on the early Universe's entropy.
This paper is interesting because it casts Jaynes’s method as imposing the “simple” initial condition (i.e., low algorithmic entropy of description, I guess?) at the (arbitrary) time when we start analyzing a system.
Under this interpretation of Jaynes’s work, the maximum-entropy principle is predicting anti-thermodynamic behavior (i.e., a reversed 2nd law) for all systems before we start analyzing them. This is mostly a result of Wallace’s realist stance.
- Wallace claims there is a difference between the traditional “Low Entropy Past” hypothesis of Boltzmann and everyone else, and his personal “Simple Past” hypothesis. This is the crux, but I don’t understand yet how these are different.
A key question: what would the world look like if the simple initial condition were at a time only modestly in the past rather than at the very beginning?
What would our records look like? Could we look at evidence and determine the exact time? What if there were multiple times for multiple systems? What would that look like?
In the video, Wallace says it’s wrong to attribute our choice of coarse-graining to those variables that we find “interesting”. (I might find the value of the stock market interesting, but that doesn’t mean I can integrate everything else out and find dynamical equations for it.) Rather, he says, it’s important that the choice of coarse-grained variables doesn’t throw away key information that allows one to get evolution laws. (I.e., local gas densities might have predictive laws, but a random coarse-graining function on 3N positions and velocities will not.)
But this isn’t a complete story since we obviously do throw away some information (e.g., the higher level laws are often stochastic) and it’s not clear that there aren’t alternative variables we could choose that do have predictive laws but are unrelated to our experience or anything we care about.
He says it’s wrong to summarize the past hypothesis as “the early universe was in a low entropy state” because that’s a statement about it being in a certain macrostate associated with low entropy. Rather, the past hypothesis is about assuming the early universe isn’t very carefully correlated at the microlevel to conspire today to cause statistical mechanics to break (in the same way that if you took an irreversible process and looked at its time-inverse, it would look like there were a bunch of carefully fine-tuned conspiracies).
But is that true? Hypothesizing that the early universe had a region at high temperature separate from one at low temperature (which is low entropy) is the sort of thing we need to get irreversible laws, right? If we were to hypothesize merely that the state of the early universe didn’t have careful fine-tuned conspiracies, but was otherwise high-entropy, then we would already be in heat death now. It’s true that the no-conspiracy hypothesis would guarantee we’d persist in thermal equilibrium, rather than miraculously evolving into a low-entropy state, but that’s not enough to explain the world around us. (So maybe he’s not trying to explain the world around us, and just trying to explain the time-irreversibility of macroscopic laws? I don’t think so, because if we were and continued to be in thermal equilibrium, then the macroscopic laws would be time-reversible!)
- Likewise, he says “the fact that our macro laws are entropy increasing means it’s not very difficult to infer from the historical record that the past had lower entropy”. No way! The macro laws are entropy non-decreasing, but (without a past hypothesis) the most likely state which agrees with a mid-entropy present-day observation is a conspiratorial high-entropy state in the past.
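The tension between reversible microdynamics and irreversible-looking macrodynamics can be made concrete with the Kac ring, a standard pedagogical model (my illustration, not from Wallace’s paper): spins on a ring shift each step and flip when they pass a fixed set of marked edges. The dynamics is exactly invertible, yet the “macro” variable (magnetization) relaxes from a low-entropy start.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 2000
markers = rng.random(N) < 0.1   # fixed marked edges: passing spins flip

def step(spins):
    """One forward step: flip spins at marked edges, then shift around the ring."""
    return np.roll(np.where(markers, -spins, spins), 1)

def unstep(spins):
    """Exact inverse of step: shift back, then un-flip at the same edges."""
    s = np.roll(spins, -1)
    return np.where(markers, -s, s)

spins0 = np.ones(N, dtype=int)  # low-entropy initial condition: all spins up
spins = spins0.copy()
for _ in range(60):
    spins = step(spins)
m_final = abs(spins.mean())     # magnetization has relaxed toward zero

back = spins.copy()
for _ in range(60):
    back = unstep(back)         # running backward recovers the initial state

print(m_final, np.array_equal(back, spins0))
```

Starting instead from a typical (already-relaxed) configuration, the magnetization stays near zero whether you run forward or backward, which is the sense in which, absent a past hypothesis, the typical past of a mid-entropy present is higher entropy, not lower.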
foreXiv by C. Jess Riedel is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.