- Non-Markovianity hinders Quantum Darwinism
Fernando Galve, Roberta Zambrini, and Sabrina Maniscalco
We investigate Quantum Darwinism and the emergence of a classical world from the quantum one in connection with the spectral properties of the environment. We use a microscopic model of quantum environment in which, by changing a simple system parameter, we can modify the information back flow from environment into the system, and therefore its non-Markovian character. We show that the presence of memory effects hinders the emergence of classical objective reality, linking these two apparently unrelated concepts via a unique dynamical feature related to decoherence factors.
Galve and collaborators recognize that the recent Nat. Comm. by Brandao et al. is not as universal as it is sometimes interpreted, because the records that are proved to exist can be trivial (containing no information). So Galve et al. correctly emphasize that Darwinism is dependent on the particular dynamics found in our universe, and the effectiveness of record production is in principle an open question.
Their main model is a harmonic oscillator in an oscillator bath (with bilinear spatial couplings, as usual) and with a spectral density that is concentrated as a hump in some finite window. (See the black line with grey shading in Fig. 3.) They then vary the system’s frequency with respect to this window. Outside the window, the system and environment decouple and nothing happens. Inside the window, there is good production of records and Darwinism. At the edges of the window, there is non-Markovianity as information about the system leaks into the environment but then flows back into the system from time to time. They measure non-Markovianity by the amount of time during which the fidelity between the system’s states at two different times goes up (rather than down monotonically, as it must for completely positive dynamics).
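Schematically, and just as my own rendering of that last sentence rather than the paper’s actual definition, a measure of this flavor adds up the fidelity revivals:

```latex
% My schematic paraphrase (not necessarily Galve et al.'s exact definition):
% accumulate the fidelity only over the intervals where it revives.
\[
  \mathcal{N} \;\equiv\; \int_{\dot F(t) > 0} \dot F(t)\,\mathrm{d}t ,
\]
```

so that $\mathcal{N} = 0$ corresponds to monotone (Markovian) behavior and $\mathcal{N} > 0$ flags information flowing back into the system.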
- An Introduction to Quantum Error Correction
Daniel Gottesman
Quantum states are very delicate, so it is likely some sort of quantum error correction will be necessary to build reliable quantum computers. The theory of quantum error-correcting codes has some close ties to and some striking differences from the theory of classical error-correcting codes. Many quantum codes can be described in terms of the stabilizer of the codewords. The stabilizer is a finite Abelian group, and allows a straightforward characterization of the error-correcting properties of the code. The stabilizer formalism for quantum codes also illustrates the relationships to classical coding theory, particularly classical codes over GF(4), the finite field with four elements.
Although this little paper has several non-sequiturs suggesting it’s been assembled like Frankenstein ($Z_i$ is the Z Pauli operator/error for the $i$th qubit; don’t worry the first time he mentions Shor’s code without explaining it; etc.), it’s actually a very nice little introduction. Gottesman introduces several key ideas very quickly and logically. Good for beginners like me. See also “Operator quantum error correction” (arXiv:quant-ph/0504189) by Kribs et al.
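To make the stabilizer idea from the abstract concrete, here is a tiny numerical sketch (my own toy example, not from the paper): the three-qubit bit-flip code spanned by |000⟩ and |111⟩ is stabilized by $Z_1 Z_2$ and $Z_2 Z_3$, and a single bit-flip error is located by which generators it anticommutes with.

```python
import numpy as np

# Single-qubit Pauli matrices
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron(*ops):
    """Tensor product of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Stabilizer generators of the three-qubit bit-flip code (codewords |000>, |111>)
g1 = kron(Z, Z, I)
g2 = kron(I, Z, Z)

def commutes(a, b):
    """Pauli operators either commute or anticommute; return True if they commute."""
    return np.allclose(a @ b, b @ a)

# Each single-qubit X error anticommutes with a distinct subset of the generators,
# so measuring the generators (the "syndrome") reveals which qubit was flipped.
for k, err in enumerate([kron(X, I, I), kron(I, X, I), kron(I, I, X)]):
    syndrome = tuple(commutes(err, g) for g in (g1, g2))
    print(f"X error on qubit {k}: commutes with (g1, g2) = {syndrome}")
```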
- (H/t Martin Ganahl.) The density-matrix renormalization group in the age of matrix product states
Ulrich Schollwoeck
The density-matrix renormalization group method (DMRG) has established itself over the last decade as the leading method for the simulation of the statics and dynamics of one-dimensional strongly correlated quantum lattice systems. In the further development of the method, the realization that DMRG operates on a highly interesting class of quantum states, so-called matrix product states (MPS), has allowed a much deeper understanding of the inner structure of the DMRG method, its further potential and its limitations. In this paper, I want to give a detailed exposition of current DMRG thinking in the MPS language in order to make the advisable implementation of the family of DMRG algorithms in exclusively MPS terms transparent. I then move on to discuss some directions of potentially fruitful further algorithmic development: while DMRG is a very mature method by now, I still see potential for further improvements, as exemplified by a number of recently introduced algorithms.
Another excellent introduction, this time to matrix product states and the density-matrix renormalization group, albeit as part of a much larger review.
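For anyone (like me) who wants to see what a matrix product state actually is, here is a minimal sketch of the standard SVD-based construction the review describes: split off one site at a time, keeping the local tensors and carrying the remainder forward.

```python
import numpy as np

def to_mps(psi, d=2):
    """Decompose a state vector on L sites of local dimension d into
    left-canonical matrix-product form by repeated SVDs."""
    L = int(round(np.log(psi.size) / np.log(d)))
    tensors = []
    rest = psi.reshape(1, -1)                    # (left bond, everything else)
    for _ in range(L - 1):
        bond = rest.shape[0]
        u, s, vh = np.linalg.svd(rest.reshape(bond * d, -1), full_matrices=False)
        tensors.append(u.reshape(bond, d, -1))   # A[left bond, physical, right bond]
        rest = np.diag(s) @ vh                   # carry the remainder to the next site
    tensors.append(rest.reshape(rest.shape[0], d, 1))
    return tensors

# Sanity check: contracting the tensors back reproduces the original state exactly,
# since no singular values were truncated.
psi = np.random.randn(8) + 1j * np.random.randn(8)
psi /= np.linalg.norm(psi)
mps = to_mps(psi)
out = mps[0]
for t in mps[1:]:
    out = np.tensordot(out, t, axes=([-1], [0]))
assert np.allclose(out.reshape(-1), psi)
```

Truncating the singular value spectrum at each step is where the compression (and the approximation) in DMRG-type algorithms comes from.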
- (H/t Sean Carroll. Lecture available on YouTube.) The Logic of the Past Hypothesis
David Wallace
I attempt to get as clear as possible on the chain of reasoning by which irreversible macrodynamics is derivable from time-reversible microphysics, and in particular to clarify just what kinds of assumptions about the initial state of the universe, and about the nature of the microdynamics, are needed in these derivations. I conclude that while a 'Past Hypothesis' about the early Universe does seem necessary to carry out such derivations, that Hypothesis is not correctly understood as a constraint on the early Universe's entropy.
Some off-the-cuff thoughts:
- This paper is interesting because it casts Jaynes’ method as imposing the “simple” initial condition (i.e. low algorithmic entropy of description, I guess?) at an arbitrary time, namely whenever we start analyzing a system.
Under this interpretation of Jaynes’ work, the max-entropy principle predicts anti-thermodynamic behavior (i.e. a reversed second law) for all systems before we start analyzing them. This is mostly a result of Wallace’s realist stance. (The MaxEnt prescription I have in mind is written out at the end of this list, for reference.)
- Wallace claims there is a difference between the traditional “Low Entropy Past” hypothesis of Boltzmann and everyone else, and his personal “Simple Past” hypothesis. This is the crux, but I don’t understand yet how these are different.
- Some key questions: what would the world look like if the simple initial condition were at a time only modestly in the past rather than at the very beginning?
What would our records look like? Could we look at evidence and determine the exact time? What if there were multiple times for multiple systems? What would that look like?
- In the video, Wallace says it’s wrong to attribute our choice of coarse-graining to those variables that we find “interesting”. (I might find the value of the stock market interesting, but that doesn’t mean I can integrate everything else out and find dynamical equations for it.) Rather, he says, it’s important that the choice of coarse-grained variables doesn’t throw away key information that allows one to get evolution laws. (I.e., local gas densities might have predictive laws, but a random coarse-graining function on 3N positions and velocities will not.)
But this isn’t a complete story, since we obviously do throw away some information (e.g., the higher-level laws are often stochastic), and it’s not clear that there aren’t alternative variables we could choose that do have predictive laws but are unrelated to our experience or anything we care about.
- He says it’s wrong to summarize the past hypothesis as “the early universe was in a low-entropy state”, because that’s a statement about it being in a certain macrostate associated with low entropy. Rather, the past hypothesis is about assuming the early universe isn’t very carefully correlated at the microlevel in a way that conspires today to cause statistical mechanics to break (in the same way that if you took an irreversible process and looked at its time-inverse, it would look like there were a bunch of carefully fine-tuned conspiracies).
But is that true? Hypothesizing that the early universe had a region at high temperature separate from one at low temperature (which is a low-entropy state; a textbook version of this entropy count is at the end of this list) is the sort of thing we need to get irreversible laws, right? If we were to hypothesize merely that the state of the early universe didn’t have careful fine-tuned conspiracies, but was otherwise high-entropy, then we would already be in heat death now. It’s true that the no-conspiracy hypothesis would guarantee we’d persist in thermal equilibrium, rather than miraculously evolving into a low-entropy state, but that’s not enough to explain the world around us. (So maybe he’s not trying to explain the world around us, and is just trying to explain the time-irreversibility of macroscopic laws? I don’t think so, because if we were in and remained in thermal equilibrium, then the macroscopic laws would be time-reversible!)
- Likewise, he says “the fact that our macro laws are entropy increasing means it’s not very difficult to infer from the historical record that the past had lower entropy”. No way! The macro laws are entropy non-decreasing, but (without a past hypothesis) the most likely state which agrees with a mid-entropy present-day observation is a conspiratorial high-entropy state in the past.
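For reference, and as my own gloss rather than anything in Wallace’s notation: by “Jaynes’ method” in the first comment above I mean the max-entropy prescription, which assigns the distribution maximizing the Gibbs/Shannon entropy subject to whatever macroscopic data are known at the chosen starting time,

```latex
% My gloss on Jaynes' MaxEnt prescription (standard textbook form, not from Wallace's paper).
\[
  p^{*} \;=\; \arg\max_{p}\Big(-\sum_i p_i \ln p_i\Big)
  \quad \text{subject to} \quad
  \sum_i p_i = 1, \qquad \sum_i p_i\, A^{(k)}_i = \langle A^{(k)} \rangle_{\text{known}} .
\]
```

Nothing in this prescription singles out the beginning of the universe; it is imposed at whatever moment we start the analysis, which is why, read realistically, it retrodicts entropy decrease toward earlier times.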
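And a textbook check of the claim above that separated hot and cold regions constitute a lower-entropy state than the equilibrated one (my own illustration: two identical bodies of heat capacity C at temperatures T_h > T_c):

```latex
% Two identical bodies with heat capacity C at T_h and T_c equilibrate to T_f = (T_h + T_c)/2;
% the total entropy change is strictly positive (by the AM-GM inequality) unless T_h = T_c.
\[
  \Delta S \;=\; C\ln\frac{T_f}{T_h} + C\ln\frac{T_f}{T_c}
           \;=\; C\ln\frac{(T_h + T_c)^2}{4\,T_h T_c} \;>\; 0 .
\]
```

So positing such a temperature difference in the early universe really is positing entropy below its maximum, which is what gives irreversible macroscopic behavior something to do.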