Review of “Lifecycle Investing”

Summary

In this post I review the 2010 book “Lifecycle Investing” by Ian Ayres and Barry Nalebuff. (Amazon link here; no commission received.) They argue that a large subset of investors should adopt a (currently) unconventional strategy: One’s future retirement contributions should effectively be treated as bonds in one’s retirement portfolio that cannot be efficiently sold; therefore, early in life one should balance these low-volatility assets by gaining exposure to volatile, high-return equities in an amount that generically exceeds 100% of one’s liquid retirement assets, necessitating some form of borrowing.
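To make the mechanics concrete, here is a minimal sketch of the arithmetic with made-up numbers (the 2:1 leverage cap is, as I recall, roughly what the authors recommend; everything else below is purely hypothetical): treat the present value of future contributions as a bond-like asset, target a fixed stock fraction of total retirement wealth, and see what equity exposure that implies for the liquid account.

    # A toy illustration (my numbers, not the authors') of why the target
    # equity exposure early in life can exceed 100% of liquid savings.
    liquid_savings = 50_000               # current retirement balance (hypothetical)
    pv_future_contributions = 450_000     # discounted future contributions (hypothetical)
    target_stock_fraction = 0.60          # desired stock share of *total* retirement wealth
    max_leverage = 2.0                    # roughly the cap the book suggests, as I recall

    total_wealth = liquid_savings + pv_future_contributions
    target_equity = target_stock_fraction * total_wealth
    unconstrained = target_equity / liquid_savings     # as a multiple of liquid assets
    actual = min(unconstrained, max_leverage)

    print(f"Target equity: ${target_equity:,.0f} ({unconstrained:.0%} of liquid assets)")
    print(f"With a {max_leverage:.0f}:1 leverage cap, hold ${actual * liquid_savings:,.0f} "
          f"of stock ({actual:.0%} of liquid assets)")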

“Lifecycle Investing” was recommended to me by a friend who said the book “is extremely worth reading…like learning about index funds for the first time…Like worth paying 1% of your lifetime income to read if that was needed to get access to the ideas…potentially a lot more”. Ayres and Nalebuff lived up to this recommendation. Eventually, I expect the basic ideas, which are simple, to become so widespread and obvious that it will be hard to remember that they required an insight.

In part, what makes the main argument so compelling is that (as shown in the next section) it is closely related to an elegant explanation for something we all knew to be true — you should increase the bond-stock ratio of your portfolio as you get older — yet previously had bad justifications for. It also gives new actionable, non-obvious, and potentially very important advice (buy equities on margin when young) that is appropriately tempered by real-world frictions. And, most importantly, it means I personally feel less bad about already being nearly 100% in stocks when I picked up the book.

My main concerns, which are shared by other reviewers and which are only partially addressed by the authors, are:

  • Future income streams might be more like stocks than bonds for the large majority of people.
[continue reading]

How shocking are rare past events?

This post describes variations on a thought experiment involving the anthropic principle. The variations were developed through discussion with Andreas Albrecht, Charles Bennett, Leonid Levin, and Andrew Arrasmith at a conference at the Niels Bohr Institute in Copenhagen in October of 2019. I have not yet finished reading Bostrom’s “Anthropic Bias”, so I don’t know where it fits into his framework. I expect it is subsumed by such existing discussion, and I would appreciate pointers.

The point is to consider a few thought experiments that share many of the same important features, but for which we have very different intuitions, and to identify whether there are any substantive differences that can be used to justify these intuitions.

I will use the term “shocked” (in the sense of “I was shocked to see Bob levitate off the ground”) to refer to the situation where we have made observations that are extremely unlikely to be generated by our implicit background model of the world, such that good reasoners would likely reject the model and start entertaining previously disfavored alternative models like “we’re all brains in a vat”, the Matrix, etc. In particular, to be shocked is not supposed to be merely a description of human psychology, but rather is a normative claim about how good scientific reasoners should behave.

Here are the three scenarios:

Scenario 1: Through advances in geology, paleontology, theoretical biology, and quantum computer simulation of chemistry, we get very strong theoretical evidence that intelligent life appears with high likelihood following abiogenesis events, but that abiogenesis itself is very rare: there is one expected abiogenesis event per 10^22 stars per Hubble time.
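For scale (using a round figure of my own, not from the post): the observable universe contains something like 10^22 to 10^24 stars, so this rate corresponds to an expected number of abiogenesis events in the observable universe per Hubble time of roughly

    \[ \langle N \rangle \sim \frac{10^{22}\ \text{to}\ 10^{24}\ \text{stars}}{10^{22}\ \text{stars per event}} \approx 1\ \text{to}\ 100 .\]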
[continue reading]

Quotes from Curtright et al.’s history of quantum mechanics in phase space

Curtright et al. have a monograph on the phase-space formulation of quantum mechanics. I recommend reading their historical introduction.

A Concise Treatise on Quantum Mechanics in Phase Space
Thomas L. Curtright, David B. Fairlie, and Cosmas K. Zachos
Wigner’s quasi-probability distribution function in phase-space is a special (Weyl–Wigner) representation of the density matrix. It has been useful in describing transport in quantum optics, nuclear physics, quantum computing, decoherence, and chaos. It is also of importance in signal processing, and the mathematics of algebraic deformation. A remarkable aspect of its internal logic, pioneered by Groenewold and Moyal, has only emerged in the last quarter-century: It furnishes a third, alternative, formulation of quantum mechanics, independent of the conventional Hilbert space or path integral formulations. In this logically complete and self-standing formulation, one need not choose sides between coordinate or momentum space. It works in full phase-space, accommodating the uncertainty principle; and it offers unique insights into the classical limit of quantum theory: The variables (observables) in this formulation are c-number functions in phase space instead of operators, with the same interpretation as their classical counterparts, but are composed together in novel algebraic ways.

Here are some quotes. First, the phase-space formulation should be placed on equal footing with the Hilbert-space and path-integral formulations:

When Feynman first unlocked the secrets of the path integral formalism and presented them to the world, he was publicly rebuked: “It was obvious”, Bohr said, “that such trajectories violated the uncertainty principle”.

However, in this case, Bohr was wrong. Today path integrals are universally recognized and widely used as an alternative framework to describe quantum behavior, equivalent to although conceptually distinct from the usual Hilbert space framework, and therefore completely in accord with Heisenberg’s uncertainty principle…

Similarly, many physicists hold the conviction that classical-valued position and momentum variables should not be simultaneously employed in any meaningful formula expressing quantum behavior, simply because this would also seem to violate the uncertainty principle…However, they too are wrong.
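As a supplement (standard material, not a quote from the monograph): the “novel algebraic ways” referred to in the abstract are built on the Groenewold–Moyal star product,

    \[ (f \star g)(x,p) = f(x,p)\, \exp\!\left[\frac{i\hbar}{2}\left(\overleftarrow{\partial}_x \overrightarrow{\partial}_p - \overleftarrow{\partial}_p \overrightarrow{\partial}_x\right)\right] g(x,p), \]

whose bracket [f,g]_\star = (f \star g - g \star f)/(i\hbar) reduces to the Poisson bracket as \hbar \to 0, which is one concrete way the classical limit shows up in this formulation.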

[continue reading]

Ground-state cooling by Delic et al. and the potential for dark matter detection

The implacable Aspelmeyer group in Vienna announced a gnarly achievement in November (recently published):

Cooling of a levitated nanoparticle to the motional quantum ground state
Uroš Delić, Manuel Reisenbauer, Kahan Dare, David Grass, Vladan Vuletić, Nikolai Kiesel, Markus Aspelmeyer
We report quantum ground state cooling of a levitated nanoparticle in a room temperature environment. Using coherent scattering into an optical cavity we cool the center of mass motion of a 143 nm diameter silica particle by more than 7 orders of magnitude to n_x = 0.43 \pm 0.03 phonons along the cavity axis, corresponding to a temperature of 12 μK. We infer a heating rate of \Gamma_x/2\pi = 21\pm 3 kHz, which results in a coherence time of 7.6 μs – or 15 coherent oscillations – while the particle is optically trapped at a pressure of 10^{-6} mbar. The inferred optomechanical coupling rate of g_x/2\pi = 71 kHz places the system well into the regime of strong cooperativity (C \approx 5). We expect that a combination of ultra-high vacuum with free-fall dynamics will allow to further expand the spatio-temporal coherence of such nanoparticles by several orders of magnitude, thereby opening up new opportunities for macroscopic quantum experiments.
[EDIT: The same group has more recently achieved ground-state cooling with real-time control feedback.]
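As a quick sanity check on the quoted numbers (the mechanical frequency below is my assumption; the abstract quotes only the occupation and the temperature): inverting the Bose–Einstein occupation for a harmonic mode at roughly 300 kHz indeed gives about 12 μK.

    # Consistency check of the quoted occupation and temperature.
    # The mechanical frequency (~305 kHz along the cavity axis) is an assumption
    # on my part; it is not stated in the abstract quoted above.
    import numpy as np

    hbar = 1.054_571_8e-34      # J s
    kB = 1.380_649e-23          # J / K

    omega = 2 * np.pi * 305e3   # assumed mechanical frequency (rad/s)
    n_x = 0.43                  # quoted phonon occupation

    # Invert n = 1 / (exp(hbar*omega / (kB*T)) - 1) for the temperature T.
    T = hbar * omega / (kB * np.log(1 + 1 / n_x))
    print(f"Effective temperature: {T * 1e6:.1f} microkelvin")   # ~12 uK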

Ground-state cooling of nanoparticles in laser traps is a very important milestone on the way to producing large spatial superpositions of matter, and I have a long-standing obsession with the possibility of using such superpositions to probe for the existence of new particles and forces like dark matter. In this post, I put this milestone in a bit of context and then toss up a speculative plot for the estimated dark-matter sensitivity of a follow-up to Delić et al.’s device.

One way to organize the quantum states of a single continuous degree of freedom, like the center-of-mass position of a nanoparticle, is by their sensitivity to displacements in phase space.… [continue reading]

The interpretation of free energy as bit-erasure capacity

Our paper discussed in the previous blog post might prompt this question: Is there still a way to use Landauer’s principle to convert the free energy of a system to its bit erasure capacity? The answer is “yes”, which we can demonstrate with a simple argument.


Summary: The correct measure of bit-erasure capacity N for an isolated system is the negentropy, the difference between the entropy the system would have if allowed to thermalize with its current internal energy and its current entropy. The correct measure of erasure capacity for a constant-volume system with free access to a bath at constant temperature T is the Helmholtz free energy A (divided by kT, per Landauer’s principle), provided that the additive constant of the free energy is set such that the free energy vanishes when the system thermalizes to temperature T. That is,

    \[N = \frac{A}{kT} = \frac{U-U_0}{kT} - (S - S_0),\]

where U_0 and S_0 are the internal energy and entropy of the system if it were at temperature T. The system’s negentropy lower bounds this capacity, and this bound is saturated when U = U_0.
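Here is a small numerical sketch of the formula, using a toy system of my own (not from the paper): n two-level systems with gap ε, currently in a known pure state with half of them excited, with free access to a bath at temperature T. The capacity exceeds the negentropy, as claimed, because U > U_0 here.

    # Toy example of N = (U - U0)/(kT) - (S - S0) for n two-level systems with
    # gap eps, in a *known* pure state (S = 0) with half of them excited.
    import numpy as np

    kB = 1.380_649e-23     # J / K
    T = 300.0              # bath temperature (K)
    eps = kB * T           # choose the gap so that eps / (kB*T) = 1
    n = 1_000              # number of two-level systems

    beta = 1.0 / (kB * T)
    p = 1.0 / (np.exp(beta * eps) + 1.0)     # thermal excitation probability

    # Current state: known pure state, half the spins excited.
    U, S = 0.5 * n * eps, 0.0                # entropy in nats

    # Values the system would have after thermalizing with the bath at T.
    U0 = n * eps * p
    S0 = n * (-p * np.log(p) - (1 - p) * np.log(1 - p))   # nats

    # Erasure capacity, converting nats to bits at the end.
    N_bits = ((U - U0) * beta - (S - S0)) / np.log(2)

    # Negentropy: thermalizing at fixed energy n*eps/2 means infinite temperature,
    # i.e. (approximately) the maximally mixed state with an entropy of n bits.
    negentropy_bits = n * 1.0

    print(f"Erasure capacity:       {N_bits:.0f} bits")
    print(f"Negentropy lower bound: {negentropy_bits:.0f} bits")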


Traditionally, the Helmholtz free energy of a system is defined as \tilde{A} = U - kTS, where U and S are the internal energy and entropy of the system and T is the constant temperature of an external infinite bath with which the system can exchange energy. (There is a factor of Boltzmann’s constant k in front of TS because I am measuring the (absolute) entropy S in dimensionless bits rather than in units of energy per temperature; that way we can write things like N = S_0 - S.) (I will suppress the “Helmholtz” modifier henceforth; when the system’s pressure rather than volume is constant, my conclusion below holds for the Gibbs free energy if the obvious modifications are made.)… [continue reading]

On computational aestivation

People often say to me “Jess, all this work you do on the foundations of quantum mechanics is fine as far as it goes, but it’s so conventional and safe. When are you finally going to do something unusual and take some career risks?” I’m now pleased to say I have a topic to bring up in such situations: the thermodynamic incentives of powerful civilizations in the far future who seek to perform massive computations. Anders Sandberg, Stuart Armstrong, and Milan M. Ćirković previously argued for a surprising connection between Landauer’s principle and the Fermi paradox, which Charles Bennett, Robin Hanson, and I have now critiqued. Our comment appeared today in the new issue of Foundations of Physics:

Comment on 'The aestivation hypothesis for resolving Fermi's paradox'
Charles H. Bennett, Robin Hanson, C. Jess Riedel
In their article [arXiv:1705.03394], 'That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox', Sandberg et al. try to explain the Fermi paradox (we see no aliens) by claiming that Landauer's principle implies that a civilization can in principle perform far more (~10^30 times more) irreversible logical operations (e.g., error-correcting bit erasures) if it conserves its resources until the distant future when the cosmic background temperature is very low. So perhaps aliens are out there, but quietly waiting. Sandberg et al. implicitly assume, however, that computer-generated entropy can only be disposed of by transferring it to the cosmological background. In fact, while this assumption may apply in the distant future, our universe today contains vast reservoirs and other physical systems in non-maximal entropy states, and computer-generated entropy can be transferred to them at the adiabatic conversion rate of one bit of negentropy to erase one bit of error.
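For context (standard background, not part of the quoted abstract): Landauer’s principle bounds the free-energy cost of erasing one bit into a bath at temperature T by

    \[ E_{\min} = k T \ln 2 , \]

so the number of erasures a fixed energy budget can pay for scales as 1/T, which is roughly how a relevant temperature dropping by ~30 orders of magnitude turns into a factor like 10^30.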
[continue reading]

FAQ about experimental quantum Darwinism

I am briefly stirring from my blog-hibernation (this blog will resume at full force sometime in the future, but not just yet) to present a collection of frequently asked questions about experiments seeking to investigate quantum Darwinism (QD). Most of the questions were asked by (or evolved from questions asked by) Philip Ball while we corresponded regarding his recent article “Quantum Darwinism, an Idea to Explain Objective Reality, Passes First Tests” for Quanta magazine, which I recommend you check out.


Who is trying to see quantum Darwinism in experiments?

I am aware of two papers out of a group from Arizona State in 2010 (here and here) and three papers from separate groups last year (arXiv: 1803.01913, 1808.07388, 1809.10456). I haven’t looked at them all carefully so I can’t vouch for them, but I think the more recent papers would be the closest thing to a “test” of QD.

What are the experiments doing to put QD to the test?

These teams construct a kind of “synthetic environment” from just a few qubits, and then interrogate them to discover the information that they contain about the quantum system to which they are coupled.

What do you think of experimental tests of QD in general?

Considered as a strictly mathematical phenomenon, QD is the dynamical creation of certain kinds of correlations between certain systems and their environments under certain conditions. These experiments directly confirm that, if such conditions are created, the expected correlations are obtained.
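To make those “certain kinds of correlations” concrete, here is a minimal toy calculation of my own (not taken from any of the experiments above): for a GHZ-like branching state of one system qubit and four environment qubits, the quantum mutual information between the system and an environment fragment sits at the system’s 1 bit of classical entropy for every partial fragment, and only jumps to 2 bits when the entire environment is collected. That plateau is the redundancy signature QD predicts.

    # Mutual information I(S:F) between a system qubit S and environment
    # fragments F of increasing size, for the branching state
    # (|0>|0000> + |1>|1111>) / sqrt(2).
    import numpy as np

    def partial_trace(rho, keep, dims):
        """Reduced density matrix on the subsystems listed in `keep`."""
        n = len(dims)
        traced = [i for i in range(n) if i not in keep]
        rho = rho.reshape(dims + dims)       # axes: (i_0..i_{n-1}, j_0..j_{n-1})
        perm = (list(keep) + traced
                + [n + i for i in keep] + [n + i for i in traced])
        rho = rho.transpose(perm)
        dk = int(np.prod([dims[i] for i in keep])) if keep else 1
        dt = int(np.prod([dims[i] for i in traced])) if traced else 1
        return np.einsum('atbt->ab', rho.reshape(dk, dt, dk, dt))

    def entropy_bits(rho):
        """Von Neumann entropy in bits."""
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-np.sum(evals * np.log2(evals)))

    n_env = 4
    dims = (2,) * (1 + n_env)                # qubit 0 = system, qubits 1..4 = environment
    psi = np.zeros(2 ** (1 + n_env))
    psi[0] = psi[-1] = 1 / np.sqrt(2)        # (|0>|0000> + |1>|1111>) / sqrt(2)
    rho = np.outer(psi, psi.conj())

    for f in range(n_env + 1):
        frag = list(range(1, 1 + f))         # first f environment qubits
        S_s = entropy_bits(partial_trace(rho, [0], dims))
        S_f = entropy_bits(partial_trace(rho, frag, dims)) if frag else 0.0
        S_sf = entropy_bits(partial_trace(rho, [0] + frag, dims))
        print(f"fragment size {f}: I(S:F) = {S_s + S_f - S_sf:.2f} bits")
    # Output: 0.00, then a plateau at 1.00 bit, jumping to 2.00 bits at f = 4.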

The experiments are, unfortunately, not likely to offer many insights or opportunities for surprise; the result can be predicted with very high confidence long in advance.… [continue reading]

Links for July 2018

  • The hyoid bone is unique in the human skeleton for being free-floating; it does not share a joint with any other bones, and is only distantly connected to the skull through the stylohyoid ligament. It is mostly held in place by muscle and cartilage, and helps control the tongue and larynx. A cat’s clavicle, unlike a human’s, is similarly free-floating, allowing a cat’s shoulders to squeeze through openings as narrow as its skull.
  • Mars Pathfinder

    …was the first of a series of missions to Mars that included rovers, and was the first successful lander since the two Vikings landed on the red planet in 1976…In addition to scientific objectives, the Mars Pathfinder mission was also a “proof-of-concept” for various technologies, such as airbag-mediated touchdown and automated obstacle avoidance, both later exploited by the Mars Exploration Rover mission. The Mars Pathfinder was also remarkable for its extremely low cost relative to other robotic space missions to Mars.

    Here’s Cindy Healy talking about UNIX administration for Pathfinder.

    (H/t Dan Fincke.)

  • Good write-up about the boys rescued from the cave in Thailand.
  • 18-year-old Ewin Tang has proved that the Kerenidis and Prakash recommendation algorithm does not provide an example of an exponential speed-up in quantum machine learning. Here’s his advisor Scott Aaronson on the implications:

    Prior to Ewin’s result, the KP algorithm was arguably the strongest candidate there was for an exponential quantum speedup for a real-world machine learning problem. The new result thus, I think, significantly changes the landscape for quantum machine learning; note that whether KP gives a real exponential speedup was one of the main open problems mentioned in John Preskill’s survey on the applications of near-term quantum computers

    More Fuel For The QML Skeptic Game.

[continue reading]