Late, alas. Also: there have been a couple of complaints about the spam filter for comments on this blog, and I’m trying to track down the issue. The filter is supposed to tell you what’s wrong and help you successfully post the comment. If you’ve been unable to get past the filter, or if it’s just too much of a hassle even when you can get past it, please let me know so I can try to fix this.
Europe’s Galileo satellite navigation system recently went online, although not yet with a complete constellation. In just a few years, there will be four fully independent navigation systems operated by great powers: the EU (Galileo), the US (GPS), Russia (GLONASS), and China (BeiDou). Devices are already being built to use all four systems at once. Everyone wins through the increased redundancy and satellite count.
Design of the Solo cup.
I highly recommend this semi-technical talk on ARC fusion reactor design by Dennis Whyte.
(Video DownloadHelper allows downloading video off YouTube.)
Proposed in 2014 by Whyte and collaborators, ARC is a newer, still-under-development alternative to the traditional tokamak-style reactor, in which rare-earth barium copper oxide (REBCO) superconductors play a crucial role. Whyte argues that the key hold-up on fusion reactors is their sheer size, which necessitates large-scale, lumbering international collaboration. REBCO superconductors are the key technical advance allowing smaller magnetic confinement, because the parameters of these designs scale extremely well with increased magnetic field. Significant downsides include increased vessel pressure and pulsed operation, due to intrinsic limitations on neutron shielding. The fusion fuel is deuterium and tritium, the most amenable choice of reactants on the fusion side of the nuclear binding-energy curve.
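To make the field-scaling claim concrete, here is the standard back-of-the-envelope tokamak argument (my paraphrase of the usual scaling, not a transcription from the talk): at fixed plasma beta the confined pressure grows as the magnetic energy density, and since the D-T reactivity scales roughly as \(T^2\) in the relevant temperature window, the fusion power density goes as the square of the pressure:

```latex
\beta \equiv \frac{2 \mu_0 p}{B^2} = \text{const}
  \quad\Longrightarrow\quad p \propto B^2,
\qquad
P_{\text{fus}} \propto n^2 \langle \sigma v \rangle \propto (nT)^2 = p^2 \propto B^4 .
```

Doubling the field thus buys roughly a sixteenfold increase in power density at fixed machine size, which is why the stronger fields achievable with REBCO magnets translate directly into a much smaller reactor.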
… [continue reading]
Bousso has a recent paper bounding the maximum information that can be sent by a signal from first principles in QFT:
I derive a universal upper bound on the capacity of any communication channel between two distant systems. The Holevo quantity, and hence the mutual information, is at most of order \(Et/\hbar\), where \(E\) is the average energy of the signal and \(t\) is the amount of time for which detectors operate. The bound does not depend on the size or mass of the emitting and receiving systems, nor on the nature of the signal. No restrictions on preparing and processing the signal are imposed. As an example, I consider the encoding of information in the transverse or angular position of a signal emitted and received by systems of arbitrarily large cross-section. In the limit of a large message space, quantum effects become important even if individual signals are classical, and the bound is upheld.
Here’s his first figure:
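For a rough numerical feel (my own arithmetic, not an example from the paper), plug everyday numbers into the \(Et/\hbar\) bound:

```python
# Back-of-the-envelope evaluation of the E*t/hbar bound with everyday
# numbers (my own arithmetic, not from Bousso's paper).
hbar = 1.054571817e-34        # reduced Planck constant, in J*s
E, t = 1.0, 1.0               # average signal energy (J) and detector time (s)
print(E * t / hbar)           # ~9.5e33; real channels sit far below this
```

A one-joule, one-second signal could in principle carry some \(10^{34}\) nats, so the bound is astronomically far from binding for any practical channel; its interest is foundational.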
This all stems from vacuum entanglement, an oft-neglected aspect of QFT that Bousso doesn’t emphasize in the paper as the key ingredient. (I thank Scott Aaronson for first pointing this out.) The gradient term in the Hamiltonian for QFTs means that the value of the field at two nearby locations is always entangled. In particular, the values \(\phi(x)\) and \(\phi(x+\Delta x)\) are sometimes considered independent degrees of freedom but, for a state with bounded energy, they can’t actually take arbitrarily different values as \(\Delta x\) becomes small, or else the gradient contribution to the Hamiltonian would violate the energy bound. Technically this entanglement exists over arbitrary distances, but it is exponentially suppressed on scales larger than the Compton wavelength of the field.… [continue reading]
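The exponential suppression is easy to see numerically in the simplest setting. Here is a minimal sketch (my own toy model, not from the paper): the equal-time vacuum correlator of a free massive scalar on a one-dimensional lattice, which falls off roughly like \(e^{-md}\) once the separation \(d\) exceeds the Compton wavelength \(1/m\).

```python
# Vacuum two-point function of a free massive scalar on a 1D periodic
# lattice (lattice units, hbar = 1). Correlations persist at all
# distances but are exponentially suppressed beyond ~1/m.
import numpy as np

N, m = 400, 0.2                                  # sites, field mass
k = 2 * np.pi * np.arange(N) / N                 # lattice momenta
omega = np.sqrt(m**2 + 4 * np.sin(k / 2)**2)     # lattice dispersion relation

def corr(d):
    """Equal-time ground-state correlator <phi_0 phi_d>."""
    return np.sum(np.cos(k * d) / (2 * omega)) / N

for d in [1, 2, 5, 10, 20, 40]:
    print(d, corr(d))                            # ~exp(-m*d) tail for d >> 1/m
```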
[This post was originally “Part 0”, but it’s been moved. Other parts in this series: 1,2,3,4,5,6,7.]
In an ideal world, the formalism that you use to describe a physical system is in a one-to-one correspondence with the physically distinct configurations of the system. But sometimes it can be useful to introduce additional descriptions, in which case it is very important to understand the unphysical over-counting (e.g., gauge freedom). A scalar potential \(V(x)\) is a very convenient way of representing the vector force field \(F(x) = -\nabla V(x)\), but any constant shift in the potential, \(V(x) \to V(x) + V_0\), yields forces and dynamics that are indistinguishable, and hence the value of the potential on an absolute scale is unphysical.
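As a trivial numerical illustration of that over-counting (my own example, with an arbitrary made-up potential):

```python
# Shifting a potential by a constant leaves the force, and hence the
# dynamics, exactly unchanged; only potential *differences* are physical.
import numpy as np

x = np.linspace(-1.0, 1.0, 1001)
V = x**4 - x**2                             # some potential V(x)
F_original = -np.gradient(V, x)             # force from V
F_shifted = -np.gradient(V + 7.3, x)        # force from V plus a constant
print(np.allclose(F_original, F_shifted))   # True
```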
One often hears that a quantum experiment measures an observable, but this is wrong, or at least very misleading, because it vastly over-counts the physically distinct sorts of measurements that are possible. It is much more precise to say that a given apparatus, with a given setting, simultaneously measures all observables with the same eigenvectors. More compactly, an apparatus measures an orthogonal basis – not an observable. (We can also allow for the measured observable to be degenerate, in which case the apparatus simultaneously measures all observables with the same degenerate eigenspaces. To be abstract, you could say it measures a commuting subalgebra, with the nondegenerate case corresponding to the subalgebra having maximum dimensionality, i.e., the same number of dimensions as the Hilbert space. Commuting subalgebras with maximum dimension are in one-to-one correspondence with orthonormal bases, modulo multiplying the vectors by pure phases.) You can probably start to see this by just noting that there’s no actual, physical difference between measuring \(X\) and \(X^3\); the apparatuses that would perform the two measurements are identical.… [continue reading]
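A minimal numerical sketch of the point (my own illustration, using an arbitrary random observable): an observable and a relabeled observable with the same eigenvectors yield identical outcome probabilities, so no apparatus can distinguish "measuring" one from "measuring" the other.

```python
# Two "different" observables with the same eigenvectors define the same
# measurement: the Born-rule probabilities depend only on the basis.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2           # a random Hermitian observable
B = 2 * A + np.eye(4)              # same eigenvectors, relabeled eigenvalues

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)         # a random pure state

_, basis = np.linalg.eigh(A)       # eigh(B) yields the same basis
probs = np.abs(basis.conj().T @ psi) ** 2
print(np.round(probs, 4))          # identical outcome statistics for A and B
```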
Perimeter Scholars International (PSI) is now accepting applications for its Master’s program, to start next fall. The due date is Feb 1st. Me previously:
If you’re in your last year as an undergrad, I strongly advise you (seriously) to consider applying. Your choice of grad school is 80% of the selection power determining your thesis topic, and that topic places very strong constraints on your entire academic career. The more your choice is informed by actual physics knowledge (rather than the apparent impressiveness of professors and institutions), the better. An additional year at a new institution taking classes with new teachers can really help.
Here’s the poster and a brand new propaganda video:
… [continue reading]
SciRate is the best location I know of for public discussion and feedback on academic papers, and is an impressive open-source achievement by Aram Harrow and collaborators. Right now it has the most traction in the field of quantum information (quantum info leading the way, as usual…), but it could stand to become more popular, and to expand into other fields.
My colleague and good friend Dan Sank proposes a small but important tweak for SciRate: issue tracking, à la GitHub.
Issues in Scirate?
Scirate enables us to express comments/opinions on published works. Another very useful kind of feedback for research papers is issues. By “issue” I mean exactly the kind of thing I’m writing right now: a description of
a problem with the work which can be definitively fixed, or
a possible improvement to that product.
This differs from comments, which are just statements of opinion and don’t require any reaction from the author. We all know that issues are essential in developing software, and based on a recent experience where I used GitHub to host development of a research paper with three coauthors and more than a dozen group members providing feedback, I think that issues should also be used for research papers.
It might be nice to attach an issue tracker to Scirate, or at least have Scirate give links to an external issue tracker attached to each paper.
Why not just use a public GitHub repo and get the issue tracker for free?
Making a GitHub repo public makes everything public, including any sensitive information such as comments about particular works/people. Having written a paper using GitHub, I can imagine the authors would not want to make that repo public before going through the entire issue history making sure nobody said anything embarrassing/demeaning/etc.
… [continue reading]
I will start writing actual blog posts again soon, I promise. But until then, more nerdy space stuff…
ExoMars is approaching the Red Planet. The lander enters the atmosphere tomorrow.
The United States operated continuous airborne alert — the maintenance of multiple nuclear-armed bomber aircraft continuously in flight to avoid the possibility of a sneak attack neutralizing the bomber force — only during the ’60s, because the accident rate was too high to sustain. However, Operation Looking Glass kept at least one emergency command platform in the air around the clock for almost 30 years.
At DEFCON 2 or higher, the Looking Glass pilot and co-pilot were both required to wear an eye patch, retrieved from their Emergency War Order (EWO) kit. In the event of a surprise blinding flash from a nuclear detonation, the eye patch would prevent blindness in the covered eye, thus enabling them to see in at least one eye and continue flying. Later, the eye patch was replaced by goggles that would instantaneously turn opaque when exposed to a nuclear flash, then rapidly clear for normal vision.
They also continuously maintained airplanes flying over the ocean, dangling antennas into the water, to ensure constant communication with submarines. This stopped in 1991.
Very relatedly, former Secretary of Defense William Perry is teaching a MOOC about the continuing modern risk of nuclear weapons.
A history of Project Orion. Abstract:
The race to the Moon dominated manned space flight during the 1960s, and culminated in Project Apollo, which placed 12 humans on the Moon. Unbeknownst to the public at that time, several U.S. government agencies sponsored a project that could have conceivably placed 150 people on the Moon, and eventually sent crewed expeditions to Mars and the outer planets.
… [continue reading]
President Obama was directly asked in a Wired interview about the dangers Bostrom raises regarding AI. From the transcript:
DADICH: I want to center our conversation on artificial intelligence, which has gone from science fiction to a reality that’s changing our lives. When was the moment you knew that the age of real AI was upon us?
OBAMA: My general observation is that it has been seeping into our lives in all sorts of ways, and we just don’t notice; and part of the reason is because the way we think about AI is colored by popular culture. There’s a distinction, which is probably familiar to a lot of your readers, between generalized AI and specialized AI. In science fiction, what you hear about is generalized AI, right? Computers start getting smarter than we are and eventually conclude that we’re not all that useful, and then either they’re drugging us to keep us fat and happy or we’re in the Matrix. My impression, based on talking to my top science advisers, is that we’re still a reasonably long way away from that. It’s worth thinking about because it stretches our imaginations and gets us thinking about the issues of choice and free will that actually do have some significant applications for specialized AI, which is about using algorithms and computers to figure out increasingly complex tasks. We’ve been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed, and it promises to create a vastly more productive and efficient economy. If properly harnessed, it can generate enormous prosperity and opportunity. But it also has some downsides that we’re gonna have to figure out in terms of not eliminating jobs.
… [continue reading]
When talking to folks about the quantum measurement problem, and its potential partial resolution by solving the set selection problem, I’ve recently been deploying three nonstandard arguments. To a large extent, these are dialectic strategies rather than unique arguments per se. That is, they are notable for me mostly because they avoid getting bogged down in some common conceptual dispute, not necessarily because they demonstrate something that doesn’t formally follow from traditional arguments. At least two of these seem new to me, in the sense that I don’t remember anyone else using them, but I strongly suspect that I’ve just appropriated them from elsewhere and forgotten. Citations to prior art are highly appreciated.
Passive quantum mechanics
There are good reasons to believe that, at the most abstract level, the practice of science doesn’t require a notion of active experiment. Rather, a completely passive observer could still in principle derive all fundamental physical theories simply by sitting around and watching. Science, at this level, is about explaining as many observations as possible starting from assumptions that are as minimal as possible. Abstractly, we can frame science as a compression algorithm that searches for the shortest program, i.e., one whose length is the Kolmogorov complexity, that reproduces the observed data.
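Kolmogorov complexity itself is uncomputable, but a crude, computable stand-in makes the framing concrete. A toy sketch (my own, not from the post), using an off-the-shelf compressor as the "theory finder":

```python
# A lawful data stream (generated by a short program) compresses far
# better than patternless noise; compressed size is a rough proxy for
# the length of the shortest generating program.
import os
import zlib

lawful = bytes((i * i) % 256 for i in range(10_000))   # simple generating law
noise = os.urandom(10_000)                             # nothing to exploit

for name, data in [("lawful", lawful), ("noise", noise)]:
    print(name, len(zlib.compress(data, 9)), "bytes")
```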
Active experiments are of course useful for at least two important reasons: (1) they gather strong evidence for causality by feeding a source of randomness into a system to test a causal model, and (2) they produce sources of data that are directly correlated with systems of interest, rather than relying on highly indirect (and perhaps computationally intractable) correlations. But ultimately these are practical considerations, and an inert but extraordinarily intelligent observer could in principle derive general relativity, quantum mechanics, and field theory (of course, there may be RG reasons to think that scales decouple, and that to a good approximation the large-scale dynamics are compatible with lots of possible small-scale dynamics).… [continue reading]
I’m in search of an authoritative reference giving a foundational/information-theoretic approach to classical measurement. What abstract physical properties are necessary and sufficient?
Motivation: The Copenhagen interpretation treats the measurement process as a fundamental primitive, and this persists in most uses of quantum mechanics outside of foundations. Of course, the modern view is that the measurement process is just another physical evolution, where the state of a macroscopic apparatus is conditioned on the state of a microscopic quantum system in some basis determined by their mutual interaction Hamiltonian. The apparent nonunitary aspects of the evolution inferred by the observer arise because the measured system is coupled to the observer himself; the global evolution of the combined system-apparatus-observer is formally modeled as unitary (although the philosophical meaningfulness/ontology/reality of the components of the wavefunction corresponding to different measurement outcomes is disputed).
Eventually, we’d like to be able to identify all laboratory measurements as just an anthropocentric subset of wavefunction branching events. I am very interested in finding a mathematically precise criterion for branching (note that the branches themselves may be precisely defined only in some large-N or thermodynamic limit). Ideally, I would like to find a property that everyone agrees must apply, at the least, to laboratory measurement processes, and (with as little change as possible) use this to find all branches — not just ones that result from laboratory measurements. (Right now I find the structure of spatially-redundant information in the many-body wavefunction to be a very promising approach.)
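To make "spatially-redundant information" slightly more concrete, here is a toy sketch (my own illustration, not the criterion the post is after): in a GHZ-like post-measurement state, every environment qubit individually carries a full bit about the system, i.e., a redundant record of the branch.

```python
# Redundant records in |0000> + |1111> (normalized): each environment
# qubit has exactly one bit of mutual information with the system qubit.
import numpy as np
from functools import reduce

def reduced(psi, keep, n=4):
    """Reduced density matrix of the qubits listed in `keep`."""
    t = np.moveaxis(psi.reshape([2] * n), keep, range(len(keep)))
    t = t.reshape(2 ** len(keep), -1)
    return t @ t.conj().T

def entropy(rho):
    """Von Neumann entropy in bits."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

zero, one = np.eye(2)
ghz = (reduce(np.kron, [zero] * 4) + reduce(np.kron, [one] * 4)) / np.sqrt(2)

for q in (1, 2, 3):   # qubit 0 is the "system"; 1-3 are the "environment"
    I = entropy(reduced(ghz, [0])) + entropy(reduced(ghz, [q])) \
        - entropy(reduced(ghz, [0, q]))
    print(f"I(system : env qubit {q}) = {I:.3f} bits")   # 1.000 each
```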
It seems sensible to begin with what is necessary for a classical measurement since these ought to be analyzable without all the philosophical baggage that plagues discussion of quantum measurement.… [continue reading]
[PSA: Happy 4th of July. Juno arrives at Jupiter tonight!]
This is short and worth reading:
The sharp distinction between Initial Conditions and Laws of Nature was initiated by Isaac Newton and I consider this to be one of his most important, if not the most important, accomplishment. Before Newton there was no sharp separation between the two concepts. Kepler, to whom we owe the three precise laws of planetary motion, tried to explain also the size of the planetary orbits, and their periods. After Newton's time the sharp separation of initial conditions and laws of nature was taken for granted and rarely even mentioned. Of course, the first ones are quite arbitrary and their properties are hardly parts of physics while the recognition of the latter ones are the prime purpose of our science. Whether the sharp separation of the two will stay with us permanently is, of course, as uncertain as is all future development but this question will be further discussed later. Perhaps it should be mentioned here that the permanency of the validity of our deterministic laws of nature became questionable as a result of the realization, due initially to D. Zeh, that the states of macroscopic bodies are always under the influence of their environment; in our world they can not be kept separated from it.
This essay has no formal abstract; the above is the second paragraph, which I find to be profound. Here is the PDF. The essay shares the same name and much of the material with Wigner’s 1963 Nobel lecture [PDF]. (The Nobel lecture has a nice bit contrasting invariance principles with covariance principles, and dynamical invariance principles with geometrical invariance principles.)… [continue reading]