- Why does a processor need billions of transistors if it’s only ever executing a few dozen instructions per clock cycle?
- Nuclear submarines as refuges from global catastrophes.
…corporate transactions such as mergers and acquisitions or financings are characterized by several salient facts that lack a complete theoretical account. First, they are almost universally negotiated through agents. Transactional lawyers do not simply translate the parties’ bargain into legally enforceable language; rather, they are actively involved in proposing and bargaining over the transaction terms. Second, they are negotiated in stages, often with the price terms set first by the parties, followed by negotiations primarily among lawyers over the remaining non-price terms. Third, while the transaction terms tend to be tailored to the individual parties, in negotiations the parties frequently resort to claims that specific terms are (or are not) “market.” Fourth, the legal advisory market for such transactions is highly concentrated, with a half-dozen firms holding a majority of the market share.
[Our] claim is that, for complex transactions experiencing either sustained innovation in terms or rapidly changing market conditions, (1) the parties will maximize their expected surplus by investing in market information about transaction terms, even under relatively competitive conditions, and (2) such market information can effectively be purchased by hiring law firms that hold a significant market share for a particular type of transaction.
…The considerable complexity of corporate transaction terms creates an information problem: One or both parties may simply be unaware of the complete set of surplus-increasing terms for the transaction, and of their respective outside options should negotiations break down. This problem is distinct from the classic problem of valuation uncertainty. Rather than unawareness of facts that may affect the value of the capital asset to be transferred between the parties, the problem identified here is unawareness of the possibilities for contracting with respect to that asset.
The non-price terms of transactional agreements and their associated payoffs may change rapidly as a result of contractual innovation and market conditions, such that parties without current market information may have difficulty determining their expected surplus from transacting. This is particularly so for corporate transactions involving private companies or private securities offerings, because the transaction terms will remain private for at least some period of time
- Fermat’s Library for this week is “Programming Considered as a Human Activity” by Edsger W. Dijkstra (1965), with nice comments by João Batalha.
- The original Caesar salad was invented in Tijuana, Mexico by Caesar Cardini, an Italian immigrant and restaurateur.
Operation Paul Bunyan and the surrounding events in the Korean Demilitarized Zone were bizarre. Here is a weird tour of the Joint Security Area in the DMZ, the only location where the two sides meet face-to-face.
- Schrödinger’s rat returns.
- Alan Kay answers “What made Xerox PARC special? Who else today is like them?”
Vipul Naik posted some links (1,2,3) that criticize StackOverflow for being unwelcoming to newcomers. Jonah Bishop gives a representative anecdote:
In an effort to keep the community as clean and orderly as possible, new users have very little rights from the get-go. On paper, this is a pretty nice idea. In practice, it makes it difficult for new users to gain any traction. I read through a number of questions today and had several comments for the original poster. Unfortunately, I couldn’t make my comments, since new users cannot post comments on articles they themselves didn’t write (you have to gain “reputation” in order to gain that privilege). Posting my comment as an “answer” to the original question seemed like bad form, so I didn’t do that. Looking elsewhere around the site, I found a few questions I felt I could answer. As soon as I went to answer said questions, someone else (in some cases, a number of other people) had jumped in and beaten me to the punch. I never had a chance to provide a helpful answer. Not only do you have to be very knowledgeable about a subject, you’ve also got to be very fast in providing said answer. I eventually did provide an answer for a question, then realized that my approach wouldn’t work. Before I could take action and modify the answer, my submission had already been modded down by several people, several of whom left snarky remarks. What a warm welcome for a new user! I subsequently deleted my answer.
Here are my (reposted) thoughts: These issues seem to mirror complaints about Wikipedia over the past ~5 years. Certainly, I have experienced aspects of this phenomenon at both sites. In both cases, many people claim this represents a decline in quality of the site, perhaps due to incumbent users entrenching themselves with wiki-lawyering (and SO’s equivalent). These explanations used to appeal to me too.
However, my new tentative interpretation is that this doesn’t represent a decline in these sites so much as them asymptoting to their maximum quality given their rules and site structure. The key evidence is that, in both cases, the apparent decline in quality has not caused these sites to become less useful to *read*. My understanding is that their traffic continues to grow and there are no serious competitors. In other words: it becomes harder and harder to contribute to these sites *given their rules*, frustrating users, but their quality is either still slowly improving or stagnant.
(I’ve seen isolated examples where the quality of a Wikipedia page has declined, but I think these are exceptions to the rule. Evidence that a decline is systematic would falsify my interpretation.)
Now, this is only a valid argument insofar as you assume the mission of the site is to produce good content for others rather than help the users. This seems a safe assumption for Wikipedia. It’s easy for SO users to think that the site is about answering their questions, but the founders make it pretty clear that this takes a back seat to producing useful content. Helping users is an instrumental goal, acting to incentivize contribution, for the primary goal of creating content. SO has become less helpful to users mostly because it has answered the questions that are best suited for the structure of the site, and now a large fraction of questions being asked are marginal questions, e.g., ones that are more subjective or localized and not a good fit for SO’s format.
This is *not* to say that these sites couldn’t improve with new structure/mechanisms/rules, and I think it’s completely true that people’s complaints are pointing exactly toward the issues that new mechanisms could help with. But I conjecture this mostly requires new insight about how to design collaborative websites, and new technical tools. For instance, I am an inclusionist, and I think Wikipedia should move in that direction. But deletionism exists for a very real reason: it’s a crude method of avoiding the workload required to police a long tail of non-notable but verifiable content, which has a high surface area for abuse. The way to fix this, in my opinion, is to create new tools for monitoring and verifying content at scale, not to spend more time arguing against deletionism.
- Fantastic article by Dylan Matthews on kidney donation.
Before modern military drones:
Aphrodite and Anvil were the World War II code names of United States Army Air Forces and United States Navy operations to use B-17 and PB4Y bombers as precision-guided munitions against bunkers and other hardened/reinforced enemy facilities, such as those targeted during Operation Crossbow….
Old Boeing B-17 Flying Fortress bombers were stripped of all normal combat armament and all other non-essential gear (armor, guns, bomb racks, transceiver, seats, etc.), relieving about 12,000 lb (5,400 kg) of weight. To allow easier exit when the pilot and co-pilot were to parachute out, the canopy was removed. Azon radio remote-control equipment was added, with two television cameras fitted in the cockpit to allow a view of both the ground and the main instrumentation panel to be transmitted back to an accompanying CQ-17 ‘mothership’.
The drone was loaded with explosives weighing more than twice that of a B-17’s normal bomb payload. The British Torpex used for the purpose was itself 50% more powerful than TNT.
A relatively remote location in Norfolk, RAF Fersfield, was the launch site. Initially, RAF Woodbridge had been selected for its long runway, but the possibility of a damaged aircraft that diverted to Woodbridge for landings colliding with a loaded drone caused concerns. The remote control system was insufficient for safe takeoff, so each drone was taken aloft by a volunteer pilot and a volunteer flight engineer to an altitude of 2,000 ft (600 m) for transfer of control to the CQ-17 operators. After successful turnover of control of the drone, the two-man crew would arm the payload and parachute out of the cockpit. The ‘mothership’ would then direct the missile to the target….
Of 14 missions flown, none resulted in the successful destruction of a target. Many aircraft lost control and crashed or were shot down by flak, and many pilots were killed.
- Casey ex Australia on the poor economics of mining water on the moon.
Why do active managers underperform index funds? The standard story for this is generally based on some weak version of the efficient market hypothesis (along with the tacit assumption that the active managers don’t have special knowledge or more powerful brains not available to others) plus the cost of fees. However, apparently managers do even worse compared to the index than fees can account for. What causes this? This short paper (described/advertised poorly in Bloomberg) points to a dead simple mechanism, although I’m going to reframe and tweak it since I think they don’t present it correctly.
The basic idea is that in order to pick stocks, you must necessarily reduce your diversification. Even if you pick stocks completely randomly, so that your expected return equals the index, a risk-averse investor is worse off due to the larger variance. Since there are many ways to interconvert between additional expected return and risk (e.g., insurance), inducing an exchange rate between the two, there is a quantifiable cost to active management even when random. (Indeed, I expect managers to take some steps to effectively buy insurance to reduce their variance.) In other words: the cost of active management does not go to zero in the limit where the manager picks randomly and fees are ignored. Rather, as soon as you start picking you are accruing a cost through reduced diversification; you cannot come out ahead unless the size of your edge compensates for this non-zero cost.
(Ben Hoskin pointed out to me that this edge could be from having small insight into the expected returns of stocks or their risk and/or correlation with other stocks. Skilled managers could pick a small portfolio of stocks with equal returns to the index but reduced risk.)
The authors emphasize the skewness of the distribution of performance over individual stocks, but I suspect the more precise cause is the fact that index returns are dominated by a small number of outperforming stocks even as the number of stocks in the index grows large. (If I knew more about this I would probably say “power law” or “black swans” or something.) The reason this is important is that, were the opposite true, active managers could still approach the volatility of the index by simply having a large portfolio on an absolute scale (); in contrast, for outlier-dominated indices, you must have an order-unity fraction of the index, i.e., a large portfolio on a relative scale. Now, such a distribution of stock performance implies skewness simply because there is a zero lower bound on performance, but there are skewed distributions where random active managers can do fine. I think my claim is different from the authors’ insofar as mine requires that the variance of stock performance is unbounded, whereas the authors assume an ideal lognormal distribution (which has finite variance) but obtain the effect simply because their simulation has a finite number of stocks (). The number of hypothetical stocks necessary to demonstrate the difference might exceed the actual number of real-world stocks, though, so maybe the distinction is moot.
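A toy simulation can make the variance cost of random stock-picking concrete. All parameters here (market size, return distribution, portfolio size) are hypothetical, chosen only to illustrate the mechanism, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical market: 500 stocks with lognormal annual returns, so a
# minority of stocks contributes disproportionately to the index.
n_stocks, n_trials, picks = 500, 10_000, 20
returns = rng.lognormal(mean=0.05, sigma=0.4, size=(n_trials, n_stocks)) - 1

index_return = returns.mean(axis=1)  # equal-weight index, per trial

# A "random manager" holds `picks` randomly chosen stocks each trial.
cols = rng.integers(0, n_stocks, size=(n_trials, picks))
manager_return = returns[np.arange(n_trials)[:, None], cols].mean(axis=1)

print(f"index:   mean {index_return.mean():.3f}, std {index_return.std():.3f}")
print(f"manager: mean {manager_return.mean():.3f}, std {manager_return.std():.3f}")
# The means agree (no skill, no fees), but the manager's standard deviation
# is several times larger -- the diversification cost described above.
```

Since the variances of independent picks add, the manager's standard deviation scales like the per-stock volatility divided by the square root of the number of picks, which is the sense in which the cost only vanishes as the portfolio approaches the index.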
- AI Impacts releases a guide to results on AI timeline predictions.
This looks pretty spectacular:
Two parts of the brain are heavily involved in remembering our personal experiences. The hippocampus is the place for short-term memories while the cortex is home to long-term memories. This idea became famous after the case of Henry Molaison [“HM”] in the 1950s. His hippocampus was damaged during epilepsy surgery and he was no longer able to make new memories, but his ones from before the operation were still there. So the prevailing idea was that memories are formed in the hippocampus and then moved to the cortex where they are “banked”. The team at the Riken-MIT Center for Neural Circuit Genetics have done something mind-bogglingly advanced to show this is not the case. The experiments had to be performed on mice, but are thought to apply to human brains too.
They involved watching specific memories form as a cluster of connected brain cells in reaction to a shock. Researchers then used light beamed into the brain to control the activity of individual neurons – they could literally switch memories on or off. The results, published in the journal Science, showed that memories were formed simultaneously in the hippocampus and the cortex.
- This XKCD comic is rather deep. People, including professors in the sciences, are generally terrible at identifying what counts as an explanation.
I didn’t know that chairlift towers are brought in by helicopter:
Morningstar and Vanguard try to quantify the added effective return of financial advisors for unsophisticated investors, which they call “gamma” and “advisor’s alpha” respectively. They both get a number in the neighborhood of 1.5% for straightforward technical maneuvers, with Vanguard adding that much again to account for “behavioral coaching”, aka soothing the client’s emotional needs and convincing them to stay the course in a bear market. (They emphasize that these annualized, emotion-based returns will not actually accumulate smoothly year-to-year but will arrive lumpy.) These of course are to be compared with the 0.8% to 1.0% of all assets under management that advisors typically charge. Both of those documents are also interesting for giving you a feel for the sort of propaganda-ish white papers directed at the advisors themselves (not clients). For instance:
Most investors in search of an advisor are looking for someone they can trust. Yet, trust can be fragile. Typically, trust is established as part of the “courting” process, in which your clients are getting to know you and you are getting to know them. Once the relationship has been established, and the investment policy has been implemented, we believe the key to asset retention is keeping that trust.
So how best can you keep the trust? First and foremost, clients want to be treated as people, not just as portfolios. This is why beginning the client relationship with a financial plan is so essential. Yes, a financial plan promotes more complete disclosure about clients’ investments, but more important, it provides a perfect way for clients to share with the advisor what is of most concern to them: their goals, feelings about risk, their family, and charitable interests. All of these topics are emotionally based, and a client’s willingness to share this information is crucial in building trust and deepening the relationship.
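For scale, it’s worth compounding those ballpark numbers over a long horizon. The ~1.5% technical benefit, ~1.5% behavioral benefit, and ~1% fee are from the reports above; the 5% base return and 30-year horizon are my own illustrative assumptions:

```python
# Compound a hypothetical 3% gross advisor benefit (1.5% technical +
# 1.5% behavioral) against a 1% fee, on top of an assumed 5% base return.
base, benefit, fee, years = 0.05, 0.03, 0.01, 30

unadvised = (1 + base) ** years
advised = (1 + base + benefit - fee) ** years

print(f"growth without advisor: {unadvised:.2f}x")  # ~4.32x
print(f"growth with advisor:    {advised:.2f}x")    # ~7.61x
```

Of course, as the reports themselves stress, the behavioral component arrives lumpy rather than smoothly, so this compounding is only a rough indication of the stakes.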
That baby born with donor mitochondria appears to be healthy and hasn’t displayed symptoms of Leigh syndrome (the mitochondrial disorder). This was interesting:
A key problem, however, is that not all of the defective mitochondria can be eliminated. The boy, Zhang reports in the new paper, currently carries between 2.36 and 9.23 percent of potentially defective DNA, according to sampling of his urine, hair follicles and circumcised foreskin.
“That’s not surprising,” says Doug Wallace, head of the Center for Mitochondrial and Epigenomic Medicine at the Children’s Hospital of Philadelphia, who was not involved in the study. “As far as I know, very few cases have been found where there is absolutely no carryover of mitochondria from the donor nucleus.”
State of the art in night vision:
More here and here.
This defensive essay by William Thurston is simultaneously compelling and frustrating. On lore in academic fields:
Within any field, there are certain theorems and certain techniques that are generally known and generally accepted. When you write a paper, you refer to these without proof. You look at other papers in the field, and you see what facts they quote without proof, and what they cite in their bibliography. You learn from other people some idea of the proofs. Then you’re free to quote the same theorem and cite the same citations. You don’t necessarily have to read the full papers or books that are in your bibliography. Many of the things that are generally known are things for which there may be no known written source. As long as people in the field are comfortable that the idea works, it doesn’t need to have a formal written source.
Why one might produce cryptic proofs:
I’d like to spell out more what I mean when I say I proved this theorem. It meant that I had a clear and complete flow of ideas, including details, that withstood a great deal of scrutiny by myself and by others. Mathematicians have many different styles of thought. My style is not one of making broad sweeping but careless generalities, which are merely hints or inspirations: I make clear mental models, and I think things through. My proofs have turned out to be quite reliable. I have not had trouble backing up claims or producing details for things I have proven. I am good in detecting flaws in my own reasoning as well as in the reasoning of others.
However, there is sometimes a huge expansion factor in translating from the encoding in my own thinking to something that can be conveyed to someone else. My mathematical education was rather independent and idiosyncratic, where for a number of years I learned things on my own, developing personal mental models for how to think about mathematics. This has often been a big advantage for me in thinking about mathematics, because it’s easy to pick up later the standard mental models shared by groups of mathematicians. This means that some concepts that I use freely and naturally in my personal thinking are foreign to most mathematicians I talk to. My personal mental models and structures are similar in character to the kinds of models groups of mathematicians share—but they are often different models.
The social phenomenon of trust in mathematics is weird, and this description fits with how I have heard it described by others:
Mathematicians were actually very quick to accept my proof, and to start quoting it and using it based on what documentation there was, based on their experience and belief in me, and based on acceptance by opinions of experts with whom I spent a lot of time communicating the proof. The theorem now is documented, through published sources authored by me and by others, so most people feel secure in quoting it; people in the field certainly have not challenged me about its validity, or expressed to me a need for details that are not available.
Thurston seems to accept all this even though he clearly understands that the point of academics is to put knowledge into human minds, not write it down on paper. I mostly just feel that all of this is highly suboptimal, and can only hope that new technical tools will improve the situation.
The U.S. Food and Drug Administration granted 23andMe authorization to offer ten genetic health risk reports including late-onset Alzheimer’s disease, Parkinson’s disease, celiac disease, and a condition associated with harmful blood clots.
This paper [PDF] has an overly grandiose title and presents a theory for the evolutionary origins of moral condemnation that I don’t find at all convincing, except maybe as a third-order effect. (Moral actions are stupendously complicated and are unreliable as Schelling points.) But the paper has a great concise review of how various philosophical views of morality fit in a biological picture, and the outstanding mysteries, e.g., why impartiality? Here is one particularly interesting bit:
…the altruism-heuristic model predicts that increasing people’s altruistic dispositions toward other people will lead to greater use of action constraints such as “do not kill,” but instead the reverse occurs. Kurzban, DeScioli, and Fein (2012) found that participants reported greater willingness to kill one brother to save five brothers than to kill one stranger to save five strangers. Altruism causes people to be less likely, not more likely, to use Kantian action constraints.
(H/t Diego Caleiro.)
Cassini grand finale:
(H/t Paul Blackburn.)
- If you correct for differences in fatal injuries, the US has the highest life expectancy in the OECD. H/t Tyler Cowen.
Two things from Alex Tabarrok: China establishes a Special Economic Zone for medical tourism. And this:
Julian Simon helped revolutionize the airline industry by popularizing the idea that carriers should stop randomly removing passengers from overbooked flights and instead auction off the right to be bumped by offering vouchers that go up in value until all the necessary seats have been reassigned. Simon came up with the idea for these auctions in the 1960s, but he wasn’t able to get regulators interested in allowing it until the 1970s. Up until that time, Litan writes, “airlines deliberately did not fill their planes and thus flew with less capacity than they do now, a circumstance that made customers more comfortable, but reduced profits for airlines.” And this, of course, meant they had to charge passengers more to compensate.
By auctioning off overbooked seats, economist James Heins estimates that $100 billion has been saved by the airline industry and its customers in the 30-plus years since the practice was introduced.
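The ascending voucher auction described above is simple enough to sketch: keep raising the offer until enough passengers volunteer. The passengers’ private “reservation prices” and the dollar amounts below are hypothetical:

```python
# Minimal sketch of an ascending voucher auction for overbooked seats:
# raise the voucher offer until enough passengers volunteer to be bumped.
def run_voucher_auction(reservation_prices, seats_needed, start=100, step=50):
    """Each passenger volunteers once the offer meets their private price."""
    offer = start
    while sum(p <= offer for p in reservation_prices) < seats_needed:
        offer += step
    volunteers = sorted(p for p in reservation_prices if p <= offer)[:seats_needed]
    return offer, volunteers

offer, volunteers = run_voucher_auction(
    reservation_prices=[150, 300, 800, 220, 500], seats_needed=2)
print(offer, volunteers)  # offer rises to 250; the $150 and $220 passengers accept
```

The efficiency gain comes from the fact that the auction selects exactly the passengers who value their seats least, instead of bumping (or deterring bookings from) passengers at random.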
To me, the take-home message is that human institutions really can have huge flaws that can be demonstrated with simple arguments, yet persist for decades. There’s more work to be done.
At first I was confused as to why NASA is planning on building a “gateway” space station in cislunar orbit even though its self-stated primary goal for human spaceflight is a Mars mission. We don’t build space stations for fun; we build them because they have a purpose. And if you’re going to Mars, why not launch directly there instead of bothering with the moon? But this article explains some of the background. The somewhat unintuitive insight is that (a) getting from low-Earth orbit (LEO) to cislunar orbits takes at least 80% of the delta-V (roughly, the fuel budget) of going from LEO to Mars, and, furthermore, (b) you can use lunar gravity assists to raise and lower your moon-crossing orbit. (Here is a useful delta-V map of the solar system, but it doesn’t include a cislunar stop on the way to Mars! I thought this XKCD depiction would be more helpful, but it suggests the opposite conclusion. I’m not sure how to reconcile all this.)
The key trick is that you can get from LEO to the cislunar orbits using electric propulsion, which is much slower but more efficient than normal chemical rockets. This isn’t a good way to move astronauts (since it’s so slow), but it’s a great way to economically get lots of mass out of Earth’s gravity well, e.g., spacious Mars transportation with heavy radiation shielding, return fuel, Mars habitats, etc. So spend a few years slowly raising this stuff, then send out the actual humans in a small pod (with fast chemical rockets) to pick up the goodies and head off for Mars.
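Claim (a) can be sanity-checked with the vis-viva equation. The numbers below are standard textbook values I’m assuming (a 300-km LEO, a Hohmann transfer to lunar distance, and a ~2.9 km/s hyperbolic excess for a Hohmann Mars departure), not figures from the article:

```python
import math

MU_EARTH = 398_600.0      # km^3/s^2, Earth's gravitational parameter
R_LEO = 6_378.0 + 300.0   # km, radius of an assumed 300-km low Earth orbit
R_MOON = 384_400.0        # km, mean Earth-Moon distance
V_INF_MARS = 2.94         # km/s, assumed hyperbolic excess for a Mars Hohmann transfer

def vis_viva(r, a):
    """Orbital speed at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU_EARTH * (2.0 / r - 1.0 / a))

v_leo = vis_viva(R_LEO, R_LEO)                   # circular LEO speed
v_tli = vis_viva(R_LEO, (R_LEO + R_MOON) / 2.0)  # perigee speed of lunar transfer
v_esc = math.sqrt(2.0 * MU_EARTH / R_LEO)        # escape speed from LEO altitude
v_tmi = math.sqrt(v_esc**2 + V_INF_MARS**2)      # Mars departure speed from LEO

dv_moon, dv_mars = v_tli - v_leo, v_tmi - v_leo
print(f"LEO -> lunar transfer: {dv_moon:.2f} km/s")
print(f"LEO -> Mars transfer:  {dv_mars:.2f} km/s")
print(f"ratio: {dv_moon / dv_mars:.0%}")
```

With these assumptions the lunar-transfer burn comes out to roughly 85–90% of the Mars-departure burn, consistent with the “at least 80%” claim (real missions differ: capture burns, plane changes, and the Oberth effect all matter).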
Besides requiring less fuel than a Mars-landing mission, a Mars-orbital/Phobos-landing mission has other perks:
Pat Troutman put out the idea of using Stickney crater on Phobos as a low-radiation environment. With Phobos behind, Mars above, and crater walls all around, it’s about as sheltered as can be for a surface. (Even Kim Stanley Robinson didn’t think of that in Red Mars.) That benign environment allows long stays, up to 900 days. Astronauts stationed in Stickney could teleoperate rovers on the surface without noticeable latency.
Also, here is a rad scatter plot of delta-V vs. trip duration comparing lunar, asteroid, and Mars missions.
- The Wikipedia page on the Quaternary extinction event has pretty pictures of many of the fantastic megafauna that died out during the transition from the Pleistocene to the Holocene epoch, roughly 10k years ago. Many, like the woolly rhinoceros and the Irish elk, are depicted in cave paintings.
- According to SpaceX COO, cost of refurbishing F9 first stage was “substantially less” than half of a new stage; will be even less in the future. Mostly unrelated: here is a summary diagram of all their flights, with discussion.
- Males and females of this unusual insect species have (geometrically) reversed genitals.
Alan Hájek has an essay documenting “Philosophical Heuristics”, i.e., general categories for attacking thinking problems that are often useful. (H/t Amanda Askell.) Many have analogs in math, and he considers it a work in progress. The categories he assembles are
- Checking extreme and near-extreme cases
- Reflexivity and self-reference — “Death by diagonalization”
- Self-undermining views
- Transform to nearby contexts/modalities — Interchange space with time, rationality with morality, chance with counterfactuals
- Argue for something’s possibility — Use conceivability, arbitrariness, continuity with interpolation or extrapolation, symmetry
- Trial and error — Break possibilities into enumerable subcomponents and check the combinations
If you have any experience reading philosophy papers, or are around thoughtful people, it’s easy to think some of these are obvious or too generic to write down. I think that’s a mistake, though; I can reach back and vaguely recall finding the initial uses of many of these heuristics revelatory and effective. And it’s a common mistake in academics to confuse novelty with depth/importance. I expect this to be most useful to students who are just being exposed to these, since it will help them keep their eyes open for their re-appearance.
My principal critique of this piece is that adding mathematical and scientific examples would have been more valuable than just philosophical ones. In the latter case we might often worry that some sleight of hand is being employed. (Have I really accomplished anything when I note that the statement “Unverifiable sentences are meaningless” cannot itself be verified?) Math and science provide more objective examples of when “real explanatory work” has been done.
Scott Alexander on this partial rebuttal of Gregory Clark:
A summary of the arguments for why multigenerational mobility is not as low as Clark thinks. I may be misunderstanding this field, but it seems to me that the randomized lottery-style experiments show there’s not much long-term transmission of wealth through non-genetic means (which makes sense since only one person can get an inheritance). But transmission of wealth through genetic means is heavily dependent on assortative mating, since three generations out your descendants only have an eighth of your genes anyway. I wonder if anyone has looked into whether the places that have been found to have unusually low intergenerational mobility (medieval Venice?) are the ones that have the most assortative mating.
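The genetic-dilution arithmetic behind the “an eighth of your genes” remark is just repeated halving under the assumption of random (non-assortative) mating:

```python
# Expected fraction of an ancestor's genes carried by a descendant n
# generations out, assuming random mating: it halves each generation.
for n in range(1, 5):
    print(f"generation {n}: {0.5 ** n:.4f}")
# generation 3 gives 1/8, matching the great-grandchildren figure above.
```

Assortative mating weakens this dilution: if spouses are genetically similar, the in-marrying halves of each generation partially replenish the traits in question, which is why it matters so much for multigenerational persistence.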
I have long sought a method for browsing a blog’s archives, and especially for creating a historical RSS feed that would allow me to read through them at a reasonable pace. Unfortunately, there seems to be no good way to do this since Google Reader shut down many years ago. Jeff Kaufman has advocated for blog authors producing their own historical archive feed, but this hasn’t seen widespread adoption, unsurprisingly.
However, I am happy to note that someone launched PubCenter last year, which aims to build a searchable archive of RSS feeds since its date of inception (July 2016). The original, raw Google Reader archive data has apparently been saved, so these sources could in principle be combined, but I think the Google Reader data is an unusable mess right now. There are also archives of select blogs going back to circa 2013 at the Old Reader, and you can organize them by “oldest first” and read from that list.
foreXiv by C. Jess Riedel is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.