
Against Many-Worlds

I’ve written in the past about a few of my objections to the many-worlds interpretation of quantum mechanics, for example in the NKS forum thread “do many-worlds believers buy lottery tickets?” Here I want to discuss my favorite single objection to many-worlds (MW for short), and to be specific about where I think it lands and what it tells us.

The objection is founded on a classic paradox in probability theory called the St. Petersburg paradox. I’ll give the details below. The idea of the paradox is to show that the expectation of a probabilistic process is not the relevant practical guide to behavior toward it, when the sample size is left unspecified. More generally, operations research and optimal stopping problems are full of cases in which the sample size is a critical decision variable, with direct operational consequences for how one should deal with an uncertain outcome.

This is relevant because many-worlds advocates frequently claim that their position is operationally neutral – that one would behave the same way if MW is true as if it is false. They essentially argue that the expectation is the right rational guide either way, and that this does not change across views. But the expectation is not a proper rational guide, as examples like St. Petersburg show. MW in effect posits a different value for the sample size, and this has operational consequences if actually believed. Since it does not appear to have such effects on the empirical behavior of MW advocates, one is left with two alternatives – either they do not fully understand the consequences of their theory (unsurprising, and the charitable reading), or they do not subjectively believe their own theory (also plausible), and instead advocate it only in a debater’s pose, or for provocation.

First, the details of the St. Petersburg paradox. I give the traditional form first and will then reduce it to its essentials. One player offers the outcome of a game to a field of bidders – the highest bidder gets to take the “paid” position in the game, in return for paying the offering player whatever he bid. The paid position earns a chance outcome, determined by repeatedly flipping a fair coin. The payoff for a first flip of “heads” is one cent, and it doubles with each subsequent flip of “heads”, until the first “tails” occurs. With the first “tails” the game ceases and the amount then standing is paid to the second, purchasing player. Thus, for example, if the first flip is tails, player 2 gets nothing; if the first is heads and the second tails, player 2 gets 1 cent; if the first two flips are heads and the third tails, player 2 gets 2 cents; for 3 heads he gets 2^2 = 4 cents; and so on.
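A minimal sketch of the game as just described, in Python (the function name is mine; payoffs are in cents, as above):

```python
import random

def st_petersburg_payoff(rng=random):
    """One play of the game described above: flip a fair coin until the
    first tails; n consecutive heads pay 2**(n-1) cents, zero heads pay
    nothing."""
    heads = 0
    while rng.random() < 0.5:      # call this side "heads"
        heads += 1
    return 0 if heads == 0 else 2 ** (heads - 1)

random.seed(1)
print([st_petersburg_payoff() for _ in range(10)])  # a few sample plays
```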

The expectation for player 2 is the sum of an infinite series of possible payoffs. Each term is the product of an exponentially vanishing probability and an exponentially increasing payout, and works out to a quarter of a cent; summed over infinitely many terms, the series formally diverges. In other words the expectation is infinite, and the formal “value” of the game to player 2 is therefore infinite. If we think a rational rule to adopt toward a probabilistic payoff is to value it at its expectation, then at the bidding stage a group of supposedly rational players should be willing to outbid any finite bid so far. Notice, however, that all of the payoff is concentrated in the tail of the distribution of outcomes. For any given bid, the probability of a net positive result for player 2 can be computed: a bid of $100, for instance, only turns a profit after 15 or more consecutive heads, a chance of 2^-15 – less than 1 in 30,000.
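Spelled out, with n the number of heads before the first tails:

$$E[\text{payoff}] \;=\; \sum_{n=1}^{\infty} 2^{-(n+1)} \cdot 2^{\,n-1}\ \text{cents} \;=\; \sum_{n=1}^{\infty} \tfrac{1}{4}\ \text{cent} \;=\; \infty,$$

while a $100 bid first profits at a payoff of 2^14 = 16,384 cents, i.e. at 15 or more heads, an event of probability 2^-15.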

Economists typically use the St. Petersburg paradox to argue for adjustments to expectation-centric rules for dealing rationally with risk – notably a strictly concave utility function for increments to wealth, which they want anyway for other reasons. But such adjustments don’t really get to the root of the matter. One can simply modify the payoff terms so that the payout grows faster than marginal utility falls off, placing the modification after a sufficiently lengthy initial run so that player 1 is insulated from any likely effect of the change.
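For instance, in a standard construction going back to Karl Menger: assume a logarithmic utility of wealth, u(w) = ln w, and replace the nth payoff with e^(2^n) cents. The expected utility still diverges:

$$\sum_{n=1}^{\infty} 2^{-(n+1)}\, \ln\!\left(e^{2^{n}}\right) \;=\; \sum_{n=1}^{\infty} 2^{-(n+1)} \cdot 2^{n} \;=\; \sum_{n=1}^{\infty} \tfrac{1}{2} \;=\; \infty.$$

Any unbounded utility function admits such a reweighting of the payoffs.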

Another frequent objection is to the “actual infinity” in the paradox, insisting that player 1 must actually be able to pay even in the astronomically unlikely event of one of the later payoffs. This does dispose of the infinite tail, by forcing its truncation. However, if one then makes player 1 a sufficiently large insurance company, one that really could pay a terrific amount, the probability of the large payoffs ever being made can still be driven below any practical limit, without recourse to actual infinities. While trivial bids might still be entertained in lottery-ticket fashion, the full expectation won’t be bid (by sane people using real money and bidding competitively, I mean), especially if it is stipulated that the game will be played only once. Rational men discount very-low-probability events below their expectation. It only makes sense to value them at their full expectation if one is implicitly or explicitly assuming that the trial can be repeated, thus eventually reaching the expectation in a multiple-trial limit.
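That is easy to check under the payoff schedule above (the caps are my own illustrative figures). Even a payer good for a hundred billion dollars leaves the truncated expectation at around a dime, since the expectation grows only logarithmically in the cap:

```python
def truncated_expectation_cents(cap_dollars):
    """Expected payoff, in cents, when player 1's payout is capped:
    any run of heads worth more than the cap pays the cap instead."""
    cap = cap_dollars * 100
    total, n = 0.0, 1
    while 2 ** (n - 1) < cap:
        total += 2 ** -(n + 1) * 2 ** (n - 1)   # each term = 1/4 cent
        n += 1
    total += 2 ** -n * cap                       # all longer runs pay the cap
    return total

for cap in (10_000, 10_000_000, 100_000_000_000):
    print(f"cap ${cap:,}: expectation ~{truncated_expectation_cents(cap):.1f} cents")
```

And even of that dime, more than half rides on runs of 21 or more heads – outcomes with combined probability under one in two million, exactly the ones a single-play bidder writes off.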

So neither imposing a utility function with fall-off nor truncating the series will eliminate the paradox (though the latter comes closer than the former, to be sure). The root of the matter is that the utility of improbable events is lower than the product of their value and their probability, whenever the sample size imposed on us is radically smaller than would be required to make the event even plausibly likely within our own lifetimes. In similar fashion, rational men do not worry about being struck by lightning or meteors; superstitious men might. It is in fact a characteristic sign of robust practical rationality that events below some threshold of total possibility, over all the trials that will ever be made of them, are dropped outright from consideration.
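The threshold intuition is easy to make quantitative: an event with per-trial probability p, faced N times in a lifetime, happens at all with probability 1 - (1-p)^N, roughly N·p when that is small. A sketch with an invented hazard rate:

```python
def ever_occurs(p, trials):
    """Chance an event with per-trial probability p occurs at least once."""
    return 1 - (1 - p) ** trials

# Invented figure: a one-in-a-billion daily hazard, over an 80-year life.
print(ever_occurs(1e-9, 80 * 365))   # ~2.9e-05: rationally dropped outright
```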

Now, if MW is true, this habit would be irrational rather than rational. All the low-probability events would occur somewhere in the multiverse that one’s consciousness might traverse and experience. Effectively, MW posits the reality of an unbounded space of independent trials of every probabilistic outcome, raising every unlikely outcome to the same practical status as certain ones. Certainly a MW-er may distinguish cases that occur in all possible worlds from those that branch across them, and he may practically wish to overweight those with better outcomes in a space of larger probability measure. But he will still bid higher in the St. Petersburg game. He should buy lottery tickets, as I put it elsewhere – or at a minimum, lottery tickets that have rolled over to a positive expected value (or were engineered to have one from the start). Non-MWers, on the other hand, willingly “write” such risks – who rationally cares about the magnitude of a downside that is 100 billion to 1 against ever happening?
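To make the operational split concrete, take a stylized lottery with invented numbers: a $1 ticket and a 1-in-100-million chance at a $200 million prize, for an expectation of +$1 per ticket. Valuing at expectation says always buy; a single history says otherwise:

```python
import random

PRICE, PRIZE, P_WIN = 1.0, 200e6, 1e-8   # invented figures; EV per ticket = +$1

# The expectation rule (congenial to MW): every ticket is worth buying.
print("expected net per ticket: $", P_WIN * PRIZE - PRICE)

# One actual history: weekly tickets for 60 years.
random.seed(0)
plays = 52 * 60
wins = sum(random.random() < P_WIN for _ in range(plays))
print("tickets:", plays, "wins:", wins, "net: $", wins * PRIZE - plays * PRICE)
# P(ever winning) = 1 - (1 - 1e-8)**3120, about 3e-05: nearly every
# history just loses the ticket price 3,120 times over.
```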

Peirce defined belief as the willingness to stake much upon a proposition. By that definition, I submit that MWers do not believe their own theory. In fact it has all the marks of a theory-saving patch, held in order to avoid some set of imagined possible criticisms of a valuable physical theory. I think it is superfluous for that – QM can stand on its own feet without it. I consider a more honest assessment of QM to be that it simply underspecifies the world, or does not attempt to tell us what will actually happen, or that the determinism of wave-function evolution is a conventional holdover from classical Laplacean physics. We can take this position without accepting any of the outright silliness of subjectivist interpretations of QM.

The principle noticed in this criticism, stated positively, is that the sample size posited by a theory is just as important a parameter as its expectation. Positing sample sizes that do not really occur will lead to errors of prediction and implication just as real as getting a magnetic moment wrong. And the actual, instantiated universe is sample size one – its first syllable says as much. When we refer to larger sample sizes, we are always talking about proper subsets of it that really are formal duplicates of one another, or about unknowns whose variation we are trying to characterize. Attempts to replace actualization with a probability measure will work if and only if the actual sample size involved suffices to make the two practically close substitutes, and this is emphatically not guaranteed in general. Wherever the actual sample size is small, a probability-measure view will underspecify the world and fail to characterize its practical reality for us. And the deviation between the two widens as the absolute probability of the events involved falls.
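One last sketch of sample size acting as a real parameter: the coin game itself has no fixed per-play worth; its realized average grows with the number of plays on offer (the growth is roughly logarithmic in the sample size – a classical observation going back to Feller). Simulating the same game as above:

```python
import random

def payoff_cents(rng=random):
    """Same game as above: n heads before the first tails pay 2**(n-1) cents."""
    n = 0
    while rng.random() < 0.5:
        n += 1
    return 0 if n == 0 else 2 ** (n - 1)

random.seed(2)
for plays in (10, 1_000, 100_000, 1_000_000):
    mean = sum(payoff_cents() for _ in range(plays)) / plays
    print(f"{plays:>9,} plays: mean payoff ~ {mean:.2f} cents")
# The per-play mean tends to drift upward, on the order of log2(plays)/4
# cents: the game has no single "value" until a sample size is given.
```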
