Epistemology, Many worlds

Against Many-Worlds

I’ve written in the past about a few of my objections to the many-worlds interpretation of quantum mechanics, for example on the NKS forum, the thread “do many-worlds believers buy lottery tickets?” I want to discuss here my favorite single objection to many-worlds (MW for short) and be specific about where I think it lands, and what it tells us.

The objection is founded on a classic paradox in probability theory called the St. Petersburg paradox. I’ll give the details below. The idea of the paradox is to show that the expectation of a probabilistic process is not the relevant practical guide to behavior toward it, when the sample size is left unspecified. More generally, operations research and optimal stopping problems are full of cases in which the sample size is a critical decision variable, with direct operational consequences for how one should deal with an uncertain outcome.

This is relevant because many-worlds advocates frequently claim that their position is operationally neutral – that one would behave the same way if MW is true as if it is false. They essentially argue that the expectation is the right rational guide either way, and that this does not change across views. But the expectation is not a proper rational guide, as examples like St. Petersburg show. MW in effect posits a different value for the sample size, and this has operational consequences if actually believed. Since it does not appear to have such effects on the empirical behavior of MW advocates, one is left with two alternatives – either they do not fully understand the consequences of their theory (unsurprising, and charitable), or they do not subjectively believe their own theory (also plausible), and instead advocate it only as a debater’s pose or a provocation.

First the details of the St. Petersburg paradox. I give the traditional form first and will then reduce it to its essentials. One player offers the outcome of a game to a field of bidders – the highest bidder gets to take the “paid” position in the game, in return for paying the offering player whatever he bid. The paid position earns a chance outcome, determined by repeatedly flipping a fair coin. The payoff for a first flip of “heads” is one cent, and it doubles for each subsequent flip of “heads”, until the first “tails” outcome occurs. With the first “tails” the game ceases and the payoff reached so far is paid to the second, purchasing player. Thus for example if the first flip is tails, player 2 gets nothing; if the first is heads and the second tails, he gets 1 cent; if the first two flips are heads and the third tails, he gets 2 cents; for 3 heads he gets 2^2 = 4 cents, and so on.

The expectation for player 2 is the sum of an infinite series of terms, one for each possible payoff. With the payoffs above, each term works out to a quarter of a cent – a payoff of 2^(n-1) cents times a probability of 2^-(n+1) for n heads – so the sum formally diverges. In other words the expectation is infinity, and the formal “value” of the game to player 2 is therefore infinity. If we think a rational rule to adopt toward a probabilistic payoff is to value it at its expectation, then at the bidding stage a group of supposedly rational players should be willing to outbid any finite bid so far. Notice, however, that all of the payoff is concentrated in the tail of the distribution of outcomes, as the product of an exponentially vanishing probability with an exponentially increasing payout. For any given bid, the probability of a positive result for player 2 can be computed, and e.g. for $100 it is less than 1 out of 8000.
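To make the structure concrete, here is a minimal simulation sketch of the game as just described. Python, the function names, the trial count, and the $100 bid are my own illustrative choices, not anything from the original discussion; the point is only the contrast between the divergent formal expectation and what a once-only bidder actually faces.

```python
import random

def play_once():
    """Flip a fair coin until the first tail; return the payoff in cents."""
    heads = 0
    while random.random() < 0.5:
        heads += 1
    return 0 if heads == 0 else 2 ** (heads - 1)   # 1 cent for one head, doubling thereafter

# Formal expectation: sum over n >= 1 of 2^(n-1) cents * 2^-(n+1)
# = a quarter of a cent per term, over infinitely many terms, so it diverges.
# Empirically, though, typical payoffs are tiny.
trials = 100_000
payoffs = [play_once() for _ in range(trials)]
print("mean payoff over", trials, "plays:", sum(payoffs) / trials, "cents")

# Chance that a single play beats a $100 (10,000 cent) bid: the payoff must
# reach 2^14 = 16,384 cents, i.e. at least 15 heads before the first tail,
# probability 2^-15 -- about 1 in 33,000, consistent with "less than 1 in 8,000".
print("P(payoff beats a $100 bid) =", 2.0 ** -15)
```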

Economists typically use the St. Petersburg paradox to argue for adjustments to expectation-centric rules of rational dealing with risk, such as a strictly concave utility function for increments to wealth – adjustments they want anyway for other reasons. But such adjustments don’t really get to the root of the matter. One can simply modify the payoff terms so that the payouts grow fast enough to outrun the concavity of the utility function, and make that change only after some sufficiently lengthy initial run of heads, which insulates player 1 from any likely effect of the change.

Another frequent objection is to the “actual infinity” in the paradox, insisting that player 1 has to actually be able to pay in the astronomically unlikely event that one of the later payoffs comes due. This does address the infinite tail, by requiring its truncation. However, if one then makes player 1 a sufficiently large insurance company, one that really could pay a terrific amount, the probability of actually paying can still be driven below any practical limit, without recourse to actual infinities. While trivial bids might still be entertained in lottery-ticket fashion, the full expectation won’t be (with sane people using real money and bidding competitively, I mean), especially if it is stipulated that the game will be played only once. Rational men discount very low probability events below their expectation. It only makes sense to value them at their full expectation if one is implicitly or explicitly assuming that the trial can be repeated, thus eventually reaching the expectation in a multiple-trial limit.
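For a rough picture of the truncated variant, the capped expectation can be computed directly. This is a minimal sketch under an arbitrary assumed cap (here $100 billion, standing in for the insurance company’s maximum payout); what matters is how little of even the capped expectation comes from outcomes a once-only bidder would ever plausibly see.

```python
# Truncated St. Petersburg game: player 1 can pay at most `cap` cents.
# The cap value is an illustrative assumption ($100 billion).
cap = 10 ** 13

expectation = 0.0
n = 1
while 2 ** (n - 1) < cap:
    # exactly n heads then a tail: payoff 2^(n-1) cents, probability 2^-(n+1)
    expectation += 2 ** (n - 1) * 2.0 ** -(n + 1)    # 0.25 cents per term
    n += 1
expectation += cap * 2.0 ** -n                       # all longer runs pay the cap

print("capped expectation ~", round(expectation, 2), "cents")   # roughly 11 cents
# Over half of that comes from runs of 20 or more heads, each of which is
# less likely than one in a million -- exactly the events a once-only,
# real-money bidder rationally drops from consideration.
```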

So just imposing a utility function with fall-off or just truncating the series will not eliminate the paradox. (The latter comes closer to doing so than the former, to be sure). The root of the matter is that the utility of improbable events is lower than the product of their value and their probability, when the sample size imposed on us is radically smaller than required to make the event even plausibly likely, in our own lifetimes. In similar fashion, rational men do not worry about getting struck by lightning or meteors; superstitious men might. It is in fact a characteristic sign of a robust practical rationality, that events below some threshold of total possibility over all the trials that will ever be made of them, are dropped outright from consideration.

Now, if MW is true, this would be irrational rather than rational. All the low probability events would occur somewhere in the multiverse that one’s consciousness might traverse and experience. Effectively MW posits the reality of an unbounded space of independent trials of every probabilistic outcome, raising every unlikely outcome to the same practical status as certain ones. Certainly a MW-er may distinguish cases that occur in all possible worlds from those that branch across them, and he may practically wish to overweight those that have better outcomes in a space of larger probability measure. But he will still bid higher in the St. Petersburg game. He should buy lottery tickets, as I put it elsewhere – or at a minimum, lottery tickets that have rolled over to positive expected value (or have been engineered to have one from the start). Non-MWers, on the other hand, willingly “write” such risks – who rationally cares about the magnitude of a downside that is 100 billion to 1 against ever happening?

Peirce defined belief as the willingness to stake much upon a proposition. By that definition, I submit that MWers do not believe their own theory. In fact it has all the marks of a theory-saving patch, held to avoid some set of imagined possible criticisms of a valuable physical theory. I think it is superfluous for that – QM can stand on its own feet without it. I consider a more honest assessment of QM to be that it simply underspecifies the world, or does not attempt to tell us what will actually happen, or that the determinism of wave function evolution is a conventional holdover from classical Laplacean physics. We can take this position without accepting any of the outright silliness of subjectivist interpretations of QM.

The principle noticed in this criticism, stated positively, is that the sample size posited by a theory is just as important a parameter as its expectation. Positing sample sizes that do not really occur will lead to errors in prediction and implication just as real as getting a magnetic moment wrong. And the actual, instantiated universe is sample size one – its first syllable says as much. When we refer to larger sample sizes, we are always talking about proper subsets that really are formal duplicates of each other, or we are talking about unknowns whose variation we are trying to characterize. Attempts to replace actualization with a probability measure will work if and only if the actual sample size truly involved suffices for the two to be practically close substitutes, and this is emphatically not guaranteed in general. Wherever the actual sample size is small, a probability measure view will underspecify the world and not fully characterize its practical reality, for us. And the deviation between the two widens, as the absolute probability of the events involved, falls.

knowledge

Against fully describing the universe

One task that philosophy can perform for scientific researchers is to criticize the logical structure of their arguments and to explore the possibility spaces they may be missing, wrapped up as they generally are in the details of the content of their theories, and self-limiting as they often are in the candidate propositions they entertain to fulfill different explanatory roles within them. Philosophy takes a broader and longer view of such things, and can entertain more speculative possibilities than are generally considered in such theorizing.

Hector Zenil had an interaction with Seth Lloyd on quantum computation that he attempts to summarize on his blog. Without imputing Hector’s characterizations to Lloyd himself, I propose to criticize the argument Hector describes. Whether it is a fair characterization of Lloyd is Hector’s affair. The subject being discussed is quantum computation as a model of the universe, and the argument as Hector presents it is as follows –

“His chain of reasoning is basically as follows : a and b imply c

a) the universe is completely describable by quantum mechanics
b) standard quantum computing completely captures quantum mechanics
c) therefore the universe is a quantum computer.”

I will call the claim “a and b together imply c” the overall claim or LZ (read, “big LZ”, for Lloyd according to Zenil), and retain Hector’s labeling of the subcomponents or separate propositions, just using capitals to distinguish them from surrounding text.

The very first thing to notice is that LZ is a directional claim, and not simply an independent statement of the conclusion C. C might hold while LZ is false. Or C might be unsupported while LZ is false, or C might be false while LZ is false. Secondary claims not in evidence that independently support C or claim to, are therefore out of the scope of LZ and of consideration of it. I need not show that C is false or dubitable to refute or render dubitable the claim LZ. I will accordingly direct none of my criticism at C as a substantive claim. That leaves 3 ways in which LZ can fail – A may be false or unsupported; B may be false or unsupported; or A and B combined may not imply or support C.

While Lloyd himself may be chiefly concerned with establishing B, specifically the claim that the Turing computable version of QC can fully describe any QM system, I will direct none of my criticism at that claim. I consider it largely sound, but my reasons for doing so are tangential to LZ or to Lloyd. That intramural debate between different camps of QC operatives – the Turing-complete versus the more-than-Turing claims for QC – is not relevant to my criticism. My sympathies in that dispute are with the Turing-complete version of QC.

Instead my first criticism is directed at A, which I take Lloyd to consider a piece of unremarkable allegiance to a highly successful theory and nothing more, or exactly the sort of faith in theories that have withstood rigorous tests and led to new discoveries that we all honor and support etc. But I do not see the claim A that way, as stated. It isn’t a statement about QM as a theory and its great virtues compared to other theories, it is a statement about the universe and about something called *complete description*. And I deny that QM is a complete description of the universe, or even wants to be. I deny further that the universe is completely describable, full stop. I do not deny that the concept “complete description” can be well formed and can have a real domain of application – there are things that can be completely describable, and even that have been completely described (the times table up to 10 e.g.). But the universe is not one of them, and it isn’t going to be. I will elaborate more on all of this below, but first my second main criticism.

My second criticism is directed at LZ proper, or the claim that A and B, even if both true, together imply C. I detect an additional unstated minor premise in this deduction, which I will formulate below and label M. I consider that unstated minor M to itself be false. I also detect an equivocation in the statement C itself – in the predication “O is P”, the copula can be understood in one of two distinct senses. Either the predicate P can be truthfully predicated of the object O, whatever else might also be predicated of O (“accidental copula” or C1, “in addition to being blue and round and crowded on Sundays, the universe also has quantum computation going on somewhere, sometime, and can therefore truthfully be said to “do” quantum computation”) or the predicate P is meant to exhaust O essentially (“being in the strong sense” or C2, “the universe *is* a quantum computer”). I will claim that the second reading of C aka C2 is unsupported by the preceding argument, and the unstated minor premise cannot sustain it. But I believe C2 is the intended reading of C. I will argue, moreover, that even C1 does not strictly follow from A and B jointly, although it is made plausible by them.

In the end, therefore, I will claim that LZ is false, whatever the case may be with C in either sense; that C1 is plausible but does not forcefully follow from A and B jointly; that the unstated minor M is false; and that independently A is also false – all without in any way denigrating QM’s status as our most successful physical theory to date, by far, or positing any revision to it as a theory. B is untouched; C is left unsupported by LZ but is not affirmed or denied absolutely. Such in preview are my claims. It remains to explain my reasons for each claim, including the formulation of M. In order, those will be (1) what M is and why it is false; (2) why A and B jointly fail to imply more than C1, and that only suggestively; and above all (3) why A is independently false.

What minor premise is required to think C logically follows from A and B jointly?

(M) If A is “completely describable” by B, and C “completely captures” B, then A *is* C – where A here stands loosely for the universe, B for quantum mechanics, and C for quantum computation.

With suitable equivocation on “complete”, and on what is meant by “capture” and “describe”, this is a claim about the *transitivity* of *models*. And I claim it is in general false. Translations may exist from some system into another that are truth preserving but not reversible. As a first example, a translation may contain many to one mappings. B may lump together some cases that A distinguishes, and C may do the same to B, leaving C underspecific and many-valued in its “inverse image” claims about A. But it might be objected that this would apply to “capture” and “describe”, but not to either modified by “complete” – a complete description might mean no many to one mappings are allowed.

This will not help, however. I need only pick some other aspect of the original A to distort ever so slightly in B – say, the number of logical operations required in B to arrive at the modeled result in A. Then there could be behaviors in A that are in principle calculable with the resources of the universe within the time to its heat-death, that are not calculable in B within that resource limit and that time limit. Such behaviors would then have the attribute “physically realizable, in principle” in A, but not in B. To avoid inverse asymmetries of that kind, not only would one require the complete description to reach the same outcomes, but to do so in the same steps, memory, and speed. If, to avoid any “miss” or slipperiness between levels of that sort, one requires the complete description to be instantiated in the same manner as what it describes, then description of the universe becomes impossible. The universe is a set of size one – that is what its first syllable means. An instantiated full copy of it is a contradiction in terms.

In other words, descriptions are always reductions or they are not descriptions, but copies of the original. But there are no actual copies of the universe, and reductions are always many to one maps. And many to one maps are not in general reversible. The unstated minor wants to read “description” as an equality sign. But description as an operator is truth-extracting, not in general fully preservative. A description cannot say things about what it describes that are untrue, without failing as a (faithful) description. But it can readily leave things out that the thing described, contains. The terms of A attempt to avoid this possibility by saying “complete description”. But if complete is meant in a strong enough sense, a complete description is a contradiction in terms. Description means primarily discourse intended to evoke a mental image of an experienced reality, and secondarily a reduction of anything to its salient characteristics. A description of something is a predication structure – a set of true statements – or occasionally a diagram or map, preserving some formal, relational “bones” of the intended thing. An exactly similar physical instance of the original thing is not a description but a copy. No description, as distinct from a copy, is complete in the sense of invertible. For example, “this is not a body of true statements, but a physical instance” is predicable of the original but not of its description.

Leaving that model-terminological issue aside, consider instead a chain of emulations between universal computational systems, having the structure M claims between three of them, UA, UB, and UC. Then M claims “if UA can be emulated by UB, and UB can be emulated by UC, then UA *is* UC”. This is equivalent to the claim that there is only one computationally universal system. And it is false. As there is one concept of computational universality (at least for Turing computation), a fair substitute inference M* would claim only that UA can be emulated by UC, and this would be true – where “emulated by” allows for the usual issues of requiring a translation function and the potential slow-down it may involve. Worse for LZ, M* follows regardless of whether the systems involved are the universe, or quantum anything. It would follow if the universe is Turing computable and the UC at the end of the emulation chain were rule 110. But nobody thinks this shows that the universe *is* rule 110, nor that the only alternative is to deny that the universe is Turing computable. In other words, M* applies to too much, and M doesn’t apply at all, to anything. It is just false.

Next I turn to (2), the issue of the equivocation in the sense of the copula in C. In ordinary speech, we sometimes say “X is A” where we intend merely to predicate A of X accidentally – the moon is spherical. Here it is enough that the statement be true (even roughly true, the approximate sense involved in all predication being understood), and no claim is being advanced that the essence of the moon is its spherical shape, nor are any other predicates being denied to it. At best, a few implicit contrasting predicates are being denied by implication – the moon isn’t triangular or completely flat, since those predicates contradict the true one. Call C1 the reading of C that understands the copula in this sense. C1 claims that whatever else it happens to be, the universe also has the distinct characteristics of a quantum computer. Given the encompassing definition of “universe”, this reduces to the statement that quantum computation is physically realized, anywhere and at any time. For if quantum computation is ever and anywhere physically realized, C1 holds. What else is supposed to be realizing it? Unless QC were nothing but a mathematical abstraction, C1 would hold, independent of LZ. I submit this shows that the stronger claim, C2, is in fact intended by C.

But first, leave aside the truth and inherent plausibility of even just C1, and ask whether it actually follows from A and B jointly. I submit that it does not. For B could hold even if QC *were* a mathematical abstraction, and never realized anywhere in the actual universe. QC is a theory and a description. Call QC beyond a few bits, worked up to the level of a working universal computer, UQC (U for universal). And call QC not as a theory but actually instantiated in the history of the universe with physical components, IQC (I for instantiated). Call the joint case of instantiated and universal QC, IUQC. Assume that behaviors actually occur in the universe sufficiently complex to require universality to model them (this is a more heroic assumption than it may seem, given finite nature possibilities and cardinality issues with universality, but let that slide). Then I claim that B could hold even if IUQC did not. There could be IQC, as QM provides sufficient basis for it. And there could be UQC in the realm of mathematical abstraction. But IUQC might fail to appear – if, say, all the instances of universal computation the universe actually happened to instantiate used unentangled components, and all the instances of true QC physically realized were simple enough to fall below the threshold of universality. If instead IUQC does appear, we will know it from empirical evidence of actual instances, not as a supposedly logical consequence of B, assuming B is true. Logical consequences of possibilities of theories like B do not tell us what actually happened in the instantiated history of the universe. Observation does. Personally I think C1 is true. But I don’t think C1 follows logically from B. And it won’t be theoretical attributes of QC as a formal theory that convince me C1 is true; it will be an actual, empirical instance of IUQC. Hence my statement that C1 is plausible given A and B (or even if not given A, see below), but does not follow from them with logical necessity.

The intended meaning of C, however, is C2, strong *is*, essence or exhaustion. And while a claim of C2 might be rendered more plausible by A and B, it does not follow from them logically. Again it would be possible that QC models QM but is a mathematical abstraction that captures QM, without actually being the real, external, instantiated process that the universe follows – even if A and B were true in a suitably strong sense of “completely”. For it is one thing to say that QC models either QM or physical reality, and another to say that QC is physical reality. It might be, but this would be an independent claim and does not follow from QC modeling physical reality. Any more than rule 110 is my laptop because it can emulate any computation performed by my laptop, albeit more slowly.

In addition, C2 can fail because its stated premises fail. Leaving aside M as already adequately addressed above, we arrive at a still more basic problem. The primary premise A is also false. The universe is not fully describable by QM. And this, for multiple reasons – because QM does not even try to fully describe even the systems and relations to which it properly applies (NotA1), because QM is not all of physics (NotA2), because all of physics does not exhaust description of the universe (NotA3), and because the universe is not fully describable, full stop (NotA4).

Let us start with NotA1. Quantum mechanics is a deliberately underspecific theory. It describes the evolution of a probability amplitude, and only predicts that the absolute square of this quantity will approximate the sample average of classes of events it regards as equivalent. Strictly speaking, it can’t make contact with empirical reality (meant in the precise, exact sense of that compound term – not in a loose sense) in any other manner.

Anything is a reality, in the scholastic definition of that term, if its truth and content do not depend on a specific observer or that observer’s internal state. Loosely, realities are not matters of opinion. Anything is empirical, in the early modern sense of that term as appropriated by empiricists, if it is given directly in sense experience. An empirical reality, therefore, must be given directly to sense experience yet remain invariant across experiencers. Single observations of events of a given class described by quantum mechanics do not have this character, and QM does not claim that they do (some observers see an electron go that-a-way, and some see an electron go this-a-way). Sample averages of events described by quantum mechanics and regarded by it as equivalent to one another, do have this character. And QM deliberately restricts itself to making claims about such averages.

The universe, on the other hand, does not consist of probabilistic claims about classes of events regarded as equivalent under some theory. Definite things happen. The universe has a sample size of one, as its first syllable emphasizes. If one adopts a many-worlds interpretation of QM and regards all possible outcomes as real, one still has not explained the specific experience of oneself as an observer, which is part of the universe. Stipulate that there is a plethora of instances of oneself experiencing every possible outcome of each successive quantum event. QM does not describe which of these the “you” doing the experiencing will subjectively turn out to be. At most, the voluntary addition to QM of a many-worlds interpretation (which is not itself a part of QM) may wave hands about the orthogonality of each of the possible experiences QM suggests could have occurred. Leaving aside the problem of probability measures across possible worlds in a universe of sample size one, QM itself is not even attempting to fully describe any selection, objective, subjective, or illusory, of world-lines through that branching possibility space.

Now, a doctrinaire many-worlds advocate may wish to deny the reality of anything left unspecified by QM. But he is abusing the term “reality” in doing so. It is exactly the actual historical observations shared by all inhabitants of any of his hypothetical orthogonal slices, that fit the scholastic definition of “real” – invariance over existing observers, each fully capable of talking to each other and communicating their experiences. While it is exactly his hypothetical alternate realities, to which neither he nor anyone he talks to has immediate experiential access, that fail to match the definition of “empirical”. But let us leave aside the complexities of many-worlds interpretations of QM and their claims of exhaustive determinism; it would take us too far afield.

The more basic point to be seen is that all descriptions, as reductions, are sets of true statements about a thing described, but that such sets are not the original being described. It is one thing to provide a map and another to provide a fully instantiated copy of the original. Descriptions as such achieve their reliability, accuracy, and usefulness by lumping some things together and making statements about the lumps that are true. QM does not attempt to describe the space-time trajectories of every electron since the beginning of the universe, but instead, precisely, lumps together lots of disparate space-time events as involving a simplifying unity called “electrons”. The usefulness of a description lies in its generalities, but generalities precisely underdescribe single instances. In scholastic language, theories are about reality, while history is an actuality.

Next, NotA2, QM is not the whole of physics. There is no consistent theory of quantum gravity, but gravity is physically real. Perhaps there will someday be a consistent theory of quantum gravity, or perhaps conventional gravity theory will be found to be incorrect and will be replaced by some superior (but independent) theory. But A is a claim about QM as it stands, and QM as it stands does not even attempt to describe the gravitational behavior of bodies. QM is therefore not an exhaustive description of even physical reality.

Next, NotA3, physics and physical reality, defined as the subset of consistent relations discoverable about the material universe, do not exhaust the universe. The second syllable of “universe” refers to truth as such, and truth as such is not limited to the material world. While cosmologists these days use the term quite loosely, its proper meaning encompasses all truth of whatever character. Mathematical abstraction describes truths which are not described by physical law, but are no less true for that. Mathematical truth is more general than any specific possible world, but is smaller than and properly contained by the concept “universe”.

For that matter, there are truths about conventional categorizations, specifics of actuality, and of individual instances, that physical science even in the most general sense does not even attempt to speak about, let alone claims to fully describe, let alone actually captures fully. “Julius Caesar was a Roman politician assassinated in the senate in the year 44 BC”, is not a piece of physics, but it is a truth about the universe. It differs from physical law in the other direction, so to speak, than the mathematical truth case, above. It is more particular than the laws of physics, and refers to a single definite actuality. But it is part of the universe, just the same. The point is, physical reality lies in a specific place in terms of its generality-relations with the universe as a whole. A map isn’t the territory, nor mapping in general, nor geometry, nor logic, nor the history of the territory, nor its specific configuration at a specific instant in time, etc.

One might imagine reducing each of the categories involved in that statement to underlying physical relations, along the lines of the positivist program of trying to reduce every statement to either a statement about empirical facts or to nonsense, but this is actually hopeless. Truth is not confined to one level of description, and you could not derive many readily accessible “overlying” truths from underlyings buried deep enough below them, within the lifetime of the universe, using physically realizable amounts of computational resources. We can’t solve N body problems analytically, and quantum field theory for single large molecules is already intractable. Minimum energy configurations e.g. in protein folding are NP complete, generalized Ising models are known to be formally undecidable by reduction to known tiling problems, etc.

The idea that reduction can solve all overlying problems “in principle” is computationally naive. Overlying relations may in fact be vastly simpler than the reduction and may already encompass all the computationally important substitutions, when a process is reducible enough to actually be solved. And an accurate reduction may simply land in a morass of intractable unsolvability. The idea that ultimate and simple answers to everything lie at the bottom is simply false. It cannot withstand rational scrutiny. Since computational intractability will arise in any attempt to reformulate all problems in “lowest level” terms, QM will not succeed in exhaustively describing the universe, and other higher level descriptions will frequently succeed where a full QM treatment would be utterly hopeless from the start. One might hypothesize that in all cases of useful higher order descriptions, those descriptions are ultimately compatible with QM. But that needn’t mean they are derivable from QM in the lifetime of the universe, and in fact the higher level descriptions could be logically independent of the microlevel (the Taj Mahal is made out of bricks; this does not make the artistic character of that building a property deducible from the nature of bricks) and yet still hold for every actuality anyone will ever encounter. Enough – infatuation with the bottom level is simply that, a piece of boastful self-importance from certain physicists, and not a dictate of reason.

But I am still not done with the ways in which A is false. I claim NotA4, that the universe is not fully describable, full stop. First as to mathematical truth, you aren’t going to capture all of it in one model, see Gödel. Second as to historical actuality, stipulate that you have a perfect theory of all physical truth, and can moreover guess the boundary conditions of history. You still won’t be able to work out what happens inside – the consequence of your own theory – with the computational resources of only a subset of the universe. Sure, in principle you might be able to “run faster” than many subsets of the universe, by clever substitutions say, avoiding the need to mimic the universe’s own, full, computational capacity. But you are still a flyspeck on a galaxy – beyond the lightcone of everything that has ever interacted with you, or will in the next 60 billion years, you have no idea what physical immensities remain. Stipulate that you can guess averages and tendencies, and that every law you heroically generalize beyond the range of your empirical experience holds up perfectly. Completely describing the universe still requires you to give an elaborate aesthetic commentary on the exquisite electromagnetic symphonies of Hordomozart the Twenty-third, of an obscure planet around an unseen star buried in the heart of an unsensed galactic supercluster 400 gazillion parsecs east-south-east of the great attractor, 46 billion years from now.

Actuality is measure epsilon in possibility-space. The most a good theory can give you is a map of generalities that will hold across wide swathes of space and time. You will not find the specific actuality that actually obtains by staring at the possibility-space such a map describes forever. At some point you have to go look at the actual world. And when you do, you will find yourself at the bottom of a well looking out through a straw at an immensity. Honesty and rationality start with an elementary humility in the face of this inescapable fact. This does not mean you can’t know things; you can. It does mean you can’t know everything.

And it means the actual evolution of the universe, actual history, is continually and meaningfully accomplishing something and adding “more”. Even if the universe follows definite laws (deterministic ones or objectively stochastic ones, either way) and even if you can guess those laws; even if the Church-Turing thesis is true and the universe is formally within the space of the computables, and even if the physical universe happens to be finite. None of those things implies “small” or “simple” or “solved”. Even a finite, computable, deterministic or QC universe will be bigger than every attempt to describe it ever achieved by all finite rational beings combined. It is vastly bigger than we are and just as complicated. Knowledge is not about getting all of it inside our heads – our heads are not big enough and would split wide open – but about getting outside of our heads and out into the midst of it, discovering definite true things about it that are helpful, for us.

Against skepticism, knowledge

Tolerance and knowledge

Although it really has nothing much to do with NKS, whenever skepticism is discussed the moral arguments for it come up. I don’t find them convincing, and I think I should explain why.

Part of the attraction of arguments from epistemic weakness comes from a set of moral claims commonly advanced for them, or against the imagined contrary position of epistemic dogmatism. I don’t consider those common moral claims remotely sound, and their association with epistemic weakness is too loose to bind the two together. Roughly, people think it is intolerant to claim to know, sometimes about specific issues and sometimes anything at all, and more tolerant or nicer somehow to refrain from such claims. As though knowledge were inherently oppressive and ignorance, freedom.

At bottom I think this is a post hoc fallacy, a loose association arising from flip diagnoses of what was wrong with chosen bêtes noires. So and so believed he knew something and look how awful he was, ergo… Ergo not very much. Your favorite bête noire probably also thought that 2 and 2 are 4, and never 5 nor 3. This won’t suffice to dispense with the addition table and make arithmetic voluntary. Evil men had noses. This doesn’t make noses the root of all evil. Believing you know things is as normal as having a nose.

For that matter, I can comb history and find any number of convinced skeptics who were personally as nasty as you please, or even as intellectually intolerant on principle. Al Ghazali will argue from the impotence of human knowledge that philosophy should be illegal and the books of the philosophers burned. You won’t find any skeptical argument in Hume that he didn’t anticipate by centuries. But in his cultural context, it was the theologians that were the skeptics and philosophers who believed in the possibility of (human) knowledge. As this context makes clear, you need a reason to challenge entrenched convention, and if human thought cannot supply one you are left to the mercies of convention. Convention can reign without making any epistemic claims; it suffices to destroy all possible opposition.

There is a more basic problem with the idea that tolerance requires epistemic weakness. It misunderstands the reason for tolerance, and because of it will narrow its proper domain of application. The real reason for tolerance is that error is not a crime, but instead the natural state of mankind. Tolerance is tolerance for known error, or it doesn’t deserve the name. Imagine some Amish-like sect that wants to opt out of half of the modern world, and believes demonstrable falsehoods, but keeps to itself and bothers no one. What is the tolerant attitude toward such a group?

People can think what they like. You can’t require truth of them as a moral matter, because it is rarely available at all, for one, but also because truth can only be grasped internally, as a freely offered gift. You can’t make someone else think something, and it is a category error to try. Minds are free, and individual. All you can do is offer a truth (or a notion or thought), for someone else to take or leave. In the classic formulation of the age of religious wars in Europe, a conversion obtained by duress simply isn’t a conversion. Yes, men err, and sometimes their errors issue in actions that are crimes. But no, you cannot eliminate the possibility of crime by first eliminating error. You couldn’t eliminate error even if you had full possession of the truth (which you don’t, to be sure). Persecution isn’t made any better if the doctrine for which you persecute is rational – it remains persecution. (The historian Ignaz Goldziher made this point in a convincing Islamic context, incidentally).

Human beings are fallible and they are mortal. They have short lives full of personal cares, trials, and difficulties, whose incidence and urgency are peculiar to each individual. They are born in utter ignorance and dependent on their immediate fellows for most of their categories and systems of thought. They grope for knowledge because they need it in practical ways, they attain bits and pieces of it in scattershot fashion, with more found by all combined than possessed by any specific subset among them. Most knowledge stays distributed, particular, and operational – not centrally organized, general, or theoretical.

You can’t require conformity to some grand theoretical system of men in general without violence to half of them. Equally you can’t deny them the possibility of knowledge without maiming them all; humility for export isn’t a virtue. Real tolerance is a patient acceptance of these facts, a charitable and kindly view of our mutual difficulties. We offer one another such knowledge as we have found, and recipients freely take it or leave it, after judging for themselves what use it may have in their own lives. If instead you try to force everyone to acknowledge that they don’t know anything, then first, you are wrong, because they do know all sorts of things, and second, you are requiring exactly the sort of grand theoretical conformity you pretend to be against. You end up making disagreement with your epistemological claims some sort of crime. In this case, that disagreement isn’t even an error, let alone any crime.

So at bottom, my objection to arguments in favor of epistemic weakness on the basis of its supposed tendency to further moral tolerance is that it has no such tendency, and that it misses the point of true tolerance. Which isn’t restricted to a response to ignorance. It isn’t (just) the ignorant who require tolerance, it extends to people who are flat wrong, but innocently so. The moral requirement to practice tolerance is not limited to people unsure of themselves, but extends to people who are correct and know it. The real principle of tolerance is simply that error is not a crime.

Against skepticism

Skepticism, certainty, and formal truth

The great vice of philosophy in our time is its infatuation with arguments from epistemic weakness. What might have started as careful attention to distinct categories of real knowledge has fallen into a flip dismissal of the possibility of knowledge of any kind, or restricting it to the narrowest possible compass. The contrast to the staggering achievements of our technology is blatant; never have practical engineers known so much with such precision and assurance, while both academic and popular thought loudly declare their inability to do so, on principle.

Raise the problem of knowledge with a contemporary skeptic and he will dodge knowledge in favor of “certainty” in less than a minute. When pressed he will retreat further to “absolute certainty”. He won’t be able to point to a single instance of what he means by this, however, since his desired conclusion is that it does not exist. One can in principle lay down definitions that turn out to have no domain, but generally speaking this means “so much the worse for that definition”. A distinction that doesn’t distinguish one real existing thing from another real existing thing, leaves something to be desired.

What he is really trying to distinguish is not elements of his world, but his world from what he imagines about other peoples’ worlds. He sees things inside “doubt brackets”, but the content of the world is independent of those brackets. He imagines that someone else is “seeing” certainties everywhere, despite their not actually being present. Since the contents are the same (even as far as their operational, probabilistic or betting characteristics), this amounts to castigating other people for not putting his preferred doubt brackets around all of their own thoughts. (I’ll address the moral claims often advanced for such positions in a later post).

He is of course free to bury every thought in his head in layer after layer of such brackets. But he isn’t content to do this – he demands others encumber their internal notation system in a similar fashion. At bottom this is merely rude. No doubt subjectively he is earnestly trying to save others from the tarpits of their dogmatism. Perhaps he experienced his doubt-bracketing insight as liberating or humbling and wants to share it. But soon he will be diagnosing imaginary vices and errors in anyone who refuses the rituals of his church of the extra brackets, whose sacramental efficacy he never doubts.

Occasionally one finds the less social type of skeptic who instead adopts the passive role and dares anyone to argue him out of his fortress of solitude. “Prove I know something”, he says. It is trivial to show he believes things, and does so with all the operational characteristics of knowledge, but he insists he doesn’t know them. Since at bottom this isn’t a real distinction (“real” meaning, in the good scholastic fashion, “independent of what anyone thinks of it, not a matter of opinion”), who cares? As a rational animal it is his business to know things, and if he declares bankruptcy it is his own affair. One can still notice how his position suffers all the defects Popper diagnosed in hermetic thought-systems of a more dogmatic stripe. He thinks that nothing being able to shake the certainty of his self-denial of knowledge is a strength; in fact it proves to demonstration that said self-denial makes no contact with reality.

An older role for the distinction between knowledge and certainty used it as a real distinction, specifically dividing empirical knowledge from formal knowledge. Mathematical facts and logical syllogisms counted as certainty as well as knowledge, while empirical facts were knowable but not certain. This has much to recommend it, but is slightly too blunt for the realities it is trying to capture. The reason is, there are formal truths that effectively have the epistemic status of empirical facts – truths that just happen to be this way rather than that, “leaf-like” formal facts not derivable from broader (prior) formal truths. Gregory Chaitin has shown this using cardinality arguments. Roughly speaking, there can be more true mathematical statements within a domain or system than ways to derive them from a limited set of prior axioms. But at least it is a real distinction.

At the level of method, I think we expand the realm where we treat things as “empiricals” still more. In NKS we are frequently making conjectures about whole categories of formal systems, well before we can have deduced enough about them to turn our knowledge of them into anything like logical certainty. Not because they aren’t at bottom purely formal systems, but because the scale of deductive work involved exceeds anything practical, not just for us but for our computers, or even for any foreseeable computers running for even astronomical time. Relative to the state of our knowledge at the time of the conjecture, these have the methodological status of empirical facts.

I would claim that is the only relevant standard by which to “rate” such truths or claims. There is no mythical mind of God for which all logical truth, no matter how involved, is immediate and simple. For any finite mind or computational system of any description, some purely formal truths or propositions have the epistemic character of empirical facts. What we have previously thought of as the formally certain is actually a special subset of the formal, the simple or computationally reducible.

Against skepticism, there are simple and computationally reducible formal facts that can be known with certainty, in the good scholastic sense of certainty. Against the idea that the domain of certainty exactly coincides with the formal (as distinct from empirical), there are subsets of formal propositions that for all practical purposes are like empiricals, instead. Meaning, we address the latter with an empirical toolkit of conjecture and experiment and induction, of categorization based on phenomenal characteristics, applicable theorems or models, and the like. The domain of application of empirical method is broader than the scholastic distinction supposed, but there is a domain of application of pure deduction, and it does attain certainty where it applies.

Epistemology

Against skepticism

Or, a requirement of any theory of knowledge

Independent of my interest in NKS, I have philosophic objections to skepticism. I tell students that I break most of my lances against it; it is the particular philosophic position I wind up railing against. At bottom I consider skepticism a debater’s position, a dodge and evasion, and morally speaking a piece of arrogance masked as “humility for export”. (Being humble oneself may be a virtue, telling others to be humble can itself be far from humble). If the classic Socratic thesis of skepticism is that the only thing we know is that we know nothing, I claim we know no such thing.

NKS is useful for me in such discussions first because it finds a variety and complexity usually associated with empiricals, within purely formal systems. Pure NKS stands on the same ground as mathematical conjecture or long unsolved mathematical puzzles – we get pieces of pure logic that refuse to become simple “just because” they are “only” logic. Easy dichotomies that trace all epistemic difficulties to getting our formal ideas to correspond to “external” or empirical matters don’t really have a place to put such things.

This shows that the habit of equating the logical or formal with the simple is a selection bias and little else. Our past formal ideas have deliberately stayed simple enough we could noodle them out in (typically) four or five steps. In computational matters we counted “one, two, three, many”. The reality is any elaborate enough piece of pure logic, (even a strictly finite, always terminating block of it), isn’t simple at all. And a conclusion being “already entailed” by the premises won’t make it simple. This means deductive work meaningfully adds information. Hintikka is one contemporary philosopher who has noticed and tried to explore that intuitive fact, incidentally.

But against extreme skepticism about such things, they can have answers and we can find them. Not all large, involved pieces of logic are created equal. Those that inherently involve many possible cancellations or useful substitutions can be reduced by clever reasoning, or sometimes cracked completely with a piece of math. (For example, there is a simple formula operating on just the initial condition that will tell you the value of a cell later on in a rule 90 pattern, without requiring computation of all the intervening steps). Others inherently resist such methods and remain involved; only a huge amount of direct computational work can work them out. This is a phenomenal given of NKS, and before it of some sorts of math (e.g. number theory).
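As an illustration of the kind of shortcut just mentioned for rule 90, here is a minimal sketch; the code and names are my own, under the standard reading of rule 90 as “each cell becomes the XOR of its two neighbors”. Because that rule is linear, the cell at position x after t steps is the XOR of the initial cells at positions x - t + 2k, taken over exactly those k for which the binomial coefficient C(t, k) is odd (by Lucas’ theorem, the k whose binary digits are a subset of t’s) – no stepping through the intervening rows is required.

```python
def rule90_step(cells):
    """One direct step of rule 90 on a sparse dict {position: 1}."""
    candidates = set(p + d for p in cells for d in (-1, 0, 1))
    out = {x: cells.get(x - 1, 0) ^ cells.get(x + 1, 0) for x in candidates}
    return {x: v for x, v in out.items() if v}

def rule90_shortcut(init, x, t):
    """Cell value at position x after t steps, straight from the initial condition."""
    v = 0
    for k in range(t + 1):
        if (k & ~t) == 0:                    # C(t, k) is odd (Lucas' theorem)
            v ^= init.get(x - t + 2 * k, 0)
    return v

init = {0: 1}                                # single black cell
cells = dict(init)
for _ in range(64):                          # direct, step-by-step evolution
    cells = rule90_step(cells)

# The closed-form shortcut agrees with the direct evolution at step 64.
print(all(cells.get(x, 0) == rule90_shortcut(init, x, 64) for x in range(-70, 71)))
```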

The philosophic issue is how well various stances about the problem of knowledge handle this phenomenal given.

Remarkably poorly, I claim. Empirical epistemology lumps everything formal into a “shouldn’t be any problem” bin, where the hard cases don’t fit. More extreme forms of skepticism lump all the formal cases into its only bin, “can’t be known”, and thus effectively predicts the simple and reducible ones won’t be readily solvable, when they obviously are. Popperian fallibilism wants there to be new information only where something can eventually prove to be wrong (traced say to a possible-worlds variability), but once actually solved any such formal puzzle is as certain as the simplest syllogism and true everywhere and always. Logical positivism largely continues to think of anything reduced to logic as solved, and glosses over everything interesting with an “in principle”. (I’ve read a book on the logic of game theory that explains in the first chapter that Go is finite, so backward induction could be used and it is therefore solved “in principle”).

But the patent fact is that some pieces of even pure math or logic can be known completely, others yield only to great computational effort, and still others won’t be solved in the lifetime of the universe. The problem of knowledge in its full variety is already present inside just one of the usual stances’ dichotomy boxes, and that the most formal.

Now, you don’t solve a problem by evading it or denying its existence. You can’t explain how something like knowledge arises or works or can happen in the first place, by pretending it doesn’t. We hit kilometer-wide windows at Mars after years in space. If a skeptic doesn’t want to call that knowledge I can play term games and call it thizzle, but thizzle still exists and needs to be explained. I’ll still be able to have thizzle about the center column of a simple CA like rule 250 (which makes a checkerboard pattern) but not about the center column of a CA out of the same rule-space, just as formal and involving just as many elements, but inherently generating complexity, like rule 30. A skeptic won’t take even very low odds on a bet against my thizzle about the former, but anyone will take them about the latter, on terms indistinguishable from guessing about the purest randomness. The difference is real and operational, and whatever doubt-brackets anyone else puts around their epistemic claims, they see the same thizzle-relations as me or anyone else who understands the two systems.
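To make that contrast concrete, here is a minimal sketch (my own illustrative code, not anything from the text) that evolves both rules from a single black cell and prints their center columns. Rule 250’s column settles at once into a strict alternation that a short formula captures; rule 30’s shows no evident pattern, and as far as I know no shortcut for it is known – it has to be ground out step by step.

```python
def step(cells, rule):
    """One step of an elementary CA, `rule` given as a Wolfram rule number 0-255."""
    candidates = set(p + d for p in cells for d in (-1, 0, 1))
    out = {}
    for x in candidates:
        idx = 4 * cells.get(x - 1, 0) + 2 * cells.get(x, 0) + cells.get(x + 1, 0)
        out[x] = (rule >> idx) & 1
    return {x: v for x, v in out.items() if v}

def center_column(rule, steps):
    cells = {0: 1}                           # single black cell initial condition
    column = [1]
    for _ in range(steps):
        cells = step(cells, rule)
        column.append(cells.get(0, 0))
    return column

print("rule 250:", center_column(250, 16))   # 1,0,1,0,... a strict alternation
print("rule  30:", center_column(30, 16))    # irregular; no simple formula known
```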

The value of NKS on this subject is to focus the point of disagreement, and to clear away scores of side issues. Sensitivity analysis on the problem of knowledge rejects numerous popular theses about it. On the principle that an effect should cease when its cause ceases, we can reject various claims about how the problem of knowable vs. unknowable arises, because they predict a uniformity of knowability-status within a formal domain, where we can see there is no such uniformity. Any adequate theory of knowledge must conform to the operationally real distinctions among “readily known, reducible”, “knowable only with significant computational effort”, and “intractable”, already present in purely formal problems. If its categories aren’t fine enough to do so, then it is simply wrong.

Epistemology

Knowledge in CAs

I like the way NKS turns some existing ideas about knowables sideways. The relevant distinctions aren’t inside or outside, model or nature, historical or natural, formal-mathematical or empirical etc. Instead they are rule versus behavior, and simple versus complex. Meaning, simplicity is readily knowable whatever domain it occurs in, but complexity is hard to know about, again regardless of the domain. This tracks our experience better than the usual epistemological puzzles, which always prove too much, and pretend we don’t know things we obviously do.

You can’t know the behavior of a simple program doing complicated things until you’ve done an awful lot of irreducible logical work, stepping through its actual behavior one to one and onto.  But you can know the behavior of a simple rule doing a simple thing, with a short cut equation, in seconds.  Some things are knowable, others are not, in precisely the same domain.  Theories of limits of knowledge that depend on domain categories would put rule 30 and rule 250 in the same box – but in fact their knowability is not the same.  The same theories put Popper’s clocks and clouds into the same category and get their knowability difference wrong, too, in an exactly parallel manner.

There may still be additional hurdles to line up formal theories to externals, to be sure.  But there are obviously knowable pockets of simplicity in both external reality and mathematics – and complex obscurities in both, as well.  The real distinction isn’t between the domains, but cuts right through all of them.  Simples are knowable and complexities are hard.

Intro

Philosophy and NKS

Wolfram’s book A New Kind of Science raises various interesting philosophy questions.  I’ll discuss some of them here.

For starters, our tendency to think of anything computational as also artificial may not be justified.  Nature has been computing for a long time, while we discovered what computation is only recently.  In philosophy, anything in the last century counts as “recently”…