finite nature

Transients matter

I am currently at the NKS 2010 summer program at the University of Vermont – a great experience as always, with lots of bright, enthusiastic people. A topic of continual discussion is breaking our understanding of computation out of its “one countable infinity” frame – and not in the continuum-infinity direction, but downward into finitude.

A system that bubbles around for a little while and then resolves into just a few simple periodic structures, or straight lines running down the page, we call class 2 and dismiss as generally uninteresting. For short transients, below any level of aggregation we can imagine mattering, this quick assessment may be merited. But extended indefinitely it collides with our actual experience – it is another way that the one infinity over-directs our thinking.

A bubbling computational system wanders around Vienna for a few dozen years and writes a few score symphonies. Then it goes remarkably stable. Ah, just a transient – class 2 behavior – uncomplex and therefore uninteresting, right? OK, how about a transient between a speck and a bath of non-interacting photons streaming about, one that only took, oh, I don’t know, 100 trillion years to resolve?

If a transient is at least as long and as interesting as you are, its temporal finitude is no reason to dismiss it – and if it is a material universe through all its immensities of space and time, dismissing it as uninteresting because it is supposedly spanned by your one imaginary countable infinity is simply laughable.

Long enough and complicated enough transients matter. We can’t dismiss them, and if our notions of computation do not take them seriously, then our notions of computation are flat wrong and need to be revised.

Matthew Szudzik is fond of citing the Kalmár elementary functions or the primitive recursive functions (PRFs for short) as families sufficiently involved and powerful to capture practically everything we mean by computation without crossing the threshold of universality, because they don’t have that one countable infinity. In programming terms, they have “Do” loops but not “While” loops. But how much can that matter, operationally and physically, in a transient actuality?

You have some practical computer and you know its operating speed. You want a “While” loop construct, but can only program with the PRFs. OK, so you do a little math and figure out an upper bound for the heat death of the universe in seconds, divide by your cycle time – lowballing that, so you pretend you can get off way more steps in your little routine than you ever actually will – and set the top of your “Do” iterator to that integer. Then put a “Throw” command inside the “Do” instead of a condition on the “While” you wanted to write.
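
For concreteness, here is a minimal sketch of that construction in Mathematica-style code – step and finished are hypothetical stand-ins for whatever body and condition the “While” would have had, and the particular bounds are only illustrative:

    maxSteps = 10^100 * 10^15;  (* generous heat-death bound in seconds, times a wildly padded cycle speed *)

    emulatedWhile[init_] := Catch[
      Module[{state = init},
        Do[
          If[finished[state], Throw[state]];  (* the Throw sits where the While condition would have gone *)
          state = step[state],
          {maxSteps}];
        state]]                               (* reached only if the Throw never fires *)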

It isn’t a While. It will provably halt. It can’t be universal. But then, no actual While running in your hardware in the actual universe will get any farther. So what operational difference can there be between them?

NKS Midwest

The computational capacity of the universe

In the afternoon session of NKS Midwest 2008, we had several remote talks by video conference. The first was by Seth Lloyd, speaking on his famous calculation of “the computational capacity of the universe”. He gave a nice background to the idea of entropy in thermodynamics and how it ties in with Shannon information, and then the stages he went through in coming up with the calculation. Originally he had the question of the “ultimate laptop”: how fast could one possibly get a single kilogram of matter in a one-liter volume to compute, subject only to the limits of QM? He arrived at the figure of 10^51 ops/second on 10^31 bits. The basic idea is then to extend this calculation to the knowable universe, and derive from it an upper bound on the logical operations (or, perhaps more accurately, the distinctions possible) in the history of the universe.
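
For what it’s worth, the headline number is easy to check against the Margolus-Levitin bound of 2E/πħ operations per second – a back-of-envelope sketch of my own, not Lloyd’s actual derivation:

    c = 2.998*10^8; hbar = 1.055*10^-34;  (* SI units *)
    energy = 1.0*c^2;                     (* E = m c^2 for one kilogram of matter *)
    2 energy/(Pi hbar)                    (* about 5*10^50 ops/second, i.e. the ~10^51 quoted *)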

When explaining entropy he got an audience question of typical epistemological bent, claiming that information is a relation between a system and us, not something about the system. He shrugged this off with the statement that entropy is the information we don’t know about the system and information proper is what we do know about it, and it is the total of the two that is non-decreasing under the second law. He was, on the other hand, careful to distinguish mere information processing in the sense of logical operations from computation in the sense of universality. Someone makes the claim that the universe is capable of universal computation and presents our own practical computers as evidence: they wouldn’t be able to do what they do if the universe could not. There is a potential objection from cardinality, that universality needs an unbounded store, or at least the ability to add memory as needed. This relates to the cosmological question of whether the universe is infinitely extendable.

I thought he could have done a better job with this, as the point of his calculation is to show that the universe to date is only capable of a finite number of operations, yet everything we see was able to result from it, including all of our practical computers and their actual flexibility. To me this shows that infinite cardinality is not actually required for anything we know, empirically, as either practical computation or physical complexity. Our mathematical idealizations in computation theory make use of a single countable infinity, which the real-world cases are never seen to require in practice. One can say this shows our computers or the universe are not universal, or one can say the mathematical definition of universality misses somewhat. I prefer the latter, not being wedded to that abstraction.

He next explained the QM origins of the processing speed limit, both the Heisenberg limit on the time to go from one distinguishable state to another, related to an energy spread, and the Margolus limit, related to the absolute energy. QC has a tendency to operate at the Margolus limit, which is the most one could expect, QM being posited of course. This gives around 10^123 ops since the big bang, maximum. The maximum memory calculation, on the other hand, comes from a volume calculation and the amount of information that can reside on the boundary of a given volume (evolution inside said boundary being deterministic, etc). The finite volume comes with the qualifier “knowable” applied to universe – take a light cone from us into the past as far as the big bang, then forward from that region at light speed. Note that in principle this means a larger present region could access more memory than “here”, and a future one likewise more than “now”.
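
The 10^123 figure can be roughly reconstructed the same way – take the critical density over the volume of the observable universe, and run the Margolus-Levitin rate for the age of the universe. Round numbers of my own throughout, so treat it as an order-of-magnitude sketch only:

    c = 2.998*10^8; hbar = 1.055*10^-34;
    rhoCrit = 9.5*10^-27;                        (* critical density, kg/m^3 *)
    radius = 4.4*10^26;                          (* rough radius of the observable universe, m *)
    ageSec = 4.3*10^17;                          (* ~13.7 billion years, in seconds *)
    energy = rhoCrit*(4/3) Pi radius^3*c^2;      (* total mass-energy, joules *)
    2 energy/(Pi hbar)*ageSec                    (* on the order of 10^122 to 10^123 total operations *)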

One then asks what is the maximum entropy of this much energy (taken at roughly the critical density, consistent with the observed expansion) in that much volume, and one gets 10^92 bits, which he noted is about the amount stored in the black-body radiation (that is where most of it resides, in other words). He speculated that maybe this figure can be pushed higher when dark energy is allowed for, but that was frankly a bit hand-wavy. He wanted to point out that these numbers seem to be close to the age of the universe in Planck times raised to various powers (3/2 for the 10^92 and 2 for the 10^123, one hopes, since that age is about 8×10^60 Planck times), but the fit isn’t really exact.
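
The claimed near-coincidence is just as quick to check (again with my own rough inputs):

    ageSec = 4.3*10^17; planckTime = 5.4*10^-44;
    agePlanck = ageSec/planckTime      (* about 8*10^60 *)
    {agePlanck^(3/2), agePlanck^2}     (* roughly 2*10^91 and 6*10^121 – near, but not exactly at, the bit and op counts *)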

Overall it was a fun talk, on an argument I had read before. He was clear, and the history-of-physics bits on Maxwell, Boltzmann, Gibbs, Planck, Heisenberg, Shannon, and Margolus were interesting.

Here is what I like most about this sort of argument, from my own philosophic perspective. So often we are presented with QM indeterminacy or continuous-value cardinality as reasons for expecting hypercomputation or a universe that exceeds all finite grasp, but in fact the theory itself has a quite different operational tendency, rigorously limiting the operational distinctions possible and requiring “a distinction” to have a clear physical meaning – and those clear physical meanings always have an actual “spannedness”, a positive measure, in both time and energy terms. In a walnutshell, if QM is true then the universe is rigorously finite in information-theoretic terms.

But two provisos have to be allowed to that statement, for the partisans of actual infinities (I think e.g. of Max Tegmark). One, Lloyd is speaking of a knowable (in principle) region of space-time and not making claims about inaccessible infinities beyond it, pro or con. And two, he nowhere addressed the many-worlds interpretation or any possible “contemporary orthogonals” it might posit. So one might allow a “knowable” between “the” and “universe” in the last sentence of the previous paragraph.

NKS Midwest, reduction & emergence

Invariants, native primitives, and computation

Paul-Jean LeTourneau of Wolfram Research, a former student of Stuart Kauffman, gave a fine pure NKS talk in the afternoon of the first day.  I want to discuss a few of the issues it raises because they seem to me to go beyond the specific case he was analysing.

His rule system is ECA 146, and the point he noticed in analysing it is that this rule differs from the well-known additive rule 90 only in cases of adjacent black cells within the pattern.  Those in turn are produced in rule 90 (or 146, necessarily) by runs of white of even length.  Therefore, in any region in which there are neither adjacent black cells nor even-length runs of white, the evolution must be identical to rule 90.  The idea is then to track these things through various evolutions of rule 146.  And he finds persistent structures, which meander about on the background, can annihilate in pairs, etc.  These are present but not obvious in the rule 146 evolution itself; comparing what rule 146 and rule 90 do from the same initial condition makes them pop out.
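
The comparison is easy to reproduce – a quick sketch of my own, not anything from the talk:

    init = RandomInteger[1, 301];
    evo146 = CellularAutomaton[146, init, 300];
    evo90 = CellularAutomaton[90, init, 300];
    ArrayPlot[BitXor[evo146, evo90]]  (* highlight where the two evolutions differ, as described above *)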

What is of more general interest about this?  One question is whether these localized structures might be manipulated to get the rule to perform meaningful computations, which would be a step toward solving the “class 3 problem” in the affirmative.  The class 3 problem is the question: are there class 3, random-looking rules that are universal?  Wolfram’s principle of computational equivalence (PCE) predicts there are, but this has not been proven.  All the simple-rule universality proofs to date have exploited localized structures and their predictability in constructing analogs to practical computer components.  Class 3s might have been thought to lack the local stability necessary to support programming, though PCE conjectures otherwise.  This may be of broader information-theoretic interest – class 3s are thought of as “maximum entropy” systems, and appear to be computationally irreducible.  But universality is a strictly stronger attribute than irreducibility, which leaves the class 3 question open – are many instances of apparent complexity irreducible but not universal, or are most of them universal?  So this is a pure NKS issue broader than the rule system itself.

But there are others, tangential to the core concerns of NKS, perhaps, but not to philosophy and issues of reductionism.  Notice that these patterns refer to invariants of the evolution of rule 146, but to invariants that exceed the scope of its native primitive states.  The case of runs of even length is a particularly fine instance of that, since it is not a single pattern but a whole class of them.  (Those are not, however, strictly preserved by the evolution – instead they always give rise to a black-pair particle at some point, which then is preserved, up to collisions with like particles, etc.)
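
One can also pick the black-cell pairs out of a rule 146 evolution directly – again a sketch of my own, just to make the object concrete:

    evo = CellularAutomaton[146, RandomInteger[1, 301], 300];
    pairs = BitAnd[#, RotateLeft[#]] & /@ evo;  (* 1 wherever a cell and its right-hand neighbour are both black *)
    ArrayPlot[pairs]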

Next notice that regions of the rule seem to behave with the same additivity as rule 90, where information always passes through unchanged, at maximum (we say, “light”) speed.  The regions marked by non-rule-90 subpatterns, by contrast, move more slowly and interact in a non-information-preserving way (a particle collision in which both cease, for example, has many possible pre-images under time reversal).  It is as though two rules were operating on the same lattice, one having the information-transfer properties we associate with light, the other having the information-transfer properties we associate with matter (at least, macroscopically).  But it is a single underlying rule – there are simply many possible subpatterns that behave the first way, because the deviations from additivity in the rule cannot arise without special subpatterns being present.  To me this is a fine example of emergence.

Notice further that in principle the two regimes are reducible to a single underlying rule (146), but to understand its internal complex behavior it may actually be superior to decompose it into a simpler rule followed “some of the time” (additive rule 90) and to analyse the behavior of the “emergent” particles (black cell pairs and their even-white-run generators) “empirically”.  Why?  Because the reducibility of the rule-90 portion of the behavior can be exploited fully, if it is separated off from the non-additive remainder.

The point is to notice that these relationships (among levels of analysis, apparent particles, reduction, simplification, “factoring” of laws, etc) can arise in a purely formal system, for entirely analytical reasons.  They are not facts about physics.  They are practical realities of formalism and analysis itself.

NKS Midwest, real numbers

Computability of real valued relations

Gilles Dowek (École Polytechnique and INRIA, France) gave a nice talk trying to formalize the requirements for keeping real numbers real, in my terms.  More specifically, he was interested in the restrictions required on functions or relations that take reals to reals, so that they remain formally computable — with various distinguished levels of the latter.  He laid it all out in relation to the more familiar discrete-valued cases and was quite clear.

First notice that when we are discussing functions, aka deterministic or single valued relations, computability in maps from reals to reals is exactly equivalent to continuity of the mapping.  In that case we have the typical uses of analysis, and theorems telling us that we can approximate as close as we need to with rationals, and they will converge, etc.

So next he relaxes single-valuedness and asks, first, which relations of reals to reals are fully decidable.  The answer is: not much – only relations that are either empty or full (the only subsets that are both open and closed, the ones for which it never matters whether you can distinguish individual members or not).

So relax the map category to semi-decidable, meaning that the relation is either continuous as you shrink a ball around a source point, or it becomes undefined at that point.  This gives a stable concept of semi-decidability but not a very useful one, because even the identity relation is not semi-decidable (its graph is closed rather than open) — which I see as a fine formulation of the objection computational “constructivists” have to reals in general.

So he wants a looser concept, analogous to being effectively enumerable for discrete sets.  That is equivalent to semi-decidable in the discrete case but not with reals, where it is strictly weaker.  Effectively enumerable in the discrete case means you start from the image set, and since it is countable (as is the domain, of course), you can just impose some enumeration scheme on the image, and equate any points in the domain that map to that image-point, identifying them by their image-index, in effect.

To extend this notion to the reals, what is required is that, once some intervening index (ranging over a countable set, perhaps) is fixed, the mapping be continuous.  You can regard the extra index as a random variable or some other sort of unknown, but once it is fixed the map must become continuous.  All deviations from deterministic continuity are ascribed to the extra variable.  This then allows a stable identity relation – it is simply a projection operator that ignores the extra hidden index.
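
In symbols, roughly as I took it down (the notation is my own gloss, so don’t hold Dowek to it): call a relation R ⊆ ℝ×ℝ of this kind when there is an index set I and a family of continuous partial maps f_i with

    R(x,y) \;\Longleftrightarrow\; \exists\, i \in I : f_i(x) = y, \qquad f_i : \mathbb{R} \rightharpoonup \mathbb{R} \ \text{continuous for each fixed } i,

so that fixing the hidden index collapses the relation to an ordinary continuous – hence computable – map, and a single-valued R reduces to the plain continuity of the deterministic case.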

What I liked about this formalism is that it fits the use we actually make of non-determinism in real-valued models (or at least machine-precision-valued ones, not to beg too many questions).  We interpose some probability measure, or we generate random samples and branch the relation’s behavior on them.  I wasn’t so keen on the name he wanted to give it, however — since both domain and range are non-denumerable, “effectively enumerable” seemed a stretch to describe the real-number analogue.  He means it to apply to the relation, of course, but it is still too confusing a name.  Constructible or model-able are perhaps the right idea, but no better as terms.  Call it EE for now.

Then his programmatic statement is that, when using non-deterministic relations of reals, we should restrict ourselves to maps that are EE – that is, such that there exists some random variable which, once fixed, makes the outcome fully computable, because the projection of the relation onto the plane of that specific value of the random variable is fully continuous.  He notes that if the relation is a true function, this collapses to the usual definition, leaving all deterministic-model experiments or tests unaffected, etc.

It was a point-set topology sort of talk, and quite clear if you have a background in that sort of analysis.  I found it useful for getting more precise about an intuition I’ve had for some time, and something stressed by Gregory Chaitin, that real numbers (without any continuity requirements) can have pathological properties in an information-theoretic context.  There are any number of constructions in which one encodes entire possible universes in single real numbers — Chaitin pointed out one due to Borel later in the conference, which lay behind his own Omega.  Well, when we tried to say what we meant by a real number we were thinking of the limit of a rational series, not entire histories of possible universes.  The definition evidently misses, if it means the latter while trying to mean only the former.  Dowek’s definitional work helps us pin down where we can safely use reals without a qualm, and where they imply a formal non-computability too extreme for rationalism.

I should also say that I spoke with Dowek repeatedly throughout the conference and found him a talented and interesting guy, with a good take on everything going on.  He asked the last question of the conference, left unanswered by the panel for want of time.  (There was a discussion panel of luminaries on the last day; I’ll cover it later.)  He wanted to know each panelist’s take on whether there could be experimental tests for the computability of the universe, and for discreteness as well.  (With some prefatory comments wondering whether everything looks like a nail now that we know what computation is and consider it our finest new hammer – a sensible question.)

NKS Midwest

Internal time in dynamic graphs

Tommaso Bolognesi is working on dynamic trivalent networks, along the lines suggested by Wolfram in the NKS book.  He has presented on them in the past, and in this talk he first reviewed how they work, then got to his recent questions.  Basically he distinguishes a global time for the model, as viewed from outside, from the sense of time “experienced” (or perhaps more properly, undergone) at a specific location within one of his graphs.

Global time simply means the model steps themselves – at each of those, one local subgraph is rewritten according to a definite list of rules with a preferred order of application.  The update specifies the location of the next “active site”, the next node to be updated.  He tracks where this occurs by node number, and notices that those rules that produce intrinsic randomness tend to revisit all nodes “fairly”, in an average or statistical sense.  So while globally the network may show, say, exponential growth, if the spacing of revisits keeps pace, the size of the graph at those (global) steps on which a given node is updated appears to grow only linearly.  Similarly for a rule system that showed square-root growth globally – again the “internal” sense of time is one of linear growth.  It is a feature of fair revisitation that local updates on a graph growing at any speed appear as linear growth in a frame that regards each self-update as a clock tick and ignores all other updates.

He also discussed constructing the causal network of site updates, by tracking the faces shared by a given node-update.  I suggested linking those backward through time at the update time-slices given by the “affected” node, to get an internal sense of locality as well.

knowledge

Finite does not mean small

Cardinality issues block and confuse discussions of NKS and the significance of universality.
Some QM fans stump for more than Turing computation as physically real based on the notion that
access to continuum cardinalities can break the limited fetters of countable computables.
Others stick to the Turing computables but make more than practical use of their one countable
infinity, spanning more than anyone will ever span, and pretend things are the same or have
been reduced to one another, sub specie infiniti. We even see men educated in computational
complexity theory speak as though anything exponential corresponds to uncountables while
anything polynomial corresponds to countable infinities. Against all these loose associations,
it is necessary to insist forcefully that finite does not mean small. Even finite computation
exceeds all realizable grasp. Against cosmologists dreaming of towers of continuum
infinities and microscopists strident for infinitesimal distinction, both as the supposed
origin of limitations on knowledge or uncertainty in the external world, I must insist that
even denying all such claims, a purely finite, discrete, and computable universe has nothing
simple about it. The operative cause of a limit on exhaustive knowledge is not the
hypothetical presence of infinite cardinals of any description, but follows simply and directly
from the term “universe”, and our existing minimal knowledge of its scope and complexity, and
is there already even if every such infinity is denied. People need to give up the equation
between “finite” and “simple”; it is a mere mistake. And if this is established it is
already enough to show that no appeal to experienced limits of knowledge can count as evidence
of any kind for the real existence of any such hypothetical infinities.

I present a mesh to cover the practical universe and allow for its possible laws and regularities. The
smallest spatial distinction we think can matter for physical behaviors is the planck length.
But I allow for underlying generators below that scale, 100 orders of magnitude smaller. We
experience 3 large scale spatial dimensions, but some theories employ more; I’ll allow for
10,000. We differentiate a zoo of a few score particles, themselves understood as quanta of
fields, on that space. I’ll allow a million, no need to skimp. The smallest unit of time
we think likely to have physical meaning is the planck time, but that’s too long. Slice the
time domain in a manner unique at every single location as specified above, into time units
100 orders of magnitude shorter than the planck time. We can see tens of billions of light
years in any direction, but extend this outward 1000 fold, and allow for every location from
which a light signal might enter a forward position on our light cone 800,000 billion years
in the future, as the spatial extent we care about. Now let us consider every possible field
value for every possible hypothetical field quantum at each such location – the values per
location raised to the power of the number of locations. Those are the states. Now consider
their transitions one
after another, not as compressed by some definite law, but the pure power set, any state can
go to any other state, as a purely formal and one-off transition rule. Allow this transition
to be multivalued and indeterministic, such that the exact same prior state can go to
literally any distinct subsequent state; or, otherwise put, multiply by the number of time
instances at each location as though they are all independent. Any regularity actually seen
is a strict compression on this possibility space. Now add elaborate running commentary on
all events as they happen, in 3000 billion billion languages, surrounding the physical text.
All independent and voluminous, billions of times larger than all human thought to date, about
each infinitesimal instance. Don’t worry about where or how this last is instantiated; let
it float above reality in a platonic mathematical realm.
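
Just to put a number on the scale of that mesh, here is the logarithm bookkeeping, with my own
round stand-in figures – the only point being that nothing here ever leaves the finite integers:

    cellSize = 10.^-135;            (* the planck length, shrunk a further 100 orders of magnitude, in metres *)
    extent = 10.^31;                (* metres out to the far-future light cone described above *)
    dims = 10^4;
    dims*Log10[extent/cellSize]     (* about 1.7*10^6 – the count of spatial cells alone has some 1.7 million digits *)

Stacking the field values, the time slices, the arbitrary transitions, and the running commentary
on top exponentiates that a few more times over – still a finite integer, just one that no realized
computation will ever enumerate.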

I am still measure zero in the integers. I can fit pure occasionalism in that mesh. I can
fit any degree of apparent indeterminism you can imagine. I can fit all possible physical
theories, true, approximate, or completely false. But it is all discrete and computable and
moreover, finite. All the realized computations of all the physically realized intelligences
in the history of the slice of the universe observable by us and all of our descendants or
successors for hundreds of thousands of billions of years, along with all their aides or
computational devices, cannot begin to span that possibility space – but it is strictly
finite. So, what is it I am supposed to detect operationally, that I can’t fit into a theory
within that mesh, or above it? Notice, I didn’t even posit determinism let alone locality or
the truth of any given theory. It is enough if I can characterize a state by millions of values
at each of an astronomical number of locations. If supposedly I can’t, then no operational
theory is possible, period. If any operational theory is possible, it will be strictly less
fine or exhaustive than the thought-experiment mesh given, and strictly more determined or
restrictive, as to transitions that actually occur according to that theory. I further note
that the mesh given is already completely intractable computationally, not because it is
formally noncomputable or has halting problems, let alone because of higher order infinities.
No, it is intractable already, for all finite intelligences and anything they will ever know,
without a single countable infinity. Naturally this does not preclude the possibility of
tractable, even more finite models. But it does show that intractability arises for reasons
of pure scale, within finitude.

Finite simply doesn’t mean small, nor simple.

Against skepticism, Epistemology

Against simulation

I want to trash the idea that we are living in a computer simulation. I will specifically examine Bostrom’s argument that either advanced civilizations don’t run simulations, or most civilizations go extinct, or we are living in a simulation. I will show that anyone who believes his argument is forced to believe in the flying spaghetti monster as well, and in any other item of superstitious nonsense that anyone wishes to impose upon the credulous. He reprises Pascal’s wager, misuses the notion of a Bayesian prior, and falls into cardinality pitfalls as old as Zeno. In passing I will slander Hanson’s more limited claim that at least it is not impossible that we are living in a simulation, explain a few philosophy background items for the Matrix, and defend instead a robust form of the Kantian transcendental deduction – we are living in the universe, which is actual and not an illusion; even illusions live in the actual universe.

First, the form of Bostrom’s argument. He claims that one of the following is true, though he does not decide which –

(B1) the human species is very likely to go extinct before reaching a “posthuman” stage;

(B2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history;

(B3) we are almost certainly living in a computer simulation.

The form he desires for the conclusion of this supposedly necessary tripartition is, “It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.”

Notice, on the surface he does not claim to argue independently that any of the three is more plausible than the others, let alone that all three hold. But in fact, he really comes down for B1 or B3, offering the alternatives “we are in a sim or we are all going to die”, and his money is on the “sim” answer.

The idea is that either civilizations generally die out, or they are uninterested in simulation, or we are in one already. The first, B1, is meant to be the catastrophe-mongers’ “out” and a possible resolution of the Fermi paradox (why aren’t the super-smart aliens already here? — they are all dead; they routinely destroy themselves as soon as they have the tech to do so). While he is personally quite given to apocalyptic fantasies of this sort, the “very likely” in the phrasing is a give-away that he regards this as less plausible than B3. Some, sure; nearly all, not so much.

The second he actually regards as extremely implausible. On an ordinary understanding of human interests, if we can we probably would, and B2 requires the “extremely” modifying “unlikely”, since even a tiny chance that a society with the means would also have the desire is sufficient to generate astronomical numbers of sims. B3 is then meant to be plausible given a denial of B1. Why? He counts imaginary instances and decides reality is whatever he can imagine more instances of.

The primary fallacy lies there, and it can be seen by simply imagining a wilder premise and then willfully alleging a bit more cardinality. The flying spaghetti monster universes are much more numerous than the simulations, because the FSM makes a real number’s worth of entirely similar universes for each infinitesimal droplet of sauce globbed onto each one of his continuous tentacles. These differ only in what the FSM thinks about each one of them, and they are arranged without partial order of any kind. The sims, on the other hand, are formally computable and therefore denumerable, and therefore a set of measure zero in the infinite sea of FSM universes. My infinity is bigger than your infinity. Na nah nana na.

This shows the absurdity of counting imaginaries without prior qualification as to how plausible they are individually. Worse, it shows that we have here just another species of Pascal’s wager, in which a highly implausible premise is said to be supported by a supposedly stupendous but entirely alleged consequence, which is supposed to multiply away any initial implausibility and result in a practical certainty of… of whatever. The argument in no way depends on the content of the allegation – simulation, deity-created universe, dreams of Vishnu, droplets of FSM sauce. It is merely a free-standing machine for imposing stupendous fantasies on the credulous, entirely agnostic as to the content to be imposed.

As for B1, well, we are all going to die. Sorry, welcome to finitude. That doesn’t mean the civilization will, but we aren’t the civilization; we are mortal flies of summer. Theologically speaking it may be a more interesting question whether we stay dead, or whether the Xerox corporation has a miraculous future ahead of it, posthuman or not, simulated or life-sized. But one can be completely agnostic about that eventual possibility without seeing any evidence for simulation-hood in the actuality around us.

Hanson, a much lesser light, breathlessly insists instead that at least the simulation hypothesis is not impossible. This truly isn’t saying very much. According to the medievals, the position one ends up in by denying only the impossible while regarding everything else as equally possible is called Occasionalism. Anything possible can happen, and it is entirely up to God, the occasionalists claimed. They denied any natural necessity as a limitation on God’s transcendent freedom. Logic and math could be true in all possible worlds, but everything else could change the next instant as the programmer-simulator changed his own rules. This anticipated Hume’s skeptical denial of causality by at least five centuries. The modern forms add nothing to the thesis; I’ll take my al-Ghazali straight, thank you very much.

The immediate philosophical background of these ideas, and of their recent popularization in the Matrix, is the brain-in-a-vat chestnut of sophomore skepticism, and its more illustrious predecessor, Descartes’ consideration in the Meditations of whether there is anything he can be certain about. Descartes posits an evil genius deliberately intent on deceiving him about everything, and decides that his understanding of his own existence would survive the procedure. This evil genius has metastasized in the modern forms. As a hypothesis it actually has gnostic roots (caught between a world they despised and a supposedly just omnipotence, the gnostics required an unjust near-omnipotence in the way, to square their circle), though come to that, an occasionalist God would fit Descartes’ idea to a tee. Descartes himself, on the other hand, is quite sure that no intentional deceiver of that sort would deserve the more honorable title.

Why is the evil genius evil? Why is the brain-in-a-vat manager managing brains in vats? Why are the alien simulators studiously absent from their simulations? And why do their make-believe realities feature so little of the harps and music and peaceful contemplation with which imaginations of paradise abound, and so much of the decrepitude and banal horror of actual history? No, they don’t need batteries – no civilization capable of such things would have materially exploitative motives. One might allow for a set of small measure run to discover things; beyond that, the sim managers presumably actually have preferences and are fully capable of realizing them. Naturally you should therefore pray to them for a benign attitude toward your own trials and tribulations. The flying spaghetti monster appreciates a fine sauce.

The original Matrix at least kept up the dualism between a technological mask over a theological world and a theological mask over a technological world. The sequels couldn’t maintain it and collapsed into their own action movie absurdities, but the ability in principle to maintain the dualism is a better guide to the argument’s actual tendency. Which is superstition for technopagans.

Against those signs of contemporary intellectual flaccidity, Kant argued on essentially Cartesian grounds that we could arrive at one minimal piece of information, at least, about the actual world beyond the interior of our experience – that it exists and is real and structures the possibilities of experience. We need not go so far as to accept his characterization of space and time as necessary forms of intuition to see the soundness of this point. Even illusions happen in some real world, and “universe” refers to that ultimate true ground, and not to any given intervening layer of fluff.

Every robust intelligence starts from the actuality of the world, from feet firmly planted on the ground, not from airy fantasies between one or another set of ears. The impulse to look everywhere for fantastic possibility instead is a desire for fiction. If you don’t like the truth, make something up. It is a power-fantasy, but one that shows remarkably little sensitivity to the question of what any intelligent being would want to do with power of that sort. The morals of such pictures are a farce, from the gnostics on. But it is enough to notice that the entire subject is a proper subset of historical theology.