Gilles Dowek (Ecole Polytechnique and INRIA, France) gave a nice talk trying to formalize the requirements for keeping real numbers real, in my terms. More specifically, he was interested in the restrictions required on functions or relations that take reals to reals, so that they remain formally computable — with various distinguished levels of the latter. He laid it all out in relation to the more familiar discrete valued cases and was quite clear.
First notice that when we are discussing functions, i.e. deterministic or single-valued relations, computability of maps from reals to reals comes down to continuity of the mapping: a computable real function is necessarily continuous. In that case we have the typical apparatus of analysis, with theorems telling us that we can approximate as closely as we need with rationals, that the approximations converge, and so on.
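Continuity is exactly what makes the rational-approximation story work in practice: a continuous map lets you turn good rational approximations of the input into good rational approximations of the output. Here is a minimal sketch in Python, under a toy encoding of a computable real as a fast-converging rational sequence; the encoding and the names are my illustration, not anything from the talk.

```python
from fractions import Fraction

# Toy encoding (my assumption, not Dowek's): a computable real is a
# function that, given n, returns a rational within 2**-n of the true
# value -- a fast-converging Cauchy sequence of rationals.

def sqrt2(n):
    """Rational within 2**-n of sqrt(2), found by bisection on [1, 2]."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid   # invariant: lo**2 <= 2
        else:
            hi = mid   # invariant: hi**2 >= 2
    return lo

def add(x, y):
    """Continuity in action: the output error is bounded by the sum of
    the input errors, so asking each input for one extra bit of
    precision yields a result within 2**-n overall."""
    return lambda n: x(n + 1) + y(n + 1)

two_sqrt2 = add(sqrt2, sqrt2)
print(float(two_sqrt2(20)))  # within 2**-20 of 2*sqrt(2) = 2.8284271...
```

The point of the sketch is that the lifted map never needs the "whole" real: finitely much precision in suffices for any requested precision out, which is the operational content of continuity.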
So next he relaxes single-valuedness and asks, first, which relations of reals to reals are fully decidable. The answer is: not much, only relations that are either empty or full. A decidable relation would have to be a clopen subset of the plane, and since the plane is connected, the only clopen sets are the empty set and the whole space.
So relax the map category to semi-decidable, meaning that the relation either holds stably on some ball you can shrink around a source point or is undefined at that point. This gives a stable concept of semi-decidable but not a very useful one, because semi-decidable relations correspond to open sets, and even the identity relation fails the test (the diagonal is not an open subset of the plane) — which I see as a fine formulation of the objection computational "constructivists" have to reals in general.
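The trouble with identity is easy to see concretely: comparing two reals given as rational approximations can confirm in finite time that they differ, but can never positively confirm that they are equal. A toy sketch under the same kind of approximation-sequence encoding (my illustration, not Dowek's notation):

```python
from fractions import Fraction

def apart(x, y, max_n=64):
    """Search for a finite witness that x != y, where x(n) and y(n) are
    rationals within 2**-n of the true values. If the reals are equal,
    no witness exists and the search would run forever, so we cut it
    off at max_n and report 'undetermined'."""
    for n in range(max_n):
        if abs(x(n) - y(n)) > Fraction(2, 2 ** n):
            return True  # the approximation intervals have separated
    return None  # undetermined: equal, or just not separated yet

third = lambda n: Fraction(1, 3)                       # exactly 1/3
nearby = lambda n: Fraction(1, 3) + Fraction(1, 1024)  # 1/3 + 2**-10

print(apart(third, nearby))  # True: inequality is witnessed finitely
print(apart(third, third))   # None: equality is never confirmed
```

Inequality is the semi-decidable half; equality only ever produces the non-answer, which is the diagonal's failure to be open, seen operationally.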
So he wants a looser concept, analogous to being effectively enumerable for discrete sets. That is equivalent to semi-decidable in the discrete case but not with reals, where it is strictly weaker. Effectively enumerable in the discrete case means you start from the image set, and since it is countable (as is the domain, of course), you can just impose some enumeration scheme on the image, and equate any points in the domain that map to that image-point, identifying them by their image-index, in effect.
To extend this notion to the reals, what is required is that up to some intervening index (ranging, perhaps, over a countable set), the mapping be continuous. You can regard the extra index as a random variable or some other sort of unknown, but once it is fixed the map must become continuous. All deviations from deterministic continuity are ascribed to the extra variable. This then allows a stable identity relation – it is simply a projection operator that ignores the extra hidden index.
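A cartoon of the idea (my illustration; the hidden-index coding is an assumption, not Dowek's formal apparatus): take the non-deterministic relation y ∈ {x, x + 1}. It is no function, but for each fixed value of a hidden index z it projects to an ordinary continuous map, and all the non-determinism is pushed into z.

```python
import random

def branch(x, z):
    """For each fixed hidden index z in {0, 1}, this is a continuous
    (indeed affine) function of x; the relation is the union of the
    graphs of the two branches."""
    return x if z == 0 else x + 1.0

def sample_relation(x):
    """Model the relation operationally: draw the hidden index at
    random, then apply the now-deterministic continuous branch."""
    z = random.randrange(2)
    return branch(x, z)

y = sample_relation(0.5)
assert y in (0.5, 1.5)  # every observation is explained by some fixed z
```

The identity relation fits the same mold trivially: it is the single branch z = 0 alone, i.e. a projection that ignores the hidden index entirely.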
What I liked about this formalism is that it fits the use we actually make of non-determinism in real-valued models (or at least machine-precision-valued ones, not to beg too many questions). We interpose some probability measure, or we generate random samples and branch the relation's behavior on them. I wasn't so keen on the name he wanted to give it, however: since both domain and range are non-denumerable, "effectively enumerable" seemed a stretch to describe the real-number analogue. He means it to apply to the relation, of course, but it is still too confusing a name. Constructible or model-able are perhaps the right idea, but no better as terms. Call it EE for now.
Then his programmatic statement is that when using non-deterministic relations of reals we should restrict ourselves to maps that are EE, that is, such that there exists some random variable which, once fixed, makes the outcome fully computable, because the projection of the relation onto the plane of that specific value of the random variable is fully continuous. He notes that if the relation is a true function, this collapses to the usual definition, leaving all deterministic-model experiments or tests unaffected, etc.
It was a point-set topology sort of talk, and quite clear if you have a background in that sort of analysis. I found it useful for getting more precise about an intuition I've had for some time, and something stressed by Gregory Chaitin, that real numbers (without any continuity requirements) can have pathological properties in an information-theoretic context. There are any number of constructions in which one encodes entire possible universes in single real numbers; later in the conference Chaitin pointed out one due to Borel that lay behind his own Omega. Well, when we tried to say what we meant by a real number we were thinking of the limit of a rational sequence, not entire histories of possible universes. The definition evidently misses, if it means the latter when it is trying to mean only the former. Dowek's definitional work helps us pin down where we can safely use reals without a qualm, and where they imply a formal non-computability too extreme for rationalism.
I should also say that I spoke with Dowek repeatedly throughout the conference and found him a talented and interesting guy, with a good take on everything going on. He asked the last question of the conference, left unanswered by the panel for want of time. (There was a discussion panel of luminaries on the last day; I'll cover it later.) He wanted to know each panelist's take on whether there could be experimental tests for the computability of the universe, and for its discreteness as well. (With some prefatory comments wondering whether everything looks like a nail now that we know what computation is and consider it our finest new hammer – a sensible question.)