Or, a requirement of any theory of knowledge
Independent of my interest in NKS, I have philosophic objections to skepticism. I tell students that I break most of my lances against it; it is the particular philosophic position I wind up railing against. At bottom I consider skepticism a debater’s position, a dodge and evasion, and morally speaking a piece of arrogance masked as “humility for export”. (Being humble oneself may be a virtue; telling others to be humble can itself be far from humble.) If the classic Socratic thesis of skepticism is that the only thing we know is that we know nothing, I claim we know no such thing.
NKS is useful for me in such discussions first because it finds a variety and complexity usually associated with the empirical, within purely formal systems. Pure NKS stands on the same ground as mathematical conjecture or long-unsolved mathematical puzzles – we get pieces of pure logic that refuse to become simple “just because” they are “only” logic. Easy dichotomies that trace all epistemic difficulties to getting our formal ideas to correspond to “external” or empirical matters don’t really have a place to put such things.
This shows that the habit of equating the logical or formal with the simple is a selection bias and little else. Our past formal ideas have deliberately stayed simple enough that we could noodle them out in (typically) four or five steps. In computational matters we counted “one, two, three, many”. The reality is that any sufficiently elaborate piece of pure logic (even a strictly finite, always terminating block of it) isn’t simple at all. And a conclusion being “already entailed” by the premises won’t make it simple. This means deductive work meaningfully adds information. Hintikka is one contemporary philosopher who has noticed and tried to explore that intuitive fact, incidentally.
But against extreme skepticism about such things, they can have answers and we can find them. Not all large, involved pieces of logic are created equal. Those that inherently involve many possible cancellations or useful substitutions can be reduced by clever reasoning, or sometimes cracked completely with a piece of math. (For example, there is a simple formula operating on just the initial condition that will tell you the value of a cell later on in a rule 90 pattern, without requiring computation of all the intervening steps). Others inherently resist such methods and remain involved; only a huge amount of direct computational work can work them out. This is a phenomenal given of NKS, and before it of some sorts of math (e.g. number theory).
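The rule 90 shortcut mentioned above can be made concrete. A minimal sketch (the function names and framing are mine, not from the text): rule 90’s update is a XOR of the two neighbors, which is linear over GF(2), so the value of a cell t steps later is a XOR of initial cells weighted by binomial coefficients mod 2, computable via Lucas’ theorem without running any of the intervening steps.

```python
def rule90_step(cells):
    """One brute-force step; cells is a dict of position -> 0/1."""
    positions = set()
    for p in cells:
        positions.update((p - 1, p, p + 1))
    return {p: cells.get(p - 1, 0) ^ cells.get(p + 1, 0) for p in positions}

def rule90_direct(init, t):
    """Every cell value after t steps, by direct simulation."""
    cells = dict(init)
    for _ in range(t):
        cells = rule90_step(cells)
    return cells

def rule90_formula(init, i, t):
    """Cell i at step t straight from the initial condition.
    By Lucas' theorem, C(t, k) is odd iff the set bits of k are a
    subset of those of t, so only those k contribute to the XOR."""
    v = 0
    for k in range(t + 1):
        if k & ~t == 0:               # C(t, k) mod 2 == 1
            v ^= init.get(i - t + 2 * k, 0)
    return v

init = {0: 1}                         # single black cell
sim = rule90_direct(init, 16)
assert all(rule90_formula(init, i, 16) == sim.get(i, 0)
           for i in range(-20, 21))
```

The formula touches only the initial row, while the simulation must build all sixteen intermediate rows – exactly the kind of reducibility the hard cases lack.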
The philosophic issue is how well various stances about the problem of knowledge handle this phenomenal given.
Remarkably poorly, I claim. Empirical epistemology lumps everything formal into a “shouldn’t be any problem” bin, where the hard cases don’t fit. More extreme forms of skepticism lump all the formal cases into their only bin, “can’t be known”, and thus effectively predict that the simple and reducible ones won’t be readily solvable, when they obviously are. Popperian fallibilism wants there to be new information only where something can eventually prove to be wrong (traced say to a possible-worlds variability), but once actually solved any such formal puzzle is as certain as the simplest syllogism and true everywhere and always. Logical positivism largely continues to think of anything reduced to logic as solved, and glosses over everything interesting with an “in principle”. (I’ve read a book on the logic of game theory that explains in the first chapter that Go is finite, so backward induction could be used and it is therefore solved “in principle”).
But the patent fact is that some pieces of even pure math or logic can be known completely, others yield only to great computational effort, and still others won’t be solved in the lifetime of the universe. The problem of knowledge in its full variety is already present inside just one of the usual stances’ dichotomy boxes, and that the most formal.
Now, you don’t solve a problem by evading it or denying its existence. You can’t explain how something like knowledge arises or works or can happen in the first place, by pretending it doesn’t. We hit kilometer-wide windows at Mars after years in space. If a skeptic doesn’t want to call that knowledge I can play term games and call it thizzle, but thizzle still exists and needs to be explained. I’ll still be able to have thizzle about the center column of a simple CA like rule 250 (which makes a checkerboard pattern) but not about the center column of a CA out of the same rule-space, just as formal and involving just as many elements, but inherently generating complexity, like rule 30. A skeptic won’t take even very low odds on a bet against my thizzle about the former, but anyone will take them about the latter, on terms indistinguishable from guessing about the purest randomness. The difference is real and operational, and whatever doubt-brackets anyone else puts around their epistemic claims, they see the same thizzle-relations as me or anyone else who understands the two systems.
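The contrast between the two rules is easy to exhibit. A minimal sketch, using my own helper (not from the text): both rules live in the same 256-rule elementary CA space, and the only difference is the rule number, yet rule 250’s center column is trivially periodic while rule 30’s has no known shortcut.

```python
def center_column(rule, steps):
    """Center-cell values of an elementary CA started from a single 1.
    The rule number's bit at index 4*left + 2*center + right gives the
    new cell value (standard Wolfram rule numbering)."""
    width = 2 * steps + 1
    row = [0] * width
    row[steps] = 1                    # single black cell in the middle
    column = [row[steps]]
    for _ in range(steps):
        row = [(rule >> (4 * row[i - 1] + 2 * row[i]
                         + row[(i + 1) % width])) & 1
               for i in range(width)]
        column.append(row[steps])
    return column

c250 = center_column(250, 32)         # alternates 1,0,1,0,... forever
c30 = center_column(30, 32)           # no periodicity or known closed form
print(c250)
print(c30)
```

Given rule 250’s column, anyone can state cell one million without computing anything; given rule 30’s, nothing short of running the million steps is known to work.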
The value of NKS on this subject is to focus the point of disagreement, and to clear away scores of side issues. Sensitivity analysis on the problem of knowledge rejects numerous popular theses about it. On the principle that an effect should cease when its cause ceases, we can reject various claims about how the problem of knowable vs. unknowable arises, because they predict a uniformity of knowability-status within a formal domain, where we can see there is no such uniformity. Any adequate theory of knowledge must conform to the operationally real distinctions among “readily known, reducible”, “knowable only with significant computational effort”, and “intractable”, already present in purely formal problems. If its categories aren’t fine enough to do so, then it is simply wrong.