Technical books often use seemingly nontechnical, apparently normative terms: you're marching through your dense and spidery notation, and suddenly you tread in a gob of ordinary language. Some of the most important concepts in the formal sciences are of this sort, in fact:

*well-behaved*. "not weird; having all the properties suitable for the present study; not in violation of any of the assumptions we just made". One of the big offenders, used everywhere and never truly defined, only defined by context. Usually means "well-behaved compared to some unrestricted superset we don't want to handle right now".

*well-defined*. "unambiguous; blessed with just one interpretation". One of the core differences between the formal sciences and other enquiry. Terminology in other fields is nowhere near as clear as this (not even in works which *seem* highly formalised, like Spinoza's *Ethics*, Wittgenstein's *Tractatus*, or half of Spencer-Brown's *Laws of Form*\*).
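A minimal sketch of what *failing* to be well-defined looks like (in Python, with names of my own choosing): a rule whose output depends on how you write the input down, rather than on the input itself.

```python
from fractions import Fraction

# An ill-defined "rule" on the rationals: given a/b, return a + b.
# The answer depends on the spelling of the number, not the number.
def f(numerator, denominator):
    return numerator + denominator

assert Fraction(1, 2) == Fraction(2, 4)  # one rational, two spellings

print(f(1, 2))  # 3
print(f(2, 4))  # 6 -- two outputs for one input: f is not well-defined
```

A well-defined rule on the rationals must give the same answer for every representation of the same number.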

Why is well-definedness helpful? One reason is that it leaves no researcher degrees of freedom, which is to say that the results of well-defined enquiry are fully reproducible and definitely not bullshit (conditional on no errors getting through).

The temptation is to call work in those fields *not even wrong*: for, if your rules are ambiguous, they cannot specify mathematical functions, and so can never use the awesome machinery of truth known as Analysis and Computation. (They can, however, provide never-ending controversies - ink for the ink mill, authors for the author mill.)

(see also *well-formed* in logic, meaning "syntax-compliant"; and *well-specified* in American maths and theoretical computer science: "sufficiently precise to be implemented in a general programming language").

*embarrassingly*. Roughly: "surprisingly easily". Writing distributed code is a neat and torturous art, often involving heavy functional analysis. But some operations - like counting elements, or matrix multiplication - are completely trivial to break into unordered subtasks, thus embarrassing the compsci PhD who is tasked with it. Very close to "distributive".

*almost always*. "with probability 1". Now, this *looks* like probabilists suddenly turning all hand-wavy and saying "IT'S BASICALLY DEFINITE, shut up shut up shut up". But it has a precise use on infinite sample spaces, where an event can have probability exactly 0 and yet still be possible - so its complement holds "almost always" rather than "always". (see also *almost all* - "every member except for some finite set of members" - and *almost everywhere*)

*eventually*. "after some finite time or number of iterations; for all sufficiently large values". (Yes: in between sheaves of equations you will see people saying "almost surely eventually correct".)

*with high probability*. This one actually is "basically definite": the probability tends to 1 as the problem size grows.

*probably approximately correct*. In the evaluation of machine learning functions: "neither under- nor over-fitted, as right as can be, with high probability".

*arbitrarily*. "however large (or small, or close) you please". No matter how big the number you pick, the claim still holds.

*by abstract nonsense*. "using category-theoretic arguments which I take all of you to be familiar with".
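To make "embarrassingly" concrete, here is a sketch of the counting example (in Python; the chunking scheme, worker count, and names are my own choices): the input splits into independent subtasks, and the partial results can be combined in any order, with no coordination between workers.

```python
from concurrent.futures import ThreadPoolExecutor

def count_evens(chunk):
    """Each subtask needs no information from any other subtask."""
    return sum(1 for x in chunk if x % 2 == 0)

# Split the input into disjoint chunks; the split itself is arbitrary.
chunks = [range(i, 1_000_000, 4) for i in range(4)]

with ThreadPoolExecutor(max_workers=4) as pool:
    # The partial counts commute, so combining them in any order is fine.
    total = sum(pool.map(count_evens, chunks))

print(total)  # 500000
```

The same shape (split, map, reduce with a commutative operation) is the skeleton of most embarrassingly parallel work; a non-embarrassing problem is one where the subtasks need to talk to each other.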

* Half of philosophy is the attempt to make large, old, awful concepts well-defined in this high sense (as they put it: "to give necessary and sufficient truth-conditions for"). Now, it has been truly and sadly noted that mathematics is the subfield of philosophy that humans are good at - the only one in which we can successfully define our terms. But it's an unfair fight: mathematicians get to invent all the clean concepts they need, and ignore anything that doesn't fit; philosophers are duty-bound to encompass the incoherent, foolish, and artefactual nuances of legacy concepts. They are to be admired all the more for persisting in the face of total generational failure (and also teasing).
