
Is your methodology an adaptive preference?


If a person knows he is being denied an opportunity... he can never be quite certain whether his lack of desire for it is shaped by the fact that it is unavailable to him (“sour grapes”). That gnawing uncertainty counts as a harm.

– Jon Elster



ONE hot summer’s day a Fox was strolling through an orchard till he came to a bunch of Grapes just ripening on a vine which had been trained over a lofty branch. “Just the things to quench my thirst,” quoth he. Drawing back a few paces, he took a run and a jump, and just missed the bunch.

Turning round again with a One, Two, Three, he jumped up, but with no greater success. Again and again he tried after the tempting morsel, but at last had to give it up, and walked away with his nose in the air, saying: “I am sure they are sour.”

– Aesop



A research programme that would be very illuminating and thus very unpopular: How much does someone’s methodology have to do with their rationalising their own particular abilities? Does not having the skill to conduct either quantitative or qualitative research correlate with denying its value? Given that people very often adjust their desires to their opportunities, and given that methodology should ride on higher things, I propose a trio of studies to check the academic community’s hygiene:

Sour grapes: Disparaging or emphasising the limits of quantitative reason because you yourself are bad at maths.

Mind-blindness*: Disparaging or emphasising the limits of qualitative reason because you yourself are bad at criticism or phenomenology.

Scoundrel bastions: What fields do people with neither competence flock to?

One could use the SAT or GRE to obtain a proxy for verbal and mathematical reasoning ability; people would object to this, 1) rightly, because timed tests are a super-artificial measure of research ability – a high score can prove ability, but a low one can’t really disprove real-life ability – and 2) wrongly, because it threatens their status.

Null hypothesis: no groups have differing aptitudes, and therefore all groups are equally trustworthy.

By combining the first two studies within-subjects, we could derive a general factor of adaptive methodological rationalisation**: how much a given person is swayed by their own lack of skill. This could be a proxy for how rationally they conduct themselves in general.
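
For concreteness, here’s a throwaway sketch of what that analysis could look like – every column name, scale, and data point below is invented, and the ‘general factor’ is stood in for by a crude per-person score rather than anything from a proper factor model:

```python
# Hypothetical sketch only: deriving a "sour grapesiness" score.
# All column names, scales, and data are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200

# Fake sample: ability proxies (standardised GRE-style scores) and
# 1-7 ratings of how much each person disparages each methodology.
df = pd.DataFrame({
    "quant_ability": rng.normal(size=n),
    "verbal_ability": rng.normal(size=n),
    "disparages_quant": rng.integers(1, 8, size=n),
    "disparages_qual": rng.integers(1, 8, size=n),
})

# Study 1 (sour grapes): does low quant ability go with disparaging quantitative reason?
sour_grapes_r = df["quant_ability"].corr(df["disparages_quant"])

# Study 2 (mind-blindness): does low verbal ability go with disparaging qualitative reason?
mind_blindness_r = df["verbal_ability"].corr(df["disparages_qual"])

# Within-subjects rationalisation signal: how much more each person disparages
# the methodology they are worse at, relative to the one they are better at.
df["rationalisation"] = np.where(
    df["quant_ability"] < df["verbal_ability"],
    df["disparages_quant"] - df["disparages_qual"],
    df["disparages_qual"] - df["disparages_quant"],
)

print(f"sour grapes r = {sour_grapes_r:.2f}")
print(f"mind-blindness r = {mind_blindness_r:.2f}")
print(df["rationalisation"].describe())
```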

If this predatory estimation spread and were made public, one could use the factor to calibrate one’s trust in a given methodologist’s opinions. This is not new: I respect Putnam’s and Rorty’s criticisms of positivist reason because I know they are profoundly skilled in logic; I trust Deirdre McCloskey much more in her postmodern/Eisenhowerian stance because she was both a quantitative historian and a socialist in her youth.


* They’re both kinds of sour grapes, but it’s worth distinguishing them early, for clarity. The inverted forms – seeing what you’re good at as a superior insight into the world (“sweet lemons” and “the great projector”) – are just as important, but hopefully get captured in the first correlation.

** Or so we’d call it in papers: to each other we could leave it as ‘sour grapesiness’.



*********************************************************************************


Good methodology can substitute for brilliance: if you follow the scientific method long enough, you will find stuff out, almost regardless of your acuity or creativity*. If you can understand an algorithm's steps, you can perform incredibly complex mathematics given only patience and a pen. (Or wings.) In programming, object-oriented languages enforce a simple stepped method that allows total numpties to make, well, most of the internet.
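
A toy illustration of the first claim, swapping the pen for Python – my own example, nothing canonical: an integer Newton iteration grinds out the square root of 2 to fifty decimal places with no cleverness required beyond repeating one step.

```python
# Illustrative only: a dumb, mechanical procedure doing "hard" mathematics.
# Integer Newton iteration for the square root, to any number of digits --
# nothing here needs insight, just the patience to repeat a step.

def isqrt_newton(n: int) -> int:
    """Integer square root by Newton's method."""
    x = n
    y = (x + 1) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x

digits = 50
# sqrt(2) to 50 decimal places: integer square root of 2 * 10^(2*digits).
root = isqrt_newton(2 * 10 ** (2 * digits))
print(f"sqrt(2) ~ {root / 10**digits}")      # float print loses precision
print("sqrt(2) ~ 1." + str(root)[1:])        # digit string keeps all 50 places
```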

Relatedly: to have the studies produce results of lasting worth – rather than results for wreaking retribution on idle methodologists – we'd want to track the things that practitioners did. (Though is there any such thing as a practitioner, in philosophy?)


* e.g. an unfortunate demonstration of this: Thomas Midgley and tetraethyl lead: "At war’s end he resumed his search for a gasoline additive, systematically working his way through promising elements in the periodic table, and in 1921 he and his team found that minute amounts of tetraethyl lead completely eliminated engine knock."

Four years of dumb permutation!

*********************************************************************************


Hold on! To make sense of the world, we have math. Who needs algorithms? It is beyond dispute that the dizzying success of 20th century science is, to a large degree, the triumph of mathematics. A page's worth of math formulas is enough to explain most of the physical phenomena around us: why things fly, fall, float, gravitate, radiate, blow up, etc. As Albert Einstein said, “The most incomprehensible thing about the universe is that it is comprehensible.” Granted, Einstein's assurance that something is comprehensible might not necessarily reassure everyone, but all would agree that the universe speaks in one tongue and one tongue only: mathematics.

But does it, really? This consensus is being challenged today. As young minds turn to the sciences of the new century with stars in their eyes, they're finding old math wanting. Biologists have by now a pretty good idea of what a cell looks like, but they've had trouble figuring out the magical equations that will explain what it does. How the brain works is a mystery (or sometimes, as in the case of our 43rd president, an overstatement) whose long, dark veil mathematics has failed to lift.

Economists are a refreshingly humble lot—quite a surprise really, considering how little they have to be humble about. Their unfailing predictions are rooted in the holy verities of higher math. True to form, they'll sheepishly admit that this sacred bond comes with the requisite assumption that economic agents, also known as humans, are benighted, robotic dodos—something which unfortunately is not always true, even among economists.

A consensus is emerging that, this time around, throwing more differential equations at the problems won't cut it. Mathematics shines in domains replete with symmetry, regularity, periodicity—things often missing in the life and social sciences. Contrast a crystal structure (grist for algebra's mill) with the World Wide Web (cannon fodder for algorithms). No math formula will ever model whole biological organisms, economies, ecologies, or large, live networks. Will the Algorithm come to the rescue? This is the next great hope. The algorithmic lens on science is full of promise—and pitfalls.

First, the promise. If you squint hard enough, a network of autonomous agents interacting together will begin to look like a giant distributed algorithm in action. Proteins respond to local stimuli to keep your heart pumping, your lungs breathing, and your eyes glued to this essay—how more algorithmic can anything get? The concomitance of local actions and reactions yielding large-scale effects is a characteristic trait of an algorithm. It would be naive to expect mere formulas like those governing the cycles of the moon to explain the cycles of the cell or of the stock market.
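
To make the “giant distributed algorithm” picture concrete, a throwaway sketch (mine, not the essay’s): each agent follows one purely local rule, and a global regularity – consensus – falls out with no coordinator anywhere.

```python
# Minimal toy: local rules producing a global effect. Agents on a ring
# repeatedly average their opinion with their two neighbours; the whole
# population drifts to consensus without any global coordination.
import random

n_agents, n_rounds = 20, 500
opinions = [random.random() for _ in range(n_agents)]

for _ in range(n_rounds):
    new = []
    for i in range(n_agents):
        left, right = opinions[i - 1], opinions[(i + 1) % n_agents]
        new.append((left + opinions[i] + right) / 3)   # purely local update
    opinions = new

spread = max(opinions) - min(opinions)
print(f"spread after {n_rounds} local rounds: {spread:.8f}")  # shrinks toward 0
```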

Contrarians will voice the objection that an algorithm is just a math formula in disguise, so what's the big hoopla about? The answer is: yes, so what? The issue here is not logical equivalence but expressibility. Technically, number theory is just a branch of set theory, but no one thinks like that because it's not helpful. Similarly, the algorithmic paradigm is not about what but how to think. The issue of expressiveness is subtle but crucial: it leads to the key notion of abstraction and is worth a few words here (and a few books elsewhere).



*********************************************************************************

More specifically:

My saying 'methodology' in the above makes the point seem irrelevant to anyone but academics or devoted autodidacts. (The word only really denotes the formal and contrived ways that we act when we know we'll have to face scrutiny.) But the implications go way beyond those islands in the sun of peer-review. (To the grody places in which most thought lives.)

  • Computer science: methodology quantitative of necessity.
  • Philosophy: methodology largely qualitative. Everyone's a methodologist – though with a distinct subculture of utter quants (meta-quants).
  • Economics: methodology overwhelmingly aping that of real quantitative fields.



* Using ‘quantitative’ to handle formal logic here, even though most such logic doesn’t ever require the concept number.

