08/05/2015

Is your methodology an adaptive preference?


If a person knows he is being denied an opportunity... he can never be quite certain whether his lack of desire for it is shaped by the fact that it is unavailable to him (“sour grapes”). That gnawing uncertainty counts as a harm.

– Jon Elster



ONE hot summer’s day a Fox was strolling through an orchard till he came to a bunch of Grapes just ripening on a vine which had been trained over a lofty branch. “Just the things to quench my thirst,” quoth he. Drawing back a few paces, he took a run and a jump, and just missed the bunch.

Turning round again with a One, Two, Three, he jumped up, but with no greater success. Again and again he tried after the tempting morsel, but at last had to give it up, and walked away with his nose in the air, saying: “I am sure they are sour.”

– Aesop



A research programme that would be very illuminating and thus very unpopular: How much does someone’s methodology have to do with their rationalising their own particular abilities? Does not having the skill to conduct either quantitative or qualitative research correlate with denying its value? Given that people very often adjust their desires to their opportunities, and given that methodology should ride on higher things, I propose a trio of studies to check the academic community’s hygiene:

Sour grapes: Disparaging or emphasising the limits of quantitative reason because you yourself are bad at maths.

Mind-blindness*: Disparaging or emphasising the limits of qualitative reason because you yourself are bad at criticism or phenomenology.

Scoundrel bastions: What fields do people with neither competence flock to?

One could use the SAT or GRE as a proxy for verbal and mathematical reasoning ability. People would object to this: 1) rightly, because timed tests are a highly artificial measure of research ability – they can demonstrate ability, but they can't really disprove real-life ability – and 2) wrongly, because it threatens their status.

Null hypothesis: no groups have differing aptitudes, and therefore all groups are equally trustworthy.

By combining the two studies within-subjects, we could derive a general factor of adaptive methodological rationalisation**: how much a given person is swayed by their own lack of skill. This could be a proxy for how rationally they conduct themselves in general.
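For concreteness, a minimal sketch of the combined analysis in Python, on fabricated data – the variable names, the Likert-style 'disparagement' scores, and the crude composite are all my own assumptions, not a worked-out psychometric model:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 500

    # Fabricated illustration: disparagement of a method rises as the matching skill falls.
    quant_skill = rng.normal(0, 1, n)   # stand-in for a GRE-quant z-score
    verbal_skill = rng.normal(0, 1, n)  # stand-in for a GRE-verbal z-score
    df = pd.DataFrame({
        "quant_skill": quant_skill,
        "verbal_skill": verbal_skill,
        "disparages_quant": -0.4 * quant_skill + rng.normal(0, 1, n),   # sour grapes
        "disparages_qual": -0.4 * verbal_skill + rng.normal(0, 1, n),   # mind-blindness
    })

    # Each study is just a skill-disparagement correlation (predicted negative):
    sour_grapes = df["quant_skill"].corr(df["disparages_quant"])
    mind_blindness = df["verbal_skill"].corr(df["disparages_qual"])

    # A crude within-subjects composite: how much each person disparages
    # whatever they happen to be bad at, averaged over the two domains.
    df["sour_grapesiness"] = (
        df["disparages_quant"] * -df["quant_skill"]
        + df["disparages_qual"] * -df["verbal_skill"]
    ) / 2

    print(f"sour grapes r = {sour_grapes:.2f}, mind-blindness r = {mind_blindness:.2f}")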

If this predatory estimation spread and were made public, one could use the correlation factor to calibrate one's credence in a given methodologist's opinions. This is not new: I respect Putnam's and Rorty's criticisms of positivist reason because I know they are profoundly skilled in logic; I trust Deirdre McCloskey much more in her postmodern/Episcopalian stance because she was both a quantitative historian and a socialist in her youth.


* They're both kinds of sour grapes, but it's worth distinguishing them early, for clarity. The inverted forms – seeing what you're good at as a superior insight into the world ('sweet lemons' and 'the great projector') – are just as important, but hopefully get captured in the first correlation.

** Or so we’d call it in papers: to each other we could leave it as ‘sour grapesiness’.



*********************************************************************************


Good methodology can substitute for brilliance: if you follow the scientific method for long enough, you will find things out, almost regardless of your acuity or creativity*. If you can understand an algorithm's steps, you can perform incredibly complex mathematics given only patience and a pen. (Or wings.) In programming, object-oriented languages enforce a simple stepped method that allows total numpties to build, well, most of the internet.
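(A minimal illustration of the pen-and-patience point, in Python – my example, nothing canonical: Heron's method for square roots. Each step is trivial arithmetic, but iterated patiently it pins an irrational number down to any precision you like.)

    def heron_sqrt(x: float, tol: float = 1e-12) -> float:
        # Approximate sqrt(x) by repeatedly averaging a guess with x/guess.
        guess = x if x > 1 else 1.0
        while abs(guess * guess - x) > tol:
            guess = (guess + x / guess) / 2  # one division, one average per step
        return guess

    print(heron_sqrt(2))  # 1.4142135623730951 – no brilliance required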

Relatedly: to have the studies produce results of lasting worth – rather than results for wreaking retribution on idle methodologists – we'd want to track the things that practitioners did. (Though is there any such thing as a practitioner, in philosophy?)


* e.g. an unfortunate demonstration of this: Thomas Midgley and tetraethyl lead: "At war’s end he resumed his search for a gasoline additive, systematically working his way through promising elements in the periodic table, and in 1921 he and his team found that minute amounts of tetraethyl lead completely eliminated engine knock."

Four years of dumb permutation!

*********************************************************************************


Hold on! To make sense of the world, we have math. Who needs algorithms? It is beyond dispute that the dizzying success of 20th century science is, to a large degree, the triumph of mathematics. A page's worth of math formulas is enough to explain most of the physical phenomena around us: why things fly, fall, float, gravitate, radiate, blow up, etc. As Albert Einstein said, “The most incomprehensible thing about the universe is that it is comprehensible.” Granted, Einstein's assurance that something is comprehensible might not necessarily reassure everyone, but all would agree that the universe speaks in one tongue and one tongue only: mathematics.

But does it, really? This consensus is being challenged today. As young minds turn to the sciences of the new century with stars in their eyes, they're finding old math wanting. Biologists have by now a pretty good idea of what a cell looks like, but they've had trouble figuring out the magical equations that will explain what it does. How the brain works is a mystery (or sometimes, as in the case of our 43rd president, an overstatement) whose long, dark veil mathematics has failed to lift.

Economists are a refreshingly humble lot—quite a surprise really, considering how little they have to be humble about. Their unfailing predictions are rooted in the holy verities of higher math. True to form, they'll sheepishly admit that this sacred bond comes with the requisite assumption that economic agents, also known as humans, are benighted, robotic dodos—something which unfortunately is not always true, even among economists.

A consensus is emerging that, this time around, throwing more differential equations at the problems won't cut it. Mathematics shines in domains replete with symmetry, regularity, periodicity—things often missing in the life and social sciences. Contrast a crystal structure (grist for algebra's mill) with the World Wide Web (cannon fodder for algorithms). No math formula will ever model whole biological organisms, economies, ecologies, or large, live networks. Will the Algorithm come to the rescue? This is the next great hope. The algorithmic lens on science is full of promise—and pitfalls.

First, the promise. If you squint hard enough, a network of autonomous agents interacting together will begin to look like a giant distributed algorithm in action. Proteins respond to local stimuli to keep your heart pumping, your lungs breathing, and your eyes glued to this essay—how more algorithmic can anything get? The concomitance of local actions and reactions yielding large-scale effects is a characteristic trait of an algorithm. It would be naive to expect mere formulas like those governing the cycles of the moon to explain the cycles of the cell or of the stock market.

Contrarians will voice the objection that an algorithm is just a math formula in disguise, so what's the big hoopla about? The answer is: yes, so what? The issue here is not logical equivalence but expressibility. Technically, number theory is just a branch of set theory, but no one thinks like that because it's not helpful. Similarly, the algorithmic paradigm is not about what but how to think. The issue of expressiveness is subtle but crucial: it leads to the key notion of abstraction and is worth a few words here (and a few books elsewhere).

– Bernard Chazelle
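To make that local-to-global point concrete – my own toy sketch in Python, not Chazelle's – here is Wolfram's Rule 110 cellular automaton. Each cell consults only itself and its two neighbours, yet the line as a whole develops structure that no simple closed-form formula describes:

    RULE = 110  # the whole update table, packed into one byte

    def step(cells):
        # New state of each cell is a bit of RULE, indexed by the
        # three-cell neighbourhood (ends wrap around).
        n = len(cells)
        return [
            (RULE >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    cells = [0] * 60 + [1]  # one live cell on an otherwise dead line
    for _ in range(30):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)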



*********************************************************************************

More specifically:

My saying 'methodology' in the above makes the point seem irrelevant to anyone but academics or devoted autodidacts. (The word only really denotes the formal and contrived ways that we act when we know we'll have to face scrutiny.) But the implications go way beyond those islands in the sun of peer-review. (To the grody places in which most thought lives.)

  • Computer science: methodology quantitative* of necessity.
  • Philosophy: methodology largely qualitative. Everyone's a methodologist –
    though with a distinct subculture of utter quants (meta-quants).
  • Economics: methodology overwhelmingly aping that of real quantitative fields.



* Using ‘quantitative’ to cover formal logic here, even though most such logic never requires the concept of number.


01/05/2015

Among the worst papers I have ever read:

 

‘What Lies Beneath? The Role of Informal and Hidden Networks in the Management of Crises’ (2014), Financial Accountability & Management, Vol. 30, Issue 3, pp. 259–278



"It is easy to lie with statistics. It is hard to tell the truth without it."

– Andrejs Dunkels



A piece of organisation theory which is simultaneously vague, ugly, repetitive, and trivial. Welcome to the fourth-hand, corporatised end-point of Merton and Latour: the desert of the firm.

The paper's fatuousness can be found at many levels: from the overall repetition (the same badly-conceptualised ideas stated ten times), to non-sequitur passages (see the citations appendix below), to its sentences, most of which are formally crude and intellectually empty:

Prediction is based on both known and unknown factors, and thus the organisation’s ability to capture relevant information and make informed judgements on which to base their predictions, becomes essential.
or this:
There are issues around determining the legitimacy of knowledge and the social construction of risks.

The paper's stated aim is 'to set out the problem space for information capture and analysis within organisations' – so, merely to offer theory. They fail even in this modest goal. For apparently serious academics addressing serious risks in healthcare, it does not suffice to blankly state possibilities in this banal manner, with no model, no quantification, no causality, and no calibration. This is systems theory without the wonderment and generality, economics without rigour, sociology without dissent.

The challenge for professionals and managers within organisations is, therefore, to recognise and reconceptualise the destructive capacity of informal networks (the ‘dark’ side of networks), particularly in light of the non-knowledge or unobservable transmission of information that is nonetheless strategically and operationally essential to organisations in terms of protection and mitigation of risk.

The only justification given for the uninteresting claims they repeat ten times is 1) other social theorists as bad as them, and 2) two sensational case studies (horsemeat and cheap silicone) that don't really bear on the issue – probably because they haven't studied either directly.

It is not, I think, an exaggeration to summarise their 20-page organisational theory as the following, without significant loss of content:

Our question is basically: formal processes, like meetings, don’t capture a lot of the risk-relevant information that the organisation’s people actually have.
(Ok, yeah, that’s not a question: we think we already know this, and we can name lots of other organisational theorists who think they do too, so there.)

Organisations only manage those factors that their managers can measure. In particular, risk assessment. Risk assessment is hard. A lot goes on ‘informally’, over the heads of managers. We think this is stuff like water-cooler talk and minor crises that don’t get reported, but we haven’t actually studied it, so who knows. It’s a good idea to think about the ‘zone of non-observability’, also known as ‘basically knowing you’re not omniscient’. Here is a trivial and ugly graph that we present as a significant innovation.


We use the word ‘sociotechnical network’ a lot. We think it’s a really neat idea – imagine like people and information and computers all forming like one big system with like information traffic! Like a computer network, of people!
You know those Management Information Systems everyone’s got these days? Imagine if that one data system didn’t have absolutely all of the information and knowledge that 200 experienced people have!

We’re not going to go ahead and say that we want organisations to be run as panopticons but we may imply it heavily.

Have you heard of ‘globalisation’? We bet it’s got something to do with it!


********************************************************************


Why am I so worked up? The reason to pay the paper any attention at all is that it's a good instance of a grave and general problem in social theory. I mean the likes of this:

Of importance within this literature has been the role of socio-technical networks in shaping the supply of information, verifying and challenging assumptions around decision parameters, and in identifying early warnings of the potential failures associated with particular decisions (Ballinger, et al., 2011; Cross et al., 2006; and Jansen et al., 2011). In many respects, socio-technical networks are a central dynamic of organisational performance, particularly in the context of human services such as healthcare (Doolin, 1999). Here, an organisation’s dynamic capabilities (Augier and Teece, 2008; and Barreto, 2010) are a function of the individuals and teams that interact together to deal with the demands of the ‘problem space’ (Boisot, 1995; and Boisot and Child, 1999).
or
At its core, knowledge is constructed (or rejected) as a function of how we make sense of what we ‘observe’ in the world around us (Weick, 2001; and Weick and Sutcliffe, 2001). We tend to select or accept information that suits our needs, and that we recognise as relevant to our interests (Boisot, 1995; Collingridge and Reeve, 1986; and Taylor, 2000).

Call it hype-citation. It occurs whenever the main or only method of supporting one's points is the citation of other work – when appeal to the literature is taken as sufficient to justify points. This is detestable for a series of reasons: 1) it is simple fallacious appeal to authority; 2) it is often just brutal appropriation of past work – by omitting quotations, it's implied that the cited work agrees fully with the present authors; 3) even when theory cites empirical studies, it minimises the real work of research by omitting all the messy concerns – research design, evidence-gathering, measurement calibration, validity analysis – i.e. the legwork that the citing authors need not concern themselves with; and lastly, 4) flat citation usually implies that the cited work deals conclusively with the claim: even if the cited work does support the point at hand, hype-citation encourages endemic over-confidence, by putting the evidence's flaws behind a veil.

But the worst of it is that it actually makes true the otherwise-false 'floating signifiers' view of discourse, on which so much social theory still seems to be based. That is, their cynicism about humans' ability to track the truth leads them to act in a manner that does not at all track truth, but just points endlessly inward, floating on self-sustaining discursive currents. They replace contact with the world (e.g. by experiment, survey, ethnography) with... contact with more social theory. And this navel-gazing is treated as if it grounded their claims.

To be clear: I think citation is very important: few topics have absolutely no precedents, and it's dishonest to pretend they do; few pieces of work could get anywhere without scads of ideas from others*. Without the page-exporting function of citations, every paper would be a book (and they're already far too long as it is). Among real scholars, there is a division of intellectual labour both honest and functional – if A has produced evidence for claim C with a good methodology, I can point to her work and have it count as justification for a further claim that builds on C.

At its best, citation makes research fractal, with each paper's footnotes and bibliography a sub-network that lets outsiders into complex domains. We take hyperlinking for granted now – but semantic tagging of this sort was and is an astonishingly labour-intensive and fiddly task, with familiar, giant benefits. So citation is important – which makes the tendency above actively unethical, rather than just more ignorable shoddiness on the part of 'social theory'.**

I think it part of a wider failure in these fields: the failure to be critical of work that is itself nominally critical. (Of society, of 'the paradigm', of a straw-man status quo, whatever.) A single contrary turn against the default is deemed to be enough. Or:

the 'masochism' of accepting a new theory [that says that you are radically deluded] is just a stage, after which we get to claim to have transcended our brainwashing, and to feel that we've joined a vanguard; a little pocket of knowledge in a corrupt and stupid world.

This new brainwashing – the arrogance of the self-conscious theoretical élite – is far harder to rinse away. Woe betide us.



*******************************************************************

CITATIONS


  • Non-sequitur: The first of the following paragraphs implies that the next will introduce new and interesting applications of their hidden-networks idea. Instead they simply repeat the vague and obvious objections to an omniscient straw-man PRA (probabilistic risk assessment):
    Another important aspect of informal, hidden networks is that highly sensitive information that is essential to organisational performance may find more opportunities for ‘leakage’ or, in some cases, early warnings of problems might not be picked up by those who are in a position to take action to prevent escalation. The development and maintenance of lines of communication between members of the organisation, needs therefore, to be a key focus of managerial attention. Whilst many organisations would agree with this, there are several important barriers to effective implementation, especially when dealing with issues around risk.
    Firstly, risk assessment by its very nature involves predicting the likelihood of an event occurring and taking subsequent steps to mitigate those risks. Prediction is based on both known and unknown factors, and thus the organisation’s ability to capture relevant information and make informed judgements on which to base their predictions, becomes essential. Much of this information is, however, complex and requires interpretation and analysis by experts. Much of it also lies within the zone of unobservability, exchanged within hidden networks. This interpretation may, under certain conditions, generate the potential for future risks as the decisions taken on the basis of a flawed perspective will serve to shape the control methods put into place. 
    A second issue, and one that is also important in relation to issues of expertise, is the hierarchy within which information is collated, shared...

  • Some glib strong social constructionism to boot:
    the knowledge communicated may be inaccurate, decontextualized, or out of date, thereby leading to inappropriate actions.
    (No: inaccurate knowledge is not knowledge)

    knowledge is a selection of certain (generalized) distinctions for observation from all possible ones. Thus, the reverse side of knowledge is an exclusion of distinctions for observation; an exclusion of possibilities of observation (Seidl, 2007, p. 20).


    * (See what happened to Descartes when he tried.)


    ** The ethics of research I have in mind – never claim anything without actually looking – is very demanding, and very unappealing, and ornery as hell. But it might prevent much of the bullshit, and an unspecified amount of the world’s bad decisions, and therefore suffering.