What Lies Beneath? The Role of Informal and Hidden Networks in the
Management of Crises (2014), Financial Accountability & Management,
Vol. 30, Issue 3, pp.259-278
A piece of organisation theory which is simultaneously vague, ugly, repetitive, and trivial. Welcome to the fourth-hand, corporatized end-point of Merton and Latour: the desert of the firm.
The paper's fatuousness can be found at many levels: from the overall repetition (the same badly-conceptualised ideas stated ten times), to non-sequitur passages like the one noted below, down to its sentences, most of which are formally crude and intellectually empty:
Prediction is based on both known and unknown factors, and thus the organisation’s ability to capture relevant information and make informed judgements on which to base their predictions, becomes essential.
There are issues around determining the legitimacy of knowledge and the social construction of risks.
The paper's stated aim is 'to set out the problem space for information capture and analysis within organisations' – so, merely to offer theory. They fail even in this modest goal. For apparently serious academics addressing serious risks in healthcare, it does not suffice to blankly state possibilities in this banal manner, with no model, no quantification, no causality, and no calibration. This is systems theory without the wonderment and generality, economics without rigour, sociology without dissent.
The challenge for professionals and managers within organisations is, therefore, to recognise and reconceptualise the destructive capacity of informal networks (the ‘dark’ side of networks), particularly in light of the non-knowledge or unobservable transmission of information that is nonetheless strategically and operationally essential to organisations in terms of protection and mitigation of risk.
The only justifications given for the uninteresting claims they repeat ten times are 1) other social theorists as bad as them, and 2) two sensational case studies (horsemeat and cheap silicone) that don't really bear on the issue – probably because they haven't studied either directly.
It is not, I think, an exaggeration to summarise their 20-page organisational theory as the following, without significant loss of content:
Our question is basically: formal processes, like meetings, don’t capture a lot of the risk-relevant information that the organisation’s people actually have.
(Ok, yeah, that’s not a question: we think we already know this, and we can name lots of other organisational theorists who think they do too, so there.)
Organisations only manage those factors that their managers can measure. In particular, risk assessment. Risk assessment is hard. A lot goes on ‘informally’, over the heads of managers. We think this is stuff like water-cooler talk and minor crises that don’t get reported, but we haven’t actually studied it, so who knows. It’s a good idea to think about the ‘zone of non-observability’, also known as ‘basically knowing you’re not omniscient’. Here is a trivial and ugly graph that we present as a significant innovation.
We use the word ‘sociotechnical network’ a lot. We think it’s a really neat idea – imagine like people and information and computers all forming like one big system with like information traffic! Like a computer network, of people!
You know those Management Information Systems everyone’s got these days? Imagine if that one data system didn’t have absolutely all of the information and knowledge that 200 experienced people have!
We’re not going to go ahead and say that we want organisations to be run as panopticons but we may imply it heavily.
Have you heard of ‘globalisation’? We bet it’s got something to do with it!
Why am I so worked up? The reason to pay the paper any attention at all is that it's a good instance of a grave and general problem in social theory. I mean the likes of this:
Of importance within this literature has been the role of socio-technical networks in shaping the supply of information, verifying and challenging assumptions around decision parameters, and in identifying early warnings of the potential failures associated with particular decisions (Ballinger, et al., 2011; Cross et al., 2006; and Jansen et al., 2011). In many respects, socio-technical networks are a central dynamic of organisational performance, particularly in the context of human services such as healthcare (Doolin, 1999). Here, an organisation's dynamic capabilities (Augier and Teece, 2008; and Barreto, 2010) are a function of the individuals and teams that interact together to deal with the demands of the 'problem space' (Boisot, 1995; and Boisot and Child, 1999).

or
At its core, knowledge is constructed (or rejected) as a function of how we make sense of what we ‘observe’ in the world around us (Weick, 2001; and Weick and Sutcliffe, 2001). We tend to select or accept information that suits our needs, and that we recognise as relevant to our interests (Boisot, 1995; Collingridge and Reeve, 1986; and Taylor, 2000).
Call it hype-citation. It occurs whenever the main or only method of supporting one's points is the citation of other work – when appeal to the literature is taken as sufficient to justify points. This is detestable for several reasons: 1) it is simple fallacious appeal to authority; 2) it is often just brutal appropriation of past work – by omitting quotations, it is implied that the cited work agrees fully with the present authors; 3) even when theory cites empirical studies, it minimises the real work of research, by omitting all the messy concerns – research design, evidence-gathering, measurement calibration, validity analysis – i.e. the legwork that the citing authors need not concern themselves with; and 4) flat citation usually implies that the cited work deals conclusively with the claim: even if the cited work does support the point at hand, hype-citation encourages endemic over-confidence, by putting the evidence's flaws behind a veil.
But the worst of it is that it actually makes true the false 'floating signifiers' view of discourse, which so much social theory still seems to be based upon. That is, their cynicism about humans' ability to track the truth leads them to act in a manner that does not at all track truth, but just points endlessly inward, floating on self-sustaining discursive currents. They replace contact with the world (e.g. by experiment, survey, ethnography) with... contact with more social theory. And this navel-gazing is treated as if it grounded their claims.
To be clear: I think citation is very important: few topics have absolutely no precedents, and it's dishonest to pretend they do; few pieces of work could get anywhere without scads of ideas from others*. Without the page-exporting function of citations, every paper would be a book (and they're already far too long, as it is). Among real scholars, there is a division of intellectual labour both honest and functional – if A has produced evidence for claim C with a good methodology, I can point to her work and have it count as justification for claim C.
At its best, citation makes research fractal, with each paper's footnotes and bibliography a sub-network that lets outsiders into complex domains. We take hyperlinking for granted now – but semantic tagging of this sort was and is an astonishingly labour-intensive and fiddly task, with familiar giant benefits. So, citation is important, which makes the tendency above actively unethical, rather than just more ignorable shoddiness on the part of 'social theory'.**
I think it part of a wider failure in these fields: the suspension of criticism whenever the work at hand is nominally critical (of society, of 'the paradigm', of a straw-man status quo, whatever). A single contrary turn against the default is deemed to be enough. Or:
the 'masochism' of accepting a new theory [that says that you are radically deluded] is just a stage, after which we get to claim to have transcended our brainwashing, and to feel that we've joined a vanguard; a little pocket of knowledge in a corrupt and stupid world.
This new brainwashing – the arrogance of the self-conscious theoretical élite – is far harder to rinse away. Woe betide us.
- Non-sequitur: The first of the following paragraphs implies that the next will introduce new and interesting applications of their hidden networks idea. Instead they simply repeat the vague and obvious objections to omniscient straw-man PRA:
Another important aspect of informal, hidden networks is that highly sensitive information that is essential to organisational performance may find more opportunities for ‘leakage’ or, in some cases, early warnings of problems might not be picked up by those who are in a position to take action to prevent escalation. The development and maintenance of lines of communication between members of the organisation, needs therefore, to be a key focus of managerial attention. Whilst many organisations would agree with this, there are several important barriers to effective implementation, especially when dealing with issues around risk.
Firstly, risk assessment by its very nature involves predicting the likelihood of an event occurring and taking subsequent steps to mitigate those risks. Prediction is based on both known and unknown factors, and thus the organisation’s ability to capture relevant information and make informed judgements on which to base their predictions, becomes essential. Much of this information is, however, complex and requires interpretation and analysis by experts. Much of it also lies within the zone of unobservability, exchanged within hidden networks. This interpretation may, under certain conditions, generate the potential for future risks as the decisions taken on the basis of a flawed perspective will serve to shape the control methods put into place.
A second issue, and one that is also important in relation to issues of expertise, is the hierarchy within which information is collated, shared...
- Some glib Strong social construction to boot:
the knowledge communicated may be inaccurate, decontextualized, or out of date, thereby leading to inappropriate actions.
(No: inaccurate knowledge is not knowledge)
knowledge is a selection of certain (generalized) distinctions for observation from all possible ones. Thus, the reverse side of knowledge is an exclusion of distinctions for observation; an exclusion of possibilities of observation (Seidl, 2007, p. 20).
* (See what happened to Descartes when he tried.)
** The ethics of research I have in mind - never claim anything without actually looking - is very demanding, and very unappealing, and ornery as hell. But it might prevent much of the bullshit, and an unspecified amount of the world’s bad decisions, and therefore suffering.