
multifarious miscellany



(Image: a stable breeder in Conway's Game of Life. Original author one 'Hyperdeath'.)


One of my computer science lecturers does philosophy in passing while discussing superficially unphilosophical things like Harvard vs Princeton architectures and the gubbins of molecular computing. (Molecular as in Hofstadter's comment: "Looking at a program written in machine language is vaguely comparable to looking at a DNA molecule atom by atom.") It is gigantic stuff:

  • "People always define computers as 'data-processing machines' - which they are not and cannot be, because data are mental events. Machines process representations - and all this is is us using the physical world to help us with the mental world we have such limited range within (usually to help us with the physical world we have such limited control over). (This is also why the infinite cannot be properly represented, because there is nothing usably physical for the purpose.) (In the reified field that gets called "Computing", we happen to use voltages to represent mental events, but I seriously encourage you to consider computing less narrowly; you will never be right, otherwise.)

    Human language being what it is, the category error of using "data" for data representation has by now been thoroughly inscribed. We - the truth - lost. So you'll hear me say that 'the system is processing data'. But I don't mean it."

  • "We write programs* with a program** which feeds a program*** that produces the usable program**** - and all of these are running on a metaphor for a machine^."

  • "Transmission is strictly limited by each component, of course. If you send a message faster than either the transmitter can pulse, than the channel can discretely convey, or the receiver-decoder can parse, it will be lost, or useless, or worse. (Faulty data is worse than no data because it deceives us.)"

  • (If you consider the message a thought, the wire human language, the decoder your poor interlocutor!)

  • "You might think that your high-level source code is a long-winded way of handling data. In reality - that is, in the processor..."

- Lewis Mackenzie


* Source code.
** Editor.
*** Compiler.
**** Executable.
^ Operating system.
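
(The chain is easy to watch in miniature. A sketch - using Python's own toolchain as a stand-in, which is my illustration and not anything the lecturer used: the source is compiled to bytecode, and the bytecode runs on the Python virtual machine, as literal a 'metaphor for a machine' as you could ask for.)

    import dis

    # Source code (*), as you'd write it in an editor (**).
    src = "x = 2 + 2\nprint(x)\n"

    # The compiler (***) produces the usable program (****): bytecode.
    code = compile(src, "tiny.py", "exec")

    # ...which runs on a metaphor for a machine (^): the Python VM.
    dis.dis(code)   # inspect the "machine language" of the metaphor
    exec(code)      # prints 4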


*******************************************************************




Consequentialism makes morals all a matter of policy decisions, and so every individual person a government (a government of themselves). This stance appalls a diverse set of people in slightly different ways - but for now I just want you to notice that this is Hobbes' Leviathan metaphor backwards.

(Similarly, consider the noble literal meaning of "constituency": the term wants us to think of a political seat as constituted of its constituents; the seat is supposedly nothing without them.)

(Tenuously related: Luciano Floridi's exciting idea of people as 'multi-agent systems', e unibus pluram. Everyone a bunch of pieces lashed together, with all the usual crippling information asymmetries and moral hazard of such organisations.)


****************************************************************


Hard ideas I use without really understanding them (that is, without strictly knowing their definition or the full dynamics of their components):
  • “Distribution” (in statistics)
  • “Object” (in programming)
  • “Deixis” (in linguistics; ‘indexical’ in philosophy)
  • “Twistor” (in physics)
  • “Being” (in anything)
  • “Free play” (in France)
  • “Mind”
Loads of people understand the first four, so there is hope. And the last three may be entirely meaningless, so there is hope.


****************************************************************


At uni I became very worried that getting really invested in any field would block one's understanding of most of the world, cos, well, 'if all you have is a hammer'. Some excellent jaded people have coined words for this: trained incapacity, professional deformation, or, most dramatically, occupational psychosis.

The thought was that each field only gets at one piece of reality*, and that the assumptions of a field may prevent you from grasping results in others (over and above the study of one simply taking up the time you'd need for the others). Since people are bad at changing their minds, and soon get tribal about their field, I worried that people would go through life with one methodological lens, the exact one fixed by a near-arbitrary decision taken as a teenager (enrolment). And this would be absurd.

As it is I needn't have worried, because the sheer indifference of my peers prevented such high methodological concerns.** Even the Arts kids – who should really possess some passion or grand programme, given the economic opportunity cost of their choice of degree*** – showed minimal affinity, with most never speaking in tutorials and scarcely reading anything. (More evidence for this suspicious economism below.****)

Glum hypothesis: Professional deformation isn't a problem, because what people actually do upon graduating is sell their books and try to never think about it ever again. And they prosper.


* My kind are supposed to accept but one whole world with a future grand reduction of all things to fundamental physics. But an equally crunchy alternative is offered by Turing's notion of levels of abstraction: just as there's no question of motion without a set frame of reference, it might be that there's no talk of explanation without a specified level of abstraction. The physical, the social, the mental and the formal all seem to occupy different LoAs (though I'm very open to the idea that the last two might end up the same one, as in e.g. this bold jaunt).

** A striking instance in the first week of class: the lecturer, a world authority on Dickens, read out the Want and Ignorance passage from A Christmas Carol, and then wept openly at the front of the lecture hall. The chatter on the way out was just bemused and dismissive. Two lads in the row behind me laughed.

*** e.g. Pages 29-30 here; page 8 here.

**** e.g. UNESCO reckon that 32% of the right age bracket enrolled in some kind of formal higher education in 2012; but only a tiny fraction of that number sign up for the same content when it's free but with no big formal certificate at the end. (And 90% of that tiny fraction actually drop out.) Something like 17% of all Americans have a degree.

Actually, I can believe that 17% of all people are somewhat intellectually engaged - after some knowledge for knowledge's sake - but they maybe don't overlap much with 'degree-holders'.


****************************************************************


No human has ever been a rationalist, not on the harsh definition (‘Person who lets nothing but evidence determine their beliefs and follows up the rest properly’). So we say ‘aspiring rationalist’, or ‘approximating the rational’. Somewhat less than maximally perverse.


****************************************************************

Isn’t one’s pain quotient shocking enough without fictional amplification, without giving things an intensity that is ephemeral in life & sometimes even unseen? Not for some. For some very, very few that amplification, evolving uncertainly out of nothing, constitutes their only assurance, & the unlived, the surmise, fully drawn in print on paper, is the life whose meaning comes to matter most.
- Philip Roth


The root of the word 'happy' is 'hap' (that is, luck): the two were once regarded as the same. There was no way to make yourself happy; you just had to be very lucky to be happy. And still?


****************************************************************

The motives which move them may not be bad at all; they are often quite decent ones like prudence, loyalty, self-fulfilment and professional conscientiousness. The appalling element lies in the lack of the other motives which ought to balance these — in particular, of a proper regard for other people and of a proper priority system which would enforce it. That kind of lack cannot be treated as a mere matter of chance. Except in rare psychopaths, we attribute it to the will.
- Mary Midgley


So for instance belonging enables ostracism; thought enables illusion; empathy enables apologism; irreverence enables shallowness; security enables complacency; curiosity enables weapons; independence enables loneliness; pure reason enables monsters; taste enables snobs; gumption enables evil.


****************************************************************


New HMHB album! This has excited me so much that I spontaneously came up with some chorus lines for the next one:

  • "I'll unseat your helpmeet with Peartiser 'n Kopparberg..."
  • "Cheapest ostrich burger inside the M25!"
  • "Ruby Wax has an MA in Mindfulness! Well that's okay then!"
  • "...and I wish that no-one else ever gets any wishes."


*************************************************************


Quarreled with a humanist friend who upheld some of the good old stereotypes of AI villains: computers with explosive cognitive dissonance, beaten by the fundamental gaps in logic itself, or the fundamental gulf between logic and the world, or by getting stuck in perverse optimisation. Thus human superiority is regained: it lies in our 'paradox-absorbing crumple zones'! (In humans these are located all over the head.)

But... default logic exists, floating-point numbers approximate the arbitrarily precise well enough without halting serial computation, and we already have agents which satisfice (although). Fine, it is probably impossible to make a functioning computer unfazed by logical errors; but an intelligence running on a computer is not necessarily a computer (just as I am not exactly my brain).
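
(The floating-point point in miniature - a sketch, nothing more: the representation is inexact, the computation carries on regardless, and nothing explodes.)

    # Floats are finite stand-ins for the continuum: inexact, but no paradox, no crash.
    a = 0.1 + 0.2
    print(a == 0.3)             # False: the representation is not the number
    print(abs(a - 0.3) < 1e-9)  # True: we tolerate the gap and keep computing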

Real AI villains could be more terrifying still: they might eradicate us without hatred or even any active intent. The nightmare is being destroyed because beneath notice, irrelevant to the quest to make paperclip all that is.


*************************************************************


What does programming do to programmers?

Not many people are trying to find out, if this lacklustre thing is an accurate summary of the field. Well, computer science is what Taleb calls ludic, an artificially understandable and predictable domain.* So there's definitely a sense of power involved. You solve a hundred tiny problems a day, make consequential decisions that get instant feedback, and you operationalise the abstract so as to better paw at it.

But it is also banal. In isolation each line of code is laughably simple - stripped of all hauteur and connotation and ambiguity. To force oneself to think like this, 8 hours a day... Breaking up one's thoughts like this could make for unripe thought; what if working at low levels made it harder to skip up to high levels? (In CS, "high-level" means "closer to human language", roughly. Ha.)

Paranoid null hypothesis: Speaking to machines comes at the expense of speaking to humans.

Luckily, and whatever the stereotype wants you to picture, this doesn't happen reliably: clear thought is clear thought in whatever language, and reductionism has magnificence in it. The cosmically important lesson of the intellectual toy Conway's Game of Life is that sophistication and uniqueness result from a tiny number of wholly mechanical elements (see the sketch below). (And how!)

* Though obv it floats in the broken world - consider things like the Intel FDIV bug.
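
(How tiny? A minimal sketch of Life in Python - the glider, the classic self-propelling pattern, is just the standard illustration:)

    from collections import Counter

    def step(live):
        """One Game of Life generation over a set of live (x, y) cells."""
        # Count the live neighbours of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth on exactly 3 neighbours; survival on 2 or 3. That's the whole game.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)  # four steps on, it reappears shifted one cell diagonally
    print(sorted(glider))

(Two rules and a set of coordinates, and out comes self-propelled structure - let alone the breeder in the image up top.)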


************************************************************


I had an idea about the durability of knowledge, an idea which has been covered better under 'the half-life of facts'. Knowledge* can be fragile in a few ways:

  1. Because of error: that is, the field is immature and produces bad predictions. (Conversely, theories that cover enough of the domain to defang or incorporate anomalies are durable.) (Epistemic fragility)

  2. Because the token of the knowledge, the psychological instantiation of the fact, is hard to retain. (Psychological fragility)

  3. Because the relevant part of the world actually changes its behaviour. (Ontological fragility)

(Type 2 is just one's memory width meeting abstraction meeting complexity, plainly. But it goes a bit deeper - consider the way that people with Alzheimer's retain musical ability well into their loss of self.)

Type 3 is a defining feature of social systems: basically anything involving people will have this recursive ontological instability ("Hey, they're treating us like animals! Let's shit on their floor!"). It is the other reason that 'hard' social science is impossible (the first being that you can rarely experiment properly and so can never really unpick causal variables).

Why care about this, philosophically? Well, because as long as you have an idea of your phenomenon's ontological durability, type (1) is a pretty good measure of a field's maturity!


* Yes, yes, I'm equivocating: I mean here "what is taken to be knowledge, the best unrefuted explanation" rather than "absolutely true correspondence between mind and reality".


