21/11/2016

Notes from Effective Altruism "Global" "x" in Oxford, in 2016


(This is about this thing. The following would work better as a bunch of tweets but seriously screw that: )


###########################################################################################################


Single lines which do much of the work of a whole talk:

"Effective altruism is to the pursuit of the good as science is to the pursuit of the truth." (Toby Ord)

"If the richest gave just the interest on their wealth for a year they could double the income of the poorest billion." (Will MacAskill)

"If you use a computer the size of the sun to beat a human at chess, either you are confused about programming or chess." (Nate Soares)

"Evolution optimised very, very hard for one goal - genetic fitness - and produced an AGI with a very different goal: roughly, fun." (Nate Soares)

"The goodness of outcomes cannot depend on other possible outcomes. You're thinking of optimality." (Derek Parfit)



###########################################################################################################


Owen Cotton-Barratt formally restated the key EA idea: that importance has an extremely heavy-tailed distribution. This is a generalisation from the GiveWell/OpenPhil research programme, which dismisses (ahem, "fails to recommend") almost everyone because a handful of organisations are thousands of times more efficient at harvesting importance (in the form of unmalarial children or untortured pigs or an unended world).

Then, Sandberg's big talk on power laws generalised Cotton-Barratt's, by claiming to find the mechanism which generates that importance distribution (roughly: "many morally important things in the world, from disease to natural disasters to info breaches to democides, are all generated by processes that output power laws").
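
(A toy numerical illustration of the heavy-tail claim - mine, not from either talk: draw "cause impacts" from a power law and see how much of the total the best 1% carries.)

    # Toy illustration, not from the talks: sample cause impacts from a heavy-tailed
    # power law and check what share of the total good the top 1% accounts for.
    import numpy as np

    rng = np.random.default_rng(0)
    impacts = rng.pareto(a=1.2, size=100_000) + 1   # Pareto with tail index ~1.2
    impacts.sort()

    top_share = impacts[-1_000:].sum() / impacts.sum()
    print(f"Top 1% of causes carry ~{top_share:.0%} of the total impact")

With a tail index near 1 the printed share is huge; with a thin-tailed (e.g. normal) distribution it would sit near 1%.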

Cotton-Barratt then formalised the "Impact-Tractability-Neglectedness" model, as a precursor to a full quantitative model of cause prioritisation.
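
My rough reconstruction of that formalisation (hedged: this is the standard way of cashing the model out, not necessarily his exact slides). Write U for the good done, P for the fraction of the problem solved, and R for the resources committed so far. Then the value of one more unit of resources factorises as

    dU/dR = (dU/dP) × (dP/d ln R) × (d ln R/dR)
          = importance × tractability × neglectedness

where neglectedness is just the 1/R term: the same absolute contribution buys proportionally more progress in an area nobody has funded yet.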



Then, Stefan Schubert's talk on the younger-sibling fallacy attempted to extend said ITN model with a fourth key factor: awareness of likely herding behaviour and market distortions (or "diachronic reflexivity").

There will come a time - probably now tbf - when the ITN model will have to split in two: into one rigorous model with nonlinearities and market dynamism, and a heuristic version. (The latter won't need to foreground dynamical concerns unless you are 1) incredibly influential or 2) incredibly influenceable in the same direction as everyone else. Contrarianism ftw.)


###########################################################################################################


Catherine Rhodes' biorisk talk made me update in the worst direction: I came away convinced that biorisk is both extremely neglected and extremely intractable to anyone outside the international bureaucracy / national security / life sciences clique. Also that "we have no surge capacity in healthcare. The NHS runs at '98%' of max on an ordinary day."

(This harsh blow was softened a bit by news of Microsoft's mosquito-hunting drones - used for cheap and large-sample disease monitoring, that is, not personalised justice.)


###########################################################################################################




Anders Sandberg contributed to six events, sprinkling the whole thing with his hyper-literate, unclichéd themes. People persisted in asking him things on the order of "whether GTA characters are morally relevant yet". But even these he handled with his rigorous levity.

My favourite was his take on the possible expanded value space of later humans: "chimps like bananas and sex. Humans like bananas, and sex, and philosophy and competitive sport. There is a part of value space completely invisible to the chimp. So it is likely that there is this other thing, which is like whoooaa to the posthuman, but which we do not see the value in."


###########################################################################################################


Books usually say that "modern aid" started in '49, when Truman announced a secular international development programme. Really liked Alena Stern's rebuke to this, pointing out that the field didn't even try to be scientific until the mid-90s, and did a correspondingly low amount of good, health aside. It didn't deserve the word, and mostly still doesn't.


###########################################################################################################


Nate Soares is an excellent public communicator: he broadcasts seriousness without pretension, strong weird claims without arrogance. A catch.


###########################################################################################################


What is the comparative advantage of us 2016 people, relative to future do-gooders?

  • Anything happening soon. (AI risk)
  • Anything with a positive multiplier. (schistosomiasis, malaria, cause-building)
  • Anything that is hurting now. (meat industry)


###########################################################################################################


Dinner with Wiblin. My partner noted that I looked a bit flushed. I mean, I was eating jalfrezi.


###########################################################################################################


Almost every session I attended had the same desultory question asked: "how might this affect inequality?" (AI, human augmentation, ...) The answer's always the same: if it can be automated and mass-produced at the usual industrial speed, it won't. If it can't, it will.

It was good to ask (and ask, and ask) this for an ulterior reason, though - see the following:


###########################################################################################################


Molly Crockett's research - on how a majority of people* might relatively dislike utilitarians - was great and sad. Concrete proposals, though: people distrust those who don't appear morally conflicted, who use physical harm for the greater good, or, more generally, who use people as a means. So express confusion and regret, support autonomy whenever the harms aren't too massive to ignore, and put extra effort into maintaining relationships.

These cues are pretty superficial. Which is good news: we can still do the right thing (and profess the right thing); we just have to present it better.

(That said, the observed effects on trust weren't that large: about 20%, stable across various measures of trust.)



* She calls them deontologists, but that's a slander on Kantians: really, most people are just sentimentalists, in the popular and the technical sense.



###########################################################################################################


Not sure I've ever experienced this high a level of background understanding in a large group. Deep context - years of realisations - mutually taken for granted; and so many shortcuts and quicksteps to the frontier of common knowledge. In none of these rooms was I remotely the smartest person. An incredible feeling: you want to start lifting much heavier things as soon as possible.


###########################################################################################################



Very big difference between Parfit's talk and basically all the others. This led to a sadly fruitless Q&A, people talking past each other thanks to bad choices of examples. Still riveting: emphatic and authoritative, though hunched over with age. A wonderful performance, with an air of the Last of His Kind.

Parfit handled 'the nonidentity problem' (how can we explain the wrongness of situations involving merely potential people? Why is it bad for a species to cease procreating?) and 'the triviality problem' (how exactly do tiny harms committed by a huge aggregate of people combine to form wrongness? Why is it wrong to discount one's own carbon emissions when considering the misery of future lives?).

He proceeded in the classic (late-C20th) mode: state clean principles that summarise an opposing view, and then find devastating counterexamples to them. All well and good as far as it goes. But the new principles he sets upon the rubble - unpublished so far - are sure to have their own counterexamples in production by the grad mill.


The audience struggled through the fairly short deductive chains, possibly just out of unfamiliarity with philosophy's unlikely apodicticity. They couldn't parse it fast enough to answer a yes/no poll at the end. ("Are you convinced of the non-difference view?")

The Q&A questions all had a good core, but none hit home for various reasons:

  • "Does your theory imply that it is acceptable to torture one person to prevent a billion people getting a speck in their eye?" Parfit didn't bite, simply noting, correctly, that 1) Dostoevsky said this in a more manipulative way, and 2) it is irrelevant to the Triviality Problem as he stated it. (This rebuffing did not appear to be a clever PR decision, though it was, since he is indeed a totalarian.)

  • "What implications does this have for software design?" Initial response was just a frowning stare. (Sandberg meant: lost time is clearly a harm; thus the designers of mass-market products are responsible for thousands of years of life when they fail to optimise away even 1 second delays.)

  • "I'd rather give one person a year of life than a million people one second. Isn't continuity important in experiencing value?" This person's point was that Parfit was assuming the linearity of marginal life without justification, but this good point got lost in the forum. Parfit replied simply - as if the questioner was making a simple mistake: "These things add up". I disagree with the questioner about any such extreme nonlinearity - they may be allowing the narrative salience of a single life to distract them from the sheer scale of the number of recipients in the other case - but it's certainly worth asking.


We owe Parfit a lot. His emphasis on total impartiality, the counterintuitive additivity of the good, and most of all his attempted cleaving of old, fossilised disagreements to get to the co-operative core of diverse viewpoints: all of these shine throughout EA. I don't know whether that's coincidence or formative debt.

(Other bits are not core to EA but are still indispensable for anyone trying to be a consistent, non-repugnant consequentialist: e.g. thinking in terms of degrees of personhood, and what he calls "lexical superiority" for some reason (it is two-level consequentialism).)

The discourse has definitely diverged from non-probabilistic apriorism, also known as the Great Conversation. Sandberg is of the new kind of philosopher: a scientific mind, procuring probabilities, but also unable to restrain creativity/speculation because of the heavy, heavy tails here and just around the corner.



17/11/2016

data jobs, tautologies, bullshit, $$$


[cartoon: (c) Tom Gauld, 2014]
When physicists do mathematics, they don’t say they’re doing “number science”. They’re doing math. If you’re analyzing data, you’re doing statistics. You can call it data science or informatics or analytics or whatever, but it’s still statistics... You may not like what some statisticians do. You may feel they don’t share your values. They may embarrass you. But that shouldn’t lead us to abandon the term “statistics”.

– Karl Broman


what makes data science special and distinct from statistics is that this data product gets incorporated back into the real world, and users interact with that product, and that generates more data: a feedback loop. This is very different from predicting the weather...

– Cathy O'Neil / Rachel Schutt


"Data science" is the latest name for an old pursuit: the attempt to make computers give us new knowledge. * In computing's short history, there have already been about 10 words for this activity (and god knows how many derived job titles). So: here's an anti-bullshit exercise, a genealogy of some very expensive buzzwords.

The following are ordered by the year the hype peaked (as estimated by maximum mentions in books). You can play with proper data here.
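
(For what it's worth, the peak-year estimate is a one-liner once you have per-year frequencies. A minimal sketch, assuming a hypothetical CSV export - columns year, term, frequency - from something like the Ngram viewer:)

    # Minimal sketch of the "peak hype year" estimate. The CSV is a hypothetical
    # export of per-year book frequencies with columns: year, term, frequency.
    import pandas as pd

    df = pd.read_csv("ngram_frequencies.csv")                 # hypothetical filename
    peaks = df.loc[df.groupby("term")["frequency"].idxmax()]  # row of max frequency per term
    print(peaks.sort_values("year")[["term", "year"]])        # terms in hype-peak order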




  • "Expert systems"
    The original, GOFAI craft. Painstaking, manually-built rule stacks. 69% accuracy in certain medical tasks, which beat out human experts.


  • "Business intelligence"
    The most transparent hokum of the lot. I include the rigorous but dead-ended world of MDX and OLAP here, perhaps unfairly: they're certainly still in use by organisations you'd expect to know better.


  • "Data mining". Originally a pejorative, among actual statisticians, meaning "looking for fake patterns to proclaim". Now reclaimed in industry and academia.** Compared to ML, data mining has a lot of corporate dilution, proprietary gremlins and C20th crannies in, from what I can tell. (Basically the same as "Knowledge discovery"?)


  • "Predictive analytics". See machine learning but subtract most.


  • "Big data". Somewhat meaningful as a concept, extremely tangible as an engineering challenge, and tied to genuinely new results. But still highly repugnant. Has captured much of the present job market, but the hype train has headed off well and truly.


  • "Machine learning". Applied statistics, but recast by computer scientists into algorithms. Goal: Getting systems that work fast rather than inferring the calibrated convergent truth. Along with stats, ML is the heart of the actual single phenomenon underlying all this money and hype.


  • "Data science". Recent high-profile successes in the AI/ML/DS space are largely due to the data explosion - not to new approaches or smarter protagonists. So this is at least half job title inflation. Still, it is handy to have a job title with enough elbow-room to be statistician and developer and machine teacher at once.


You might have hoped that nominally scientific minds would shun the proliferation of tautologous or meaningless terms. But stronger pressures prevail - chiefly the need for job security, via bamboozling clients or upper management or tech conference attendees.



##################################################################################################

* As always, we settle for optimal guesses instead of 'knowledge'.

If I'd said "the attempt to get knowledge from data" then of course I would just be describing statistics.^ This near miss doesn't bother me - despite the fact that statisticians computerised before any other nonengineering profession or field - and despite their building much of the theory and even implementations described in this piece (besides expert systems and GOFAI). Their gigantic century of work is a superset of what I'm talking about.
^ Obviously my initial definition is pretty close to "narrow artificial intelligence" too: at the limit, AI is "building a system for automatically getting knowledge from arbitrary input". Many of the successes described above also belong to them (particularly expert systems and GOFAI). "Data jobs", as I blandly put it in the header, are "jobs dealing with the fact that we don't quite have AI". There are a lot of terrible data jobs, and I'm not talking about them either. The full specification, if you bloody insist, is: "cultures, largely applied or industrial ones, which use cool data processing methods which are not really A.I. in the wide or strong sense, but which aren't standard 70s drone analyst work either. Nor have they anything to do with the very similar work of information physicists or electronic engineers or anything."

(But then all work in applied maths and stats shares a lot, since it's all based on the same world using the same concepts and logics. Only the goals and technologies really vary.)

I'm speaking as generally as I am - that is, almost speaking nonsense - so I can cut through the mire of terms, the effluent of the academic-industrial complex. In intellectual terms, it is pretty easy to refer to all the things I am trying to refer to: they are 'the formal sciences'. But I'm trying to tease out the practitioners, and the way-downstream economics.


** I had been dismissing "data mining" as just a 90s business way of saying "machine learning", but the distinction is actually fairly well-defined:
Data mining: Direct algorithm design for already well-defined goals - where you know what features to use. (e.g. "What kind of language do CVs use?")

Machine learning: Indirect algorithm design, via automated feature engineering, for an ill-defined goal. (e.g. "How do we distinguish a picture of a cat from a picture of a faraway lynx?")
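
A toy sketch of that contrast, in scikit-learn (mine, not from any source; the CVs and labels are invented):

    # Toy contrast. "Data mining" flavour: you pick the feature and write the rule
    # yourself. "Machine learning" flavour: you hand over raw text and let the
    # pipeline derive features (here a bag of words) and fit the weights.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    cvs = ["Managed a team of five engineers",
           "Shipped a Python data pipeline",
           "Proficient in PowerPoint and minute-taking",
           "Organised the annual office party"]
    labels = [1, 1, 0, 0]   # 1 = technical CV, 0 = not (invented labels)

    # Data-mining style: a hand-chosen feature and a hand-written rule.
    def technical_by_rule(text):
        return int(any(w in text.lower() for w in ("engineer", "python", "pipeline")))

    rule_predictions = [technical_by_rule(t) for t in cvs]

    # Machine-learning style: automated featurisation plus fitted weights.
    X = CountVectorizer().fit_transform(cvs)
    model = LogisticRegression().fit(X, labels)
    learned_predictions = model.predict(X)

    print(rule_predictions, list(learned_predictions))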



notable oral noises


  • Strine (Oz proper n.): that thick Australian accent. Onomatopoeic: just say "Striiine" - "(Au)stralian" - with a long ɒ sound.

  • curioso (C17th It. n.): Brilliant enthusiast of unusual things. Originally synonymous with virtuoso; a word for a proto-scientist / Renaissance man.

  • sockdolager (American n.): A finisher; an exceptional thing. Probably from "sock" (punch) and "doxology" (final hymn). It was the last word heard in the theatre before Lincoln was shot, amidst the laughter it raised.

  • gunsel (originally Yiddish n.): 1) hoodlum; Player. 2) catamite - from the Yiddish גענדזל, gosling. <3. The derived term "gunselism" has exactly 1 hit and how often do you see that?

  • green ink letter (n.): A lunatic rant sent in to the Letters page.

  • cromulent (adj.): blameless; fine. Made up by a Simpsons writer to demonstrate Frege's Context Principle (or Springfield's inbreeding).

  • Taco Bell Programming (n.): the discipline of solving software engineering problems (many or most) with sequences of calls to classic, 'small, sharp' Unix tools. ("The name comes from the fact that every item on the menu at Taco Bell, a company which generates almost $2 billion in revenue annually, is simply a different configuration of roughly eight ingredients.")

  • umquhile (Scots adj.): quondam; former.

  • dark pattern (n.): In designing things, an intentionally unhelpful choice that tricks the user into doing something they don't really want to. Linkedin is a notable case: it is very hard to notice that your entire address book is being mined and emails sent out on your behalf. (So is spear phishing.) First seen in this oddly chilling sentence: "LinkedIn isn’t the only social network that uses dark patterns to grow their social graph".

  • dark tourism (n.): Visiting e.g. Auschwitz or Ypres or Ground Zero. Nominally for remembrance but generally for voyeurism. (Or sorry what's the difference?)

  • pull quote (n.): a single line quoted from a review for advertising. Easily - and usually - involves quote doctoring, the shameless positive spinning of negative sentiments.

  • contextomy (academic n.): quoting someone so as to misrepresent them. As in vasectomy: cut out.

  • mpreg (internet adj.): Male pregnancy. A genre of fanfiction centred around said zany conceit.

  • the three functions: that is, 'warrior, priest, peasant'. Categories in a particular social theory that is supposed to unify all the proto-Indo-European cultures: they all had these three public spheres and three classes, maybe. I mean, it would be hard for this to be false, it's so vague. But we know nothing more specific.

  • for the birds (American adj.): Trivial; for silly people.




05/11/2016

feel for data

"This isn't right. Imagine: we give them a loss function, without a utility function. They can't feel good; only less bad."
"It's the same with us, tho. What we call utility is just the absence of loss."
"I'm not sure that's true. Pride feels to be more than the absence of shame; love is more than absence of loneliness."
"There's a fairly big gap between your two examples. And it's hard to think clearly when strong pleasure or pain is implicated."
"Nevertheless, yours is the view requiring a mass redefinition of natural language to make two entities become one."
"I don't mind. Even if they're not identical, we can still capture most of all value by reducing harm."
"I don't see how you can know that."
"Obvs I don't know it infallibly, but anyway it can't hurt."
"You might be more ambitious than such moral hedging."
"Yes, as soon as possible: that is, much, much later. People are dying."
"They are, and not just the ones which have our shape. Maybe not just the damp ones. Is a reinforcement learner negligible?"
"So my actions tell me."
"Not very revisionary."
"There will soon be objective ways to tell if I'm speciesist or substratist. I'll keep researching."
"But you're against destructive
animal testing."
"We know the value of nonhuman experimentation, and it is often simply not enough for the known torture caused. At present, the potential value of
in silico vivisection is not so bounded."
"Hope you sleep tonight."
"I will."



04/11/2016

Highlighted passages from Ronson's So You've Been Publicly Shamed


Something of real consequence was happening. We were at the start of a great renaissance of public shaming. After a lull of almost 180 years (public punishments were phased out in 1837 in the United Kingdom and in 1839 in the United States), it was back in a big way. When we deployed shame, we were utilizing an immensely powerful tool. It was coercive, borderless, and increasing in speed and influence. Hierarchies were being leveled out. The silenced were getting a voice. It was like the democratization of justice. And so I made a decision. The next time a great modern shaming unfolded against some significant wrongdoer—the next time citizen justice prevailed in a dramatic and righteous way—I would leap into the middle of it. I’d investigate it close up and chronicle how efficient it was in righting wrongs.



After the interview was over, I staggered out into the London afternoon. I dreaded uploading the footage onto YouTube because I’d been so screechy. I steeled myself for comments mocking my screechiness and I posted it. I left it up for ten minutes. Then, with apprehension, I had a look.

      “This is identity theft,” read the first comment I saw. “They should respect Jon’s personal liberty.”
      Wow, I thought, cautiously.
      “Somebody should make alternate Twitter accounts of all of those ass clowns and constantly post about their strong desire for child porn,” read the next comment.
      I grinned.
      “These people are manipulative assholes,” read the third. “Fuck them. Sue them, break them, destroy them. If I could see these people face to face I would say they are fucking pricks.”
      I was giddy with joy. I was Braveheart, striding through a field, at first alone, and then it becomes clear that hundreds are marching behind me.
      “Vile, disturbing idiots playing with someone else’s life and then laughing at the victim’s hurt and anger,” read the next comment.
      I nodded soberly.
      “Utter hateful arseholes,” read the next. “These fucked up academics deserve to die painfully. The cunt in the middle is a fucking psychopath.”
      I frowned slightly. I hope nobody’s going to actually hurt them, I thought.
      “Gas the cunts. Especially middle cunt. And especially left-side bald cunt. And especially quiet cunt. Then piss on their corpses,” read the next comment.



The common assumption is that public punishments died out in the new great metropolises because they’d been judged useless: Everyone was too busy being industrious to bother to trail some transgressor through the city crowds like a volunteer scarlet letter. But according to the documents I found, that wasn’t it at all. They didn’t fizzle out because they were ineffective. They were stopped because they were far too brutal...



Someone's response to somebody nearby making a joke about dongles (one not directed at her):
    "You felt fear?" I asked.
    "Danger." she said. "Clearly my body was telling me, 'You are unsafe'."

Which was why, she said, she “slowly stood up, rotated from my hips, and took three photos.” She tweeted one, “with a very brief summary of what they said. Then I sent another tweet describing my location. Right? And then the third tweet was the [conference's] code of conduct.”

    “You talked about danger," I said. "What were you imagining might...?"
    “Have you ever heard that thing, men are afraid that women will laugh at them and women are afraid that men will kill them?” she said.

I told Adria that people might consider that an overblown thing to say. She had, after all, been in the middle of a tech conference with 800 bystanders.

    “Sure,” Adria replied. “And those people would probably be white and they would probably be male.”

This seemed a weak gambit. There is some Latin for this kind of logical fallacy. It’s called an ad hominem attack. When someone can’t defend a criticism against them, they change the subject by attacking the criticiser.

    “Somebody getting fired is pretty bad,” I said. “I know you didn’t call for him to be fired. But you must have felt pretty bad.”

“Not too bad,” she said. She thought more and shook her head decisively. “He’s a white male. I’m a black Jewish female. He was saying things that could be inferred as offensive to me, sitting in front of him... I’ve seen things where people are like, ‘Adria didn’t know what she was doing by tweeting it.’ Yes, I did. Hank’s actions resulted in him getting fired, yet he framed it in a way to blame me. If I had two kids, I wouldn’t tell ‘jokes’”


I am mostly just amazed by how stupid she is. After suffering the worst of the phenomenon, she still thinks shaming is great - still sees herself as an agent of justice: "If I had a spouse and two kids to support I certainly would not be telling ‘jokes’ like he was doing at a conference. Oh but wait, I have compassion, empathy, morals and ethics to guide my daily life choices."


...our imagination is so limited, our arsenal of potential responses so narrow, that the only thing anyone can think to do with an inappropriate shamer like Adria is to punish her with a shaming. All of the shamers had themselves come from a place of shame, and it really felt parochial and self-defeating to instinctively slap shame onto shame like a clumsy builder covering cracks.



If it had previously existed in [the shamed person’s] bosom a spark of self-respect this exposure to public shame utterly extinguishes it. Without the hope that springs eternal in the human breast, without some desire to reform and become a good citizen, and the feeling that such a thing is possible, no criminal can ever return to honorable courses. The boy of eighteen who is whipped at New Castle [a Delaware whipping post] for larceny is in nine cases out of ten ruined. With his self-respect destroyed and the taunt and sneer of public disgrace branded upon his forehead, he feels himself lost and abandoned by his fellows



in our line of work the more humiliated a person is, the more viral the story tends to go. Shame can factor large in the life of a journalist — the personal avoidance of it and the professional bestowing of it onto others.



Contains one highly original and poignant thought experiment, via a human rights lawyer, Clive Stafford Smith:
“Let me ask you three questions,” he said. “And then you’ll see it my way. Question One: What’s the worst thing that you have ever done to someone? It’s okay. You don’t have to confess it out loud. Question Two: What’s the worst criminal act that has ever been committed against you? Question Three: Which of the two was the most damaging for the victim?”

The worst criminal act that has ever been committed against me was burglary. How damaging was it? Hardly damaging at all. I felt theoretically violated at the idea of a stranger wandering through my house. But I got the insurance money. I was mugged one time. I was eighteen. The man who mugged me was an alcoholic. He saw me coming out of a supermarket. “Give me your alcohol,” he yelled. He punched me in the face, grabbed my groceries, and ran away. There wasn’t any alcohol in my bag. I was upset for a few weeks, but it passed.

And what was the worst thing I had ever done to someone? It was a terrible thing. It was devastating for them. It wasn’t against the law.

Clive’s point was that the criminal justice system is supposed to repair harm, but most prisoners — young, black — have been incarcerated for acts far less emotionally damaging than the injuries we noncriminals perpetrate upon one another all the time — bad husbands, bad wives, ruthless bosses, bullies, bankers.


“The justice system in the West has a lot of problems,” Poe said, “but at least there are rules. You have basic rights as the accused. You have your day in court. You don’t have any rights when you’re accused on the Internet. And the consequences are worse. It’s worldwide forever.”



I, personally, no longer take part in the ecstatic public condemnation of people unless they’ve committed a transgression that has an actual victim... I miss the fun a little. But it feels like when I became a vegetarian. I missed the steak, although not as much as I’d anticipated, but I could no longer ignore the slaughterhouse...

I favour humans over ideology, but right now the ideologues are winning, and they're creating a stage for constant artificial high dramas, where everyone is either a magnificent hero or a sickening villain. We can lead good, ethical lives, but some bad phraseology in a Tweet can overwhelm it all - even though we know that's not how we should define our fellow humans. What's true about our fellow humans is that we are clever and stupid. We are grey areas.

...when you see an unfair or an ambiguous shaming unfold, speak up on behalf of the shamed person. A babble of opposing voices - that's democracy. The great thing about social media was how it gave a voice to voiceless people. Let's not turn it into a world where the smartest way to survive is to go back to being voiceless.