Wednesday, January 31, 2018

Machine Learning and Ideology

Article: AI BIAS DOESN'T MEAN WHAT JOURNALISTS SAY IT WILL MEAN

The current political conversation over machine learning apps is so damn depressing.

(Also related: https://discoursedrome.tumblr.com/post/167714252325/joseph-heath-how-to-beat-racism )

Depressing, as in a missed opportunity for what could be amazing.

From rationalists to social justice types to every strain of thinker about our conflict-happy culture, we talk about “bias” a lot. Maybe it’s tribal bias to “like your ingroup,” or maybe it’s structural bias that you need to cure yourself of, whatever. Is it the way our brain is built, our genes, what books we read as kids, our original sin of whiteness?

Zizek (and some other critical theorists) instead say the bias isn’t anything inherent in us. “Ideology is our spontaneous reaction to the world.” You drop a perfectly rational creature in a situation, it looks around, collects some data, then generates conclusions and a narrative based on that data.

A Martian looks around and sees that many of the poor people are lazy, and the rich people are energetic. They conclude, “wealth must be the result of how hard you work.”

With greater knowledge, we see the errors in this. Poor people may have learned helplessness, while the rich people perform useless work in order to feel meritorious. Or one culture literally conquered and stole stuff from the other, and just coincidentally it matches up to our lazy/energetic divide. Or our definitions of lazy and hard-working and rich and poor are hopelessly loose and impossible to scientifically verify.

Rather than the traditional “we would see everything correctly if not for our bias”, with the clean slate neutral observer being the perfect judge, Zizek sees that biased ideology is our starting point, and we need to gather more information to climb out of our first, spontaneous biases.

Ethically speaking: you need to arrive at universal principles yourself (like that all humans are equal), but just going with the observable data is not going to tell you that.

So that’s the debate: is bias something we bring to the table, or a spontaneous result of rational observation of limited and complicated data environments?

Which makes the thing about machine learning controversies fascinating! We write some programs whose job it is to collect the data and give us conclusions… and it turns out they’re racist! They copy insults other users of social media are using, they tell you to stay away from minority neighborhoods when you walk home. Rather than pure electronic angels, these bots fall into all our worst, lowest-common-denominator impulses. How could this fail to be interesting?

Instead we get a very predictable partisan argument. People on the right say it’s because all these stereotypes are objectively true, and now AI is validating them. And people on the left say that the original sin of implicit bias has managed to leak over from the programmers to their children. That if the right people had programmed it, the bots wouldn’t be so full of nascent ideology and *isms.

No one just accepts “if you put a pattern-matching mind into a situation with limited data, it will come to unethical conclusions.”
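
Here’s a minimal sketch of that claim (every name and number in it is invented for illustration): a pattern-matcher doing the most honest thing possible, plain frequency counting, still reproduces a confound baked into how its data was collected.

```python
import random

random.seed(0)

# Toy world, all parameters invented: two neighborhoods with the
# SAME underlying incident rate, but B is watched twice as closely,
# so more of its incidents end up in the data the model ever sees.
TRUE_RATE = 0.05                      # identical for A and B
RECORD_PROB = {"A": 0.3, "B": 0.6}    # chance an incident is recorded

def generate_records(n=100_000):
    records = []
    for _ in range(n):
        hood = random.choice("AB")
        incident = random.random() < TRUE_RATE
        recorded = incident and random.random() < RECORD_PROB[hood]
        records.append((hood, recorded))
    return records

def recorded_rate(records, hood):
    # The "pattern-matching mind": honest frequency estimation,
    # the best possible conclusion given only the recorded data.
    rows = [rec for h, rec in records if h == hood]
    return sum(rows) / len(rows)

records = generate_records()
for hood in "AB":
    print(hood, round(recorded_rate(records, hood), 4))
# Prints roughly: A 0.015 / B 0.03 -- the model "learns" that B is
# twice as dangerous, even though the true rates are identical.
```

Nothing bigoted was programmed in anywhere; the skew lives entirely in which events got recorded.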

*****

dedicating-ruckus

Rather than the traditional “we would see everything correctly if not for our bias”, with the clean slate neutral observer being the perfect judge, Zizek sees that biased ideology is our starting point, and we need to gather more information to climb out of our first, spontaneous biases.
Ethically speaking: you need to arrive at universal principles yourself (like that all humans are equal), but just going with the observable data is not going to tell you that.
Okay, but this is… like, false.
If you put someone with no preconceptions or relevant entanglements into a situation and let them come to conclusions, the conclusions more or less by definition cannot be “biased” in the common political sense. The alien comes and sees that many rich people are hard-working, and many poor people are lazy, and they come to the conclusion that wealth is related to how hard you work. And it’s correct.
(And of course it’s more complicated than that, and a full investigation of “what factors lead to wealth” will come out with a whole laundry list of factors including genetics, various character attributes, general social situation and pure contingent luck. But “hard work” will in fact be prominent in this list of factors. The more detailed analysis filled in and refined the naive pattern-matching; it didn’t invalidate it.)
“All humans are equal” is true in a particular moral sense, but “all humans have an equal capacity to acquire wealth” is just sky-is-green obviously wrong.
Instead we get a very predictable partisan argument. People on the right say it’s because all these stereotypes are objectively true, and now AI is validating them. And people on the left say that the original sin of implicit bias has managed to leak over from the programmers to their children. That if the right people had programmed it, the bots wouldn’t be so full of nascent ideology and *isms.
No one just accepts “if you put a pattern-matching mind into a situation with limited data, it will come to unethical conclusions.”
In other words, “coming to the best possible conclusions given the data available to you is unethical”.
So if the disinterested pursuit of truth leads you to conclusions that violate certain previously received and unquestionable moral axioms, you must seek more information and overcome your biases until you can get yourself to conclusions that match with what you previously thought. Never mind if the extra information you seek continues to lead you down paths of wrongthink. Continue investigating until the answer is acceptable!
Also, make sure you walk through bad neighborhoods on your commute home, otherwise it would be racist!

*****

If our neutral alien were to land in early-1800s America and look at black versus white people, they would probably conclude that black people had dramatically less intellectual capacity than white people did, because by most metrics they really weren’t as educated.

Of course it was because they were denied education (and otherwise punished for being too smart).

Look at this like a proper Bayesian: your priors are important. You can, for instance, start with “brains of roughly the same size and structure are probably roughly similar in outputs,” and even though you see far fewer black people than white people who can read, you can hold firm to that prior until you’ve untangled the confounding nurture variables.

I would not stick to priors I had reason to believe were wrong; it’s just that my prior of “most-to-all people are capable of complex thought, emotions, and motives” overrides almost every statistical inference I would make as a naive frequentist.
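
As a toy illustration, with made-up numbers and a hypothetical posterior_equal() helper: if “equal capacity, unequal access” and “unequal capacity” both predict the observed literacy gap about equally well, the likelihood ratio sits near 1 and Bayes’ rule just hands your prior back, which is why the data alone can’t settle the question.

```python
# Toy Bayes-rule comparison; every number here is invented.
# H_equal:   equal innate capacity, gap caused by denied schooling.
# H_unequal: unequal innate capacity.

def posterior_equal(prior_equal, lik_equal, lik_unequal):
    """P(H_equal | data) by Bayes' rule over two hypotheses."""
    num = prior_equal * lik_equal
    return num / (num + (1 - prior_equal) * lik_unequal)

# Observed: a large literacy gap. Under H_equal, denied schooling
# predicts much the same gap, so both hypotheses make the data
# about equally likely.
p = posterior_equal(prior_equal=0.95, lik_equal=0.90, lik_unequal=0.95)
print(round(p, 3))  # ~0.947: the strong prior barely moves.

# A naive frequentist reads the gap at face value and skips this
# step; the Bayesian holds the prior until the confound (access
# to education) is untangled and the likelihoods actually differ.
```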

There’s nothing wrong with saying the current state of some people is such that they commit more violence or read less. In fact, denying that is the sort of liberal idealism that refuses to face the ugly degradation of poverty, how being low class DOES make you worse off than others. But this is very different from the essentialism of saying “all people of that race are like that, and I don’t have to look at any individual case or check for other reasons they’ve become like that.”
