Wednesday, January 31, 2018

Machine Learning and Ideology

Article: AI BIAS DOESN'T MEAN WHAT JOURNALISTS SAY IT WILL MEAN

The current political conversation over machine learning apps is so damn depressing.

(Also related: https://discoursedrome.tumblr.com/post/167714252325/joseph-heath-how-to-beat-racism )

Depressing, as in a missed opportunity for what could be amazing.

From rationalists, to social justice types, to every strain of thinker about our conflict-happy culture, we talk about “bias” a lot. Maybe it’s tribal bias to “like your ingroup” or maybe it’s structural bias that you need to cure yourself of, whatever. Is it the way our brain is built, our genes, what books we read as kids, our original sin of whiteness?

Zizek (and some other critical theorists) instead say the bias isn’t anything inherent in us. “Ideology is our spontaneous reaction to the world.” You drop a perfectly rational creature in a situation, it looks around, collects some data, then generates conclusions and a narrative based on that data.

A Martian looks around and sees that many of the poor people are lazy, and the rich people are energetic. They conclude “wealth must be the result of how hard you work.”

With greater knowledge, we see the errors in this. Poor people may have learned helplessness, while the rich people perform useless work in order to feel meritorious. Or one culture literally conquered and stole stuff from the other, and just coincidentally it matches up to our lazy/energetic divide. Or our definitions of lazy and hard working and rich and poor are hopelessly loose and impossible to scientifically verify.

Rather than the traditional “we would see everything correctly if not for our bias”, with the clean slate neutral observer being the perfect judge, Zizek sees that biased ideology is our starting point, and we need to gather more information to climb out of our first, spontaneous biases.

Ethically speaking: you need to arrive at universal principles yourself (like that all humans are equal), but just going with the observable data is not going to tell you that.

So that’s the debate: is bias something we bring to the table, or a spontaneous result of rational observation of limited and complicated data environments?

Which makes the thing about machine learning controversies fascinating! We write some programs whose job it is to collect the data and give us conclusions… and it turns out they’re racist! They copy the insults other social media users are using; they tell you to stay away from minority neighborhoods when you walk home. Rather than pure electronic angels, these bots fall into all our worst, lowest-common-denominator impulses. How could this fail to be interesting?

Instead we get a very predictable partisan argument. People on the right say it’s because all these stereotypes are objectively true, and now AI is validating them. And people on the left say that the original sin of implicit bias has managed to leak over from the programmers to their children. That if the right people had programmed it, the bots wouldn’t be so full of nascent ideology and *isms.

No one just accepts “if you put a pattern-matching mind into a situation with limited data, it will come to unethical conclusions.”
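That last claim can be made concrete with a toy sketch (all data invented here): a naive frequency-learner handed a skewed sample will faithfully encode the skew as a “conclusion.”

```python
from collections import Counter, defaultdict

# Invented, limited training data: (neighborhood, incident) pairs.
# Neighborhood "B" shows a higher rate only because of how the sample
# was collected -- the learner has no way to know that.
observations = ([("A", 0)] * 50 + [("A", 1)] * 5 +
                [("B", 0)] * 10 + [("B", 1)] * 5)

def learn_rates(data):
    counts = defaultdict(Counter)
    for group, incident in data:
        counts[group][incident] += 1
    # Observed incident rate per group -- all the model "knows".
    return {g: c[1] / sum(c.values()) for g, c in counts.items()}

rates = learn_rates(observations)
# The pattern-matcher now flags "B" as the riskier neighborhood,
# whether the gap reflects reality or just the biased sample.
flagged = max(rates, key=rates.get)
```

Nothing in the code is malicious; the skew arrives with the data, which is the whole point.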

*****

 dedicating-ruckus

Rather than the traditional “we would see everything correctly if not for our bias”, with the clean slate neutral observer being the perfect judge, Zizek sees that biased ideology is our starting point, and we need to gather more information to climb out of our first, spontaneous biases.
Ethically speaking: you need to arrive at universal principles yourself (like that all humans are equal), but just going with the observable data is not going to tell you that.
Okay, but this is… like, false.
If you put someone with no preconceptions or relevant entanglements into a situation and let them come to conclusions, the conclusions more or less by definition cannot be “biased” in the common political sense. The alien comes and sees that many rich people are hard-working, and many poor people are lazy, and they come to the conclusion that wealth is related to how hard you work. And it’s correct.
(And of course it’s more complicated than that, and a full investigation of “what factors lead to wealth” will come out with a whole laundry list of factors including genetics, various character attributes, general social situation and pure contingent luck. But “hard work” will in fact be prominent in this list of factors. The more detailed analysis filled in and refined the naive pattern-matching; it didn’t invalidate it.)
“All humans are equal” is true in a particular moral sense, but “all humans have an equal capacity to acquire wealth” is just sky-is-green obviously wrong.
Instead we get a very predictable partisan argument. People on the right say it’s because all these stereotypes are objectively true, and now AI is validating them. And people on the left say that the original sin of implicit bias has managed to leak over from the programmers to their children. That if the right people had programmed it, the bots wouldn’t be so full of nascent ideology and *isms.
No one just accepts “if you put a pattern-matching mind into a situation with limited data, it will come to unethical conclusions.”
In other words, “coming to the best possible conclusions given the data available to you is unethical”.
So if the disinterested pursuit of truth leads you to conclusions that violate certain previously received and unquestionable moral axioms, you must seek more information and overcome your biases until you can get yourself to conclusions that match with what you previously thought. Never mind if the extra information you seek continues to lead you down paths of wrongthink. Continue investigating until the answer is acceptable!
Also, make sure you walk through bad neighborhoods on your commute home, otherwise it would be racist!

*****

If our neutral alien were to land in early 1800s America and look at black versus white people, they would probably conclude that black people had dramatically less intellectual capacity than white people did. Because they really weren’t as educated by most metrics.

Of course it was because they were denied education (and otherwise punished for being too smart.)

Look at this like a proper Bayesian: your priors are important. You can for instance start with “brains of roughly the same size and structure are probably roughly similar in outputs”, and even though you see a lot of examples of black people reading less well than white people, you can hold firm to that prior until you’ve untangled the confounding nurture variables.

I would not stick to priors I had reason to believe are wrong. Just my prior of “most-to-all people are capable of complex thought, emotions, and motives” overrides almost every statistical inference I would make if I were a naive frequentist.
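The gap between the naive frequentist and someone holding a strong prior can be sketched with a standard Beta-Binomial update (all numbers invented):

```python
def posterior_mean(successes, failures, prior_a, prior_b):
    # Mean of a Beta(prior_a, prior_b) prior updated with observed counts.
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# Invented, confounded observation: 5 of 100 people in the oppressed
# group have been permitted to learn to read.
successes, failures = 5, 95

naive_estimate = successes / (successes + failures)  # pure observed frequency
# A strong prior centered at 0.5 encodes "similar brains, similar capacity":
strong_prior = posterior_mean(successes, failures, 200, 200)
```

The strong prior keeps the estimate far from the naive 5% until a great deal of (ideally unconfounded) evidence accumulates — which is exactly the “hold firm until you’ve untangled the confounders” move.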

There’s nothing wrong with saying the current state of some people is such that they commit more violence or can read less. In fact, denying that is the sort of liberal idealism that refuses to face the ugly degradation of poverty, how being low class DOES make you worse off than others. But this is very different from the essentialism of saying “all people of that race are like that, and I don’t have to look at any individual case or check for other reasons they’ve become like that.”

Tuesday, January 16, 2018

Three Types of Economies

We discuss capitalism, communism, and anarchy a lot; these are modes of society, but also references to types of economic exchange. The critique of society often gets confused with the economy type, even though these are rather separate things. So for the sake of simplicity and clarity, let’s talk JUST about the type of economy today.

The thing people forget about the economy - because we get caught up in accusations of selfishness and assumptions of power - is that it is at heart an information problem, or rather, a method of solving this problem. We have some Stuff; it is distributed among different people, who have differing abilities to make or acquire it; and we would like it to be distributed to the people who need and want it the most, or can make the most productive use of it. Even if we solve every human sin regarding selfishness and power, we will still need to solve the problem of “where does stuff go.”

(Most people are in fact good, and willing to work for the overall good if there’s a solid plan - but just being altruistic isn’t enough to figure out how to feed the hungry.)

There are three different types of economies based on how much information you have. The information is in the form of “what resources do you have? How hard is it to extract them? What do you want? How much of it do you want?” You generally know all of this for yourself, and only some of this information for other people. I.e., you know how hungry you are and how much food from McDonald’s you want, but you are less likely to know how hungry your friend is, or what they are in the mood for.

***

Trust Economy - this is the type of economy where you have TOTAL INFORMATION about everyone (relevant to the problem in question.) Picture a family who has known each other for a long time, sitting down for a meal with food closer to some than others. They know who likes what dish, who needs more food, and they can see what is already on each other’s plates. They can quickly just give each other the food other people want, and judge what a fair allotment is, weighted by how important things are to each person there.

We don’t think about this much, as it’s the behavior we do instinctually when the problem is so small we don’t even see an economy is going on. But whenever you are with a group of people who get along, and you’re distributing a limited resource (a turn at the videogame, snacks, conversational space) in a manner that’s trying to be considerate of everyone but is so easy you don’t even think about it -- that’s trust economy.
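With total information the allocation problem is almost trivial, which a toy sketch (preferences invented) makes clear:

```python
# Everyone's preferences are known to everyone -- the defining feature
# of a trust economy.
preferences = {
    "Ana":  {"soup": 3, "bread": 1, "stew": 2},
    "Ben":  {"soup": 1, "bread": 4, "stew": 2},
    "Cleo": {"soup": 2, "bread": 2, "stew": 5},
}

def allocate(prefs):
    # Give each dish to whoever wants it most -- no negotiation needed.
    dishes = next(iter(prefs.values()))
    return {d: max(prefs, key=lambda p: prefs[p][d]) for d in dishes}

allocation = allocate(preferences)
```

No prices, no commands: the shared knowledge does all the work.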

This is most associated with anarchy. Who needs rulers when people can just help themselves and each other?

Command Economy - This is for situations where a MODERATE amount of information is known by some parties. This means the capabilities of people, the total resources at play, the rough size of demands people need met, etc. But not everything, like “how will widget A fit into device B” or “who is sick today” - and just because one person has a lot of information, that does not mean everyone is informed. Most people’s workplaces - be they corporations or the government (especially the military) - operate like this. There is one central unit that establishes the group goals and directs where resources go and who should be in which departments… and then lets the individual departments figure out their own structure and how they will use those resources, and those departments will usually just give their managers targets, and let the managers command employees to figure out how best to meet those targets.

(Or it’s just when Dad orders for the whole family at the restaurant.)

The advantages of this are great - you can focus everyone on an important goal, and just because one person happens to start with a lot of resources, those can still be driven to where they are needed the most. The disadvantages are obvious - the bottleneck of any centralization, where if the leader is dumb or selfish or otherwise inefficient, they screw up the entire economy around them (and insulated leaders tend not to be responsive to changing conditions.)

This is associated with communism and war-time economies, though somewhat unfairly, as a communist country will often have sub-markets in areas where the government lacks information -- just as capitalist countries always have various command economies under them, within the corporations that make up the economy.
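The layered delegation described above can be sketched in a few lines (numbers and department names invented):

```python
def central_allocate(total, priorities):
    # The center splits what it controls in proportion to priority,
    # then each layer repeats the move with its own local information.
    weight_sum = sum(priorities.values())
    return {name: total * w / weight_sum for name, w in priorities.items()}

# The central unit sets the group goals and splits the budget:
budget = central_allocate(100, {"research": 3, "production": 5, "logistics": 2})
# The production department repeats the same move for its own teams,
# using details the center never sees:
teams = central_allocate(budget["production"], {"line_a": 1, "line_b": 1})
```

The center only needs moderate information (totals and rough priorities); everything finer-grained is pushed down a level.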

Market Economy - This is for situations where people share NO INFORMATION with each other, and yet you need to figure out how to disperse stuff. People just announce “I will trade X for Y”, and other people who themselves value X over Y can take that, making both better off, without having to know how important or plentiful X and Y were for anyone other than themselves.
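That exchange rule can be sketched directly (valuations invented); note that each agent consults only its own values:

```python
def trade_executes(offer, my_values, their_values):
    """offer = (give, want): I give `give`, hoping to receive `want`.
    The trade happens only if each side values what it receives more
    than what it gives up -- no other information changes hands."""
    give, want = offer
    i_gain = my_values[want] > my_values[give]
    they_gain = their_values[give] > their_values[want]
    return i_gain and they_gain

# I have wheat but want iron; you have iron but want wheat.
my_values = {"wheat": 1, "iron": 3}
their_values = {"wheat": 4, "iron": 2}
deal = trade_executes(("wheat", "iron"), my_values, their_values)
```

When the deal goes through, both sides are better off, and neither ever saw the other’s valuations.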

This is associated with capitalism, though as said above, you find markets in any type of country, and capitalist countries contain many different types of economies within them. Capitalist ideology just focuses on market economies as the type of ideal to attain.

They are kind of magic, because market economies can make large decentralized systems act incredibly efficiently without any central information processing. Economists are in love with this miracle, and it can indeed be pretty great.

But the magic is that they process all this in the adverse situation of limited information. It’s very impressive that they can do that, but that does not make them always the best. They are a good solution to the problem of lacking information, but if you have information, then they might just be inefficient. Resources might be squandered because the person who has them no longer has any reason to trade them away, etc.

***

The point is that none of these economies are universally better than others. They are optimized responses to different situations, and should be used as such. Using a market economy when you have enough information for a trust economy creates a dreadful waste of time and effort as every exchange is negotiated and verified. Using a trust economy when people don’t know anything about each other will just lead to a mismanagement disaster.

So when discussing capitalism, communism, and anarchy (as social models) we understandably make reference to the economy types we associate with them, but each of those ideologies is more about exalting one particular tool as the social ideal, even as its society makes use of all three. We should instead take the detached view of tools, not looking at them as moral imperatives, but as useful responses to different situations.