Tuesday, September 25, 2018

The Gatekeeper’s Dilemma

I have a friend who is running a mix swap, where people exchange playlists they created. He wants this to be as open and friendly to newcomers and easily intimidated people as possible. He has two things to monitor in this regard:


One, playlists need to be formatted properly so that listeners can play them on their device of choice (iTunes, a CD, etc.) with a minimum of fuss. This means editing the titles so that track order is preserved across conversions to different formats, along with a number of other small but important tweaks.

Two, MAKING the playlist must not be so intimidating that a newbie breaks down in tears because they got the format wrong.

Failing at either one of these could result in the mix swap getting a reputation for being exclusive and difficult. And you can see the irony: an enthusiastic manager could lean hard into one solution, solely out of a desire to make sure people feel the mix swap is easy and low pressure, and inadvertently make it harder on participants at the other end of the axis, thus creating the very atmosphere of high pressure and elitism he was trying to avoid.

There is of course no easy solution, and this falls under the broad category of “competing access needs.” But it seems to me a very specific subcategory, one that a lot of different fields face - and most importantly, one sharing a characteristic moral irony. If you go to the ultimate lengths to make newcomers feel included along one access axis, you might actually be making newcomers feel excluded, because they now have all these rules they need to follow.

I think of this when I browse reddit, where various subreddits are passionate about free-flowing and uncensored discussion, but in order to keep that have a great deal of intimidating rules about formatting and on-topicness and whatnot. Or when I see old GMs talking about all the demands a LARP should satisfy in order to be welcoming to new players. Or of course, discussions about the right words to use to refer to oppressed groups, which can require high-level knowledge of academia or the latest fashions in slang just to participate.

It’s important to understand that in all of these cases the rules are enforced with the best of intentions. It’s just as important to see whether they are really working out well or not. Are you adding or reducing stress for your most marginal participants? And this is a hard thing to monitor, since your marginal participants are by definition the ones least tied into your feedback loops. If someone doesn’t bother to listen because songs are hard to load, you might not know; likewise, if someone doesn’t make a playlist because formatting is too hard, you wouldn’t know either. These are the areas where we are most likely to fly by the pre-established rules we believe in, and think anyone who deviates from those rules just doesn’t care as much as we do.

There tend to be three solutions to this dilemma:

  1. Decide that one type of newbie really is more relevant than another (because of population size, or their previous experience, or what you are currently lacking, or just your personal bias) and go full-throttle on protecting their experience while not worrying about the burden incurred on the other type of newbie. This may in some cases be the right call, though it’s easy to do this while being an asshole too, and that is to be avoided.
  2. Put a great deal of thought and effort into striking the exact right balance, considering concerns of all involved groups, figuring out the comparative advantages where one group’s needs can be satisfied with minimal cost to other groups, and intervening directly to solve problems (like edit playlist files) when necessary.
  3. Return to deontology, and declare “if I don’t want there to be gatekeeping, then I do that best by not gatekeeping.” This is in some sense “privileging inaction,” but in the Kantian sense that the best way to achieve pacifism is for everyone to stop fighting, starting with yourself, rather than hoping you can enforce non-violence through violent power over others.

They all have their merits, and different situations will call for each. All we can preach here is awareness: rather than believing “I am trying to be inclusive and if others don’t agree with me they just aren’t as inclusive as I am”, seeing “this is the choice inclusivity requires, and I am resolving it with this particular solution, but I can see how they care about the same principles but think that other solution is better.”

Relatedly: Freedom always requires someone else be limited, and becomes a discussion of who gets the freedom to enforce what.

Thursday, May 17, 2018

Geeks, MOPS, and Sociological Essays

David Chapman wrote a famous essay called Geeks, Mops, and Sociopaths that traces the lifecycle of geek subcultures as an arc from innocent creators to takeover by cynical exploiters.




These essays are all hitting on *something*, but they do so in such a morally loaded manner that they start missing the Real early on. The best essay of this form starts with an idea based on its observations, but instead of just inductively growing from that idea, sees how it can explain OTHER observations. These essays instead just sound pretty judgy, telling us why we should dislike the sociopaths and so on (even if they nominally say “I’m not saying these people are evil”).


Here is instead the bambamramfan modification:


  1. Geeks are obsessive creators who make a new thing that every so often catches fire with other obsessive creators, forming a super-charged circle of people making awesome stuff. It can be a new artistic movement, or people playing with computers in their garages, or competitive Rubik’s Cube solving. This category also includes people who aren’t creators, but just really love that thing.


If you have read the famous Infamous Brad essay, this includes both Dream Nazis and Authenticity Police: https://bradhicks.livejournal.com/128514.html

2. Mops are people drawn to the POSITIVE ENERGY of this. Being around a bunch of people who are excited about something is itself very exciting. There’s just a magic in the air; the group is breaking boundaries and forming bonds, and it is all very intense and creative. Even if you don’t care about the thing qua the thing, it can be fun just to be around that community. This includes the third part of Brad’s trichotomy, the Fun Mavens.


Hakim Bey talks a lot about this in his landmark essay The Temporary Autonomous Zone: https://hermetic.com/bey/taz_cont


Note this really applies to political groups too. It’s *fun* to be in a revolution. It’s fun to think you’re overthrowing the patriarchy/the SJW bureaucracy. This is all the dark-web or pirate-radio stuff. They are not only exploring new intellectual territory, but they’re also having fun while doing it. This attracts people who… just like fun.


On a purely economic level of course, you can see this as the transition from “people who program for the love of programming” to “people who realize you can make a fuckton of money from this.”


3. Systematizers. (Shockingly, “sociopath” is a really bad word here for any hope of clear communication.) These people see the geek subculture, and the positive energy and think… this could be better if only they got organized. The political geeks could actually be organized into activists. The tech geeks could get venture capital and HR. The art geeks could modify their product just a little and then I could sell it on the mass level.


This is usually not experienced as a cynical ploy. They see something wonderful, that’s just ordered dysfunctionally, and they want to help by bringing their logistical skills. Maybe they’ll take a cut for the value add they bring, but it’s not the primary motivation: the motivation is to “spread BLANK to the masses.” And of course, the original geeks and mops love that. Being part of an organized thing means validation after all.


But one cannot serve God and Mammon. Once you’re doing what the Systematizers advise, you become more dependent on the value they’re trying to mine, and anything that threatens that value (money or public acclaim) creates overwhelming pressure to make the geeks shut up and get in line.


(Even social justice, I believe, is a very good geek idea that has been completely taken over by systematizers who use it to sell their website and increase their twitter presence. Let alone how this played out with say, comic books.)


***


This is the part of the essay where all the above authors say “and HERE is how you can stop it, and keep your group pure.” Well, go back and read TAZ more closely: you can’t. It’s an endless process of creation, rise, and decay. Either your group dies or it ossifies. Die a hero or live long enough to become the villain. Am I cliche enough yet? There’s no stable system where you don’t grow like that, any more than you can tell a colony of fruit flies “don’t eat all the nutrients in your environment or else the colony will die.”


What you have to do is embrace the rise and fall. Don’t tie yourself to rationalism or battle-bot-building, thinking “it will always be about the music.” Enjoy the Mops and their energy. Resist the Systematizers for a while. But once they’re deep enough in… just go somewhere else. Make a new geek source of energy. Enjoy it while it’s small, and don’t spend all your time dreaming of how great it would be if EVERYONE was part of it.

Tuesday, May 8, 2018

Axiology / Morality / Law

Or the A/M/L distinction that comes up frequently.

From http://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/

Axiology is the study of what’s good. If you want to get all reductive, think of it as comparing the values of world-states. A world-state where everybody is happy seems better than a world-state where everybody is sad. A world-state with lots of beautiful art is better than a world-state containing only featureless concrete cubes. Maybe some people think a world-state full of people living in harmony with nature is better than a world-state full of gleaming domed cities, and other people believe the opposite; when they debate the point, they’re debating axiology. 
Morality is the study of what the right thing to do is. If someone says “don’t murder”, they’re making a moral commandment. If someone says “Pirating music is wrong”, they’re making a moral claim. Maybe some people believe you should pull the lever on the trolley problem, and other people believe you shouldn’t; when they debate the point, they’re debating morality. 
(this definition elides a complicated distinction between individual conscience and social pressure; fixing that would be really hard and I’m going to keep eliding it) 
Law is – oh, come on, you know this one. If someone says “Don’t go above the speed limit, there’s a cop car behind that corner”, that’s law. If someone says “my state doesn’t allow recreational marijuana, but it will next year”, that’s law too. Maybe some people believe that zoning restrictions should ban skyscrapers in historic areas, and other people believe they shouldn’t; when they debate the point, they’re debating law.

I’ve been meaning to write a lot more about this distinction and all its implications (like, write an entire book). But to generalize it further - from an axiological perspective, the OP is correct that all social order, no matter how unspoken, is order, and we should not deny that.

But from a morality/community perspective, we prefer something more concrete in defining social order. We still understand there are unspoken rules and ambiguities, but we acknowledge some unofficial authorities, like “the family patriarch” or “the gossipy sewing circle.”

And on the legal level, we don’t want to acknowledge any order other than what is explicitly written down and what we can pretend is objectively verifiable.

Where most Marxists go astray is that they are so committed to axiological values (like “no one should go hungry, even if that’s not one of your written down civil rights”) that they lose sight of how much normal people really really like having a distinction between that and the moral and social orders. It may be “good” to wish everyone in the world is well taken care of and loved, but almost no one wants the responsibility for doing that for the whole world themselves, and when you make people have that responsibility they become extremely unhappy and anxious.

 kenny-evitt
Are you not a consequentialist with respect to your ideal axiology?
Consequentialism is always the axiology answer. It does not care about the community or law, except as tools to achieve an optimal result, but always judged by your axiology. This is one of the many ways the True answer of a question will be the axiological one, and axiology can back itself up with facts and arguments the best.

That being said, you would have to be a bull-headed social engineer or philosopher to not realize that people are not pure axiology. They do not care just what the “best” action is, but what is allowed by the community and what is legal under the law.

(Imagine a law saying you are forced to marry… the person who will make you the happiest. Yes in some sense that encourages “more happiness”, but people would rebel over that coercion over the private sphere in a split second. I would rebel.)

Understanding how these three types of morality intersect is not really valuable as an ethical matter - since yeah, consequentialist axiology still wins the ethics - but it is key for building an accurate model of how people work and what will make them happy.

Tuesday, February 13, 2018

What is Rationalism?

I am not a rationalist, but this blog makes many references to being in dialogue with the rationalist community. So the few people who AREN'T rationalists might appreciate an explanation of what the hell I am talking about.

Slate Star Codex, by Scott Alexander, is pretty clearly the most famous and public-friendly face of the rationalist movement at the moment.


A lot of his thoughts come from the Less Wrong movement, best known for Harry Potter and the Methods of Rationality. You can google them if you want more content, but Scott puts a much nicer and more humane face on them.

One tumblr that defends rationalism but with an extremely compassionate and thoughtful face, that most of my dashboard at least respects greatly, is The Unit of Caring.
I don't know any particular post of hers to recommend that summarizes her (which would be "rationalism, but nice about it"), but this is a good recent post about "how to be nice."

Wednesday, January 31, 2018

Machine Learning and Ideology

Article: AI BIAS DOESN'T MEAN WHAT JOURNALISTS SAY IT WILL MEAN

The current political conversation over machine learning apps is so damn depressing.

(Also related: https://discoursedrome.tumblr.com/post/167714252325/joseph-heath-how-to-beat-racism )

Depressing, as in a missed opportunity for what could be amazing.

From rationalists, to social justice types, to every strain of thinker about our conflict-happy culture, we talk about “bias” a lot. Maybe it’s tribal bias to “like your ingroup,” or maybe it’s structural bias that you need to cure yourself of, whatever. Is it the way our brain is built, our genes, the books we read as kids, our original sin of whiteness?

Zizek (and some other critical theorists) instead say the bias isn’t anything inherent in us. “Ideology is our spontaneous reaction to the world.” You drop a perfectly rational creature in a situation, it looks around, collects some data, then generates conclusions and a narrative based on that data.

A Martian looks around and sees many of the poor people are lazy, and the rich people are energetic. They conclude “wealth must be the result of how hard you work.”

With greater knowledge, we see the errors in this. Poor people may have learned helplessness, while the rich people perform useless work in order to feel meritorious. Or one culture literally conquered and stole stuff from the other, and just coincidentally it matches up to our lazy/energetic divide. Or our definitions of lazy and hard working and rich and poor are hopelessly loose and impossible to scientifically verify.

Rather than the traditional “we would see everything correctly if not for our bias”, with the clean slate neutral observer being the perfect judge, Zizek sees that biased ideology is our starting point, and we need to gather more information to climb out of our first, spontaneous biases.

Ethically speaking: you need to arrive at universal principles yourself (like that all humans are equal), but just going with the observable data is not going to tell you that.

So that’s the debate: is bias something we bring to the table, or a spontaneous result of rational observation of limited and complicated data environments?

Which makes the thing about the machine learning controversies fascinating! We write some programs whose job is to collect the data and give us conclusions… and it turns out they’re racist! They copy insults other social media users are using; they tell you to stay away from minority neighborhoods when you walk home. Rather than pure electronic angels, these bots fall into all our worst, lowest-common-denominator impulses. How could this fail to be interesting?

Instead we get a very predictable partisan argument. People on the right say it’s because all these stereotypes are objectively true, and now AI is validating them. And people on the left say that the original sin of implicit bias has managed to leak over from the programmers to their children. That if the right people had programmed it, the bots wouldn’t be so full of nascent ideology and *isms.

No one just accepts “if you put a pattern-matching mind into a situation with limited data, it will come to unethical conclusions.”
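That claim can be sketched as a toy: a hypothetical "pattern-matching mind" that does nothing but memorize the majority outcome per group. All group names and numbers here are invented for illustration; the point is only that a perfectly rational learner fed confounded data reproduces the confounder as if it were a fact about the group.

```python
from collections import defaultdict, Counter

def fit_majority(rows):
    """rows: (group, outcome) pairs. Returns each group's majority outcome."""
    by_group = defaultdict(Counter)
    for group, outcome in rows:
        by_group[group][outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

# Limited, confounded training data: group B happened to be sampled
# mostly from a poorer neighborhood, so its observed outcomes skew bad.
data = ([("A", "repay")] * 8 + [("A", "default")] * 2
        + [("B", "repay")] * 3 + [("B", "default")] * 7)

model = fit_majority(data)
# model == {"A": "repay", "B": "default"} - the learner "spontaneously"
# concluded something about group B that is really about the sampling.
```

Nothing in the code is biased in the sense of a corrupted mechanism; the unethical conclusion falls straight out of honest pattern-matching on a limited data environment.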

*****

 dedicating-ruckus

Rather than the traditional “we would see everything correctly if not for our bias”, with the clean slate neutral observer being the perfect judge, Zizek sees that biased ideology is our starting point, and we need to gather more information to climb out of our first, spontaneous biases.
Ethically speaking: you need to arrive at universal principles yourself (like that all humans are equal), but just going with the observable data is not going to tell you that.
Okay, but this is… like, false.
If you put someone with no preconceptions or relevant entanglements into a situation and let them come to conclusions, the conclusions more or less by definition cannot be “biased” in the common political sense. The alien comes and sees that many rich people are hard-working, and many poor people are lazy, and they come to the conclusion that wealth is related to how hard you work. And it’s correct.
(And of course it’s more complicated than that, and a full investigation of “what factors lead to wealth” will come out with a whole laundry list of factors including genetics, various character attributes, general social situation and pure contingent luck. But “hard work” will in fact be prominent in this list of factors. The more detailed analysis filled in and refined the naive pattern-matching; it didn’t invalidate it.)
“All humans are equal” is true in a particular moral sense, but “all humans have an equal capacity to acquire wealth” is just sky-is-green obviously wrong.
Instead we get a very predictable partisan argument. People on the right say it’s because all these stereotypes are objectively true, and now AI is validating them. And people on the left say that the original sin of implicit bias has managed to leak over from the programmers to their children. That if the right people had programmed it, the bots wouldn’t be so full of nascent ideology and *isms.
No one just accepts “if you put a pattern-matching mind into a situation with limited data, it will come to unethical conclusions.”
In other words, “coming to the best possible conclusions given the data available to you is unethical”.
So if the disinterested pursuit of truth leads you to conclusions that violate certain previously received and unquestionable moral axioms, you must seek more information and overcome your biases until you can get yourself to conclusions that match with what you previously thought. Never mind if the extra information you seek continues to lead you down paths of wrongthink. Continue investigating until the answer is acceptable!
Also, make sure you walk through bad neighborhoods on your commute home, otherwise it would be racist!

*****

If our neutral alien were to land in early 1800s America and look at black versus white people, they would probably conclude that black people had dramatically less intellectual capacity than white people did. Because they really weren’t as educated by most metrics.

Of course it was because they were denied education (and otherwise punished for being too smart.)

Look at this like a proper Bayesian: your priors are important. You can for instance start with “brains of roughly the same size and structure are probably roughly similar in outputs”, and even though you see a lot of examples of black people being able to read less than white people, you can hold firm to that prior until you’ve untangled the confounding nurture variables.

I would not stick to priors I had reason to believe are wrong. Just my prior of “most-to-all people are capable of complex thought, emotions, and motives” overrides almost every statistical inference I would make if I were a naive frequentist.
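The prior-versus-naive-frequentist point can be sketched numerically. This is a minimal beta-binomial toy with made-up numbers: a strong prior barely moves after a small run of confounded observations, while a flat prior mostly follows the data.

```python
def posterior_mean(prior_a, prior_b, successes, failures):
    """Mean of the Beta(prior_a + successes, prior_b + failures) posterior."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# 20 observations drawn from a confounded environment: 4 "successes".
naive = 4 / 20                           # frequentist estimate: 0.2

# A strong prior that the true rate is around 0.5, i.e. Beta(50, 50),
# barely moves after those same 20 observations.
held = posterior_mean(50, 50, 4, 16)     # 54/120 = 0.45

# A flat prior, Beta(1, 1), mostly follows the data.
credulous = posterior_mean(1, 1, 4, 16)  # 5/22, about 0.23
```

The point is not that the strong prior is always right; it is that the conclusion is set by the prior as much as by the data, so the prior is where the untangling of confounding variables has to happen.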

There’s nothing wrong with saying the current state of some people is such that they commit more violence or can read less. In fact, denying that is the sort of liberal idealism that refuses to face the ugly degradation of poverty, how being low class DOES make you worse off than others. But this is very different from the essentialism of saying “all people of that race are like that, and I don’t have to look at any individual case or check for other reasons they’ve become like that.”

Tuesday, January 16, 2018

Three Types of Economies

We discuss capitalism, communism, and anarchy a lot; these are modes of society, but also references to types of economic exchange. The critique of a society often gets confused with its economy type, even though these are rather separate things. So for the sake of simplicity and clarity, let’s talk JUST about types of economy today.

The thing people forget about the economy - because we get caught up in accusations of selfishness and assumptions of power - is that it is at heart an information problem, or rather, a method of solving this problem. We have some Stuff; it is distributed among different people, with different degrees of difficulty in making or acquiring it, and we would like it to be distributed to the people who need and want it the most, or can make the most productive use of it. Even if we solve every human sin regarding selfishness and power, we will still need to solve the problem of “where does stuff go.”

(Most people are in fact good, and willing to work for the overall good if there’s a solid plan - but just being altruistic isn’t enough to figure out how to feed the hungry.)

There are three different types of economies based on how much information you have. The information is in the form of “what resources do you have? How hard is it to extract them? What do you want? How much of it do you want?” You generally know all of this for yourself, and only some of it for other people. I.e., you know how hungry you are and how much food from McDonald’s you want, but you are less likely to know how hungry your friend is, or what they are in the mood for.

***

Trust Economy - this is the type of economy where you have TOTAL INFORMATION about everyone (relevant to the problem in question). Picture a family who have known each other for a long time, sitting down for a meal with food closer to some than others. They know who likes what dish, who needs more food, and they can see what is already on each other’s plates. They can quickly give each other the food they want, and judge what a fair allotment is, weighted by how important things are to each person there.

We don’t think about this much, as it’s the behavior we do instinctually when the problem is so small we don’t even see an economy is going on. But whenever you are with a group of people who get along, and you’re distributing a limited resource (a turn at the videogame, snacks, conversational space) in a manner that’s trying to be considerate of everyone but is so easy you don’t even think about it -- that’s trust economy.

This is most associated with anarchy. Who needs rulers when people can just help themselves and each other?

Command Economy - This is for situations where a MODERATE amount of information is known by some parties. This means the capabilities of people, the total resources at play, the rough size of demands people need met, etc. But not everything - like “how will widget A fit into device B?” or “who is sick today?” - and just because one person has a lot of information does not mean everyone is informed. Most people’s workplaces - be they corporations or the government (especially the military) - operate like this. There is one central unit who establishes the group goals and directs where resources go and who should be in which departments… and then lets the individual departments figure out their own structure and how they will use those resources, and those departments will usually just give their managers targets, and let the managers command employees to figure out how best to meet those targets.

(Or it’s just when Dad orders for the whole family at the restaurant.)

The advantages of this are great - you can focus everyone on an important goal, and just because one person happens to start with a lot of resources, those can still be driven to where they are needed the most. The disadvantages are obvious - the bottle-neck of any centralization, where if the leader is dumb or selfish or otherwise inefficient, they screw up the entire economy around them (and insulated leaders tend not to be responsive to changing conditions.)

This is associated with communism and war-time economies, though somewhat unfairly, as communist states will often have sub-markets in areas where the government lacks information -- just as capitalist countries always have various command economies within them, inside the corporations that make up the economy.

Market Economy - This is for situations where people share NO INFORMATION with each other, and yet you need to figure out how to disperse stuff. People just announce “I will trade X for Y,” and other people who themselves value Y over X can take that, making both better off, without having to know how important or plentiful X and Y were for any people other than themselves.
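That exchange logic can be sketched in a few lines. This is a toy with hypothetical goods and valuations, where each agent knows only its own preferences, yet a single posted trade leaves both strictly better off.

```python
def utility(holdings, values):
    """Total subjective value of a bundle, by this agent's private valuations."""
    return sum(values[good] * qty for good, qty in holdings.items())

alice_values = {"X": 1, "Y": 3}   # Alice privately values Y more
bob_values   = {"X": 3, "Y": 1}   # Bob privately values X more

alice = {"X": 1, "Y": 0}          # Alice starts holding X
bob   = {"X": 0, "Y": 1}          # Bob starts holding Y

before = (utility(alice, alice_values), utility(bob, bob_values))  # (1, 1)

# Alice announces "I will trade X for Y"; Bob accepts, knowing nothing
# about Alice's valuations.
alice, bob = {"X": 0, "Y": 1}, {"X": 1, "Y": 0}

after = (utility(alice, alice_values), utility(bob, bob_values))   # (3, 3)
```

Neither party ever learns the other's valuations; the announced trade itself carries all the information the system needs.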

This is associated with capitalism, though as said above, you find markets in any type of country, and capitalist countries contain many different types of economies within them. Capitalist ideology just focuses on market economies as the type of ideal to attain.

Market economies are kind of magic: they can make large decentralized systems act incredibly efficiently without any central information processing. Economists are in love with this miracle, and it can indeed be pretty great.

But the magic is that they process all this in the adverse situation of limited information. It’s very impressive that they can do that, but that does not make them always the best. They are a good solution to the problem of lacking information, but if you have information, then they might just be inefficient. Resources might be squandered because the person who has them no longer has any reason to trade them away, etc.

***

The point is that none of these economies is universally better than the others. They are optimized responses to different situations, and should be used as such. Using a market economy when you have enough information for a trust economy creates a dreadful waste of time and effort, as every exchange is negotiated and verified. Using a trust economy when people don’t know anything about each other will just lead to a mismanagement disaster.

So when discussing capitalism, communism, and anarchy (as social models), we understandably make reference to the economy types we associate with them, but each of those ideologies is more about exalting one particular tool as the social ideal, even as its society makes use of all three. We should instead take the detached view of tools: not looking at them as moral imperatives, but as useful responses to different situations.