Monday, December 5, 2022

Sim-Modernism

I've been thinking about the endless war between Modernism and Post-Modernism. It's not even a debate, really, since so much of the discussion is about defining the borders and who is on which side. To restate things:

Modernism, in the James Scott sense, is the belief that we can figure out the rules and principles - behind the universe, life sciences, sociology, morality and ethics - and that our leaders can apply them to create an ordered and just society. The Enlightenment was big on Modernism, and really Marxism was its apex.

Post-Modernism is the post-WW1 understanding that none of these rules ever consistently work, that the world is nigh-impossible to understand, and that any rules we *think* we have about the world are really just stories we tell ourselves - to justify our position in society or to advance one political side over another. It's very meta, often in the Bulverist sense.

(Pre-Modernism just being the regular traditionalist "I do this because my ancestors' ancestors were doing this hundreds of years ago. And because God, speaking through His divinely chosen ruler, tells me to or else He will strike me dead.")

A key thing about both of these philosophical schools is that they can't really be disproven. If a modernist regime fails, obviously it just had bad laws instead of good laws, or the people didn't try hard enough to stick to its principles. And post-modernism never fails; its failure is just a story *you* tell to justify your opposition to it.

(Really you should think of both of these as tools to analyze situations with, sometimes even using both tools on the same situation and asking yourself which is more useful in this moment. But anyway.)

I'm not going to resolve the war between these things today, but just talk about another *modernism I see that is neither of these.

***

Let's talk about World of Warcraft.

It's the extremely big MMO that has had millions of players for well over a decade now. In the beginning, we knew nothing about the undercarriage of how the game worked (what items dropped from what, how good they were, etc.). Over time we figured out some principles, and we got big forums like "Elitist Jerks" where people argued over the best classes, the best ability rotations, the best gear, etc., etc. They referenced guides and came up with general theories and had vibrant conversations. This is clear modernism.

Then almost all the conversations died. People talk very little on the forums now, last I checked, compared to the years of its prime.

What happened? Politics and post-modernism?

No, what happened was Big Data.

Eventually someone built a downloadable tool called simcraft (SimulationCraft). It knew every spell, boss, item, and talent in the game. You could input all your gear, the skills you used, the order you used them in your rotation, the general style of fight, and... hit a button, and it would tell you how much damage per second (DPS) you would do with perfect execution.

This is in some ways a kludge. Few of us have perfect execution, after all. And it doesn't cover utility, tanking, or healing, so those needed other sims with more assumptions. And there were a number of errors anyone could find with its assumptions. But it at least gave a concrete answer to argue over.
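(To give a flavor of what a sim like this is doing under the hood, here's a toy sketch. Every ability, number, and the bare-bones priority loop are invented for illustration - the real tool's engine is vastly more detailed.)

```python
import random

# Toy abilities: (name, damage, cast_time_seconds, cooldown_seconds).
# All numbers are made up for illustration.
ABILITIES = [
    ("Big Nuke",    1200, 2.0, 8.0),   # highest priority when off cooldown
    ("Dot Refresh",  600, 1.5, 6.0),
    ("Filler",       300, 1.5, 0.0),   # no cooldown, so always available
]

def simulate_fight(fight_length=300.0, crit_chance=0.20, crit_multiplier=2.0):
    """Run one fight with a fixed-priority rotation; return total damage dealt."""
    time, total = 0.0, 0.0
    ready = {name: 0.0 for name, *_ in ABILITIES}  # when each ability comes off cooldown
    while time < fight_length:
        # "Perfect execution": always cast the highest-priority ability available.
        for name, dmg, cast, cd in ABILITIES:
            if ready[name] <= time:
                crit = random.random() < crit_chance
                total += dmg * (crit_multiplier if crit else 1.0)
                ready[name] = time + cd
                time += cast
                break
    return total

# Average many runs to smooth out crit randomness, then report DPS.
runs, fight_length = 1000, 300.0
avg_damage = sum(simulate_fight(fight_length) for _ in range(runs)) / runs
print(f"Estimated DPS: {avg_damage / fight_length:.1f}")
```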

Now anytime anyone had a question like "is this talent underrated, because combined with this weapon, on this fight you could...", the only answer would be "sim it." If sims reliably showed your new idea was better, the top raiders would drift to it, and then the way they did things would trickle down to everyone else. No one really had to argue about which class was the best - there were numbers for it.

The "skill" of the world of warcraft community got better, and the discourse of it dimmed. Just sim it.

Now this would just be an anecdote about games, except for the fact that Big Data is entering more and more of our life.

If you have any problem that can be addressed by throwing it at a GIANT FUCKTON OF DATA, now people do that. We often don't know *why* two things are correlated the way they are, but we know they're correlated now.

Our incipient AIs aren't Asimovian entities built on three principles taken to their logical conclusions. They're neural nets trained on a ton of data and reinforced with adjustments to hell and back. They give very good answers (and beat us at board games). We generally don't have any modernist explanation for what they are thinking or what rules they are following.

While manipulating training sets is as old as data science, with big data we are talking about sets too big for naive actors to change a few data points and get the answers they want (plus, part of this mythos is that anyone can run the simulations themselves if they want to). The simulations are still built very much on human error, but they are too large and incomprehensible to be easily hijacked by postmodernists into giving the simple answers they want (or rather, that they claim ideologues want).

This is Sim-Modernism.

We don't just see it in videogame sims and GPT outputs. We see it when someone asks for our favorite route from NYC to Philadelphia and we answer "...I just follow whatever Google tells me to." We see it in the most famous political prognosticator of our era not making a simple political model (like Sam Wang would), but rather building the model with the most inputs he can imaginably throw in, running the simulation 10,000 times, and seeing what the results look like. Nate Silver has some idea why his models favor one party or the other, but he is still often in the dark about what is going on "under the hood." We sort of see it in Tetlock's Superforecaster-ism. And most of all we see it in the algorithms on social media and video sites, trained to get the most "engagement" from audiences, which start throwing up bizarre recommendations that no tech executive would have predicted or even wanted.
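(That run-it-10,000-times approach is, at heart, Monte Carlo simulation. Here's a minimal sketch of the shape of it - the states, vote shares, and error terms are all invented for illustration, and this is nothing like Silver's actual model.)

```python
import random

# Invented toy inputs: (state, electoral_votes, polled_dem_share).
STATES = [("A", 20, 0.52), ("B", 15, 0.49), ("C", 10, 0.51), ("D", 30, 0.47)]
NATIONAL_ERROR_SD = 0.02   # correlated polling error shared by every state
STATE_ERROR_SD = 0.03      # independent per-state noise
TO_WIN = 38                # majority of the 75 toy electoral votes

def one_simulation():
    """Draw one possible election outcome; return Dem electoral votes."""
    national_miss = random.gauss(0, NATIONAL_ERROR_SD)
    dem_ev = 0
    for _state, ev, share in STATES:
        result = share + national_miss + random.gauss(0, STATE_ERROR_SD)
        if result > 0.5:
            dem_ev += ev
    return dem_ev

# Run it 10,000 times and see what the results look like.
wins = sum(one_simulation() >= TO_WIN for _ in range(10_000))
print(f"Dem win probability: {wins / 10_000:.1%}")
```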

A lot of the answers Sim-Modernism gives are pretty good! And even more useful, they are plentiful. Sim-Modernism isn't limited to theorizing about what a good novel is; it can generate a new one in seconds, or hundreds of new novels for you to read, once it gets good enough.

I'm not celebrating this as "WE HAVE THE ANSWER that cuts the Gordian knot of modernism." Sim-Modernism does get more accuracy than either regular Modernism or Post-Modernism - but it's obviously scary in its own way. It means running, or being part of, a system without knowing how it works or where it is really leading you.

(Has anyone else had the experience of driving well out of your way because Google says this path is faster, only to find it eventually requires you to drive through an area you wouldn't have chosen - either because construction means it's really blocked, or because it's a suburb that feels like cheating to treat as a bypass?)

And it will feel sad, in a humanistic way, to live a life that is more efficient but whose underpinnings you don't understand. "Do this because the sim said it is optimal" is not a lot more satisfying than "because your father did, and your father's father did..."

And of course, we will have to deal with whether "computer code that is just regurgitating predictions based on a very large sample of text" is a person or not when it answers questions.

------

Note, this is not the same thing as ontology-thru-markets, be it the Wall Street stock market or the smallest online prediction market. Those are mostly driven by intelligent, optimizing agents interacting with each other and iterating on what will get the best results given other intelligent, optimizing agents, who then iterate based on that. Which is interesting, and profitable, but not really just running models and regressions through a very large load of static data.

Tuesday, August 16, 2022

Modern Virtue Ethics


Watching RRR - a maximalist Indian film about the conflict between utilitarianism and deontic ethics - reminded me that ye olde "virtue ethics" is the red-headed stepchild of online discussions. Sure, maybe you know it's been kicking around since Aristotle (and will henceforth be referred to as Aristotelian ethics, or AE), or that for a long time it was the dominant ethical framework of Europe, but now what does it mean? AE is often framed as "doing something with good intent." Sounds nice, but how can that really compare to "making the best result" (consequentialism) or "doing right instead of wrong" (deontology)? Caring about your intent above all just sounds so self-centered.

So, ignore that. Wipe everything you know about virtue ethics clean, and approach this as a blank slate. Instead, picture this:

“Aristotelian ethics is a marriage between consequentialism and deontology that best approximates how we actually feel about right action.”

What the hell does that mean?

Well, if you are in this discourse sphere, you are familiar with "Terminal Values" - the things you ultimately measure your success by. An easy one is just "maximum number of lives." Or perhaps maximizing knowledge. Or, if you're a libertarian, maybe you say freedom is your terminal value. Or the amount of joy in the world. Other well-known examples are the infamous paperclip maximizer's paperclips, or villains with "blue-and-orange" morality, etc.

Here’s the dirty little secret about terminal values: they are almost always all *instrumental values* too. Knowledge is prized as a good in of itself… but also helps us effect the world so that our lives are better in a number of ways, and that even feeds into us being able to acquire knowledge faster. More people being alive means a larger community advancing our values and creating a thicker network that we exist in. More people alive means we have more choices which means we have more freedom… which is a terminal value too but freedom also allows people to make the best choices for themselves and lead to the best outcomes. 

Being a person who does not lie and who helps others can feel righteous in and of itself… but it also makes it easier for others to trust you and work with you, making your life better and more successful.

So, do we truly value knowledge for its own sake, damn everything else, or do we value knowledge because it adds to the toolset that helps us accomplish overall utilitarian goals?

The answer is: yes.

Aristotelian ethics has an image of the “good life” that includes knowledge, as both something fulfilling on its own AND something that makes us better at accomplishing our goals, including gathering even more knowledge. These simply aren’t separable. 

A good life includes freedom, which makes you feel less trapped on its own, and also helps you attain goals that were stymied when you were trapped.

A good life is healthy, which makes you feel and look better now AND gives you more years of life later. A good life treats others well, both because that is right AND because you get farther in life as a cooperator than as a defector.

It’s like saying “don’t cut your arm because that will cause pain, AND ALSO you use that arm for things.” Your arm - and your health, your freedom, your knowledge, your honor - is part of an organic whole. It IS YOU and it helps you accomplish things.

(Obviously this "organic whole" can scale from your one life to the entire picture of society - maximizing lives, joy, knowledge, and freedom for everyone in the society as it grows.)

There are things that make you feel good but destroy your abilities - like heroin. AE says don't do that. There are things that have better consequences but narrow who you are - like taking a soulless finance job just so you can live on instant ramen and donate all of your income to malaria nets. AE hates that. These things are not "the good life" and do not "broaden the organic whole." There are indeed values that are not part of the Aristotelian virtues.

You can dislike this approach, but you can't really argue against it, because it is a postulate in and of itself. It can't fail consequentialist measures because it isn't trying to meet them. It's very hard to argue about a first ethical principle, after all.

The reason AE has appeal is that it is intuitively how so many people act and feel. Most people, most of the time, do not want to entirely ignore consequences just to Do the Right Thing. But most people also do not want to ignore and torture their conscience for years just because it will lead to a marginally better long-term outcome. We want to… flourish. We want to be good in many ways - health, morality, intellect, happiness - and we would rather all the various societal numbers go up than down.

So, when the "virtue ethics" response is "what would a good person do in this situation," it does not mean "good intentions are what matter" but "what adds to my flourishing and society's flourishing, on the many axes that matter to us?"



***

Hat tips to https://an-irrelevant-truth.tumblr.com/ for the tumblr image, and https://twitter.com/peligrietzer for discussing the idea generally.