Monday, February 13, 2017

Question 2 for Humanism Blog

My favorite thing about the ITT is the way people answer question 2: “What is the true reason, deep down, that you believe what you believe? What piece of evidence, test, or line of reasoning would convince you that you’re wrong about your ideology?”
I’m certain every single respondent has thought deeply about what they believe. They’ve seen studies that back them up, thought about ethical principles, and seen the effects of oppression firsthand. But when asked “Why do you really believe this? What swayed you so much that it would change your mind if it were contradicted?”… they often dissolve into vagueness and “everything shows I’m right!” Everything, of course, can never be disproven.
It’s a fascinating insight into how ideology works. Ideology isn’t formed by realizing our terminal values or by reading a study; it’s a much more osmotic experience than that. It involves quasi-believing things because so many other people we know believe them, and not questioning them *too* much because doing so is uncomfortable (both socially and to our own identity as a good person). Like Ra, ideology hates it when you try to pin down terms and reasons too precisely.
So. Let’s do that. Here is my challenge to any rebloggers: What is the true reason, deep down, that you believe what you believe? What evidence could convince you that you were wrong?
I’ll start.

This tumblr is arguing the humanist viewpoint, so I’ll focus on why I’m a humanist, and what could sway me into other philosophies (specifically: parochial tribalism, anti-human universalism, or rights-based liberalism and materialism).
My terminal value is not special. It’s basically happy people, with an emphasis on complex and interesting lives and societies. My own personal goal is to find a button that increases human happiness no matter how much you push it, and to keep pushing it until it breaks.
It turns out that most of the things we think increase human happiness, such as better living conditions or more money, don’t really. And even our attempts to build up economies so that people have more stuff are horribly complicated and unpredictable. I am distinctly unimpressed with a lot of the rationalist projects in this regard, and I suspect they will spend decades trying to improve the happiness of others with material interventions and rarely feel they have had much success. There will still be misery everywhere, even after billions are spent. (If rationalist interventions started making a measurable and sizable impact on the amount of misery in the world, that would be evidence to change my view.) (Yes, I saw Scott’s chart about malaria interventions. I approve of malaria interventions. And the euphoria in the comments only emphasized to me how many rationalists are insecure about whether this project of theirs is having any results.)
Additionally, a lot of the rules we set down about how we should treat each other are supposed to increase human happiness, but mostly make humans miserable as they fight over the rules, which are then enforced haphazardly, with some people receiving the extreme brunt of enforcement and others afraid there isn’t enough enforcement. This is why I am skeptical of rights-based liberalism, and will continue to be until it is shown to be a better social technology than primitive tribalism.
The button that does work, that in my experience does make people reliably feel better, is listening to them and one-on-one interaction. Humans are social animals, and each human has a unique individual experience. Respecting that individual complexity, and giving people social validation, seems the most reliable way to increase happiness, even if only on a very small scale.
If listening and validation are shown to be net-negative for happiness in the long run (if, for instance, they operate like a drug that gives you a high you then grow tolerant to), then I would be skeptical of that button.
If there is no button that can reliably increase human happiness, well, that would say a great deal about the chaotic nature of the human condition, which fundamentally validates my anti-categorical humanism.
But basically… if any button on the human psyche is shown to have reliable results - peer-reviewed and consistently replicated - for affecting people and making them happy, I would throw my philosophy out the window and pursue that. My current stance is the result of failing to find anything like that.
Now, humanism might just be speciesist, and it’s possible I don’t give enough credit to non-humans and dehumanized subjects. By appreciating complexity, I may be favoring people who’ve had interesting lives over people who have been so beaten down by the system that they will always be boring to me. This is a real risk, and it’s why I dabble in universalism elsewhere. But for now, my interactions have shown that no matter how degraded someone has been by society, they’re still as intelligent and social as the richest person I’ve met once you just listen to them for 10 minutes. If this were shown statistically not to be the case, I’d feel guilty about the inherent elitism of humanism, and I’d focus more on a philosophy that tries to exalt the most degraded and inhuman subjects.
Similarly for species: there seems to be a large gap in cognitive quality between humans and any other creature. If some species existed that was just somewhat less intelligent than humans but still identifiable as having a subjective experience in there, I’d have to look into a much more gradient-focused definition of sentience and moral agency.
