or, Forget “the most good”. Can we do any good?

[High confidence] We cannot tell if any human action is overall good (“net positive”) or overall bad (“net negative”) in terms of moral value.

  • Any action has short-term and long-term flow-through effects. Some of these effects will have moral relevance (i.e. they will be good or bad).
  • These effects will be infinite or near-infinite in number. I can see this through a thought experiment in which I reflect on a single action and imagine how it could flow through to affect other people, animals, invertebrates, and so on, both now and in the distant future. There is no way we could ever track all of these.
  • The moral direction of many of these flow-through effects (i.e. whether they are good or bad) will be contingent on the historical context. History, in my view, is intractably contingent and unpredictable. So, even if we could track many of the flow-through effects of any action, we would have no way of telling which effects are good and which are bad. The more I learn (both through study and through working in animal advocacy), the clearer this becomes to me.
  • Therefore, we are completely clueless about the net moral effects of our actions.

[High confidence] Under some rare and limited circumstances, we can confidently gain insight into some effects of some human actions.

  • The only way to achieve this is to develop a theory of change and then validate it through empirical falsification. Otherwise, we are simply making hypotheses without confronting reality, and such hypotheses are frequently or usually false.
  • We can also use such methods to understand a few of the main knock-on effects of any intervention, but surely not all of them.
  • (I acknowledge that other people will have different standards of evidence from mine. I will defend my standard of evidence until I am blue in the face, because I can see very clearly how easy it is for us to delude ourselves with our ideas unless we confront them with reality.)

[High confidence] We have little or no idea about multiple areas of moral philosophy that should really be fundamental.

  • We have no idea what the “correct” moral theory is. There are many competing theories, each of which seems plausible but many of which are mutually exclusive. There is no consensus. Worldview diversification can help overcome this to some extent.
  • We have little or no idea what consciousness is or how it is distributed. There are many competing theories, each of which seems plausible but many of which are mutually exclusive. There is no consensus. Empirical developments could easily reverse any idea we have about consciousness - maybe only humans are conscious. Maybe all inanimate matter is conscious. We simply have no idea.
  • In many practical circumstances, we have little or no idea whether existence is good or bad, whether non-existence is good or bad, whether life is good or bad, or whether death is good or bad.
  • This is very similar to the view of Kat Woods in their article here.

[Medium-low confidence / implication of these views] Under this view, we can’t have much faith in the idea that longtermist interventions will work.

  • We have no validated method for affecting the long-term future. All it takes is a cursory glance at history to see how quickly different civilisations, moral systems, religions, cultures, and even species rise and fall. There has been no point in history when anybody has taken an action that resulted in a specific, intended, reliable and predictable effect that materialised or lasted more than, say, a couple of centuries after the action. (Some of my colleagues disagree.)
    • The only exceptions I can think of involve building large, physical structures (e.g. the pyramids of Giza, which have so far survived for several thousand years) or acts that permanently destroy particular entities that can only persist through self-replication (e.g. some acts of genocide destroying cultural groups, or perhaps driving some animal or plant populations to extinction). Neither of these provides any insight or value for longtermist interventions or policy.
  • Today, our knowledge is even more limited due to the possibility of society passing through a singularity in the next few years or decades.
  • As above, we can attain knowledge about some effects of some actions by developing and empirically validating a theory of change. However, such knowledge remains valuable and relevant only for as long as the empirical context doesn’t meaningfully change. If society changes drastically, a theory of change developed before the change will no longer apply afterwards.
  • To give one example (there are countless examples from history), the Second World War in Europe and its aftermath profoundly affected the very fabric of European society. This is argued in outstanding detail by Tony Judt in the book Postwar: A History of Europe Since 1945, especially the first couple of chapters in that book.

[Medium-low confidence / implication of these views] Under this view, we can’t have much faith in the idea that moral circle expansion will work.

  • Simply put, the same criticisms above (longtermism) would apply equally to moral circle expansion.
  • This bit is more of a tangent, but I’m not really convinced of the existence of a moral circle or of the hypothesis that it has expanded over time.
    • Technology is advancing over time, because the technology and ideas developed by one generation are still available to the next generation. So, as people are born and people die, the development of technology continues unbroken.
    • [Medium-low confidence] But morality works differently. Every newborn human needs to figure out how the world works for themselves. They can be guided by previous generations’ moral views, but they cannot access them directly (as they can with technology). So, even if moral ideas develop over time (e.g. in the philosophical literature), this is not the same thing as morality itself advancing over time.
    • [Medium-low confidence] My competing hypothesis is that people have wider moral circles now because a) laws are good, and b) people are materially wealthy. I am willing to bet that if you take away either of these, the moral circle will suddenly shrink. Some historians take this view too, and the hypothesis has been explored (and challenged) in detail in the historical literature. (For a rigorous commentary on society, go read some actual peer-reviewed review articles - the only thing I can offer is half-baked takes by a non-expert!)
    • Another way of saying this - if you take a person from society A at birth and put them into society B (assuming they have no knowledge of this), their moral values will be closer to those of society B than society A. This would be true regardless of whether A and B are separated by space (e.g. a wealthy Australian family or a poverty-stricken region in Afghanistan) or by time (e.g. the present day or 10,000 BCE).
    • [Low confidence] Perhaps an analogy for moral circles that is more appropriate than that of technology is that of language. Language is passed from generation to generation, but each human infant - while helped by their parents and peers - must learn it anew. Language shifts over time, sometimes in predictable ways, but not in any inherently “good” or “bad” way. Furthermore, the complexity of language is limited by biology, so it can’t even remain on a single trajectory (e.g. becoming more advanced) over time.
    • Furthermore, I actually think people do not treat others (or themselves) well to begin with. The more I reflect and meditate on human behaviour, the more I can see the violence and suffering that people commonly inflict on other beings (and themselves). I can see no reason to suspect that such violence or suffering is less than in the past. (George says there is a lot of debate about this question, not great evidence either way.)
    • And lastly, there are many examples (given by other authors elsewhere) where particular entities - even those who do seem to warrant moral consideration - seem to be treated worse today than they have been in the past.

[High confidence] “Doing the most good” does not necessarily mean being involved in the EA community.

  • My favourite view of EA is the one that was posted on the forum recently - that EA consists of three ideas: 1) scout mindset, 2) radical compassion, and 3) scope sensitivity.
  • All three of these are really cool, and I’ll take them to my grave. But I’m not aware of any strong evidence suggesting that the best way to cultivate these ideas is to participate in the EA community.
  • Being part of the community surely has value in some circumstances - it can connect you with like-minded folks, expose you to new ways of looking at the world, provide emotional validation for your beliefs, and lead to fruitful collaborations. I think these aspects of the community are beautiful and admirable, and I’ve been fortunate to benefit in all of these ways.
  • But being part of the community also has downsides in other circumstances - perhaps it exposes you to ideas that turn out to be false, or day-to-day community activity distracts you from actually working on impactful projects.
  • Moreover, many EAs are enthusiastic, compassionate, and hard-working individuals, but are nevertheless not always experts in the fields in which they are aiming to have an impact. I suspect [medium-low confidence] that this can increase the number of false ideas in the community.
  • Taking all of this together, being part of the EA community is not necessarily the same thing as actually having impact. Perhaps a good analogy might be with university study. To help people recover from disease, it may help to go to medical school. But once you’ve done so, you don’t necessarily need to hang around medical school anymore (though some small participation may be beneficial). And there are other ways to help people recover from disease beyond going to medical school in the first place.

So, how can we do any good?

  • If I’m right that we are completely clueless about the net effects of our actions, then those net effects cannot form the basis for any theory of morality. In short, we cannot tell whether anything we do is “net positive”, so optimising for net positivity is futile. My views about the world are probably consistent with moral nihilism, which would suggest that we might as well not try to help others at all. However, I simply do not want to live that way.
  • We can do some good for somebody, even if we have no idea about the net effects of such an action. Whether this is strictly good or not (in terms of consequentialism) could be disputed, as in the above point about moral nihilism.
  • So, in the context of doing good, I only want to choose interventions supported by a robust, validated, and falsifiable theory of change. There may be a few different interventions that meet that criterion, in which case we can choose the one with the highest expected/measured impact (a minimal sketch of this decision rule follows this list).
  • In other words, we cannot do “the most good”, and we cannot even do any good, if you take “net positive” as your measure of good. But you can still help small, defenceless animals being kept in cages to suffer a little bit less.
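
To make the decision rule above concrete, here is a minimal sketch in Python. The Intervention record, its toc_validated flag, its measured_impact score, and all the numbers are my own hypothetical assumptions, purely for illustration - they are not estimates of anything real.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Intervention:
    name: str
    toc_validated: bool     # has the theory of change survived attempts at empirical falsification?
    measured_impact: float  # measured impact per unit of resources (units are arbitrary here)


def choose_intervention(candidates: list[Intervention]) -> Intervention | None:
    """Keep only interventions whose theory of change has been validated,
    then pick the one with the highest measured impact."""
    eligible = [c for c in candidates if c.toc_validated]
    if not eligible:
        return None  # nothing clears the evidential bar, so choose nothing
    return max(eligible, key=lambda c: c.measured_impact)


# Hypothetical example: the unvalidated option is excluded even though
# its claimed impact is the largest.
candidates = [
    Intervention("corporate cage-free campaign", toc_validated=True, measured_impact=12.0),
    Intervention("speculative longtermist policy", toc_validated=False, measured_impact=50.0),
    Intervention("humane slaughter reform", toc_validated=True, measured_impact=8.5),
]
print(choose_intervention(candidates))  # -> the cage-free campaign
```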

Why I’m not publishing this widely

  • Publishing these ideas publicly would necessitate developing and defending these ideas more rigorously, and then engaging with comments at least to some extent. This would take a large amount of energy.
  • I am not convinced that doing so would change people’s behaviour in a “positive” direction (if my view of “positive” is actually correct).
  • Therefore, I would rather spend my energy actually working on a project that has some positive value for the world - i.e. beginning the prioritisation for a new charity or project that I can then launch.

Misc notes on wild animal welfare [less developed - just writing down points as I think of them, haven’t reflected on these too much yet]

  • I think the Tomasik-style anti-natalist wild-animal interventions (e.g. gravelling lawns) are actually pretty similar philosophically to on-farm welfare reforms (e.g. cage-free).
  • In the Tomasik-style interventions, you’re killing animals and also stopping more animals from coming into existence (who are, mostly, not replaced by other animals).
  • In the on-farm welfare reform interventions, you’re stopping one cohort of animals from coming into existence and replacing them with a different cohort (this can be shown with the same type of thought experiment Singer used to show that all actions result in a different set of people coming into existence - from memory, he applied it to climate policy in Practical Ethics). The new cohort is farmed under slightly less bad conditions. But there is still animal killing happening (you’re still letting farmers kill the animals who are there right now).
  • The main difference is that in Tomasik interventions, there is no new cohort - but in the on-farm one there is.
  • So the main questions are: morally, how do we trade off pain and pleasure for beings who do not yet exist? And empirically, what is the volume of pain versus the volume of pleasure experienced by wild insects (or whichever animals are affected)?
  • If we accept the proposition that the farming industry is net bad (which I do; not everyone would), then, I think, we are forced into accepting that whether existing as a wild animal is bad is an entirely empirical question (the toy model after these notes makes this concrete).
  • (It’d be hard to defend the proposition that we have a responsibility to farmed animals but not wild animals.)
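
To make the comparison in these notes concrete, here is a toy welfare model in Python. It is purely illustrative - the functions and numbers are my own assumptions, not estimates - but it shows the structural point: the sign of the Tomasik-style intervention flips with the (empirically unknown) sign of wild-animal net welfare, while the on-farm reform is positive whenever the new cohort’s conditions are less bad, regardless of whether either life is net positive.

```python
def cohort_welfare(n_animals: float, net_welfare_per_animal: float) -> float:
    """Total welfare of a cohort: number of animals times net welfare
    (pleasure minus pain) per animal. Purely a toy model."""
    return n_animals * net_welfare_per_animal


def tomasik_effect(n: float, w_wild: float) -> float:
    """Tomasik-style habitat reduction: a cohort of wild animals never
    exists, so the effect is minus the welfare that cohort would have had.
    Its sign flips with the (empirically unknown) sign of w_wild."""
    return -cohort_welfare(n, w_wild)


def reform_effect(n: float, w_caged: float, w_cage_free: float) -> float:
    """On-farm reform: one cohort (caged) is replaced by another
    (cage-free). The effect is the welfare difference between the two
    cohorts, which is positive whenever the new conditions are less bad,
    even if both lives are net negative."""
    return cohort_welfare(n, w_cage_free) - cohort_welfare(n, w_caged)


# Illustrative numbers only (not estimates of real welfare):
print(tomasik_effect(1000, w_wild=-0.2))  # +200 if wild lives are net negative
print(tomasik_effect(1000, w_wild=+0.2))  # -200 if wild lives are net positive
print(reform_effect(1000, w_caged=-1.0, w_cage_free=-0.7))  # +300 either way
```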