These are some notes I took in early June 2024 on the topic of deep uncertainty and complex cluelessness.

Problem statement

The problem seems to be this: if I take an action, that action may have countless consequences of various magnitudes, good and bad. So, it’s very difficult to say whether the overall impact of any action is good or bad. This means that if we’re optimising for, say, reducing suffering - how do we actually go about doing that?

Possible solutions

0. My meta-solution

Below are 8 possible solutions that I have encountered to the problem of deep uncertainty. I find some of these convincing (1, 3, 4, 5, 6, and 8), and others unconvincing (2 and 7).

The ones that I find convincing seem to converge somewhat. Generally speaking, those solutions (1, 3, 4, 5, 6, and 8) share these recommendations, at least to some extent:

  • We should continue to act morally, under our best guess of which actions that entails (and neartermist views and animal welfare interventions actually turn out to look like pretty strong guesses)
  • It might be that the best we can do is to avoid actions that suck and choose arbitrarily from a set of actions that seem good or okay (maximal rather than optimal outcomes). One way to make this arbitrary choice is personal fit/volition/the vibes.
  • We might want to take indirect effects more seriously, i.e. devote more attention to estimating those indirect effects or collecting evidence on them, and perhaps avoid actions that seem to have convincingly negative indirect effects
  • We might want to hedge by taking a few different actions as part of a portfolio (as described in solution #8)
  • Conducting research on the problem of deep uncertainty (as its own intervention) might be an ok bet

Here’s how I plan to act by adopting those recommendations in practice:

  • I’ll continue to work on reducing the suffering of farmed animals
  • I’ll favour interventions that are plausibly part of the maximal set, i.e. interventions that plausibly do not suck, which could simply be selecting interventions that have been endorsed by CE or ACE or whomever
  • I’ll favour direct work rather than meta work, like Animal Ask, so I can more easily examine the indirect effects (as the indirect effects would be constant over time, rather than changing every time we switch specific projects within a meta intervention)
  • When (or even before) I begin work on an intervention, I’ll make sure to set aside time (ideally as part of the project itself but otherwise separate to the project in my own time) to think through the indirect effects in detail
  • If I identify some risks (which I probably will), then I’ll add an intervention that balances that risk to my portfolio (then I’ll repeat the process of taking the time to look for risks…)
  • I’ll probably put a bit more weight on personal wellbeing, as long as I’m working on interventions that seem to be part of that maximal set, rather than worrying too much about directly comparing cost-effectiveness estimates
  • I’ll probably avoid longtermism

1. Simply continue to act morally, under our best guess

  • Rozendal 2018: “The potential existence of omnipresent cluelessness should not keep one from acting morally. Maybe we are not actually clueless and should just follow common sense. But if we are clueless, it is better to attempt to act morally, so that we can evaluate whether our strategies have worked once we have the tools necessary for good evaluation.”
  • Seems like a decent bet to me - if we’re wrong about being clueless (which seems plausible to me), then continuing to do our best to act morally is clearly great. If we’re right about being clueless (which also seems plausible), there are some reasons to think that doing our best to act morally is still at least somewhat useful.

2. Giving up on altruism (or: resort to inaction, or: in defense of sex, drugs, and rock & roll)

  • This is flagged by Greaves (2019). I think I see this as a more valid solution than Greaves does.
  • It’s relatively straightforward to optimise for your own happiness, or even for a reduction in short-term animal suffering. Restricting our attention to your personal happiness: beyond some empirical uncertainty, there isn’t really a problem of complex cluelessness here - you can draw a very precise border, in space and time, around the thing you’re trying to optimise for.
  • Of course, it’s tough to justify only caring about your own happiness (or only short-term moral value). So it is possible to optimise for those things, but under deep uncertainty, it seems akin to burying your head in the sand - it might make you feel better and even be actionable, but you’re still having major impacts (+ and -) that you’re ignoring. Still, if we are truly in a state of deep and utter cluelessness, that’s kinda like saying “optimising for anything outside of ourselves is impossible”, so you might as well optimise for your own happiness.
  • Furthermore, for me personally, this option is simply not open to me. If I stopped acting to help others, I would not be happy (because of how my personality is etc), and I would thus render myself incapable of optimising for my own happiness. Even if I chose this route, the logical consequence would be that I am forced to pretend that I didn’t and to carry on as usual.

3. The “ripples” or “cancelling out” hypotheses

  • Greaves 2016: we have no reason to expect that the effects will become smaller over time (like ripples on a pond), and the opposite seems to be more likely
  • Greaves 2016: under “complex cluelessness”, we have no reason to expect that good effects and bad effects would simply cancel out, and the opposite seems to be more likely
  • But Shiller 2021 argues for a different form of “dissipation”: “(1) Very large sets of samples randomly drawn from a single population are very likely to have distributions of properties that very closely resemble those of the whole population. (2) The sets of futures left open by routine identity-affecting acts have distributions of socio-axiological properties that are as they would be if they were randomly drawn from the set of all previously open futures. (They depart from representativeness to the same degree and with the same frequency as randomly drawn samples do.) (3) If chancy events are ubiquitous, then, for any given act, the number of futures that it leaves open is very large. So, (4) Routine identity-affecting acts are likely to leave open sets of futures with distributions of socio-axiological properties that very closely resemble the population of all previously open futures. (5) Routine identity-affecting acts only produce very small probability shifts on socio-axiological partitions. (6) Routine identity-affecting acts typically have consequences that quickly dissipate and are thus not massive.”
  • Under this latter form of dissipation, which strikes me as possible, we might actually not be clueless after all.
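Shiller’s first premise is just a statistical fact about large random samples, and it can be checked directly. The sketch below (all numbers are hypothetical, purely for illustration) draws a large sample from a simulated “population of futures” and confirms that its distribution closely tracks the population’s:

```python
import random
import statistics

random.seed(0)

# A simulated "population" of possible futures, each with some (made-up) value
population = [random.gauss(0, 1) for _ in range(100_000)]

# A large random sample drawn from that population...
sample = random.sample(population, 10_000)

# ...has a distribution that very closely resembles the whole population's
pop_mean, samp_mean = statistics.mean(population), statistics.mean(sample)
pop_sd, samp_sd = statistics.stdev(population), statistics.stdev(sample)

print(abs(pop_mean - samp_mean) < 0.05, abs(pop_sd - samp_sd) < 0.05)
```

This only illustrates premise (1); the contentious premises about which futures acts “leave open” obviously can’t be simulated this way.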

4. Conduct additional research on cluelessness

  • Rozendal 2018: “If we are clueless in every decision situation, it appears the only thing with any expected value is more research. It would be worthwhile to find out, amongst others, how the expected value of possible total consequences (𝐸𝑉(𝐶)) is distributed. Is it a Gaussian or a Pareto distribution, and with which properties (e.g. mean, variance, and for Pareto: which alpha)? Answering this question would affect our decision procedure strongly. Another worthwhile research direction is reducing our moral uncertainty (specifically axiological) and improving our methods to deal with moral uncertainty. However, I cannot rule out the possibility that we are clueless about whether this research brings about good consequences. Maybe we really are clueless even about whether trying to resolve complex cluelessness brings about good consequences.”
  • It seems reasonable to me that this is an appropriate response to cluelessness. Still, I’m not sure how to weigh this up against “help fish” or whatever, since I’m not sure how likely it is that we are clueless and what the relative strengths of these competing approaches are under different views of cluelessness.
    • If we’re not clueless at all, clearly helping fish is better.
    • If we’re completely clueless, I’ve no idea which is better.
    • If we’re a little bit but not completely clueless, doing more research on cluelessness might be the best thing, but so might helping fish (following the logic in solution #1 above). Indeed, solution #1 gives a reason to think that helping fish is itself a way to do research on cluelessness.

5. Maximisation, rather than optimisation

  • Herlitz 2019 and Mogensen 2019: we might not know how to do the most good. However, if we have some set of possible actions, we know that some of those actions clearly suck more than other specific actions in that set. We can partition the set into “non-sucky” actions and “sucky” actions, and then choose one of the actions in the non-sucky set using some other criterion (and Herlitz in fact argues for personal volition/vibes as this criterion).
  • I suppose the strength of this solution hinges on the ability to find pairs of actions where one action clearly sucks more than the other action in the pair (e.g. following Herlitz, two charities working in the same city, one of which works against drug use and the other of which raises money for an already rich football club).
  • I’m not sufficiently convinced that it is indeed possible to find these pairs of actions - my own view of causality is that every action has complex cluelessness. Though I guess the onus would be on me to show that donating to the football charity rather than the drug charity causes an instance of complex cluelessness (in the Greaves sense) rather than simple cluelessness.
  • Overall, I find this solution quite elegant and partially convincing, if not 100% convincing.

6. Increase expected welfare using the “third best” idea

  • Ng 2020: “Even if neither the achievement of the overall first best (eliminating departures from welfare maximization in all areas) nor the satisfaction of the second best (taking account of all interrelationships and indirect effects) are possible due to imperfect information and administrative costs, such that a definite improvement may not be ensured (second-best impossibility of piece-meal welfare policies), the government and/or altruists may still do something to increase expected welfare by focusing on areas of major inadequacy from full optimization (like poverty, environmental protection, and animal welfare), taking into account both direct and indirect effects where we have the relevant information, or can obtain without prohibitive costs.”
  • From my perspective, this is basically the same thing as saying “continue to do your best to act altruistically” (see response #1), but perhaps taking indirect effects much more seriously.

7. Thorstad and Mogensen’s heuristics for clueless agents

  • Thorstad and Mogensen 2020: “Clueless agents have access to a variety of heuristic decision-making procedures which are often rational responses to the decision problems that they face. By simplifying or even ignoring information about potential long-term impacts, heuristics produce effective decisions without demanding too much of ordinary decision-makers.”
  • I don’t buy this argument. I’m not sufficiently convinced by the argument used to support these heuristics.

8. MSJ’s portfolio solution

  • Basically, do stuff you’re more confident about, and try to compensate for backfire risks with other interventions associated with those risks. So for animal welfare, you might do some vertebrate welfare stuff, some invertebrate welfare stuff, and some longtermist s-risk stuff.
  • And you can do this as an individual, rather than as a community (mainly due to difficulty with coordination and cooperation).
  • Michael St Jules, in a slack message to me: “I’m still into hedging like I describe in my post, which practically means looking for things that you’re more confident don’t backfire in ways bigger than their upsides you’re more confident in, and/or trying to compensate for potential backfire risks with other interventions.
  • “So, since vertebrate welfare and diet change can do more harm to invertebrates than good for vertebrates, you’d want to make sure your portfolio has a high enough ratio of invertebrate to vertebrate stuff so that it’s net good for invertebrates in expectation under deep uncertainty. (Or you could focus just on invertebrates.) But you also need to be careful within vertebrate stuff (e.g. what if cage-free is worse?) and within invertebrate stuff. You might hedge within each, too.”
  • “And then if animal welfare can backfire in the long run, you might want to make sure to have enough on longtermist interventions with enough expected value to compensate, and for me that’s s-risk stuff (mostly CLR, who seem most concerned with backfire risks). (Or you could just only do longtermist stuff, but that feels too Pascalian to me.)”
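MSJ’s hedging logic can be sketched as a robustness check: a portfolio qualifies only if its net impact is non-negative under every worldview you can’t rule out. All the numbers below are made up purely for illustration:

```python
# Toy numbers only: a sketch of portfolio hedging, checking that a portfolio's
# net impact is non-negative under every worldview we can't rule out.
impacts = {
    # worldview -> impact per unit of effort, for each intervention type
    "vertebrate work helps, no invertebrate spillover": {"vert": 3.0, "invert": 2.0},
    "vertebrate work backfires on invertebrates":       {"vert": -4.0, "invert": 2.0},
}

def robustly_ok(allocation):
    """True iff the portfolio's net impact is >= 0 under every worldview."""
    return all(
        sum(allocation[i] * impact[i] for i in allocation) >= 0
        for impact in impacts.values()
    )

print(robustly_ok({"vert": 1.0, "invert": 1.0}))  # False: the backfire scenario dominates
print(robustly_ok({"vert": 1.0, "invert": 2.0}))  # True: enough invertebrate work to hedge
```

The hard part in practice is, of course, not the arithmetic but choosing which worldviews belong in the set and assigning the impact numbers.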

Noting sources

Cluelessness (EA Forum topic)

  • https://forum.effectivealtruism.org/topics/cluelessness
  • Simple versus complex cluelessness
  • By evidential symmetry between two actions it is meant that, though massive value or disvalue could come from a given action, these effects could equally easily, and in precisely analogous ways, result from the relevant alternative actions. In the previous scenario, it was assumed that each of the possible people that will be born are as likely as each other to be the next Norman Borlaug. And each of the possible people are as likely as each other to be the next Joseph Stalin.
  • So this situation is not problematic; the possible effects, though they are huge, cancel out precisely in an expected value estimate.
  • Cluelessness is problematic, however, in situations where there is no evidential symmetry. For a pair of actions (act one and act two), complex cluelessness obtains when:
    • There are reasons to think that the effects of act one would systematically tend to be substantially better than those of act two;
    • There are reasons to think that the effects of act two would systematically tend to be substantially better than those of act one;
    • It is unclear how to weigh up these reasons against one another.
  • For example, there are some reasons to think that the long-term effects of a marginally higher economic growth rate would be good—for example, via driving more patient and pro-social attitudes. This would mean that taking action to increase economic growth could have much better effects than not taking the action. We have some reasons to think that the long-term effects of a marginally higher economic growth rate would be bad—for example, via increased carbon emissions leading to climate change. This would mean that not taking the action that increases economic growth could be a much better idea. It is not immediately obvious that one of these is better than the other, but we also cannot say they have equal expected value. That would need either evidential symmetry, or a very detailed expected value estimate.
  • Some authors claim that complex cluelessness implies that we should be very skeptical of interventions whose claim to cost-effectiveness is through their direct, proximate effects. As Benjamin Todd and others have argued, the long-term effects of these actions probably dominate.[2] But we do not know what the long-term effects of many interventions are or just how good or bad they will be.
  • Actions we take today have indirect long-term effects, and they seem to dominate over the direct near-term effects. In the absence of evidential symmetry, these long-term effects cannot be ignored. So it seems that those concerned about future generations have to justify interventions via their long-term effects, rather than their proximate ones.
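Evidential symmetry has a crisp expected-value interpretation: if the huge good and bad unforeseeable effects are equally likely under either act, they add the same term to both expected values and drop out of the comparison exactly. A toy calculation with made-up numbers:

```python
# Made-up numbers illustrating why evidential symmetry makes huge unforeseeable
# effects cancel exactly in an expected value comparison.
foreseeable_a1, foreseeable_a2 = 10.0, 4.0  # foreseeable value of each act

# One pair of unforeseeable outcomes: E1 worth +1000, E2 worth -1000.
# Evidential symmetry: each act leads to E1 or E2 with equal credence.
p = 0.5
unforeseeable = p * 1000.0 + (1 - p) * (-1000.0)  # identical term for both acts

ev_a1 = foreseeable_a1 + unforeseeable
ev_a2 = foreseeable_a2 + unforeseeable

print(ev_a1 - ev_a2)  # 6.0: only the foreseeable difference survives
```

Complex cluelessness is precisely the situation where no such symmetry holds, so there is no single defensible value of `p` to plug in.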

Greaves 2020, Evidence, cluelessness, and the long term (conference talk)

  • https://forum.effectivealtruism.org/posts/LdZcit8zX89rofZf3/evidence-cluelessness-and-the-long-term-hilary-greaves
  • OK, so there are all these unmeasured effects not included in our simple cost-effectiveness analysis. I want to make three observations about those unmeasured effects. Firstly, I’ll claim here (and I’ll say more about it in a minute), I claim that the unmeasured effects are almost certainly greater in aggregate than the measured effects. And I don’t just mean ex post this is likely to be the case; I mean that, according to reasonable credences even in terms of expected value, the unmeasured effects are likely to dominate the calculation, if you’re trying to calculate (even in expected terms) all of the effects of your intervention.
  • The second observation is that these further future (causally downstream or otherwise) events are much harder to estimate. In fact, they’re really hard to estimate; they’re much harder to estimate, anyway, than the near-term effects. That’s because, for example, you can’t do a randomised controlled trial to ascertain what the effect of your intervention is going to be in 100 years. You don’t have that long to wait.
  • The third observation is that even these further future and relatively unforeseeable effects, in principle, matter from an altruistic point of view, just as much as the near-term effects.
  • What do we get when we put all those three observations together? Well, what I get is a deep seated worry about the extent to which it really makes sense to be guided by cost-effectiveness analyses of the kinds that are provided by meta-charities like GiveWell. If what we have is a cost-effectiveness analysis that focuses on a tiny part of the thing we care about, and if we basically know that the real calculation - the one we actually care about - is going to be swamped by this further future stuff that hasn’t been included in the cost-effectiveness analysis; how confident should we be really that the cost-effectiveness analysis we’ve got is any decent guide at all to how we should be spending our money? That’s the worry that I call ‘cluelessness’. We might feel clueless about how to spend money even after reading GiveWell’s website.
  • Response one: Make the analysis more sophisticated
    • Okay, so the take home point from this slide is: sure, you can try and make your cost-effectiveness analysis more sophisticated and that’s a good thing to do - I very much applaud it - but, it’s not going to solve the problem I’m worrying about at the moment. So, that’s the response I want to set aside. Let me tell you about the other four.
  • Response two: Give up the effective altruist enterprise
    • My own tentative view, and certainly my hope, is that this isn’t the right response. But for the rest of the talk, I’ll set that aside.
  • Response three: Make bolder estimates
    • What other responses might there be? The third response is to make bolder estimates. This picks up on the thread left hanging by that first response. The first response was: make the cost-effectiveness analysis a little bit more sophisticated. In this third response - making bolder estimates - the idea is: let’s do the uber-analysis that really includes everything we care about down to the end of time.
    • Well, I think there are probably some people in the effective altruist community who are comfortable with doing that. But for my own part, I want to confess to some profound discomfort. To bring out why I feel that discomfort, I think it’s helpful to think about both intra-personal (so, inside my own head) issues that I face when I contemplate doing this analysis and also about inter-personal issues.
    • The intra-personal issue is this: Okay, so I tried doing this uber-analysis; I come up with my best guess about the sign of the effect on future population and so forth; and I put that into my analysis. Suppose the result is I think funding bed nets is robustly good because it robustly increases future population size, and that in turn is robustly good.
    • Suppose that’s my personal uber-analysis. I’m not going to be able to shake the feeling that when I wrote down that particular uber-analysis, I had to make some really arbitrary decisions.
  • Response four: Ignore things that we can’t even estimate
    • So, if you like, we should look under the lamppost and ignore the darkness just because we can’t see into the darkness. So, again, perhaps like the second response, this is one that I understand. I don’t think it’s right. I do think it’s very tempting, though. And for the purpose of this talk, I just want to lay it out there as an option.
  • Response five: “Go longtermist”
    • Considerations of cluelessness are often taken to be an objection to longtermism because, of course, it’s very hard to know what’s going to beneficially influence the course of the very far future on timescales of centuries and millennia. Again, we still have the point that we can’t do randomised controlled trials on those timescales.
    • Perhaps we could find some other interventions for which that’s the case to a much lesser extent. If we deliberately try to beneficially influence the course of the very far future, can we find things where we more robustly have at least some clue that what we’re doing is beneficial and of how beneficial it is? I think the answer is yes.

Comments on that post:

  • MSJ
    • I often can’t tell longtermist interventions apart from Play Pumps or Scared Straight (an intervention that actually backfired). At least for these two interventions, we measured outcomes of interest and found that they didn’t work or were actively harmful. By the nature of many proposed longtermist interventions, we often can’t get good enough feedback to know we’re doing more good than harm or much of anything at all.
    • Many specific proposed longtermist interventions don’t look robustly good to me, either (i.e. their expected value is either negative or it’s a case of complex cluelessness, and I don’t know the sign). Some of this may be due to my asymmetric population ethics. If you aren’t sure about your population ethics, check out the conclusion in this paper (although you might need to read some more or watch the talk for definitions), which indicates quite a lot of sensitivity to population ethics.
    • I’m not convinced that we can ever identify robustly positive longtermist interventions, essentially due to 1, or that what I could do would actually support robustly positive longtermist interventions according to my views (or views I’d endorse upon reflection). GPI’s research is insightful, impressive and has been useful to me, but I don’t know that supporting it further is robustly positive, since I am not the only one who can benefit from it, and others may use it to pursue interventions that aren’t robustly positive to me.

Greaves 2016, Cluelessness

  • https://users.ox.ac.uk/~mert2255/papers/cluelessness.pdf
  • For any given action, however, the majority of its consequences are unpredictable at the time of decision.
  • In this paper, I distinguish between ‘simple’ and ‘complex’ possible sources of cluelessness. In terms of this taxonomy, the majority of the existing literature on cluelessness focusses on the simple sources. I argue, contra James Lenman in particular, that these would-be sources of cluelessness are unproblematic, on the grounds that indifference-based reasoning is far less problematic than Lenman (along with many others) supposes.
  • However, there does seem to be a genuine phenomenon of cluelessness associated with the ‘complex’ sources; here, indifference-based reasoning is inapplicable by anyone’s lights. This ‘complex problem of cluelessness’ is vivid and pressing, in particular, in the context of Effective Altruism.
  • (CWo: Cluelessness Worry regarding objective c-betterness) We can never have even the faintest idea, for any given pair of acts (A1, A2), whether or not A1 is objectively c-better than A2.
  • For if (CWo) is correct, and if in addition (as consequentialism holds) the moral status of an action is determined entirely by how it compares to alternative actions in terms of the goodness of its consequences, it seems to follow with particular clarity that we can never have even the faintest idea what the moral status of any given action is.
  • The argument for (CWo) stems from the observation that the relevant consequences include all consequences of the actions in question, throughout all time. In attempting actually to take consequences into account in practice, we usually focus on those effects – let us call them ‘foreseeable’ effects – that we take ourselves to be able to foresee with a reasonable degree of confidence. (These may or may not be in any intuitive sense ‘direct’ effects, and may or may not be close to the point of action in time and/or space.) And while we are arguably correct in thinking that we are justified in being reasonably confident in our predictions of these effects, any choice of one act A1 over another A2 inevitably has countless additional consequences that our calculation takes no account of. A butterfly flapping its wings in Texas may cause a hurricane in Bangladesh; so too may my telling a white lie, refraining from telling that lie, moving or not moving my hand; a hurricane will certainly affect which other butterflies flap their wings or which other agents move their hands in which ways; and so the effects will ripple down the millennia.
  • Any conclusion, on the basis of the calculations that we have carried out, that one act is indeed objectively better than another is justified only insofar as we are justified in assuming (NRo (Non-reversal for objective c-betterness)) The net effect of taking into account all of these additional effects would not reverse the judgment that we reach based on the foreseeable effects alone. But is (NRo) true? Here are two bad arguments for (NRo).
  • The ‘ripples on a pond’ postulate.
  • The ‘ripples on a pond’ postulate, though, is not plausible. To see this most vividly, note that even our most trivial actions are very likely to have unforeseen identity-affecting effects (although the same points could be made without appeal to identity-affectingness).
  • Nor is it at all likely that the number of identities my action affects in generation r will decrease as r increases; on the contrary, it will increase.
  • The cancellation postulate. Might one resurrect (NRo) by arguing that although there are, for any choice of a given action A1 over an alternative A2, countless effects of significant size stretching arbitrarily far into the future, that nonetheless these unforeseeable effects are highly likely to cancel one another out, and to do so to an arbitrarily high degree of precision as the time horizon stretches to infinity? If so, then their combined effect will be much smaller than the foreseeable effect, even if the effect of any individual unforeseeable consequence is comparable to that of the foreseen consequences. Call the postulate that these conditions do indeed obtain the cancellation postulate.
  • Unfortunately, the cancellation postulate is false. The theory of random walks tells us that while some degree of cancelling-out in such situations is all but certain, the combined effect of a large number n of probabilistically independent steps tends to grow with n, and in particular that it is highly unlikely to end up anywhere sufficiently close to zero. This result is, on reflection, intuitively extremely plausible: the observation is that it is extremely unlikely, for instance, that the difference in net value between everything this child does in his/her life on the one hand and everything the alternative child would have done in his/her life on the other will just happen to be smaller than the intrinsic value of one old lady’s receiving help across the road on one occasion, even if we pretend that each of a child’s actions is probabilistically independent of each of the same child’s other actions; and increasing the number of children involved will only exacerbate the problem.
  • The truth of (CWo) would be troubling, however, only if it followed that there was no way for considerations of consequences to guide either decisions or evaluations; and (OB) is not the only possible route for that to happen. In fact, consequentialists in particular have long recognised both the availability and the indispensability of a second such possible route, viz. the appeal to a relation of subjective c-betterness among actions
  • (NRs (Non-reversal for subjective c-betterness)) The net effect of taking into account unforeseeable effects would not reverse judgments of subjective c-betterness that we reach based on the foreseeable effects alone.
  • (SB: Criterion of subjective c-betterness) Act A1 is subjectively c-better than A2 iff the expected value of the consequences of A1 is higher than the expected value of the consequences of A2 (where both expectation values are taken with respect to the agent’s credences at the time of decision).
  • But, in contrast to the objective non-reversal condition (NRo) discussed in section 1, we can defend its subjective analog (NRs), at least for the sorts of ‘unforeseeable effects’ we have been considering thus far. For consider any possible but unforeseeable future effect E1↦E2 that might, via the sorts of mechanisms we considered in section 1, result from my decision to perform act A1 rather than A2. For sure, it is possible that: if I did A1 then E1 would result and if I did A2 then E2 will result (in symbols: A1→E1 & A2→E2). Still, there is no particular reason to think that the correlations between my possible actions and these unforeseeable effects will be that way round, rather than the opposite (A1→E2 & A2→E1). It seems plausible, in that case, that given any credence function that it is rationally permissible for me to have at the time of decision, my credence in the second correlation hypothesis is exactly equal to my credence in the first correlation hypothesis. But if this is true for all unforeseeable possible effects E1↦E2, then the contribution of those unforeseeable effects to the difference in the expected values of A1 and A2 is precisely zero, and we have the following result: (EVF) The expected value of an action is determined entirely via its foreseeable effects. But (EVF) entails (NRs). Thus there can be no analogue of the cluelessness worry for subjective c-betterness.
  • There are, then, some ‘good cases’: cases in which some form of indifference reasoning generates rational constraints on credences, and we are in a position to recognise these cases as such, notwithstanding the fact that we do not (yet?) know precisely what form of indifference reasoning it is that does the generating. It is equally clear – intuitively – that the case in hand is just such a ‘good case’. While there are countless possible causal stories about how helping an old lady across the road might lead to (for instance) the existence of an additional murderous dictator in the 22nd century, any such story will have a precise counterpart, precisely as plausible as the original, according to which refraining from helping the old lady turns out to have the consequence in question; and it is intuitively clear that one ought to have equal credences in such precise-counterpart possible stories. And the failure (and paradoxical nature) of a completely general Principle of Indifference provides no grounds for doubting this intuitive verdict.
  • There are, however, cases that threaten cluelessness in a structurally very different way, and that fall outside the scope of any even remotely plausible form of POI. I will refer to the existence of these cases, and the problem that they arguably pose for anyone who seeks to guide their actions even partially by considerations of goodness of consequences, as the ‘Complex Problem of Cluelessness’. The remainder of the paper is much more tentative than sections 1-4; its purpose is more to raise than to resolve a problem.
  • The cases in question have the following structure: For some pair of actions of interest A1, A2, (CC1) We have some reasons to think that the unforeseeable consequences of A1 would systematically tend to be substantially better than those of A2; (CC2) We have some reasons to think that the unforeseeable consequences of A2 would systematically tend to be substantially better than those of A1; (CC3) It is unclear how to weigh up these reasons against one another.
  • But, callous as it may sound, the hypothesis that (overpopulation is a sufficiently real and serious problem that) the knock-on effects of averting child deaths are negative and larger in magnitude than the direct (positive) effects cannot be entirely discounted. Nor (on the other hand) can we be confident that this hypothesis is true. And, in contrast to the ‘simple problem of cluelessness’, this is not for the bare reason that it is possible both that the hypothesis in question is true, and that it is false; rather, it is because there are complex and reasonable arguments on both sides, and it is radically unclear how these arguments should in the end be weighed against one another. However, in this case – unlike the ‘simple problem cases’ – this appearance of symmetry disappears as soon as we probe to a deeper level. There is an obvious and natural symmetry between the thoughts that (i) it’s possible that moving my hand to the left might disturb air molecules in a way that sets off a chain reaction leading to an additional hurricane in Bangladesh, which in turn renders many people homeless, which in turn sparks a political uprising, which in turn leads to widespread and beneficial democratic reforms… and (ii) it’s possible that refraining from moving my hand to the left has all those effects. But there is no such natural symmetry between, for instance, the arguments for the claim that the world is overpopulated and those for the claim that it’s underpopulated, or between the arguments for and against the claim that the direct health benefits of effective altruists’ interventions in the end outweigh any disadvantages that accrue via diminished political activity on the part of citizens in recipient countries. And, in contrast to the above relatively optimistic verdict on the Principle of Indifference, clearly there is no remotely plausible epistemic principle mandating equal credences in p and not-p whenever arguments for vs. against p are inconclusive. 
There is a deep sense of ‘decision discomfort’ attending the predicament of being forced to make decisions in situations of the character we are now discussing.
  • An individual’s decision as to which degree course to sign up for, which job to accept, whether or not to have children, how much to spend on clothes, whether or not to give up caffeine. In these cases, no less than the effective-altruist examples discussed above, (a) there are good consequence-based reasons/arguments for favouring each of two alternative actions and also (b) there is no obviously canonical way of weighing up those reasons or arguments against one another. It follows that insofar as the source of cluelessness is the satisfaction of conditions (CC1)-(CC3), one should feel clueless in these everyday cases no less than in the effective-altruist cases.
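Greaves’ symmetry argument for (EVF) can be put numerically. The sketch below uses entirely hypothetical credences and values (not from the paper); it just shows that when the credence in the two correlation hypotheses is equal, the unforeseeable term cancels out of the expected-value difference, leaving only the foreseeable effects:

```python
# Hypothetical numbers illustrating Greaves' (EVF) symmetry argument.
# Foreseeable effects: A1 yields +10, A2 yields +4.
# One unforeseeable effect-pair (E1, E2) with v(E1) = +100, v(E2) = -100.
# Two correlation hypotheses:
#   H1: A1 -> E1 and A2 -> E2
#   H2: A1 -> E2 and A2 -> E1

def expected_value(foreseeable, c_h1, c_h2, v_e1, v_e2, act):
    """EV of an act = foreseeable value + credence-weighted unforeseeable value."""
    if act == "A1":
        return foreseeable + c_h1 * v_e1 + c_h2 * v_e2
    else:  # A2 gets the opposite effect under each hypothesis
        return foreseeable + c_h1 * v_e2 + c_h2 * v_e1

c = 0.3  # equal credence in each correlation hypothesis (the symmetry claim)
ev_a1 = expected_value(10, c, c, 100, -100, "A1")
ev_a2 = expected_value(4, c, c, 100, -100, "A2")

# The unforeseeable terms cancel: the EV difference is exactly the
# difference in foreseeable value, 10 - 4 = 6.
assert ev_a1 - ev_a2 == 10 - 4
```

The cancellation holds for any value of `c` and any magnitude of the unforeseeable effects, which is why (EVF) follows from the symmetry of credences alone.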

Rozendal 2018, The Problem of Complex Cluelessness: what is it and what can we do about it?

  • http://www.sieberozendal.com/wp-content/uploads/2019/10/The-Problem-of-Complex-Cluelessness-Siebe-Rozendal.pdf
  • I conclude that ‘complex cluelessness’ is under-described, and the problem (or set of problems) needs to be disentangled before we can make much progress on it. I suggest some questions that seem important to address and offer different options of how to address them.
  • I believe that there are some recommendations that can be made for cases of complex cluelessness, although they are very tentative. First off, try to not bring about consequences that are irreversible. Although this may be hard to do, avoiding human extinction or Orwellian scenarios that are hard to unilaterally escape from are good strategies to strive for. Second, we should develop methods to not overlook valuable strategies. This requires both rigor and creativity, two qualities that maybe have not been simultaneously employed enough, because they are so opposite from each other.
  • If we are clueless in every decision situation, it appears the only thing with any expected value is more research. It would be worthwhile to find out, amongst others, how the expected value of possible total consequences (EV(C)) is distributed. Is it a Gaussian or a Pareto distribution, and with which properties (e.g. mean, variance, and for Pareto: which alpha)? Answering this question would affect our decision procedure strongly. Another worthwhile research direction is reducing our moral uncertainty (specifically axiological) and improving our methods to deal with moral uncertainty. However, I cannot rule out the possibility that we are clueless about whether this research brings about good consequences. Maybe we really are clueless even about whether trying to resolve complex cluelessness brings about good consequences.
  • The potential existence of omnipresent cluelessness should not keep one from acting morally. Maybe we are not actually clueless and should just follow common sense. But if we are clueless, it is better to attempt to act morally, so that we can evaluate whether our strategies have worked once we have the tools necessary for good evaluation. Furthermore, the strategy that seems most likely to bring about good consequences is promoting moral behavior, altruism, and value as such (Williams, 2013). This seems especially valuable when behavior is promoted that tracks intellectual and moral progress such that new insights will be incorporated.
  • All I can say is that we are far away from resolving the problem of complex cluelessness.
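Rozendal’s question about the distribution of EV(C) can be illustrated with a toy simulation (all parameters are hypothetical, and `best_vs_median_ratio` is my own helper, not anything from the paper). The point is that under a heavy-tailed Pareto distribution the best of many options outstrips the typical option far more than under a Gaussian, which is one reason the answer would strongly affect how much effort to put into search and research:

```python
import random

random.seed(0)

def best_vs_median_ratio(samples):
    """How many times better is the best option than the median option?"""
    s = sorted(samples)
    return max(s) / s[len(s) // 2]

n = 10_000
# Hypothetical positive-valued draws of total-consequence value:
gaussian = [abs(random.gauss(10, 2)) for _ in range(n)]
pareto = [random.paretovariate(1.5) * 10 for _ in range(n)]  # alpha = 1.5, heavy tail

# Under the heavy-tailed distribution, finding the outlier matters enormously;
# under the Gaussian, most options are roughly as good as the best one.
print(best_vs_median_ratio(gaussian))  # modest ratio
print(best_vs_median_ratio(pareto))    # much larger ratio
```

If EV(C) is Pareto-distributed with a low alpha, research that identifies outliers dominates the decision procedure; if it is Gaussian, acting on a decent option quickly is closer to optimal.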

Herlitz 2019, Cluelessness and rational choice: the case of effective altruism

  • https://www.researchgate.net/profile/Anders-Herlitz/publication/337442483_Cluelessness_and_rational_choice_the_case_of_effective_altruism/links/5dd7f29b92851c1feda72e05/Cluelessness-and-rational-choice-the-case-of-effective-altruism.pdf?origin=journalDetail&_tp=eyJwYWdlIjoiam91cm5hbERldGFpbCJ9
  • this paper is quite elegant and the argument is very compelling to me - This paper presents an approach to how to make rational choices in the face of cluelessness, focusing on effective altruism. First, it is illustrated how effective altruism faces the challenge of cluelessness, which implies a particular kind of incompleteness which in the paper is called practical incompleteness. Second, it is argued that this is not a reason for proponents of effective altruism to become skeptics, but rather that they ought to adjust their views and accept that these are only able to partially determine what they ought to do.
  • First, one way for effective altruists to respond to the challenge of cluelessness is to dismiss the relevance of complex cluelessness and claim that the instances of cluelessness that they face are not instances of complex cluelessness, but rather simple cluelessness. Such a response would go something like this: “Of course there are unforeseeable consequences of our actions. Whenever we act, we might trigger a chain of events that might lead to World War III, to some family tragedy, or to the birth of some genius that will find a cure for cancer. Likewise, foreign aid might lead to a dictator consolidating his power or prevent economic growth. But every omission to act is as plausible to have the same effects, so we have no reason to care about these unforeseeable consequences when we determine what we ought to do.” This seems to be the way a lot of effective altruism organizations de facto deal with the challenge of cluelessness. When effective altruism organizations, like GiveWell and 80,000 Hours, estimate the good consequences of certain choices, they systematically ignore a large number of unforeseeable consequences and thereby treat a large number of uncertainties as if they are instances of simple cluelessness.
  • As popular as this strategy might be, it is not reasonable to completely dismiss the relevance of complex cluelessness like this. It is unreasonable to think that it is as plausible that certain negative effects will arise in case aid is given to some population as in case it is not.
  • Third, consider how an effective altruist might respond to the challenge of cluelessness by accepting that complex cluelessness is a real problem, but argue that it is still reasonable to treat all cases as if they are cases of simple cluelessness because that leads to better outcomes on the whole.
  • This would be a peculiar response. An effective altruist that made this claim would of course be correct in saying that if it maximizes the good to systematically treat instances of complex cluelessness as instances of simple cluelessness, this would be justified on their general view. However, this is an empirical question for which we do not have the answer.
  • Rather than dismissing the relevance of complex cluelessness, I believe that effective altruists should accept that this constitutes a real problem. They should accept that complex cluelessness reveals that their theory is practically incomplete in the sense that it fails to fully determine what they ought to do in every situation. Some effective altruists might hesitate to accept that conclusion because it might appear that this means their theory is not practical. Yet, as I will illustrate in the next section, there is no reason to take incompleteness to be a reason for skepticism; incomplete normative theories can also be used to guide decision making. The fact that complex cluelessness poses problems in the charity case in no way undermines the firm verdict that it is better to donate money to disaster relief than to an already rich football club. Accepting practical incompleteness rather means that effective altruists ought to specify their effectiveness commitment to accommodate this and address the issue of how to choose between alternatives that cannot be ranked
  • In this section, I will argue that rather than taking practical incompleteness to be a reason to be skeptical of these approaches and resort to inactivity, it should be seen as a reason to accept decision methods that partition choice sets into sets of permissible and impermissible alternatives. I present a decision method that can be used to make rational decisions when the normative theory is practically incomplete
  • Consider now how an effective altruist can use their view to partially determine what they ought to do also in cases that involve complex cluelessness. Imagine that the choice involves not only charity (a) and (b), but also charity (c), such that the following options are available: 1. Donate $1,000 to (a), a charity that provides basic healthcare services to some people that otherwise lack access to these services because they live in a very poor rural area of a country that is governed by an oppressive and corrupt regime that has no interest in providing healthcare to the population. 2. Donate $1,000 to (b), a charity that works against drug use among teenagers who grow up in poor areas of a major city in a rich and democratic country. 3. Donate $1,000 to (c), the local football club that is raising money to build a new stadium. For the sake of the argument, assume that the expected consequences of 2 can be determined to be better than the expected consequences of 3, while the unforeseeable consequences of 1 imply complex cluelessness, as discussed above. Even if one accepts that it is impossible to determinately rank all alternatives in this choice set with respect to what maximizes the expected good, it appears obvious that an effective altruist can determine that not all of these options are rational in light of their commitments. Option 3 is irrational because it is determinately worse than 2 regardless of what is true about the issues around which there is complex cluelessness. If 1 is not chosen, it is obvious that an effective altruist should choose 2 in light of their commitment to altruism, effectiveness and the use of epistemically reliable sources
  • What decision rule can be used in order to reach this judgment? One decision rule that can be used with this result interprets maximization in terms of determinate maximality: an option is maximal in case it is not worse than any alternative, and it is determinately maximal if it is not fully determined that it is worse than any alternative. In the example above, it might thus be claimed that 1 and 2 are rational choices in light of the effective altruism commitments because they are both determinately maximal. 1 is not determinately worse than 2 or 3, and 2 is not determinately worse than 1 or 3. 3 is, however, not a rational choice, since it is determinately worse than 2 and thereby not determinately maximal. If one interprets maximality in this way and requires determinacy, it appears that a decision method which replaces optimization with maximization understood as determinate maximality will be able to partition the set of alternatives into one set of determinately maximal and therefore permissible alternatives, and a different set of not determinately maximal and therefore impermissible alternatives. This allows one to use the commitments of effective altruism to partially determine what one ought to do even if the view is incomplete.
  • Associating maximization with determinate maximality in order to deal with complex cluelessness is a way to accept that the appropriate decision method is not an optimization method, but a method that fundamentally partitions the choice set into sets of permissible and impermissible alternatives. The permissible alternatives need not be equally good, and it is possible that no alternative can be identified as the alternative that will, determinately, do the most expected good, but one will be able to discard some alternatives as irrational. Effective altruists can make this move to use their incomplete view. My favored approach to this problem is the following. Those who embrace normative theories with these implications can complement their theories with other normative views that impose precision so that the problem above is avoided. Nothing confines effective altruists to only act on effective altruism. An effective altruist can remain committed to the effective altruism commitments but also embrace the view that when these commitments fail to fully determine what to do due to complex cluelessness, they should function merely as side-constraints on what she ought to do, and within these constraints she should rank all alternatives according to something else.
  • What is this “something” that can be reasonably used in order to determine rankings of alternatives that cannot be ranked with respect to the commitment to effective altruism? Various approaches to this issue solve the problem outlined above. For instance, effective altruists can avoid forming sequences of choices that are determinately worse than alternative sequences by complementing their view with a commitment that reflects a general aversion to alternatives that might cause harm which only ranks alternatives that cannot be ranked by the commitment to effective altruism
  • I want to suggest a different approach. I believe it is reasonable for effective altruists to see complex cluelessness as an opportunity to allow for a certain amount of agent-centered partiality (cf. Boesch 2017). Following Chang, I believe it is reasonable for effective altruists to rely on their own agency and volition in order to rank alternatives that cannot be ranked with reference to the commitment to effective altruism. Instead of suggesting that all effective altruists ought to be averse to the possibility of causing harm, this approach allows effective altruists to let their agency and personal preferences, passions and commitments guide their actions when some alternatives cannot be determinately ranked due to complex cluelessness. What matters for John when he chooses between (a) and (b) ought thus to be who he is as a person and what his personal preferences, passions and commitments are. What causes are John committed to? What matters most to John? Is he as an agent more invested in projects that help people with drug problems, or is he more personally invested in global development?
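Herlitz’s rule of determinate maximality can be sketched with interval-valued expected values. The numbers below are hypothetical stand-ins for his three-charity example (healthcare charity under complex cluelessness, anti-drug charity, football stadium); nothing here is from the paper itself:

```python
def determinately_worse(x, y):
    """x is determinately worse than y if x's best case falls below y's worst case."""
    return x[1] < y[0]

def determinately_maximal(options):
    """Keep the options that are not determinately worse than any alternative."""
    return {
        name: interval
        for name, interval in options.items()
        if not any(determinately_worse(interval, other)
                   for o, other in options.items() if o != name)
    }

# Hypothetical (low, high) expected-value intervals:
# "1" = healthcare charity (complex cluelessness -> very wide interval),
# "2" = anti-drug charity, "3" = football club stadium.
options = {"1": (-50.0, 80.0), "2": (10.0, 20.0), "3": (1.0, 5.0)}

permissible = determinately_maximal(options)
# "3" is determinately worse than "2" (best case 5 < worst case 10), so it is
# impermissible; "1" and "2" are both determinately maximal, hence rational choices.
assert set(permissible) == {"1", "2"}
```

This is the partition Herlitz describes: the rule does not rank “1” against “2” (the permissible set is not totally ordered), but it does rule out “3”, so the incomplete view still constrains action.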

Mogensen 2019, Maximal cluelessness

  • https://philarchive.org/archive/MOGMCS
  • the plausibility of a permissive decision principle governing cases of deep uncertainty, known as the maximality rule
  • My aim is not to establish that some particular decision criterion is uniquely correct, but merely to exhibit one such criterion as sufficiently plausible that it cannot be ruled out: namely, the maximality rule. I introduce the maximality rule in section 3.1. In section 3.2, I consider some of its drawbacks and note alternative decision criteria that avoid these drawbacks. I argue that these alternatives face other problems, which may lead us to prefer the maximality rule on balance.
  • In order to arrive at a statement of the maximality rule, I begin by describing a general framework for evaluating decision criteria for imprecise credences.
  • We may consider the task of constructing a criterion of rational decision making as involving specification of a strict preference relation, ≻, defined over the set of available acts, A, to which we associate an induced choice correspondence, C(A), consisting of all acts that are not dispreferred to some alternative: C(A) = {a ∈ A : there is no a′ ∈ A such that a′ ≻ a}. Any act within the set defined by the choice correspondence is considered rationally permissible with respect to its alternatives. Any act outside the set is considered rationally impermissible with respect to its alternatives.
  • The foregoing discussion highlights a key attraction of the maximality rule. As Bradley and Steele (2015) put it, the maximality rule ‘does not contrive a preference between incommensurable options where there is none’ (15).
  • there exist competing decision criteria that are superior to the maximality rule in some respect or other, but I do not know of any alternative decision criterion that is all-things-considered preferable. I submit that we cannot rule out the maximality rule. As a result, we ought to avoid drawing any conclusions that are inconsistent with it. In this section, I argue that an agent whose utility function is a positive linear transform of impartial good will not prefer donating to Against Malaria Foundation over Make-A-Wish Foundation if she responds to cluelessness with imprecision and satisfies the maximality rule, provided that she shares our evidence
  • Given apparently plausible assumptions, an agent whose sole concern is to maximize the good, impartially considered, need not prefer donating to Against Malaria Foundation over Make-A-Wish Foundation. Nor need she prefer donating to Make-A-Wish Foundation. More generally, I will argue that many of the priority rankings that have been proposed by effective altruists seem to be in tension with apparently reasonable assumptions about the rational pursuit of our aims in the face of uncertainty. My objection does not rest on doubts about the details of particular cost-effectiveness assessments. It derives instead from recognition of the overwhelming importance and inscrutability of the indirect effects of our actions, conjoined with the plausibility of a permissive decision principle governing cases of deep uncertainty: the so-called maximality rule.
  • I cannot make any claims to great originality for this paper. My conclusions are ultimately not very different from those of Lenman (2000). The arguments by which I arrive at these conclusions represent incremental extensions of ideas discussed by Greaves (2016). Nonetheless, incremental progress is progress, and I hope this paper will provide readers with renewed appreciation of the challenge posed to effective altruist cause prioritization by the overwhelming importance and inscrutability of the indirect effects of our actions. My aim is not to suggest that this challenge cannot be met, but to make sure that we face up to it
  • In comparing Make-A-Wish Foundation unfavourably to Against Malaria Foundation, Singer (2015) observes that ‘saving a life is better than making a wish come true’ (6). Arguably, there is a qualifier missing from this statement: ‘all else being equal’. Saving a child’s life need not be better than fulfilling a child’s wish if the indirect effects of saving the child’s life are worse than those of fulfilling the wish. We have already touched on some of the potential negative indirect effects associated with the mass distribution of insecticide-treated anti-malarial bed-nets in section 2.2, but they are worth revisiting in order to make clear the depth of our uncertainty.
  • For the reasons just noted, a sensible comparison between Make-A-Wish Foundation and Against Malaria Foundation in respect of promoting the impartial good cannot rest on the observation that saving a child’s life is better than fulfilling a child’s wish. Relative to any reasonable probability function, very little of the difference in expected value between these acts turns on effects of this kind. It is determined principally by possible long-term impacts. These long-term impacts are very hard to probabilify with even moderate precision, whereas even small differences in the probability of persistent, large-scale events such as human extinction will decisively tip the balance when comparing the expected moral value of these alternatives.
  • our evidence concerning the total impact of our choice between these organizations is incomplete, imprecise, and equivocal. Moreover, it was intended to render plausible the view that the evidence is sufficiently ambiguous that the probability values assigned by the functions in the representor of a rational agent to the various hypotheses that impact on the long-run impact of her donations ought to be sufficiently spread out that some probability function in her representor assigns greater expected moral value to donating to the Make-A-Wish Foundation. Therefore, an agent whose utility function is a linear transform of moral value but who responds to cluelessness with imprecision and obeys the maximality rule will not strictly prefer donating to Against Malaria Foundation.
  • I have argued that we know much less about what it would mean to rationally promote the impartial good than we think we do. In particular, I have argued that a rational agent who is impartially beneficent need not prefer donating to Against Malaria Foundation rather than Make-A-Wish Foundation if she obeys the maximality rule. I have offered reasons to expect that this conclusion will generalize to many similar cause comparisons. I do not insist that the maximality rule is correct. I merely claim that it is sufficiently plausible that we cannot rule it out. For all we know, orthodox effective altruist conclusions about cause prioritization are all true. In fact, I am inclined to believe they are. The problem is that I do not know how to set out and argue for a decision theory that is consistent with a long-termist perspective and supports these conclusions without downplaying the depth of our uncertainty
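Mogensen’s maximality rule under imprecise credences can be sketched as follows. The representor (two credence functions over two long-run states) and the state-dependent moral values are toy, hypothetical numbers, not anything from the paper; the point is only that when the functions in the representor disagree about the ranking, neither act is strictly preferred, so both are maximal and hence permissible:

```python
def ev(probs, utilities):
    """Expected value of an act under one credence function."""
    return sum(p * u for p, u in zip(probs, utilities))

def strictly_preferred(a, b, representor, utilities):
    """a > b iff a has higher expected value under EVERY credence function."""
    return all(ev(p, utilities[a]) > ev(p, utilities[b]) for p in representor)

def maximal_choice_set(acts, representor, utilities):
    """C(A): the acts not strictly dispreferred to any alternative."""
    return [a for a in acts
            if not any(strictly_preferred(b, a, representor, utilities)
                       for b in acts if b != a)]

# Toy representor: two credence functions over two long-run states of the world.
representor = [(0.6, 0.4), (0.3, 0.7)]
# Hypothetical state-dependent moral values of each donation (not real estimates):
utilities = {
    "AMF": (100.0, -40.0),        # great in state 1, bad long-run effects in state 2
    "Make-A-Wish": (5.0, 5.0),    # small but robust value in both states
}

# One credence function ranks AMF higher (EV 44 vs 5), the other lower (EV 2 vs 5),
# so neither act is strictly preferred: both are maximal / permissible.
assert maximal_choice_set(["AMF", "Make-A-Wish"], representor, utilities) == ["AMF", "Make-A-Wish"]
```

This is the mechanism behind Mogensen’s conclusion: once the representor is spread out enough that some member favours Make-A-Wish, the maximality rule no longer mandates donating to Against Malaria Foundation.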

Trammell 2019, Simplifying Cluelessness

  • https://philiptrammell.com/static/simplifying_cluelessness.pdf
  • Given the radical uncertainty associated with the long-run consequences of our actions, consequentialists are sometimes “clueless”. Informally, this is the position of having no idea whatsoever what to do. In particular, it is not the position of facing actions that merely take on wide distributions of possible value. Existing efforts to formalize cluelessness generally frame the phenomenon as a consequence of having imprecise credences. Even if some such framing is ultimately correct, however, it appears, at the moment, not to be particularly effective at communicating the seriousness of the problem clueless agents face.
  • I do not hope to provide an accurate account of the phenomenon in full detail, but only to convince the reader that there is a real and important fact of consequentialist life which the tools of orthodox epistemology and decision theory cannot handle.
  • If something like the above is correct, we are all always clueless with respect to almost all act-pairs. We are clueless, presumably, among most of the commodity-bundles we could buy every time we enter the grocery store, let alone between arbitrary possible act-pairs, like causing three inches of extra rain on the North Pole and moving Andromeda three inches closer to the Earth. Why, then, do we typically not notice our pervasive cluelessness? Why is it so rare to see concern for the phenomenon of cluelessness, or calls for an action-guiding decision theory in contexts of cluelessness, outside conversations about consequentialist ethics? Why is it so common in the Effective Altruism movement in particular? And as clueless consequentialists in 2019, how long shall we ponder?
  • We only notice that we are in contexts of cluelessness, and we only feel the need for normative guidance in navigating contexts of cluelessness, when we find ourselves actively pondering a large and diverse maximal-subjective-choiceworthiness bucket for a long time.
  • As for how long we’ll ponder: who knows? A pessimistic possibility is that the evidence bearing on our actions’ long-term consequences is so complex, and our reasoning tools are so limited, that we’ll have to ponder on a cosmic timescale. But a more optimistic possibility, to which I am more sympathetic than I once was, is that the long experience of cluelessness a modern consequentialist faces is due primarily not to the hopeless complexity of his decision problem, but just to the fact that he recently found that he had to decide among options he had long relegated to a large, low-subjective-choiceworthiness bucket. The observation that consequentialists must optimize for long-term impact comes sudden and jarring, like an announcement that we have to walk home from the grocery store with something very different from what we went in to buy. But eventually, this story goes, we can change focus and narrow down the vast space of available acts in a different way. It will still take a while, since there are quite a lot of options and the problem is quite difficult, but this period of “cluelessness” (rather than the mere wide uncertainty) will not last as long as it first threatens to. Perhaps its end is already near. The top buckets, where our options are finely partitioned, will soon come to consist of (say) individual research projects to consider funding, rather than GiveWell top charities. We will remain clueless about the GiveWell charities, as we have always been about almost everything, but this will no longer be unsettling. In some minimal sense, at least, we will know what to do.

Lok Lam Yim 2019, The Cluelessness Objection Revisited

  • https://academic.oup.com/aristotelian/article-abstract/119/3/321/5572135
  • One persistent problem for consequentialism is the impossibility of predicting the future. Lenman (2000) elevates this objection to a new level: not only do we fail to be absolutely certain about the consequences of our action, but we are also almost entirely clueless. If Lenman’s objection holds true, consequentialism loses all its capacity to guide action, and becomes of merely theoretical interest. Greaves (2016) confronts the cluelessness objection. She distinguishes between ‘simple’ and ‘complex’ sources of cluelessness, and argues that the principle of indifference applies to ‘simple’ cases of cluelessness, thereby rescuing ‘simple’ cases from the cluelessness objection. In this paper, I argue that a so-called ‘simple’ case often collapses into a ‘complex’ case. This means that the cluelessness objection has a much broader scope of applicability than Greaves believes.
  • a so-called ‘simple’ decision (such as whether to help an old lady to cross the road) can systematically lead to consequences of a ‘complex’ nature (such as an increase in the possibility of their grandchildren joining the effective altruism movement), thereby suffering from the same problem of genuine cluelessness as a ‘complex’ case.

Yew-Kwang Ng 2020, Effective altruism despite the second-best challenge: Should indirect effects be taken into account for policies for a better future?

  • If the indirect effects are positive, they enhance the justification for the altruistic act; if negative, they may make an apparently effective altruistic act non-effective, or even counter-altruistic. Thus, this question should be very important for effective altruists. This question has not been addressed in either the welfare economics or the effective altruism literature, partly due to the infancy of the latter. Despite the apparently nihilistic implications of the second-best theory, this paper shows that the government or effective altruists may increase at least the expected welfare by focusing on areas of serious inadequate optimization, taking into account the indirect effects if information allows. This is based on the third-best theory that takes into account also administrative and informational costs and advocates taking account of some interrelationships only, especially those on which we have information and/or are more important in their effects.
  • The real world is never second-best in this sense. Rather, it is almost always a third-best world, which is defined by the existence of some second-best constraints, plus the existence of administrative and informational costs. In this real world of the third best, what policies/rules should we follow? The theory of second best itself seems to suggest an impossibility: Either we go to the summit of first best (which is impossible in the presence of second-best constraints) or the summit of second best (which is impossible in the presence of information and administrative costs), or we do not know what is optimal. In the terminology of second best, piecemeal welfare policies are impossible or undesirable. In an atmosphere of this impossibility, a theory of third best has been provided to guide public policy (Ng, 1977; reprinted as Ng, 2017a). What policies should be followed depend much on the available amount of information and the administrative costs. The real world has many areas far from being fully optimized (excessive poverty/inequality, inadequate environmental protection, excessive animal suffering, to mention a few important ones). No altruist is optimistic enough to believe that we can achieve full optimization all around, or eliminate all departures soon. Then, the second-best theory suggests that we do not know whether and how partial improvements are possible. In its terminology, piecemeal welfare policies cannot be relied upon to improve overall social welfare. Does this mean that neither the government nor the altruists may make improvements? If we accept the second-best theory fully by its face value, yes. However, the theory of third best suggests otherwise.
  • It is true that, increasing the incomes of the poor may make them eat more chicken and possibly lead to an overall decline in welfare. However, it is also possible that it makes them shift from chicken to beef. It is likely that while factory-farmed chicken suffer from negative welfare, cattle roaming in the field have positive net welfare (Norwood & Lusk, 2011, pp. 227−9). (There may also be different effects on the environment, ignored here for simplicity.) Thus, it is also possible that reducing poverty, apart from making improvements on that front, may have indirect beneficial effects in the area of animal welfare. The third-best theory suggests that we should take into account the relevant important effects, consider their balance, and make adjustments according to the available information. If we do not have sufficient information to suggest that reducing poverty has net positive or negative effects on other areas, we should still proceed to reduce poverty.
  • We may summarize our discussion so far (and also that in the appendix) into the following proposition. Proposition 1: (a) Possibilities for altruism: When not all choice variables are at levels that already maximize overall welfare subject to feasibility (almost always true), there exist scopes for altruists to increase welfare further by taking certain actions, e.g. helping the poor when there is still excessive poverty/inequality. (b) Second best: Unless all departures from overall welfare maximization are eliminated to achieve the maximum feasible overall welfare (first best), altruistic measures to make improvements in some areas may actually reduce overall welfare. (c) Importance of taking into account indirect effects: It is therefore important to take into account not only the direct effects of altruistic acts but also the indirect effects, especially those effects on areas of serious inadequate optimization, like environmental disruption and animal suffering. (d) Third best: Even if neither the achievement of the overall first best (eliminating departures from welfare maximization in all areas) nor the satisfaction of the second best (taking account of all interrelationships and indirect effects) is possible due to imperfect information and administrative costs, such that a definite improvement may not be ensured (second-best impossibility of piecemeal welfare policies), the government and/or altruists may still do something to increase expected welfare by focusing on areas of major inadequacy from full optimization (like poverty, environmental protection, and animal welfare), taking into account both direct and indirect effects where we have the relevant information, or can obtain it without prohibitive costs.
In particular, when we do not have enough information to evaluate whether the indirect effects are positive or negative, we may proceed in accordance with the direct effects alone (adopting first-best rules in a third-best world with Informational Poverty). If we have some information to estimate the indirect effects (Informational Scarcity), we make some adjustments accordingly.

Thorstad and Mogensen 2020, Heuristics for clueless agents: how to get away with ignoring what matters most in ordinary decision-making

  • I don’t find this paper very convincing. I think the argument is fine, but it depends on the intuitive verdicts made in section 5 (and the authors acknowledge this). I don’t have any reason to suspect that the verdicts are correct, given complex cluelessness.
  • https://globalprioritiesinstitute.org/wp-content/uploads/David-Thorstad-Andreas-Mogensen-Heuristics-for-clueless-agents.pdf
  • In this paper, we aim to characterize and solve two problems raised by recent discussions of cluelessness, which we term the Problems of Decision Paralysis and the Problem of Decision-Making Demandingness. After reviewing and rejecting existing solutions to both problems, we argue that the way forward is to be found in the distinction between procedural and substantive rationality. Clueless agents have access to a variety of heuristic decision-making procedures which are often rational responses to the decision problems that they face. By simplifying or even ignoring information about potential long-term impacts, heuristics produce effective decisions without demanding too much of ordinary decision-makers.
  • From substantive to procedural rationality Questions about rational choice can be posed at two levels (Simon 1976). At the level of substantive rationality, we ask normative questions about the first-order options facing an agent, which in this case are base-level actions like sleeping in or donating money to a specific charity. Questions about substantive rationality concern what to do. At the level of procedural rationality, we raise normative questions about the process of decision-making. For example, we ask how agents ought to make up their minds about whether to get out of bed or about which charity to donate to. Questions about procedural rationality concern how to decide what to do.
  • Substantive and procedural rationality are distinguished by the objects they consider, as opposed to the questions raised about those objects. At each level we can ask the evaluative question of what the best option or decision procedure would be. We can ask the deontic question of what option or decision procedure agents ought to take. We can ask culpatory questions, such as which options or decision procedures agents can be blamed for taking. And we can ask aretaic questions, such as which options or decision procedures a virtuous agent would use.
  • We take Sections 3-4 to suggest that deontic questions about substantive rationality are often intractable under conditions of cluelessness. But deontic questions about procedural rationality may be more amenable to study. The lesson of Section 5 is that we often have a reasonably good handle on the procedures that rational agents should use to make decisions of the kind that interest us. Moreover, we think that both our motivating problems are ultimately best understood as posed at the procedural level.
  • Both distinguishing features of heuristic decision-making are reflected in the examples from Section 5. Decision-makers should sometimes partially or fully ignore information bearing on the long-term impacts of their actions. And decision-makers should consider fewer options, using sparser models of their relevant effects as the stakes decrease. This suggests that standard justifications for heuristic decision-making will shed light on the justification of decision procedures for longtermists.
  • There are three standard justifications for heuristic decision-making. The first invokes cognitive abilities: agents are not always capable of using more complicated Bayesian methods. In this paper, we will mostly be concerned with two further arguments. The second invokes accuracy-effort tradeoffs: processing a larger amount of information more completely often increases decision quality at the expense of cognitive and physical effort (Johnson and Payne 1985). Heuristics typically strike the best balance between decision quality and decision costs. The third argument invokes less-is-more effects: sometimes processing more information more fully predictably decreases decision quality (Gigerenzer and Brighton 2009, Wheeler forthcoming).
  • Summing up, decision-makers can sometimes make better predictions and decisions by employing simple decision rules due to the bias-variance dilemma. Simpler rules prevent overfitting by keeping variance manageable, thereby reducing predictive error. In the next section, we put this insight together with the accuracy-effort tradeoff in order to shed light on procedurally rational longtermist decision-making and solve our motivating problems.
  • Often we cannot make substantially better predictions or decisions because we are not in a good position to predict the long-term effects of, for example, getting out of bed early.
  • Also relevant are the stakes of decision-making: how important is it to make a high-quality decision? While it is quite important to make the best possible use of a billion-dollar charitable endowment, it is less important to ensure that five-hundred dollars are used as well as possible. These factors together favor processes that de-emphasize long-term effects in order to reduce the costs of decision-making.
  • The examples in Section 5 suggest that these two sets of factors combine to determine the procedural rationality of longtermist decision-making. We are not so pessimistic as to assume that less is always more. We think that longtermist decision-makers with substantial resources at their disposal can probably do better by constructing detailed models of the long-term effects of longtermist interventions. However, we think that as the stakes of decision-making decrease, accuracy-effort tradeoffs become more pressing. Even if the best long-term models reliably outperform simpler short- and medium-term models, rational decision-makers should often use simpler models and there is no reason to suspect that these models will be improved by taking quick and dirty shortcuts to account for long-term effects.
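The bias-variance point above can be made concrete with a small simulation. This is my own toy example, not one from the paper: two rules predict a stable underlying quantity from noisy observations, and the rule that throws away most of the information (a plain average) beats the rule that responds to every cue, because it keeps variance low.

```python
# Toy illustration (my own, not from Thorstad & Mogensen) of the
# bias-variance point: a simpler prediction rule can beat a more
# "responsive" one because it keeps variance manageable.
import random

random.seed(0)

def simulate(n_trials=2000, n_obs=10, noise=1.0):
    err_responsive = 0.0  # rule 1: predict the most recent observation
    err_simple = 0.0      # rule 2: predict the mean of all observations
    for _ in range(n_trials):
        true_value = random.gauss(0, 1)  # stable underlying quantity
        obs = [true_value + random.gauss(0, noise) for _ in range(n_obs)]
        err_responsive += (obs[-1] - true_value) ** 2
        err_simple += (sum(obs) / n_obs - true_value) ** 2
    return err_responsive / n_trials, err_simple / n_trials

responsive, simple = simulate()
# The averaging rule ignores recency information entirely, yet has far
# lower squared error: less is more when the extra signal is mostly noise.
print(f"responsive rule MSE: {responsive:.2f}")
print(f"simple rule MSE:     {simple:.2f}")
```

With these settings the responsive rule's error is roughly ten times larger, which is the "less-is-more" effect the quote describes.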

St Jules 2020, Hedging against deep and moral uncertainty

  • https://forum.effectivealtruism.org/posts/Mig4y9Duu6pzuw3H4/hedging-against-deep-and-moral-uncertainty
  • Like for quantified risk, we can sometimes hedge against deep uncertainty and moral uncertainty: we can sometimes choose a portfolio of interventions which looks good in expectation to all (or more) worldviews - empirical and ethical beliefs - we find plausible, even if each component intervention is plausibly harmful or not particularly good in expectation according to some plausible worldview. We can sometimes do better than nothing in expectation when this wasn’t possible by choosing a single intervention, and we can often improve the minimum expected value. I think doing so can therefore sometimes reduce complex cluelessness.
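St Jules's point can be shown with a toy calculation (the numbers and worldview labels here are made up for illustration): two interventions that are each negative in expectation under one plausible worldview can combine into a portfolio that is positive under both, raising the minimum expected value.

```python
# Toy hedging example (my own numbers, not from St Jules's post).
# Expected value of each intervention under each plausible worldview:
interventions = {
    "A": {"worldview_1": 10.0, "worldview_2": -4.0},
    "B": {"worldview_1": -4.0, "worldview_2": 10.0},
}

def portfolio_ev(weights):
    """Expected value of a weighted portfolio, under each worldview."""
    return {
        w: sum(weights[i] * interventions[i][w] for i in weights)
        for w in ("worldview_1", "worldview_2")
    }

split = portfolio_ev({"A": 0.5, "B": 0.5})
print(split)  # {'worldview_1': 3.0, 'worldview_2': 3.0}

# The even split is positive on both worldviews; its minimum EV (3.0)
# beats the minimum EV of either intervention alone (-4.0).
assert min(split.values()) > min(interventions["A"].values())
```

Neither intervention on its own is robustly good, but the portfolio is, which is the sense in which hedging can reduce complex cluelessness.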

Wilkinson 2020, Chaos, ad infinitum

  • https://philpapers.org/rec/WILCAI-11
  • https://www.effectivealtruism.org/articles/hayden-wilkinson-doing-good-in-an-infinite-chaotic-world
  • But don’t get disheartened. This is a problem based on “objective betterness.” As Greaves puts it, “The same worry doesn’t arise for subjective betterness.” We can say that an action is subjectively better if, given our uncertainty and the probabilities of different outcomes, it has a higher expected value.
  • The random variables have an expected value of zero, and they disappear. So it’s positive: saving five people is better. Hooray! We still have reason to save people. But we can only assign these expected values when all possible outcomes are comparable. We don’t know which outcome will turn out better, but we know that one of them will, and we can average out how good they are.
  • Here’s a second problem: Our universe could be infinite. Some leading theories of cosmology state that we face an infinite future containing infinite instances of every physical phenomenon, including those we care about, like happy human beings.
  • This makes it hard to compare outcomes. The total value in the world will be infinite or undefined no matter what we do; therefore, we can’t say that any outcome is better than another. Thankfully, a few methods have been proposed that uphold our finite judgments, even when the future is infinite. I’ll categorize these judgments as “strongly impartial views,” “weakly impartial views,” and “position-dependent views.” But all of them are problematic when the world’s chaotic. In summary, if you were to take an aggregated view of betterness in a finite world, you’d be clueless about which action will turn out best. But we can still say what’s better in expectation, since outcomes are at least comparable. But in infinite worlds, the problem comes back to bite us. Due to chaos, many views can’t say that any outcome is better than another. Those views include all strongly impartial views, all weakly impartial views that respect Pareto, and many (but not all) position-dependent views. We can still say that some outcomes are better and that we have corresponding reasons to act, but we have to hold a view that’s dependent on the positions of value or something even less plausible. That’s a strange conclusion. Either we care a bit about position, which seems morally irrelevant, or we accept that we have no reason to make the world better.

Shiller 2021, Chance and the Dissipation of our Acts’ Effects

  • https://www.tandfonline.com/doi/abs/10.1080/00048402.2020.1760326
  • given the assumption that chancy events are ubiquitous, the effects that our acts have are likely to dissipate over a short span of time. The sets of possible futures left open by alternative acts are typically very similar in the same way that large random samples drawn from the same population are typically very similar
  • (1) Very large sets of samples randomly drawn from a single population are very likely to have distributions of properties that very closely resemble those of the whole population. (2) The sets of futures left open by routine identity-affecting acts have distributions of socio-axiological properties that are as they would be if they were randomly drawn from the set of all previously open futures. (They depart from representativeness to the same degree and with the same frequency as randomly drawn samples do.) (3) If chancy events are ubiquitous, then, for any given act, the number of futures that it leaves open is very large. So, (4) Routine identity-affecting acts are likely to leave open sets of futures with distributions of socio-axiological properties that very closely resemble the population of all previously open futures. (5) Routine identity-affecting acts only produce very small probability shifts on socio-axiological partitions. (6) Routine identity-affecting acts typically have consequences that quickly dissipate and are thus not massive.
  • On any way of carving up possible futures along axiological or sociological lines, there will be an unfathomably large number of distinct possible futures. However, given the ubiquity of chancy events, the number of different possible futures left open by any given act is tremendously larger. Any two large effectively random samples from the same set of possible futures will exhibit similar distributions of socio-axiological properties. If alternative acts manage to carve those futures that they leave open away from those that they foreclose in effectively random ways, then the sets of futures that result are almost sure to share nearly identical distributions of socio-axiological properties. If different acts produce very similar distributions of possible futures, then the differences between their effects will dissipate. Routine acts have modest foreseeable short-term effects, and, although they do contribute to making some very fine-grained future possibilities more or less likely, they do not unpredictably shift the probabilities of the kinds of propositions about the far future about which we care.
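Premise (1), that large random samples from one population have near-identical distributions, is easy to check numerically. A quick sketch with made-up base rates for kinds of futures:

```python
# Sketch of Shiller's premise (1) (my own illustration, with invented
# base rates): two large random samples from the same population of
# "futures" have almost identical property distributions, so the
# probability shift between two acts' open futures is tiny.
import random

random.seed(1)

population = ["good", "neutral", "bad"]
weights = [0.3, 0.5, 0.2]  # hypothetical base rates of future-kinds

def sample_distribution(n):
    draws = random.choices(population, weights=weights, k=n)
    return {kind: draws.count(kind) / n for kind in population}

a = sample_distribution(1_000_000)  # futures left open by act A
b = sample_distribution(1_000_000)  # futures left open by act B
max_shift = max(abs(a[k] - b[k]) for k in population)
print(f"largest probability shift between the two acts: {max_shift:.4f}")
```

With a million "futures" per act the largest shift on any category is a small fraction of a percent, which is the sense in which routine acts "only produce very small probability shifts on socio-axiological partitions."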

van Capelleveen 2022, Effective Altruism and Decision-Making for the Clueless

  • Decision theorists have proposed numerous–generally quite complicated–imprecise decision principles, each of which specifies how one should rationally act if confronted with decisions modelled using imprecise credences (see, e.g., Elga 2010). Greaves (2016, 329; 333–334) tentatively endorses one (‘Moderate’), while Herlitz (2019, 13–14) and Mogensen (2021, 146–151) tentatively endorse another (‘Maximality’). It stands to reason that one or other imprecise decision principle should replace MEV. It’s unclear, however, which it should be (Moderate? Maximality?). If this isn’t due to irrationality on our part, it must be possible to be rationally uncertain about which of these principles to conform to (§2).
    This means that it can be rational for effective altruists to be normatively uncertain, i.e., uncertain about what they ought to do, or, more narrowly, decision-theoretically uncertain, i.e., uncertain about what they ought rationally to do. There has been increasing interest in decision-making under normative or decision-theoretic uncertainty. One might hold that such uncertainty should be accounted for in decision-making, a position variously referred to as Metanormativism, Uncertaintism, or Normative Internalism.

Schwitzgebel 2023, Repetition and Value in an Infinite Universe

  • http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/Infinitude-230502.pdf

Schwitzgebel 2023, The Washout Argument Against Longtermism

  • I think this actually isn’t about unknown effects “washing out”. It’s just saying there’s a limit on the time horizons we should be considering. If anything, this reduces cluelessness.
  • http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/WashoutLongtermism-231227.pdf
  • My grounds are this: There are no practically available actions – nothing we can actually do now – that we are justified in believing will have a non-negligible positive influence on events more than a billion years from now, due to the many massively complex possible causal ramifications of any action
  • We cannot be justified in believing that any actions currently available to us will have a non-negligible positive influence on the billion-plus-year future. I offer three arguments for this thesis. According to the Infinite Washout Argument, standard decision-theoretic calculation schemes fail if there is no temporal discounting of the consequences we are willing to consider. Given the non-zero chance that the effects of your actions will produce infinitely many unpredictable bad and good effects, any finite effects will be washed out in expectation by those infinitudes. According to the Cluelessness Argument, we cannot justifiably guess what actions, among those currently available to us, are relatively more or less likely to have positive effects after a billion years. We cannot be justified, for example, in thinking that nuclear war or human extinction would be more likely to have bad than good consequences in a billion years. According to the Negligibility Argument, even if we could justifiably guess that some particular action is likelier to have good than bad consequences in a billion years, the odds of good consequences would be negligibly tiny due to the compounding of probabilities over time.
  • I will argue, on the contrary, that our decisions should be not at all influenced by our expectations about their effects more than a billion years in the future.

Stein 2023, When We Don’t Know What We Owe

  • https://www.law.georgetown.edu/public-policy-journal/wp-content/uploads/sites/23/2024/02/Joshua-Stein-.pdf

Tarsney 2023, The Epistemic Challenge to Longtermism

  • https://philarchive.org/archive/TARTEC-4
  • I develop two simple models for comparing ‘longtermist’ and ‘neartermist’ interventions, incorporating the idea that it is harder to make a predictable difference to the further future. These models yield mixed conclusions: if we simply aim to maximize expected value, and don’t mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these ‘Pascalian’ probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.
  • The Control Challenge (rough) There’s simply nothing we can do to substantially improve the far future. That is, even if we were maximally informed, we would still lack the necessary power or influence to make any sufficiently important and persistent difference.
  • The Epistemic Challenge (rough) Even if there are actions available to us that would substantially improve the far future, we lack the epistemic capacities necessary to distinguish those actions from actions that would either worsen the far future or have no substantial effect. As a result, none of the actions available to us substantially improve the far future in expectation.

Hájek 2024, Consequentialism, Cluelessness, Clumsiness, and Counterfactuals

  • I find this paper pretty difficult to follow. It mostly seems to be about longtermism anyway.
  • https://globalprioritiesinstitute.org/wp-content/uploads/Alan-Hajek-Consequentialism-Cluelessness-Clumsiness-and-Counterfactuals.pdf
  • According to a standard statement of objective consequentialism, a morally right action is one that has the best consequences. This account glides easily off the tongue—so easily that one may not notice that on one understanding it makes no sense, and on another understanding, it has a startling metaphysical presupposition concerning counterfactuals. I will bring this presupposition into relief. Objective consequentialism has faced various objections, including the problem of “cluelessness”: we have no idea what most of the consequences of our actions will be. I think that objective consequentialism has a far worse problem: its very foundations are highly dubious. Even granting those foundations, a worse problem than cluelessness remains, which I call “clumsiness”. Moreover, I think that these problems quickly generalise to a number of other moral theories. But the points are most easily made for objective consequentialism, so I will focus largely on it. I will consider three ways that it might be rescued: 1) Appeal instead to the not-too-specific, short-term consequences of actions; 2) Understand consequences with objective probabilities; 3) Understand consequences with subjective/evidential probabilities.
  • Well, perhaps they could think that there is rampant indeterminacy about whether a given action is morally better than another in almost all cases. But that’s a huge bullet to bite: it is often platitudinous what the right verdict is. For example, it’s true that donating to Oxfam is morally better than going on a serial killing rampage. Any view that says otherwise is absurd, and perhaps even pernicious.
  • But this understates how bad the situation is for objective consequentialism. For suppose that we could somehow solve the problem of cluelessness—say, God tells you these alleged facts. Then what? The trouble is that it is simply not under your control to realise these conditions in one precise way rather than another. Much as you may want to arrive at the 17-millisecond time (say), you cannot so finely tune your actions so as to do so, rather than arriving at the 18-millisecond time. You are clumsy. When it comes to these extremely fine-grained actions, you are a klutz. By the standards of acute sensitivity to the exact initial conditions of subsequent history, you are ham-fisted, unable to steer things exactly this way rather than a closely-neighbouring that way. These exact arrival times are not genuine options for you: you cannot decide to realise one rather than another. And the “actions” that consequentialism evaluates should not be mere behaviours; they should be options that you can decide among.

Tarsney, Thomas and MacAskill 2024, Moral Decision-Making Under Uncertainty

  • https://plato.stanford.edu/entries/moral-decision-uncertainty/
  • The problem, in brief, is that we often feel clueless about the long-term (or otherwise non-immediate) consequences of our actions; insofar as what we ought to do depends on those consequences, this suggests that we are often clueless about what we ought to do. Such “cluelessness” has been seen by some as an objectionable feature of moral theories—including, but not limited to, standard forms of act consequentialism—that give significant weight to such long-term consequences. Why objectionable? Perhaps a desideratum for a moral theory is that it usually provides actionable advice. More specifically, though, insofar as we are clueless about the long-term effects of our actions, it seems plausible that we should be able to reliably figure out what to do by simply ignoring those effects. But it is at least unclear how act consequentialism (and theories that worry about consequences in a similar way) can license such a move. This objection was developed at length by Lenman (2000), starting from a line of thought sketched but not endorsed by Kagan (1998: 64).
  • One natural interpretation of “cluelessness” is that one has no evidence concerning the relevant consequences. Suppose right now I can pick up a pen either with my left hand or with my right. For the sake of illustration let us grant that it is predictable that, through the “butterfly effect”, doing one of these actions rather than the other will lead to a greater number of destructive typhoons over the next millennium. Nonetheless, I seem to have no evidence whatsoever which action would do so. To put it another way, when it comes to typhoons, the evidence in favour of using my left hand seems to be perfectly symmetrical with the evidence in favour of using my right. Surely, then, no matter how bad the additional typhoons would be, I can simply set this consideration aside. However, this conclusion appears to rely on a “principle of indifference” to the effect that, given my lack of evidence, the probability that my left hand leads to more typhoons is the same as the probability that my right hand does. Tempting as this may be, indifference principles are “notoriously vexed” (Lenman 2000: 354; see also section 4.2 of the entry on Bayesian epistemology) and part of Lenman’s argument is scepticism that a strong enough indifference principle is available. Greaves (2016), in contrast, defends indifference reasoning in cases like this of “simple cluelessness” involving evidential symmetry. However, she thinks that there is still a problem with cases of “complex” cluelessness, cases in which there is different evidence pointing in each direction but it is unclear how to weigh it up. Even if Greaves is right that simple cluelessness is unproblematic, this may not help much, if genuine evidential symmetry is not the normal case (Yim 2019; Greaves 2016, VII).
  • Diagnosing cluelessness in terms of a lack of evidence or in terms of complex evidence still does not tell us why cluelessness about consequences leads to cluelessness about what one ought to do. Indeed, as we have seen, a moral theory’s verdicts about what we ought to do often take into account uncertainty about the consequences of our actions. So, uncertainty about the consequences of our actions does not imply uncertainty about what we ought to do. For example, Jackson’s doctor is uncertain what pill will have the best consequences for her patient, but nonetheless knows, in light of that uncertainty, which pill she ought to prescribe. To generate a problem, the sense in which we are clueless about the long-term consequences of our actions must have some upshot that makes it harder to deal with than other sources of uncertainty.
  • One possible upshot of cluelessness is that the decision-relevant probabilities are hard to know or even to estimate precisely. The issue of precision is important, as is the idea that the long-term consequences of one’s actions can have large (perhaps even infinite!) value. One might have thought that most of our actions will turn out to have little net impact on the far future: the effects die out (or cancel out) over time “like ripples in a pond” (Smart 1973: 33). Lenman argues against this claim, based in part on the common view that the identities of future people depend sensitively on what we do now. Instead, at least some possible consequences of our actions have very high value, systematically affecting many people over a long span of time. But if some possible consequence of our choice is large in value, then a small change in the probability of that consequence will make for a large change in expected value. If you cannot estimate the probabilities very precisely, then, you could be clueless about the expected values of your options and about what you ought to do. When thinking about long-term consequences, such precise estimates seem hard to come by. (We’ve phrased this and much of the rest of the discussion in terms of expected value maximization, but it should be clear that the issues are more general.)
  • At any rate, on this first reading, the problem with cluelessness is that not only is there uncertainty about the consequences of one’s actions, but one is also unable to access the decision-relevant probabilities to sufficient precision.
  • So far, we have written as if, in the relevant cases, one cannot get much of an idea of what one ought to do without first doing something along the lines of an explicit expected value calculation, eliciting the probabilities of various outcomes with adequate precision. But even if expected value maximization is the criterion of rightness, it does not follow that calculating expected value is the only, the best, or even a viable decision-procedure (Railton 1984; Jackson 1991; Feldman 2006). The literature on heuristics (see, e.g., Gigerenzer & Gaissmaier 2011) and on “decision-making under deep uncertainty” (see, e.g., Helgeson 2020; Marchau, et al. 2019) can be interpreted as proposing methods for decision-making that are more tractable and that bypass various kinds of cluelessness (Thorstad & Mogensen 2020; Mogensen & Thorstad 2022; Steele & Stefánsson 2021: 8.4). But, at least at a first glance, this only changes the target of the cluelessness objection: aren’t we also clueless about which decision-procedures will do well in any given case (cf. Mogensen & Thorstad 2022: 3.2)? While environmental feedback and empirical study can help identify procedures that tend to perform well with respect to relatively short-run and familiar sorts of consequences, it is at least unclear how to identify procedures that would tend to perform well with respect to very long-term consequences, about which feedback is much harder to obtain.
  • Things are arguably rosier if we ask, not which decision-procedure will lead to the right act in a given situation, but which decision-procedure it would be best to adopt repeatedly, to cover a range of future decisions. Similarly, we can ask, in the spirit of rule consequentialism, what decision-procedures it would be best for everyone to adopt. In some cases, at least, the long-term effects of universally and/or repeatedly applying a decision-procedure may be easier to predict than those of applying it once-off (Burch-Brown 2014). If so, rule consequentialism (and, more generally, moral theories that worry about the consequences of widespread rule-adoption, rather than the consequences of individual actions) may be in a better position than act consequentialism when it comes to cluelessness, although how much better is difficult to tell.
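The precision worry raised above (a small probability change on an astronomically valuable outcome makes a large change in expected value) can be seen in a back-of-envelope calculation. All the numbers here are invented for illustration:

```python
# Toy calculation (my own invented numbers) of why imprecise
# probabilities matter when some outcomes are astronomically valuable.
near_term_benefit = 1_000.0  # value of a sure short-term good
far_future_value = 1e15      # value of some vast long-term outcome
p_low, p_high = 5e-13, 2e-12 # plausible range for the probability shift

ev_low = p_low * far_future_value    # about 500
ev_high = p_high * far_future_value  # about 2000
# Within this tiny plausible range (a shift of about one part in a
# trillion), the long-term term moves from clearly below the sure
# near-term benefit to clearly above it, so the ranking of options flips.
assert ev_low < near_term_benefit < ev_high
print(f"EV range for the long-term effect: {ev_low:.0f} to {ev_high:.0f}")
```

Since we plainly cannot estimate such probabilities to within a factor of four, the expected-value comparison is left undetermined, which is one way of making "cluelessness about expected value" precise.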