Moral Hedging by Noah Simon and Jack Wareham
This article consists of two parts. The first section, written by Noah Simon, is a criticism of Moral Hedging and an endorsement of Moral Confidence. The second section, written by Jack Wareham, is an endorsement of Moral Hedging and a criticism of Moral Confidence. It has been brought to our attention that ‘Epistemic Modesty’ is the incorrect term for this issue. We will use the term ‘Moral Hedging,’ or ‘MH’ for short, which better reflects the fact that we hedge our beliefs against a multitude of normative theories. The authors would like to thank Marshall Thompson, Christian Tarsney, Nina Potischman, and Bob Overing for their input.
I: Objections to Moral Hedging by Noah Simon
As a brief explanation, MH attempts to compare moral goods across moral theories. It argues that, because we are uncertain about the truth of competing moral theories, we should weigh both the strength of the frameworks and the strength of the contention offense won under them in a given round. If Debater A is decisively winning their contention offense, marginally loses the framework debate to Debater B, and marginally loses the contention debate under Debater B’s framework, MH would say that Debater A should still win. To take a commonly used real-world example, if we are presented with a cup of coffee and we have a 0.8 credence that the coffee will taste good but a 0.2 credence that the coffee is poisonous, MH would say that the relatively small chance of a great harm justifies not drinking the coffee, even though there is a high chance of obtaining a marginal pleasure.
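To make the coffee case concrete, here is a minimal expected-value sketch; the utility figures (+1 for tasty coffee, −100 for poisoning) are illustrative assumptions, not numbers from the example itself:

```latex
\begin{aligned}
EV(\text{drink})   &= 0.8 \times (+1) \;+\; 0.2 \times (-100) \;=\; -19.2 \\
EV(\text{abstain}) &= 0
\end{aligned}
```

Because EV(drink) < EV(abstain), the hedger abstains even though the coffee is probably fine.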
However, this weighing mechanism is fundamentally infeasible. Let’s say we are 90% sure that deontology is true and 10% sure that utilitarianism is true, and that Action X will prevent a million people from dying but, in doing so, would violate people’s property rights. MH would say that we should endorse Action X, but why is this justified? Because, it seems, there is such a great utilitarian reason in favor of X that it overrides the comparatively small deontological objection to it.
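Spelled out, the hedger’s verdict rests on a comparison of roughly this form, where the value terms are placeholders rather than figures from the example:

```latex
0.10 \times V_{\text{util}}(\text{a million lives saved}) \;>\; 0.90 \times V_{\text{deont}}(\text{property-rights violation})
```

That inequality is only meaningful if V_util and V_deont can be placed on a single common scale, which is exactly the assumption challenged below.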
The move MH tries to make here is flawed. ‘Great’ and ‘small’ are normative terms; they appeal to some metric that determines what counts as a great or a small harm: a moral heuristic. That heuristic will ultimately have to be framed within one of the competing theories: either “so many lives are saved that it is a good moral action” or “it is a profound rights violation.” Such an analysis appeals to only one framework, because MH justifies no higher-order heuristic or framework with which different frameworks and their offense can be evaluated together. Therefore, MH collapses into whichever framework is most credible; this model is called ‘Moral Confidence.’ Moral Confidence says that the judge should evaluate offense only under the winning framework and should not compare offense across different frameworks.
We might be able to say that something like breaking a deathbed promise would be “worse” under deontology than under utilitarianism, but we could not successfully make so complicated a comparison as saying that breaking a deathbed promise is worse under deontology than a migraine is under utilitarianism. Gracely[1] furthers:
[U]tility involved in both choices can be computed under each system, and each system obligates the maximization of utility. So it would still seem the two could be compared. But, is the "utility" being maximized by one of them the same as that for the other, and does it have the same meaning? I would answer, no. [...] utility is relatively divorced from the entities experiencing it. [...] utility is treated as though it were an external substance, such as water, to be maximized in quantity by whatever means. [...] The obligation is not to maximize some kind of substance, but to maximally benefit those who live or who will live.
This isn’t to say that utilitarianism is easy to evaluate, but rather that goodness constrained by the bounds of a single theory is fundamentally more coherent than goodness compared across theories. Gracely[2] explains:
[T]he total utilitarian (TU), who includes the happiness of potential lives in the calculation of utility, is not expecting a different actual outcome from the person-regarding utilitarian (PRU), but is interpreting it differently. The two agree about the facts, and about the one hundred happy lives, but disagree about the role that those lives should have in an ethical analysis.
Regardless, the argument is clear: the comparison MH attempts to make is fundamentally impossible. There is no “external substance” that needs to be maximized since each framework will tell us the one and only thing we need to achieve. Discussing inter-theoretical goods is, as Chesterton says, “like discussing whether Milton was more puritanical than a pig is fat.”[3]
Additionally, MH is inconsistent with the moral theories for which it tries to account. Imagine doctors prescribing medicine to a patient. The patient has 50% credence in Doctor X, 30% credence in Doctor Y, and 20% credence in Doctor Z. Each doctor prescribes a different medication: Medicine A, B, and C, respectively. However, they all warn against mixing the medicines. This is very similar to framework debates. A utilitarian would prescribe the maximization of pleasure and warn against allowing unnecessary suffering to occur; a deontologist would prescribe the protection of rights and warn against any violation of rights. MH argues we should blend the different theories together, which is precisely a mixing of the medicines. This is prohibited by each theory, so MH actually runs counter to every ethical theory it tries to accommodate.[4] MH appeals to maximizing the chance of performing the correct moral action, but every moral theory denies that there is any way to maximize “moral output” other than doing the actions that the theory itself prescribes. To summarize these arguments: if we’re 60% sure util is true and 40% sure deontology is true, we are 0% sure that we should use some combination of the two.
Moreover, MH compels actors to give undue weight to faulty theories, compromising our moral calculus. For example, suppose we are 10% certain that a theory which generates repugnant conclusions is true. If that theory says that something, e.g. racial equality, is infinitely bad, then under the MH paradigm we would be obligated to treat people unequally on the basis of race, since any risk of an infinite harm outweighs a finite harm generated under another, more plausible moral theory. Consider Harman’s[5] example:
Bob’s daughter Sue has been asking him to teach her to drive [...] Bob has a 90% credence that he is morally required to teach Sue to drive. [...] According to the conservative speaker he’s been listening to, women should not drive and no one should teach a woman to drive; in fact that is a grave moral wrong. [...] Bob thinks the conservative speaker is probably wrong; he’s 90% sure of that. But Bob finds the conservative picture being offered somewhat compelling, so that he is 10% sure it is the correct picture. [...] Bob has a 90% credence that failing to teach Sue would be wrong, but not very seriously wrong; he has a 10% credence that teaching her to drive would be deeply morally wrong. [...] Uncertaintism holds that, if the conservative picture holds that teaching a woman to drive is wrong enough, Bob should not teach Sue to drive.
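The structure of the worry can be put in expected-wrongness terms; w and W below are assumed placeholders for the finite wrongness of not teaching Sue and the much larger wrongness the conservative view assigns to teaching her:

```latex
0.90 \times w \;<\; 0.10 \times W \quad \text{whenever } W > 9w
```

So long as the implausible view rates the act as bad enough, and trivially if W is infinite, its verdict swamps the verdict of the view we are 90% confident in.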
Some might respond by saying that the issue here is not with MH but rather with Bob’s particular assignment of values. However, that is exactly the point: MH makes our moral views too flexible. It allows any view, no matter how little credence we have in it, to lay a strong claim on our decision calculus. MH can thus justify everything and nothing, reaching unacceptable and potentially paralyzing conclusions. This gets at a larger objection: contention debate should stay at the contention level, and the normative discussion should remain prior to it. Attempting to mix the descriptive and the normative is futile and dangerous.
Besides being impossible, MH is a useless paradigm for debate. If we’re 40% sure that MH is true, but only 30% sure that that estimate is accurate, and only 20% sure about the accuracy of that second estimate, how are judges possibly supposed to use it to evaluate rounds? Judges are not calculators; asking them to multiply the infinitely regressing credence values of competing theories by the varying degrees of offense won under those frameworks is simply impractical.
Many arguments in favor of MH stem from the fact that there is uncertainty regarding the truth of different frameworks. However, this is largely irrelevant to the judge’s decision calculus. Even if there is uncertainty in the post-fiat world or in the real world, this is not something for which the judge can account. The judge can only evaluate the arguments made in round; they pick the winning framework based on who did the better debating. They cannot say something like “even though the neg was ahead on the framework debate, I know that moral certainty hasn’t been reached and is unlikely to be reached in a 40-minute debate between two high school students.” No external analysis of morality should matter in light of the actual arguments made on the flow. For example, it is highly unlikely that two debaters will come to a clear conclusion regarding the validity of gun control, but that does not mean judges can disregard the entire debate and presume based on their personal opinions.
If we’re uncertain about MH itself, we should just default to whatever we’re most confident about: the most credible theory. MH says we should pursue the course of action most likely to be morally correct, and the most credible framework serves that goal best. Thus, because MH is both unusable and normatively unjustified, judges ought to adopt a Morally Confident paradigm.
II: A Defense of Moral Hedging by Jack Wareham
As Moral Hedging has grown in popularity, criticism of the theory has increased. There is a general belief that the principle can have strategic utility but is philosophically bankrupt. This section seeks to provide a debate-specific justification for the use of MH.
The argument for MH begins with the uncontroversial premise that moral uncertainty exists in debate rounds. Before we even discuss how (or if) uncertainty should change our moral calculus, it is clear that two debaters with thirteen minutes each will not get even remotely close to settling questions that philosophers have debated for millennia.
This lack of clarity is heightened in debate because framework debates are often layered such that correctly evaluating the round as a judge can be nearly impossible. If both debaters win “my standard is a prerequisite” arguments and neither debater weighs between those claims, then it is difficult for a judge to make a non-interventionist decision about which framework is better. Most varsity framework rounds contain at least some degree of this complication; debaters will often avoid engaging on the line-by-line or answering framework overviews, instead simply extending a metaethic and asserting “this comes first.”
Many judges believe moral hedging requires an unacceptable degree of intervention in determining how much contention-level offense is won. I propose the exact opposite: it is much more interventionist to evaluate a muddled framework debate and arbitrarily exclude one side’s offense than to assess the relative strength of link back to each framework. If a round comes down to conceded framework assertions and the debate is muddled due to a lack of weighing, picking one framework over the other will inevitably insert bias. The only way to pick a framework in such an instance would be to make a subjective judgment that Debater A’s arguments are of higher quality than Debater B’s, or that some other warrant or argument could interact in such a way as to resolve the debate.
This is an incorrect way to assess the better debater. Given that frameworks exist for the sake of the contention debate (debaters do not win simply for proving util is true), ignoring contentions on the basis of an already uncertain framework decision prioritizes muddled framework evaluation even when one debater has demonstrably done the better job on a different layer of the flow. From this conclusion, moral hedging as a paradigm for evaluating debate rounds seems necessary. If there is uncertainty in the framework debate, the contention debate ought to be used as a tiebreaker to determine who has done the better debating. If the framework debate is already exceptionally clear, conditions of moral uncertainty are not triggered and there is no need to evaluate strength of link. Moral hedging simply gives us a tool to deal with situations in which the framework debate is not easily evaluated.
Noah’s claim that goods under each framework are incomparable (the incommensurability objection) conflates moral hedging in general with moral hedging as a debate paradigm. The judge’s role as an objective adjudicator of offense is to determine how ‘great’ or ‘small’ offense is in the context of the arguments made by the debaters. Moral hedging does not ask judges to use a moral heuristic to determine how large the degree of offense is; they can just look at their flow. If there is conceded defense on an extinction scenario, it is a ‘small’ utilitarian reason; if the scenario is entirely conceded, it would seem to be a ‘great’ utilitarian reason. Moral hedging as a debate paradigm requires inter-theoretical value comparison, but whether a piece of contention offense is ‘great’ or ‘small’ is determined without a moral heuristic, and instead by how well the debaters have performed. Deciding whether impact A is bigger under utilitarianism than impact B is under deontology is done entirely by counting defense and seeing what percentage of the offense still stands. In this instance, the inter-theoretical evaluation is permissible because it does not run into the incommensurability problem.
Next, Noah says that moral hedging is inconsistent with the moral theories it tries to account for. This objection is problematic because most ethical theories do not tell us what to do under conditions of normative uncertainty. Kantianism says that if you are acting according to Kantianism, you should prevent rights violations, but it does not tell us what to do when that antecedent is in question. No ethical theory tells us what we ought to do if we are unsure which ethical theory is correct. If you are 51% sure that Kantianism is true, why should you act as if Kantianism is 100% true, given that you have strong reasons supporting another view of ethics?
Noah’s next argument operates neatly within the moral hedging paradigm. If sexism is bad and a certain framework justifies sexism, we have a strong reason to reject the framework, one that would outweigh its contention-level offense. Moral hedging does not force us to accept such a framework; rather, it gives us principled grounds to reject it. Noah’s example here is hyperbolic and ridiculous. If a framework holds that racial equality is an infinitely bad thing, the opposing debater should have no trouble explaining why this framework is certainly false. Trouble might arise when the contention-level offense is “infinite,” but it seems difficult to imagine a situation in which such a weighing mechanism could be established under a standard. The example given by Harman fails to represent how moral hedging should be applied; it is irrational for Bob to give as much as 10% weight to the view that women should not be allowed to drive, and he should assign a much lower credence to it. Moreover, Moral Confidence would seem to fall prey to the same problems that Noah discusses. Without moral hedging, a debater whose advocacy results in massive systemic oppression would win simply by proving a 51% chance that Kantianism is true.
Noah’s final argument sums up a common thread of objections, which involves general confusion about how to use percentages in a debate round. Judges do not need to be calculators; they just need to look at their flow, which is exactly what they do in normal debates. If a debater wins that benefits to the economy outweigh the benefits of protecting the environment, we must still evaluate the links back into each impact rather than voting on a mere risk of offense to economic benefits. Similarly, a debater might win that education outweighs fairness, but if their theory interpretation only minimally benefits education while being severely abusive, there would still be reason to reject the interpretation. Instead of leaving it up to the debaters, judges should adopt moral hedging to evaluate rounds, lining up the debaters’ offense under a paradigm that assigns a credence value to each ethical theory and combines that with the action’s value under that theory if it were true. Only then can we determine who really wins the debate.
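As a rough illustration only, here is a minimal sketch of that kind of weighted comparison; the credence and strength-of-link numbers are hypothetical placeholders, not figures from the article:

```python
# A minimal sketch of moral hedging as a judging heuristic.
# All numbers are hypothetical: "credence" stands in for how decisively a
# debater is winning their framework, and "strength" for the fraction of
# their contention offense that survives defense.

def hedged_score(credence: float, strength: float) -> float:
    """Weight surviving contention offense by the credence in its framework."""
    return credence * strength

# Debater A: marginally behind on framework, decisively ahead on their offense.
debater_a = hedged_score(credence=0.45, strength=0.9)  # 0.405
# Debater B: marginally ahead on framework, only weak surviving offense.
debater_b = hedged_score(credence=0.55, strength=0.3)  # 0.165

print("Vote A" if debater_a > debater_b else "Vote B")  # prints "Vote A"
```

Under Moral Confidence, B would win outright by winning the framework; under the hedged comparison, A’s far stronger strength of link outweighs B’s marginal framework edge, mirroring the Debater A/Debater B case from the first section.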
[1] Gracely, Edward J. "On the Noncomparability of Judgments Made by Different Ethical Theories." Metaphilosophy 27.3 (1996): 327-32. Web.
[2] Ibid.
[3] Chesterton, G. K. "The Suicide of Thought." Orthodoxy. New York: John Lane, 1909.
[4] Thank you to Marshall Thompson for explaining this argument to me.
[5] Harman, Elizabeth. "The Irrelevance of Moral Uncertainty." Oxford Studies in Metaethics (2014): n. pag. Web.