Why non-utilitarians are wrong (unless you’re a moral nihilist)

Moral realism is the idea that there is such a thing as a moral fact. It is often used to refer to the existence of a single true or best code of ethics, such that anyone who disagrees with this code is wrong. I previously subscribed to this idea, and while I wish it were true, I no longer believe it to be the case. However, I would like to propose an intermediate version of moral realism which I think is accurate.

Let us examine the two broad schools of ethical thought: deontology and consequentialism (or utilitarianism). Each has many sub-categories, but for now I will consider them in general.

Deontology holds that some actions are always wrong, regardless of the outcome. Examples include lying, killing (usually restricted to humans or some other subset of sentient minds), and stealing. Deontology often, but not always, draws on the principles of a particular religion.

Consequentialism is, broadly, the opposite of deontology. An action is ‘good’ or ‘bad’ based on its outcome. For example, lying or killing someone might be good if it saved 100 people from dying. From a purely theoretical standpoint, there is no action that couldn’t be justified if the outcome were sufficiently positive.

I would like to propose that a consequentialist code of ethics that seeks to maximise the wellbeing and/or minimise the suffering of sentient minds in the universe (or some slight variation of this) is the best possible code of ethics, if any could be considered ‘best’ (I’m not the first to say something like this). This is sometimes known as total classical utilitarianism. I argue that this is the case because it is the only code of ethics that actually includes the felt experiences that sentient minds care about (again, this could instead be some slight variation of total classical utilitarianism). To make my case, I will use several examples.

Many actions are only seen as ‘bad’ because they have historically been associated with causing suffering, and because having them as social norms causes suffering. Lying is bad because being lied to feels bad, and because it creates societal norms that result in bad consequences (suffering). Killing humans is bad because a societal norm of killing people for no reason causes suffering. At the end of the day, people only care about suffering and wellbeing – they are the only felt experiences. Everything else is a means to that end, whether they accept it or not.

If someone thinks they fundamentally/intrinsically care about something else, I argue that they are wrong or misguided. Intellectual pursuits are desirable because they bring one pleasure. Freedom is desirable because it is almost always associated with positive felt experience and a lack of suffering.

In the context of non-human animals, some argue that rights are what matter most, intrinsically so. However, if animals care about anything at all (and I think they do), it’s avoiding suffering and having pleasurable experiences. It doesn’t make sense for humans to impose our construct of rights or deontology on them. Again, rights for animals are useful because they would probably mean we couldn’t exploit some 80 billion land animals each year for food, causing much suffering in the process.

But the rights aren’t intrinsically valuable in themselves, and it is easy to construct realistic scenarios where not having certain rights, like freedom from exploitation, is in the best interests of the animals. Just as a parent will sometimes stop a child from doing something that is not in their best interest (even though it might be freedom), we should feel comfortable stopping an animal from doing something that is not in their best interest.

In conclusion, if someone thinks that they or others don’t care about wellbeing or suffering, they are wrong (I argue). If they think they or others only care about rights or rules for their own sake, they are wrong. I can’t make you care about ethics or ‘doing good’ (though I can certainly try), but if you do, I argue you should be utilitarian; otherwise you are applying values that no sentient mind actually cares about intrinsically, and that’s selfish at best.

4 thoughts on “Why non-utilitarians are wrong (unless you’re a moral nihilist)”

  1. Except the problem with utilitarianism is that it has nothing to say about *why* utility (however it’s defined) is desirable – it simply assumes that pleasure is de facto desirable and runs with it. While deontological perspectives have their issues, they at least address this question, whether the answer is “because God, the creator of the universe and us, said so” or some other basis, most notably in the case of Kantian ethics.

    Also, while it’s easy to design cases where deontological precepts clash with our intuitions, it’s also not that difficult to do so for utilitarian approaches. Wireheading is a commonly cited example, but there are others of varying suitability depending on how you define “utility”.

    1. I don’t think any ethical system can really compel people to be ethical. A religious deontologist could just say ‘I don’t care what [deity] thinks’. I just think that, if anything is to be valued at all, it should be wellbeing and a lack of suffering, since I stand by the idea that those two things are the only things sentient minds intrinsically value.

      I’m aware of wireheading, but I’m not sure how or why it clashes with intuitions; could you elaborate? I should say, though, that I have a taste for bullets and don’t mind following things to their extreme logical conclusions.

      1. That was in response to your comment (apologies for the format, I don’t know how to do blockquotes):

        “But the rights aren’t intrinsically valuable in themselves, and it is easy to construct realistic scenarios where not having certain rights, like freedom from exploitation, is in the best interests of the animals. Just as a parent will sometimes stop a child from doing something that is not in their best interest (even though it might be freedom), we should feel comfortable stopping an animal from doing something that is not in their best interest.”

        My reading was that this was intended as a counter-argument to the idea that rights are, in themselves, desirable. The same broad argument could be made against e.g. pleasure being in itself desirable, using wireheading as a counter-example.

        For example, if pleasure is, in itself, desirable, then one should prefer a shorter but more pleasurable life over a longer but less pleasurable life, provided the sum of pleasure is greater. So better 30 years at 100 pleasure per year than 80 years at 20 pleasure per year. More importantly, there is no reason we should limit such a calculation to ourselves – if someone else is choosing the 80/20 option, then that leads to less pleasure, so we should try to make them take the 30/100 option even if doing so would cause them less pleasure (up to the point where it becomes less pleasure overall), regardless of their will. Even potentially up to the point of forcing them to experience pleasure unwillingly (e.g. implanting electrical devices to stimulate the brain in pleasant ways).
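        Spelling out the totals that comparison relies on (a small worked sum, treating ‘pleasure per year’ as a unitless score purely for illustration):

        \[ 30 \times 100 = 3000 \quad > \quad 80 \times 20 = 1600 \]

        so, on a pure sum-of-pleasure view, the shorter life comes out ahead.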

        Now, obviously in reality there are practical limitations, but it doesn’t seem immediately obvious to me that that argument conclusively proves that utilitarian viewpoints are “most correct”. It suggests that in practical situations utilitarianism provides the best guide/structure for behaviour, but that doesn’t seem to be the same thing as being philosophically “correct”.

        1. To be honest, I would just bite the bullet on wireheading; I don’t see why it poses any threat to utilitarianism. If a world where wireheading against one’s will were common would produce the most wellbeing (which I don’t think it would, for practical reasons), then I’d accept that. I don’t think our intuitions are particularly good guides for moral decision-making, as our intuitions were designed for the African savannah, not for thinking about wireheading and the far future.
