Moral realism is the idea that there is such a thing as a moral fact. It is often used to refer to the existence of a single true or best code of ethics: if anyone disagrees with this code, they are wrong. I previously subscribed to this idea, and while I wish it were true, I no longer believe it to be the case. However, I would like to propose an intermediate version of moral realism which I think is accurate.
Let us examine the two broad schools of ethical thought: deontology and consequentialism (or utilitarianism). Each has many sub-categories, but for now I will consider them only in general terms.
Deontology suggests that there are some actions that are always wrong, regardless of the outcome. Examples include lying, stealing, and killing (usually referring only to humans, or to some subset of all sentient minds). Deontology often, but not always, draws on the principles of a particular religion.
Consequentialism is somewhat the opposite of deontology. An action is defined as ‘good’ or ‘bad’ based on its outcome. For example, lying to or killing someone might be good if it saved 100 people from dying. From a purely theoretical standpoint, there is no action that could not be justified if the outcome were sufficiently positive.
I would like to propose that a consequentialist code of ethics that seeks to maximise the wellbeing and/or minimise the suffering of sentient minds in the universe (or some slight variation of this) is the best possible code of ethics, if any could be considered ‘best’ (I’m not the first to say something like this). This is sometimes known as total classical utilitarianism. I argue that this is the case because it is the only code of ethics that actually grounds itself in the felt experiences that sentient minds care about (again, this could instead be some slight variation of total classical utilitarianism). To make my case, I will use several examples.
Many actions are only seen as ‘bad’ because they have historically been largely associated with causing suffering, and because having them as social norms causes suffering. Lying is bad because being lied to feels bad, and because it creates societal norms that result in bad consequences (suffering). Killing humans is bad because a societal norm of killing people for no reason causes suffering. At the end of the day, people only care about suffering and wellbeing – they are the only felt experiences. Everything else is a means to that end, whether they accept it or not.
If someone thinks they fundamentally/intrinsically care about something else, I argue that they are wrong or misguided. Intellectual pursuits are desirable because they bring one pleasure. Freedom is desirable because it is almost always associated with positive felt experience and a lack of suffering.
In the context of non-human animals, some argue that rights are what matter most, intrinsically so. However, if animals care about anything at all (and I think they do), it’s avoiding suffering and having pleasurable experiences. It doesn’t make sense for humans to impose our construct of rights or deontology on them. Again, rights for animals are useful because they would probably mean we could no longer exploit some 80 billion land animals each year for food, causing immense suffering in the process.
But the rights aren’t intrinsically valuable in themselves, and it is easy to construct realistic scenarios where not having a certain right, such as freedom from exploitation, is in an animal’s best interests. Just as a parent will sometimes stop a child from doing something that is not in their best interest (even though stopping them curtails their freedom), we should feel comfortable stopping an animal from doing something that is not in its best interest.
In conclusion, if someone thinks that they or others don’t care about wellbeing or suffering, they are wrong (I argue). If they think they or others care about rights or rules only for their own sake, they are wrong. I can’t make you care about ethics or ‘doing good’ (though I can certainly try), but if you do, I argue you should be a utilitarian; otherwise you are applying values that no sentient mind actually cares about intrinsically, and that’s selfish at best.