Open questions of importance

These are some questions that, to my knowledge, are unsolved and yet are important. I’m thinking about them, but if you think they’re already solved or you have an answer, please let me know.

I will try to keep this updated, so check back from time to time.

Animal agriculture

  • Is there a link between antibiotic use in animal farming and global antibiotic resistance?
    • If so, what impact would reducing antibiotic use in animal farming by X% have on reducing the number of current and future human deaths?

Climate change

  • How much would reducing emissions by a given percentage in a given year reduce the number of deaths at a given point in the future?

Philosophy

  • From a classical hedonistic utilitarian framework, how do we actually weight pleasure vs pain?
    • Is the Worst Possible World (as described by Sam Harris) as bad as the Best Possible World is good?
    • Is the most pain a being can experience equal and opposite to the most pleasure a being can experience? Does this depend on species, and can this be altered (as described by David Pearce)?
    • Given that we may eventually be able to edit out suffering, what are the chances that all species happen to be at a state where their potential for pleasure is equal and opposite to their potential for suffering?
    • Does it make sense to talk about weighting pleasure vs suffering?
  • What is the expected value of the future?
    • Given that things in the future could go really well or really badly, what is the average outcome? Is it, on average, positive or negative? The answer to this question is very important for thinking about the value of reducing existential risk.
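
As a very rough illustration of what an answer to that last question involves, here is a minimal sketch in Python. Every number in it (the scenarios, their probabilities, their values) is invented purely for illustration; the hard part is estimating those inputs, not the arithmetic.

```python
# Toy expected-value calculation for "the future".
# All scenarios, probabilities, and values are invented for illustration.
scenarios = {
    # name: (probability, value in arbitrary welfare units)
    "flourishing future": (0.50, +100),
    "mediocre future":    (0.30,  +10),
    "dystopian future":   (0.15,  -50),
    "extinction":         (0.05,    0),
}

expected_value = sum(p * v for p, v in scenarios.values())
print(f"Expected value of the future: {expected_value:+.1f}")

# With these invented numbers the expectation comes out positive (+45.5),
# which would favour reducing existential risk; make the bad outcomes
# more likely or more severe and the sign can flip.
```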

Sentience

  • It is often assumed that digital systems (e.g. artificial general intelligence, or even some software today) will be sentient and thus worthy of individual moral consideration. I propose that this is non-obvious, and even a small amount of uncertainty makes this an important question to consider. A universe full of artificial intelligence that happens to not be sentient would be dark indeed.
  • Is fundamental physics capable of experiencing something like suffering? If so, this changes the very nature of morality such that it would be unrecognisable.
  • How do we weight the pleasure/pain of beings with different levels of sentience?
    • Many agree that an insect, even if it were sentient, would warrant less moral consideration than a human, simply because it has a lower capacity for feeling pleasure/pain. But how do we rigorously define how much less? Some have proposed using ‘number of neurons’, on a linear or non-linear scale, to determine the relative moral consideration different beings warrant, but I would like to see this more rigorously grounded in neuroscience, and expanded to include the possibility of digital sentience, which might not have anything like neurons. (A toy version of this proposal is sketched below.)
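
Here is that toy version of the ‘number of neurons’ proposal, in Python. The neuron counts are rough order-of-magnitude figures, and the exponent alpha is a free parameter (alpha = 1 gives the linear version); nothing here is grounded in neuroscience, which is exactly the open question.

```python
# Toy moral-weight model: weight relative to a human as a power of
# relative neuron count. Counts are rough order-of-magnitude figures.
NEURON_COUNTS = {
    "human":    8.6e10,
    "pig":      2.2e9,
    "chicken":  2.2e8,
    "honeybee": 1.0e6,
}

def moral_weight(species: str, alpha: float = 1.0) -> float:
    """Moral weight relative to a human: (neurons / human neurons) ** alpha."""
    return (NEURON_COUNTS[species] / NEURON_COUNTS["human"]) ** alpha

# alpha = 1.0 is the linear proposal; alpha < 1 compresses the differences.
for alpha in (1.0, 0.5):
    print(alpha, {s: round(moral_weight(s, alpha), 4) for s in NEURON_COUNTS})
```

The choice of alpha, and whether neuron count is the right input at all, is doing all the moral work here, which is why I would like to see it grounded rather than assumed.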

Animal activism

  • Is there a chance that, as cellular agriculture (lab meat) becomes cheaper than animal agriculture, animal agriculture producers will be incentivised to cut costs to stay competitive, e.g. by cutting back on animal welfare? If so, how much would this reduce the value of working on cellular agriculture?
  • What are the far-future effects of animal advocacy (e.g. convincing someone to be veg*n may make their kids, and their kids’ kids, etc., more likely to be veg*n), and how do they affect estimates of the effectiveness of animal advocacy strategies?
  • What kind of diets most reduce suffering?
    • Many will stop at ‘veganism’; however, the average vegan still kills around 0.3 non-human animals per year as a result of their diet (according to a simplistic calculation by Matheny), and this doesn’t include insects. There is clearly room for optimisation beyond simply being vegan, yet there is startlingly little research on this.
    • For example, if wheat production killed more insects per calorie produced than fruit production, it might be reasonable to say that one should eat less bread and more fruit (a toy version of this comparison is sketched below).
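
Here is that toy comparison, in Python. The per-calorie figures are invented placeholders, not real estimates; the point is only that the ranking of foods, and hence the ‘optimal’ diet, depends entirely on empirical numbers we mostly don’t have.

```python
# Toy comparison of expected animal deaths per year by staple food.
# The per-calorie figures are invented placeholders for illustration only.
foods = {
    # name: animal deaths per 1,000 kcal (invented)
    "wheat bread": 0.004,
    "fruit":       0.001,
    "tofu":        0.002,
}

DAILY_KCAL = 2000

for name, deaths_per_1000_kcal in sorted(foods.items(), key=lambda kv: kv[1]):
    per_year = deaths_per_1000_kcal * (DAILY_KCAL / 1000) * 365
    print(f"{name}: ~{per_year:.2f} expected deaths/year if it supplied all calories")
```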

Psychology

  • Humans are notoriously bad at thinking about risk. For example, people are less likely to act on a risk when its probability is framed in percentage terms (e.g. ‘50%’) than in frequency terms (e.g. ‘1 in 2’), and people gamble even though it is almost always a terrible investment.
    • How can we use these principles to more effectively communicate existential risk and promote adequate concern?