Reducing the public health burden of Australia

Given the health benefits of eliminating meat and dairy consumption, I’ve often wondered whether a public health campaign around diet, similar to those run historically around the world against tobacco and other damaging substances, could result in a net positive for society. The rationale is that the costs, presumably borne by a government, would be outweighed by the gains from a reduced public health burden. Here I’ve attempted a simple estimate of this. There is already a vast body of research on the health benefits of a plant-based, whole food diet, so I haven’t spent too long on that aspect.


According to the Australian Institute of Health and Welfare, 90% of all deaths in Australia in 2011 were the result of chronic disease. 50% of the Australian population has at least one chronic disease, and 20% have two or more. Populations with diets rich in plant-based foods have lower blood pressure, a lower risk of type 2 diabetes, and a lower risk of death from cardiovascular disease (CVD). A plant-based diet can even prevent and reverse erectile dysfunction. In 2010, diet-related issues contributed more to the burden of disease in the US than smoking, high blood pressure or high blood sugar.

In 2004–05, total health expenditure in Australia was $81.1 billion, $52.7 billion of which was attributable to specific disease categories. Of this expenditure, 29% went through admitted patient hospital services, 16% through out-of-hospital medical services, 11% to prescription pharmaceuticals and 7% to optometry and dental services. CVD alone accounted for $5.942 billion.

Given such high costs to society from chronic diseases that are treatable through dietary changes, might it be reasonable to assume that a public health campaign focused on diet, similar to the campaign against smoking, could yield significant returns to the government and taxpayers? Several similar campaigns have existed (e.g. Shape Up Australia), though these have lacked the focus and intensity of the anti-smoking campaigns. Determining this properly would take a major study, but we can make a series of assumptions, applying a worst-case scenario to each, to estimate the costs and returns of such a campaign.

If we assume that the only cost to society of chronic disease is the cost to public health, and that the only chronic disease related to diet is CVD, then the cost is $5.942 billion. The first assumption isn’t true, as chronic disease also leads to decreased productivity and lost time in the workforce. Let’s further assume that only 50% of CVD can be treated through dietary changes (also untrue; almost all cases of CVD are treatable through dietary change, and a full list of related references appears at the end of this piece). Therefore $2.971 billion of the cost of CVD can be eliminated.

The next step is to ask how much a public health campaign around diet might cost. An anti-smoking campaign that covered Sydney and Melbourne from 1983 to 1987 cost $620,000 ($1,560,700 in 2015 dollars) for the media component and a ‘Quit Centre’ in Sydney. The population of Sydney in 1986 was 3,472,000. Assuming that, after adjusting for inflation, it costs the same per person to provide similar services today, a four-year national program would cost $10,768,800 (Australia’s population of 23,958,000 is 6.9 times the population of Sydney in 1986, so we multiply the inflation-adjusted cost by 6.9). Again, this is likely conservative, as it assumes no economies of scale in reaching the entire nation compared to a single city.
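
For ease of checking, here is a minimal Python sketch of the scaling arithmetic above, using only the figures quoted in this paragraph:

```python
# Scaling the Sydney anti-smoking campaign cost to a national dietary
# campaign, using only the figures quoted above.

sydney_campaign_cost = 1_560_700    # $620,000 in 1983-87, in 2015 dollars
sydney_pop_1986 = 3_472_000         # population of Sydney, 1986
australia_pop_today = 23_958_000    # population of Australia today

# Assume the per-person cost of similar services is unchanged after
# inflation, so the cost scales linearly with population.
scale = australia_pop_today / sydney_pop_1986   # ~6.9
national_cost = sydney_campaign_cost * scale

print(f"Population scale factor: {scale:.1f}")
print(f"Estimated cost of a 4-year national campaign: ${national_cost:,.0f}")
# -> about $10.77 million, matching the $10,768,800 figure above
```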

Now we can ask how effective such a campaign might be. The pilot anti-smoking campaign in Sydney and Melbourne immediately reduced smoking prevalence by 2.6 percentage points, and by a further 0.75 percentage points each consecutive year. Note that these figures refer to the drop in smoking prevalence across the entire population, not just among smokers, who made up around 38% of Sydney’s population before the campaign. As the percentage of Australians who don’t eat a plant-based, whole food diet is significantly higher (over 90%), this estimate is even more conservative. We might assume that the dietary campaign would be only 50% as effective as the anti-smoking campaign, which is again conservative, as smoking is addictive and harder to quit than poor dietary practices. So we have a campaign that we estimate will reduce poor dietary practices by 1 percentage point immediately and by an additional 0.375 percentage points each year. Applying these reductions to our figure of $2.971 billion for treatable CVD gives an initial benefit of $29.71 million, with an ongoing benefit of $11.14 million per year. After 4 years, this results in a total benefit of $82.36 million for a cost of $10.77 million: a return on investment of over 7 times, even under these deliberately conservative assumptions.
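
And a matching sketch of the benefit side; the four-year benefit total is taken directly from the estimate above rather than re-derived:

```python
# The benefit side of the estimate, using the figures quoted above.

treatable_cvd_cost = 2.971e9   # annual CVD cost assumed treatable by diet ($)
immediate_drop = 0.01          # 1 percentage point immediate reduction
yearly_drop = 0.00375          # further 0.375 percentage points per year

immediate_benefit = treatable_cvd_cost * immediate_drop   # $/year
ongoing_benefit = treatable_cvd_cost * yearly_drop        # additional $/year

campaign_cost = 10.77e6        # 4-year national campaign cost from above
total_benefit = 82.36e6        # 4-year total benefit as stated above

print(f"Immediate benefit: ${immediate_benefit/1e6:.2f}M per year")  # $29.71M
print(f"Ongoing benefit:   ${ongoing_benefit/1e6:.2f}M per year")    # $11.14M
print(f"Return on investment: {total_benefit/campaign_cost:.1f}x")   # ~7.6x
```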

The figures for the cost and effectiveness of the anti-smoking campaign used here are of around the same order as similar programs undertaken in the USA from 1989 to 1996. This assumes that the reduction in smoking from the Sydney and Melbourne campaigns is entirely attributable to the campaign, though this assumption is supported by the data.

The estimates presented here are relatively rough, but given the conservative assumptions made, it is clear that a detailed study of the costs and benefits of such a program is long overdue, and that it’s time to have a conversation about implementing a public health campaign that advocates a plant-based, whole food diet.


The road to such a campaign is expected to be long, as Australia’s peak body for health advice and medical research, the NHMRC, still recommends meat and dairy consumption as part of a healthy diet despite evidence to the contrary. However, given the expected reduction in Australia’s public health burden, along with the added benefits of a plant-based diet being significantly better for the environment (the livestock industry is among the largest sources of greenhouse gas emissions) and drastically reducing unnecessary animal suffering, it is a cause worth promoting.

I have covered the last two points previously here.

Thanks to Micaela Karlsen for providing references, working with me and reading early drafts of this work.

References

Esselstyn CB, Jr., Ellis SG, Medendorp SV, Crowe TD. “A strategy to arrest and reverse coronary artery disease: a 5-year longitudinal study of a single physician’s practice.” [In eng]. J Fam Pract 41, no. 6 (Dec 1995): 560-568.

Esselstyn CB, Jr., Favaloro RG. “More than coronary artery disease.” [In eng]. Am J Cardiol 82, no. 10B (Nov 26 1998): 5T-9T.

Esselstyn CB, Jr. “Changing the treatment paradigm for coronary artery disease.” [In eng]. Am J Cardiol 82, no. 10B (Nov 26 1998): 2T-4T.

Esselstyn CB, Jr. “Updating a 12-year experience with arrest and reversal therapy for coronary heart disease (an overdue requiem for palliative cardiology).” [In eng]. Am J Cardiol 84, no. 3 (Aug 1 1999): 339-341, A338.

Esselstyn CB, Jr. “In cholesterol lowering, moderation kills.” Cleveland Clinic journal of medicine 67, no. 8 (Aug 2000): 560-564.

Esselstyn CB, Jr. “Resolving the Coronary Artery Disease Epidemic Through Plant-Based Nutrition.” Preventive cardiology 4, no. 4 (Autumn 2001): 171-177.

Esselstyn CB, Jr. “Is the present therapy for coronary artery disease the radical mastectomy of the twenty-first century?” [In eng]. Am J Cardiol 106, no. 6 (Sep 15 2010): 902-904.

Ornish D, Scherwitz LW, Billings JH, et al. “Intensive lifestyle changes for reversal of coronary heart disease.” [In eng]. JAMA 280, no. 23 (Dec 16 1998): 2001-2007.

Expert opinion or simple model: Which is better?

I saw a very interesting talk at work today about decision making in oil and gas businesses, and thought it had some pretty neat applications for decision making in general. I’d like to summarise the research of David Newman, who is undertaking his PhD at the University of Adelaide in the Australian School of Petroleum and has 35 years’ experience in the oil and gas industry and in decision making. Unfortunately I don’t have full references for a lot of the work due to the format of the presentation, but I have tried to provide credit where possible.


The premise is that oil and gas projects (the exploration, development, drilling and production of petroleum) often fail, in hindsight, to achieve the economic outcomes they promised. Research has shown that a good predictor of outcomes is the level of front end loading (FEL), that is, the exploration, feasibility studies and analysis completed by the final investment decision (FID), when the full-blown project is given the final go-ahead.

The value of FEL is well known and many individuals and companies advocate its use, but in practice it is either not used or used poorly. More commonly, expert opinion is relied upon instead. A common situation is an expert overruling a body of analysis by claiming that this particular project is somehow ‘different’ or ‘unique’ compared to other projects.

As we know from research in the non-profit sector, expert opinion is very often wrong and is not a substitute for data and analysis, so it is no surprise that it holds little value in other industries as well.

However, Newman proposes that expert opinion may be a viable substitute if and only if it passes four tests:

  • Familiarity test – Is the situation similar to previous known examples?
  • Feedback test – Is there good ongoing feedback on the accuracy of the opinion? If evidence arrives that expert opinion is not working for the given situation, review immediately. This is notoriously difficult for projects with multi-year lifespans, such as oil and gas projects and charity programs.
  • Emotions test – Is there a possibility that emotions are clouding the expert’s judgement?
  • Bias test – Is there a possibility that the expert is succumbing to some kind of bias? It is hard to be a dispassionate expert on an issue.

There is a belief that data and models only predict outcomes better than expert opinion if they are complex and advanced. Paul Meehl’s work shows that even simple models beat expert opinion in the majority of cases: in 60% of comparisons the simple model was better, and most of the remaining 40% were close to a draw.

To understand the phenomena at play, Newman and his colleagues interviewed 34 senior personnel from oil and gas companies with an average of over 25 years’ experience in the industry. The personnel were a mix of executives (vice president level or equivalent), managers and technical professionals (leaders in their respective disciplines).

The survey data showed that ~80% saw FEL as very important and ~10% as important, with none saying it was not important.* However, none of those surveyed used the results from FEL as a hard criterion; that is, none were willing to approve or reject a project based on FEL data alone. Many used FEL as a soft criterion that guided their final decision but carried no veto power. The results of this survey are not statistically significant due to the small sample size, but according to Newman they may be seen as indicative.

Interestingly, the executives tended to rate their understanding of the technical details of projects higher than the actual technical experts did. Either the executives are overconfident, the technical staff are underconfident, some combination of both, or, seemingly less likely, the executives really are more competent in technical matters.

Newman proposes the following set of solutions to overcome the problems discussed here.

Apply correction factors to predict likely outcomes based on FEL benchmarking (comparison against other projects). This is difficult in oil and gas due to the differing nature of projects, and can be expected to be a problem in charity programs as well. It might be worthwhile looking at programs that have done similar work in an attempt to benchmark, or at least at previous programs within the same organisation.

Benchmarking can take the form of a checklist scored against set criteria. For example, a dispassionate outsider can be brought in to answer pre-determined questions and provide an assessment based on data (and only data, without interpretations) from the team. They might also rate individual categories as poor, fair, good or best.

The adjustment factors will vary significantly between different types of projects; however, the table below provides an example for two factors, cost and schedule, as rated by an external auditor. If the schedule has been rated as poor, meaning schedule pressure is likely biasing results (being behind schedule makes staff more likely to say the project is complete), you should adjust the relevant data by a scalar of 1.1-1.5 (or its inverse). My interpretation of this is that if long-term costs are expected to be $100/week, and a scalar of 1.4 is selected because the project is behind schedule, the true cost should be estimated as $140/week. The ranges are examples only, and the ideal values for a given type of project can only be determined through extensive analysis of that type of project, which makes this kind of analysis hard to apply meaningfully when substantial data isn’t available.

Rating    Cost          Schedule
Best      0.9 - 1.15    0.9 - 1.15
Good      0.95 - 1.2    0.95 - 1.25
Fair      1.0 - 1.3     1.05 - 1.4
Poor      1.05 - 1.45   1.1 - 1.5
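
To make the mechanics concrete, here is a hypothetical Python sketch of how such correction factors might be applied. The names (ADJUSTMENT_RANGES, adjust_estimate) are my own invention, not Newman’s, and the ranges simply mirror the example table above:

```python
# A hypothetical sketch of applying the correction factors above.
# Ranges mirror the example table; names here are illustrative only.

ADJUSTMENT_RANGES = {
    # rating: ((cost scalar range), (schedule scalar range))
    "best": ((0.9, 1.15), (0.9, 1.15)),
    "good": ((0.95, 1.2), (0.95, 1.25)),
    "fair": ((1.0, 1.3), (1.05, 1.4)),
    "poor": ((1.05, 1.45), (1.1, 1.5)),
}

def adjust_estimate(raw_estimate: float, rating: str, factor: str = "cost",
                    scalar: float | None = None) -> float:
    """Scale a raw estimate by a correction factor drawn from the range
    for the auditor's rating; default to the midpoint of the range."""
    cost_range, schedule_range = ADJUSTMENT_RANGES[rating]
    lo, hi = cost_range if factor == "cost" else schedule_range
    if scalar is None:
        scalar = (lo + hi) / 2
    if not lo <= scalar <= hi:
        raise ValueError(f"scalar {scalar} outside {rating} range [{lo}, {hi}]")
    return raw_estimate * scalar

# The worked example from the text: a project rated 'poor' on schedule,
# with a scalar of 1.4, turns a $100/week cost estimate into $140/week.
print(adjust_estimate(100.0, "poor", factor="schedule", scalar=1.4))  # 140.0
```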

Apply post-mortem analyses, or reviews of projects after completion.

Apply pre-mortem analyses. This involves asking everyone involved in the project to imagine that the project has concluded its life and a disaster has occurred, and then to propose reasons why the project failed. This increases the chances of identifying key risks by 30% (no source beyond Newman for this unfortunately, but it’s a huge result). The reasoning is that it legitimises uncertainty, making staff more likely to pursue obscure lines of thought or raise concerns that might be considered rude to bring up under different circumstances; calling a team member’s work a risk would be uncomfortable in most other settings.

I’d be interested to see some of these techniques applied more in non-profits and EA organisations, if they aren’t already, especially the pre-mortem technique. If the data is to be believed, it is a highly effective exercise. I’d also be interested to hear your thoughts on how they could be applied, or whether you think they are useful in the first place.

Again, there are several pieces of work by other researchers that I would have loved to reference, but was unable to because the references were not provided.


*In my personal opinion, the way these surveys are structured may introduce some bias itself. For example, the four choices for this part of the survey were ‘very important’, ‘important’, ‘neutral’ and ‘not important’. It seems unlikely that anyone perceived to be an expert would rate a concept widely known to be important as anything less than ‘very important’.