This recent piece in Scientific American, adapted from John Horgan’s talk at Stevens Institute of Technology, reviews the vast scope and questionable effectiveness of the cancer research and treatment industries.
Each year, 1.7 million Americans are diagnosed with cancer and 600,000 die from the disease. The cost of cancer treatment is estimated to reach $175 billion this year, and total spending on cancer research since Richard Nixon declared a “war on cancer” in 1971 has reached at least $250 billion.
Despite these investments, there has been little progress in reducing mortality for most cancers. The age-adjusted mortality rate from cancer — that is, the rate of cancer deaths in the population adjusted for the fact that older people are more likely to develop the disease — is the same as it was in 1930. While cancer mortality has decreased by 30% since the early ’90s, this came only after decades of increasing cancer mortality; more importantly, analyses suggest both the decrease and the prior increase can be almost entirely attributed to changes in smoking rates over time.
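To make the age adjustment concrete: the standard approach (direct standardization) weights each age group’s observed death rate by that group’s share of a fixed reference population, so comparisons across decades are not distorted by the population growing older. The notation below is a generic sketch of that method, not something taken from the article:

```latex
% Directly age-standardized mortality rate (generic sketch, not from the article)
% r_i : observed cancer death rate in age group i (e.g., deaths per 100,000)
% w_i : share of age group i in a fixed reference ("standard") population
\[
  R_{\mathrm{adj}} = \sum_i w_i \, r_i , \qquad \sum_i w_i = 1 .
\]
% Holding the weights w_i fixed means a change in R_adj between 1930 and
% today reflects changing death rates, not an aging population.
```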
Cancer-related clinical trials have the highest failure rate of any therapeutic area, and the various hypotheses proposed to explain the causes of cancer — hormones, viruses, genetics, carcinogens, etc. — have generally failed to yield effective treatments. Cancer drugs approved by the FDA between 2004 and 2014 extended survival by an average of only 2.1 months, and annual treatment costs per patient exceed $100,000. Immunotherapy, despite substantial press coverage and patient interest, benefits fewer than 10% of patients, and its costs can exceed $1,000,000. More than 40% of Americans who receive a cancer diagnosis lose their life savings within two years.
The benefits of cancer screening are similarly questionable. Research over the past decade has shown that our bodies regularly develop cancers and eliminate them without any clinical intervention. Repeated analyses of mammography, prostate-specific antigen (PSA) screening, and other forms of early detection have clearly shown they do little to reduce cancer mortality. Worse, many screening procedures have a high rate of false positives and subject healthy patients to potentially harmful treatments, including surgery, chemotherapy, and radiotherapy. This has led to widespread calls to discontinue screening programs, as some argue that the programs’ high costs, harmful effects, and tendency toward overtreatment outweigh any benefits of early detection.
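To see why false positives dominate when a mostly healthy population is screened, here is a minimal sketch of the underlying base-rate arithmetic; every number in it (population size, prevalence, sensitivity, specificity) is an illustrative assumption, not a figure from the article:

```python
# Base-rate arithmetic behind screening false positives.
# All numbers below are illustrative assumptions, not figures from the article.

population  = 100_000  # people screened
prevalence  = 0.005    # assumed true cancer rate among those screened (0.5%)
sensitivity = 0.90     # assumed probability the test flags a real cancer
specificity = 0.91     # assumed probability the test clears a healthy person

has_cancer      = population * prevalence                         # 500 people
true_positives  = has_cancer * sensitivity                        # 450 flagged correctly
false_positives = (population - has_cancer) * (1 - specificity)   # 8,955 flagged wrongly

# Positive predictive value: of everyone flagged, how many actually have cancer?
ppv = true_positives / (true_positives + false_positives)
print(f"{false_positives:,.0f} false alarms vs. {true_positives:,.0f} real cancers")
print(f"PPV = {ppv:.1%}")  # ~4.8%: most positive results are healthy people
```

Under these assumed numbers, roughly 95% of positive results come from people without cancer, which is the mechanism behind the overdiagnosis and overtreatment the article describes.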
The size of the cancer industry has also fostered corruption and conflicts of interest. The 1,200 cancer centers in the United States spend $173 million annually on advertising, often leveraging emotional appeals that give patients unrealistic expectations about treatment effectiveness while saying nothing about the associated costs. Cancer specialists can be paid by drug companies to prescribe particular drugs, an incentive that encourages the hype vocabulary common in cancer coverage: “breakthrough,” “miracle,” “game-changer,” etc. These commercial incentives have contributed to cancer research’s own reproducibility crisis, with multiple analyses finding that the majority of highly cited cancer studies fail to replicate when independently tested.
In sum, the article describes a massive treatment and research infrastructure that, despite its size, public prominence, and supposed importance, has demonstrably failed to deliver meaningful benefits to patients. The author notes this has led some doctors to begin practicing “conservative medicine”: acknowledging the limited impact novel pharmacology has had on the course of disease and relying instead on fewer and simpler treatments.