
Why Bother With Medical Journals and Whether They Are Honest?

By Richard Smith, former editor of the BMJ. March 18, 2020

The following excerpt is reprinted with permission from The Trouble With Medical Journals (Taylor & Francis, 2000).

Stephen Lock, my predecessor as editor of the BMJ, taught me always to remember that journals would soon ‘be wrapping up next week’s fish and chips’. Something we expected to cause excitement often, he noted, had all the impact of ‘a doughnut in the North Sea’. Journals rarely cause change directly, but I want in this chapter to try and convince you that journals can have profound effects. They might do this through anything they publish, but I’m particularly interested in cases where the journals may have behaved poorly.

Any such debate today tends to begin with the case of the Lancet and the measles, mumps and rubella (German measles) vaccine (MMR) (1). Public health doctors froth at the mouth when telling this story. For them it illustrates the waywardness of journals and their ability to create havoc and cause extensive harm.

The Lancet in February 1998 published an article by Andrew Wakefield and others that suggested that there might be a link between children being given the MMR vaccine and developing a strange bowel condition and autism. The paper described 12 children who had the bowel condition and a developmental disorder, which in nine cases was diagnosed as autism. In eight cases the parents associated the child developing the condition with having been given the MMR vaccine. What the paper didn’t make clear, but an accompanying editorial did, was that these children were referred to Wakefield and others because they were known to be interested in a possible link between MMR and bowel disease (10).

Wakefield and others concluded that they had not proved a link between the vaccine and the syndrome of bowel disease and autism, and the accompanying editorial said the same. The editorial — which I’ve heard an editor of a paediatric journal call ‘incomprehensible’ — predicted, however, that the paper might cause what journalists love to call a ‘scare’. ‘Vaccine safety concerns such as that reported by Wakefield and colleagues may,’ the editorial said, ‘snowball into societal tragedies when the media and the public confuse association with causality and shun immunization. This painful history was shared by the UK (among others) over pertussis in the 1970s after another similar case series was widely publicized, and it is likely to be repeated all too easily over MMR. This would be tragic because passion would then conquer reason and the facts again in the UK’ (10).

To many the fact that the authors had identified 12 children with both a strange bowel disease and developmental disorder, and that nine of them became ill soon after being given an MMR, may seem like strong evidence of harm from the vaccine. But it isn’t. Let me try and explain why.

Oddly, I want to start by explaining a famous scam. You write a letter to 10,000 people offering free advice on investing in shares. You pick many different shares and by chance some will rise. You ignore all the people to whom you gave poor advice and write to the ‘winners’ offering further advice. Again by chance some will ‘win’. Once you have done this three or four times you write to the handful of people who have ‘won’ repeatedly asking them for money to invest. You then abscond with their money. To the ‘winners’ you seem to have remarkable predictive powers. They know nothing of the nearly 10,000 who received poor advice.
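The arithmetic behind this scam is easy to check. Below is a minimal sketch; the 10,000 recipients come from the text, while the halving scheme (each round, half the letters predict a rise and half a fall, so exactly half the remaining recipients see a 'correct' call) is the standard version of the trick, assumed here for illustration:

```python
# Illustrative sketch of the share-tipping scam described above.
# Hypothetical scheme: each round the tipster sends "shares will rise"
# to half the remaining recipients and "shares will fall" to the other
# half, then writes again only to those who saw a correct prediction.
recipients = 10_000
for round_no in range(1, 5):
    recipients //= 2  # only the "winners" of this round are kept
    print(f"after round {round_no}: {recipients} convinced 'winners'")
```

After four rounds, 625 people have each seen four correct predictions in a row, purely by construction and with no forecasting skill involved; they know nothing of the thousands who were given wrong advice.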

Hundreds of thousands of children every year are vaccinated with MMR. Similarly hundreds of thousands have rashes, tens of thousands have fits and thousands will develop behavioural disorders. Inevitably therefore many children will by chance experience problems they would have experienced anyway within hours and days of being vaccinated. To the parents — just like the ‘winners’ in the scam — it is hard to believe that what has happened to their child is the result of chance.
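The same point can be made with a back-of-envelope calculation. Every figure below is hypothetical, chosen only to show how many chance pairings the background rates alone would generate each year:

```python
# Hedged back-of-envelope sketch of the coincidence argument above.
# All numbers are invented for illustration, not taken from any study.
vaccinated_per_year = 600_000    # children given MMR in a year (hypothetical)
background_rate = 1 / 1_000      # annual rate of a developmental diagnosis (hypothetical)
window_days = 30                 # what counts as "soon after" vaccination (hypothetical)

# Expected diagnoses falling, by chance alone, within 30 days of the jab
expected_coincidences = vaccinated_per_year * background_rate * (window_days / 365)
print(round(expected_coincidences))
```

Even with these modest assumed rates, dozens of children a year would be diagnosed shortly after vaccination by coincidence alone, which is why a small case series of such pairings is weak evidence of causation.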

Such parents may search the internet for information and stand a high chance of encountering a group like that of Wakefield and others who are interested in the problem. It thus isn’t so strange that the group managed to collect a series of patients. The difficulty of working out whether or not vaccination might be causing the autism is further complicated by something epidemiologists call ‘recall bias’. If you know that A may be associated with B and then experience A, you are much more likely to remember that B happened to you than somebody who hasn’t experienced A.

When I was editor of the BMJ, for example, we caused a great fuss in France, which depends heavily on nuclear power, by publishing a study — by French researchers funded by the French government — suggesting that walking on a beach near a nuclear reprocessing plant might make people more likely to develop leukaemia (11). The study asked questions of people who had leukaemia and controls who didn’t. There was already a suspicion that these plants might cause leukaemia. The fact that those who had the disease were more likely to report having been on the beach might thus be because they really had been there more often or because they were more likely than the controls to remember that they had.

The alternative explanations of the findings of Wakefield and others do not mean that MMR and autism are not associated, but they do mean that the evidence is weak: too weak, many say, to deserve publication in the Lancet, a journal that has space for less than 10% of all the studies submitted to it. Did the Lancet publish because it relished the thought of the massive media coverage that would follow publication? That is the suspicion of many.

Extensive media coverage did follow and has continued to follow, and immunization rates have dropped. Some parts of the media — including, interestingly, the satirical magazine Private Eye — have taken up the cause of linking MMR and autism. They smell a conspiracy by the medical establishment. Between the time when I wrote the first draft of this chapter and came to revise it, Melanie Phillips (‘a top Mail writer’) published a much-hyped series of articles in the Daily Mail based on three months’ study of the problem. She has no doubt that MMR causes autism, that epidemiology is hopeless and that the ‘pro-MMR’ researchers all have impossible conflicts of interest.

The Medical Research Council investigated Wakefield’s work and found it severely wanting. He has lost the funding for his research and has become something of a pariah among doctors. Now he has moved to America, funded by wealthy people who believe strongly in his work, and sparked anxiety about the MMR vaccine there.

This long running story took another twist in 2004 when it emerged that Wakefield had failed to declare a conflict of interest. He was being paid to see if there was any evidence to support possible legal action by a group of parents who claimed their children were damaged by the MMR vaccine. Richard Horton, the editor of the Lancet, conducted a rapid examination of the evidence and declared that he would not have published the paper if he had known of the conflict of interest (12). Subsequently 10 of the 13 authors of the original study retracted the interpretation that MMR and autism might be linked, something that the original study did not state anyway (13).

Many studies have been published that do not support a link between MMR and autism. Those who have autism are not more likely than others to have had the vaccine. The introduction of the vaccine was not followed by a surge of cases of autism (although interpretation of these studies is complicated by the fact that autism has been increasing, probably because it is more often diagnosed rather than it is increasing in reality). These studies have been published in all the major journals, including the Lancet. But it is hard, even impossible, to prove a negative. One black swan will show that some swans are black, but 10,000 white swans do not prove that there are no black swans.

The battle has gone to the highest in the land with the prime minister being questioned on whether or not his baby son had been given the vaccine. He said that it was a private matter but that his government continued to recommend that all children receive MMR. The fact that he wouldn’t answer was seen by some of the media as an admission that his son hadn’t received the vaccine and that the government had ‘secret knowledge’ that the vaccine was harmful. Row upon row of prominent figures in healthcare — chief medical officers, chief nursing officers and presidents of medical and nursing colleges — have lined up to say that there is no evidence that the vaccine causes autism, but immunization rates have gone down in several countries. There have as a result been outbreaks of measles, and sometimes measles causes severe and lasting damage to the brain. The view of those in public health is that a great deal of damage has been done by the Lancet publishing the original study.

A sideline in the debate has been whether parents should be allowed to ask for their children to be given the vaccines separately rather than all together in the MMR vaccine. The thinking, which has no evidence to support it but ‘feels right’ to many, is that a baby’s immune system cannot cope with three vaccines at once. The government is in difficulty here. If it allows parents a choice (and patient and parental choice is politically fashionable) then it seems to be admitting that there are problems with MMR. I wrote an editorial arguing that doctors had learnt to go along with what they see as the irrational choices of individual patients — so why shouldn’t public health authorities do the same for populations (14)? I also pointed out that there is evidence that excessive reassurance is counterproductive. This editorial caused consternation among some doctors but had no discernible impact on government policy or immunization rates.

And was it the Lancet that caused this problem? Wakefield and others would have got their work published somewhere (authors always can, another failing of journals), and maybe it wasn’t the study that caused the problem. The idea that MMR might be dangerous was already abroad, and the history of people being anxious about vaccination goes right back to when Edward Jenner first used vaccination — against smallpox — at the end of the 18th century. It could be that there would have been a decline in rates of MMR vaccination even if the Lancet hadn’t published the paper. Public health people don’t accept this. They pin the blame on the Lancet. It gave its valuable imprimatur to the work, created the problem and did itself much harm in the process.

I’m often asked if I would have published the paper. People expect me to say, ‘Of course not’, but I usually demur. First, it’s easy to be wise with hindsight. Second, I know that choices on publication are inevitably somewhat arbitrary. All journals, including the BMJ, publish studies that turn out to be nonsense. Third, the articles in the Lancet were cautious even if much of the subsequent media coverage was not. Fourth, there is a trade-off between what’s scientifically exciting and clinically useful. If it had turned out (or does turn out) that MMR and autism are linked, the Lancet would have got a ‘first’, something that is important not only to journalists but also to scientists. The Lancet has traditionally been concerned with the scientifically new and exciting. It’s one of the reasons people read the Lancet. The BMJ, in contrast, is more concerned with studies that have a direct clinical or public health message, making it, some say, much duller.

I’m not sure whether the Lancet did the right thing or not, but I am sure that this case illustrates powerfully that what journals publish and the ethical issues that arise in making those decisions can have a broad impact on people’s lives. Many people must have worried whether or not to have their children vaccinated and whether or not a problem with their child, perhaps even autism, might have been caused by MMR.

Some extremists would say that the Lancet has blood on its hands. I too have been accused by a knight of the realm and a fellow of the Royal Society, Sir Richard Peto, of ‘killing hundreds of thousands’. The BMJ was about to publish a major paper from Peto and colleagues showing the power of aspirin and similar drugs in preventing deaths from cardiovascular disease. Peto is convinced that far more patients should be taking aspirin and that many are dying unnecessarily young because they are not taking the drug. He has very strong evidence to support his view, but we were about to accompany his paper with a commentary that was sceptical (16). It was through publishing this commentary that I would kill hundreds of thousands because doctors would be given a reason, an excuse, not to put their patients on aspirin. I can’t say that I have hundreds of thousands of deaths on my conscience, but it shows that somebody of his huge intelligence rates the power of journals higher than I do.

(In the same interchange Peto said that he had looked at the principles of the Committee on Publications Ethics [COPE], which I helped to found [and which is discussed in chapter 8], and discovered that I’d broken every one of them. That gives me another qualification for writing this book.)

My second example also comes from the Lancet, and the editor of the Lancet before Richard Horton, Robin Fox, says that when he dies they will find ‘Bristol Cancer Help Centre study’ written on his heart (17, 18). This is why.

The Bristol Cancer Help Centre offered complementary or alternative therapy to patients with cancer. It was praised by the Prince of Wales, and there was naturally discussion over whether or not its treatments were effective. Britain’s two cancer charities — the Imperial Cancer Research Fund and the Cancer Research Campaign (as they were then called) — funded a study to find out. This study was undertaken in the late 1980s and published in the Lancet in 1990 (17). It was a time when the mutual suspicion between orthodox and complementary practitioners was not as strong as it had been in earlier times but was stronger than now. The Bristol centre and the patients visiting it cooperated with the trial.

Heather Goodare, a singer and now a friend of mine, was a participant in the trial. She had had breast cancer and found the advice and support offered by the Bristol centre very helpful. She provided information with enthusiasm — and so was both devastated and angry when she switched on the television news one night and heard the researchers announce that those who visited the Bristol centre were likely to die sooner than those who didn’t. She was devastated because what seemed to her a good experience might be reducing her chances of surviving. She was angry because she expected to hear the results of the trial directly from the researchers, not through the television news. But, as is often the case, that terrible experience led to much that was good, for her, patients in general and journals.

The study compared what happened to 334 women with breast cancer who went to the Bristol centre with what happened to 461 women with breast cancer attending a specialist cancer hospital or two district general hospitals. The authors found that the women attending the centre were roughly twice as likely to die as those simply attending traditional hospitals. (It’s important to point out that the women attending the Bristol centre were also cared for by traditional hospitals.)

Making a comparison like this is hard because the two sets of women will be different in many ways — in age, social class and backgrounds, types of cancer, and the extent and seriousness of their cancers. The women will also be different in ways that are not easily measured — like personality and attitude. Furthermore, the information was gathered from hospital notes, which are not a reliable source of information: important information is often missing or wrong. Statisticians can attempt to compensate for the differences in the two groups and the missing information, but the conclusions will at best be tentative. Any differences found may be due to the women being different rather than differences in their treatments.

Unfortunately those who presented the conclusions at the press conference were far from tentative. I wasn’t there, but the BMJ editor who went said that those giving the press conference positively delighted in the bad results for the Bristol centre. Complementary medicine had been shown not just to be useless but also dangerous. The results reverberated around the world, hitting the mass media before the Lancet‘s usual embargo of 12:01 am on a Friday morning. The BMJ‘s report on the study has, I must confess, been described by Heather Goodare as ‘particularly lurid’ (19, 20). The women who had participated willingly in the study were forgotten in the rush for publicity.

Once the results were fully published they could be critically appraised, and it soon became clear that there were severe deficiencies. Sir Walter Bodmer, Director of Research at the Imperial Cancer Research Fund, wrote to the Lancet in 1990 to say, ‘Our own evaluation is that the study’s results can be explained by the fact that women going to Bristol had more severe disease than control women’ (21).

But harm had already been done. After publication of the study the number of patients attending the Bristol centre fell dramatically and the centre nearly went into receivership. Furthermore, one of the study’s authors, Professor Tim McElwain from London University, killed himself two months after the study was published. We can’t know what part the study played in McElwain’s suicide, but the study and his suicide are forever yoked together.

Many of the women who felt abused by the study formed themselves into an action group, the Bristol Study Support Group. One of their targets was the Lancet. They wanted the study ‘retracted’ from the scientific literature. Retraction is a process that indicates that the results of a study cannot be believed — although, ironically, retracted studies continue to be cited by other researchers. Retraction is usually, however, reserved for studies that are proved to be fraudulent. The potential problem with retracting studies that are ‘wrong’ is that given enough time almost everything might have to be retracted. Better, many would say, for them just to be forgotten. Neither the Lancet nor the cancer charities thought that there had been any fraud in the Bristol study. Gordon McVie, scientific director of the Cancer Research Campaign, said: ‘Our view is that the researchers made an honest scientific mistake during their analysis of their findings’ (23).

The authors did not want their study retracted, and the Lancet felt unable to do so. The journal did respond, however, by raising its standards of statistical review.

The support group suspected that the results had not arisen from simply ‘an honest scientific mistake’. They wanted the original data to be re-analysed by somebody independent of the authors and the charities, but this has never happened. Ethical arguments over who owns data are intense. Many take the view that they belong in some sense to the patients and also that they are a public good, not least because their collection is often funded with public money. Lots of researchers believe, however, that the data belong to them — because they have done the arduous work of collecting them — and are anxious about them being misused by others. There is also an element of competition.

But researchers should be willing, or even obliged, to hand over data when there are anxieties about possible misconduct. The BMJ and some other journals write into their guidance to authors that a condition of submission is that authors must be willing to make their data available.

Because the support group got so little satisfaction from the Lancet and the cancer charities they compiled a dossier and took it to the Charity Commission, the body in Britain that oversees charities. The commission eventually censured both charities for inadequacies in their mechanisms for supervising and evaluating research (24). But the main good that came out of this episode was an added impetus to involving patients much more in the process of research. It is becoming normal to keep participants in trials informed of how the trial is progressing and to present the results to them first. Even more importantly, patients are increasingly involved in the planning, designing and performing of research. The guinea pigs are taking over the experiments. I discuss the often tense relationship between medical journals and patients further in chapter 13.

A study that some call ‘the BMJ‘s MMR paper’ was concerned not with a clinical but rather with a health policy matter (25). It was a comparison by three authors based in California (although one is English) of the costs and effectiveness of Britain’s National Health Service (NHS) and Kaiser-Permanente, a California-based health maintenance organization. (A health maintenance organization provides complete healthcare for a fixed annual sum.) Comparisons between healthcare systems are difficult and this study was unusual in that it was a ‘broad brush’ study. Its general conclusion was that the costs of the two systems were of a similar order but that on many measures — time to wait to see a doctor or to have an operation, immunization rates — Kaiser performed considerably better. This was counterintuitive. The NHS has been widely regarded, particularly by people in Britain, as highly efficient, and the American system in general as profligate (although Kaiser is a particular and unusual part of it). One message was that the problem with the NHS was not just money.

Just like the MMR study, this study received wide media coverage and the British government was very interested in it, but it was upsetting to the many supporters of the NHS. Severe criticisms of the study flowed in from the moment it was published (26, 27). The method for adjusting for purchasing power was ridiculous. The assertion that the populations served by the two healthcare systems were similar was misleading. There were, critics said, many problems with the study, and the BMJ had made a bad mistake in publishing it. Some critics suggested to me that the study was fraudulent. The study would push the government towards making changes that would be damaging to the NHS (27). Ultimately, the harm to healthcare in Britain (and perhaps other countries that were misled by the study) might be more severe than the harm caused by the MMR paper.

I’m unconvinced and unrepentant, but I do believe that the study could have profound consequences. Indeed, the NHS has been studying Kaiser hard, and one study that has resulted supports some of the conclusions of the earlier study (21). Those who love a conspiracy theory were excited by me leaving the BMJ and joining an American healthcare company that was trying to establish a business in Britain. Had I published the paper in order to further my own interests? So here is another example of a journal perhaps having an influence on the lives of many in ethically questionable circumstances.

JAMA — formerly the Journal of the American Medical Association — has had its share of dramatic and ethically dubious publications, with one culminating in the firing of the editor. George Lundberg was fired not for publishing but for speeding up publication of a paper that showed that many American students did not think of oral sex as sex (29). This undramatic finding gained notoriety because of the impeachment of President Clinton, where one of the issues was what exactly had happened between him and Monica Lewinsky in the Oval Office. I discuss this episode in chapter 12 on editorial misconduct.

But an equally controversial episode occurred a decade earlier when JAMA published an account of a tired junior doctor killing a 20-year-old patient who was terminally ill with ovarian cancer (30). The paper — which provoked a huge and stormy debate within the journal, the mass media and the journal’s owner, the American Medical Association — is remarkable for its brevity and bluntness. The 500-word piece, which was anonymous, is a first-person account of a doctor killing a patient, whom he had never previously met. Called ‘in the middle of the night’, he (or perhaps she) encounters a patient with ‘unrelenting vomiting’ and ‘severe air hunger’ who ‘had not eaten or slept for two days’. It was, the author wrote, ‘a gallows scene, a cruel mockery of her youth and unfulfilled potential. Her only words to me were, “Let’s get this over with.”’ The doctor draws up some morphine, ‘enough, I thought, to do the job’. After the injection, ‘with clocklike certainty’, the patient stopped breathing. ‘It’s over, Debbie’ is both the last line and the title.

JAMA received some 150 letters, which were four to one against the physician’s actions and three to one against JAMA for publishing the piece (31). Euthanasia is a subject that generates great emotion. In Britain it has taken over from abortion as the subject most likely to produce a hate-filled postbag. Doctors’ organizations — including the British Medical Association (BMA) — tend to be strongly against euthanasia, whereas doctors themselves are much more ambivalent. Most doctors (including me in my limited clinical experience) have given injections to make patients more comfortable, knowing that a ‘side-effect’ will be the death of the patient. Euthanasia is thus a tricky subject for medical journals owned by medical associations. An editorial I published encouraging continuing debate — rather than supporting euthanasia — led to several letters to the BMA hierarchy calling for me to be fired (32). Almost 10 years later we published an editorial that argued the case for euthanasia (33) — but not long after publishing an editorial that argued exactly the opposite (34). The New England Journal of Medicine — owned by the Massachusetts Medical Society — has been bolder in its support of euthanasia (35).

But more relevant to this book than the question of whether or not euthanasia should be supported is the question of whether or not JAMA should have published the article as it did. The article described a criminal act. Can it be right for editors to publish accounts by (admittedly untried) criminals of their acts? George Lundberg argued that both the First Amendment of the Constitution of the United States (which says that congress will not pass a law ‘abridging the freedom of speech, or of the press’) and Illinois state law (JAMA is published in Chicago) supported his position. His position was legally challenged — but without success. Lundberg recognized that JAMA was ‘in conflict with another powerful ethical obligation, that of a physician reporting another physician suspected of wrongdoing’. He then argued that the journal didn’t know ‘whether this is a clear case of wrongdoing’ and that it ‘may effectively be hearsay’. This raises the possibility — widely believed by many — that the whole thing was a hoax (36).

If JAMA believed that the piece was a hoax then presumably it would not have published it. If it didn’t believe it was a hoax, then the author was describing intentionally killing a patient. How could it be hearsay?

Debate also raged over how JAMA published the piece. There was no editorial comment and no disclaimer. Didn’t this mean that JAMA approved of euthanasia?

Mightn’t it also mean that the American Medical Association also approved? Lundberg — correctly to my mind — pointed out that the act of publication does not imply editorial support. Most journals are full of contradictory views. By definition, they can’t all be editorially supported. The BMJ has carried many letters arguing that I am an idiot. That argument doesn’t always have my editorial support. As the journal doesn’t automatically agree with what it publishes, it follows that neither do the owners.

But shouldn’t JAMA have published the piece with some ethical commentaries? Many thought so, but Lundberg said that JAMA decided not to in order ‘to avoid stultifying debate’. That seems reasonable to me. So long as the correspondence columns are open to all it makes sense to publish short dramatic pieces that stimulate debate. Balancing every comment every time makes for a dull journal. I’ve many times experienced how short, purple pieces with little supporting evidence promote debate in a way that well thought out, balanced and thoroughly referenced articles do not. Journals need both types of articles.

I assume that at least George Lundberg knew the identity of the author of ‘It’s over, Debbie’, but the BMJ published a letter where we did not know the identity of the author. Your first reaction will probably be that this is unsupportable. How could we possibly know if the piece was genuine (although the same can be said of pieces that are signed)? We published such a letter — embedded in an editorial — because we were able to check the broad facts (37). The letter concerned cheating at medical school and is worth republishing here:

Dear Sir,

I am a graduating student of Royal Free and University College London Medical School. During the finals of clinical exams I was witness to one of the most ugly scenes in my short but eventful life. One of my colleagues had in a brazen attempt to obfuscate the examiners made use of her Oxford Clinical Handbook during her long case. Unfortunately (or fortunately) for her, she was caught red handed. The deed was not looked on kindly by the authorities, especially when she attempted to extricate herself by claiming she had also done this in a previous examination and not been caught thereby (or so she believed) justifying her act … My colleagues and I were convinced that she would receive her comeuppance.

After meeting the disciplinary board, however, she was allowed to pass her exams without further ado. Fair play and honesty, two virtues I have always believed in, have been made monkeys of again. In future perhaps we should all do as she did. After all, look where it’s got her.

The examining committee, the subdean told us, had decided to let her graduate but had held back distinctions she might have won. We wrote that we understood why the committee had done what it did, but we thought it right that we should publish the story and point out mistakes that the committee had made. ‘The problem with cheating,’ our editorial said, ‘is that it destroys trust. Somebody who can cheat can also lie. Suddenly everything is uncertain’ (37). The biggest mistake of the committee was that it hadn’t explained its actions to the rest of the students. The committee had also failed to consider the broader context — of medicine in Britain ‘being in the dock’ after a series of scandals and failures.

But did we do the right thing to publish? Wasn’t this tittle tattle? Weren’t we after sensation? Mightn’t we damage the student? Many readers thought so, as they made clear in over 100 letters to the editor (38). We were undermining British medicine when it needed building up. But more readers thought that we had done the right thing, agreeing with us that justice didn’t just have to be done, it had to be seen to be done. One unexpected consequence was that we uncovered several other examples of cheating at medical school. Two students who had just graduated wrote and told us that they were bothered by the fact that they — and many others in their class — knew the content of an exam paper in advance. We encouraged them to tell their medical school, which they did.

This article didn’t stimulate anything like the media storm of ‘It’s over, Debbie’, but there was international coverage. Medical journals in the two cases have acted questionably to raise issues — euthanasia and the trustworthiness of doctors — that matter to the world at large.

The Lancet — yet again — provides a still more dramatic example of raising an issue that matters to the world by deciding to publish some very weak — some would say meaningless — research on the effects of genetically modified foods (39). The production and sale of such foods has prompted huge controversy in some, but not all, countries. There was considerable public anxiety about genetically modified foods in Germany long before it appeared in Britain, but anxiety swept through Britain a few years ago. Currently, there is much less anxiety in the United States, where genetically modified foods are common, but the government in Zimbabwe is so concerned about the safety of the foods that it prefers to allow people to go hungry rather than eat the foods.

The arguments over genetically modified foods are interesting in that the scientific establishment doesn’t think that there is any need for anxiety, whereas much of the public is unconvinced by the reassurances from both the scientists and the government. The public remembers too clearly the same parties insisting that there was no risk to humans from ‘mad cow disease’. The public makes its own judgements on risk and it may often be more right than the experts. In a sense there are no experts on risk — because risk is a combination of the likelihood of something happening (which experts are usually better equipped to calculate) and the ‘dreadfulness’ of that event. There are no standard measures of ‘dreadfulness,’ but the public must be the ultimate judge.

The Lancet stepped into the emerging controversy over genetically modified foods by publishing some research that showed changes in the intestines of rats fed genetically modified potatoes (39). This research had been trailed on television some 18 months earlier, and some of the media suggested that the scientific establishment was refusing to accept the results and was suppressing them. The Lancet published the results primarily to get them out into the open rather than because it believed that the results showed that genetically modified foods were unsafe. Indeed, it simultaneously published a commentary that said: ‘The results are difficult to interpret and do not allow the conclusion that the genetic modification of potatoes accounts for adverse effects in animals’ (40). This is clearly so. It’s always hard to know what animal research means for humans (which is why the BMJ virtually never publishes animal research), and the meaning of the changes in the rats’ guts is impossible to interpret. There were too few animals in the study and no controls, making it impossible to know what would have happened to animals fed a similar diet that lacked the genetically modified potatoes.

The study was sent to six reviewers by the Lancet and some recommended rejection (41). The authors revised the paper three times before it was published. Nevertheless, this is a study that probably would not have made it into the Lancet in normal circumstances. The Lancet does have a tradition of publishing research that may be scientifically intriguing but that doesn’t allow conclusions that would matter to practising doctors, but this was a study on rats that was scientifically very weak. Was the Lancet yet again indulging its taste for sensation and publicity or was it acting responsibly by peer reviewing and putting on the public record research that had been widely discussed but seen by few? I like to believe the latter, although I don’t think that we would have published the research in the BMJ — just because it was too far removed from our sort of research.

Can it ever be right consciously to publish scientifically weak studies? (Journals regularly do so unconsciously.) Many seem to believe that a journal should publish only research that crosses its particular line of scientific worthiness. Top journals should publish only top research. Otherwise, a ‘stamp of approval’ may be given to an unworthy study. It’s a belief based on the false idea that peer review is an exact process that strictly ranks scientific studies. But it isn’t. As I discuss in chapter 7, peer review is a flawed and inevitably subjective process. It also has to consider many factors at once, including originality, clinical importance, scientific importance and validity. A journal may appropriately choose to publish a clinically important but scientifically weak study — if no better evidence is available. Similarly, there are circumstances where journals might publish very weak studies together with commentaries pointing out their weaknesses. One such set of circumstances is where the research is widely discussed but has been seen by few.

Many of the ethical difficulties of medical journals arise in their relationship with pharmaceutical companies (as I discuss in chapter 16). The study funded by industry that has caused the greatest difficulties in recent times is the VIGOR (Vioxx Gastrointestinal Outcomes Research) study, which was published in the New England Journal of Medicine in 2000 (42). The study was a trial in which over 8000 patients were randomized to receive either naproxen, a long-established non-steroidal anti-inflammatory drug, or rofecoxib, a Cox-2 inhibitor that the manufacturers, Merck, hoped would have fewer gastrointestinal side-effects. There were sound theoretical grounds for expecting that this would be the case. The primary endpoint of the trial was gastrointestinal side-effects, and sure enough the patients given naproxen experienced 121 side-effects compared with 56 in the patients taking rofecoxib. This was a marvellous result for Merck and contributed to huge sales of rofecoxib. Merck reportedly bought a million reprints of the article from the New England Journal of Medicine to use in promoting the drug. (My estimate would be that this must have meant several hundred thousand dollars of profit for the journal.)

The trial also showed an increase in myocardial infarction in the patients given rofecoxib (0.4%) compared with those given naproxen (0.1%). This was an unexpected result and the difference was interpreted to be caused by naproxen having a protective effect. In September 2004 Merck withdrew the drug from the market when it became clear that rofecoxib did have serious cardiovascular side-effects.

It subsequently emerged that the VIGOR article ‘did not accurately represent the safety data available to the authors when the article was being reviewed for publication’ (43, 44). These data showed that there were 47 confirmed serious thromboembolic events in the patients given rofecoxib and 20 in those given naproxen — so wiping out the gastrointestinal benefits from rofecoxib. There were also three extra cases of myocardial infarction in the patients on rofecoxib that were not declared. If all of these data had been included in the original report then the interpretation that naproxen was protective rather than rofecoxib harmful would have been much less convincing.

The New England Journal of Medicine published an expression of concern in December 2005 and then reaffirmed it in March 2006 after giving the authors a chance to explain themselves (43, 44). But is the New England Journal of Medicine blameless in all this? It published the expression of concern at the end of 2005 because the problems with the study had emerged as evidence was gathered for a court case against Merck brought by patients who allege that they have been damaged by rofecoxib. The lawyers discovered that changes had been made in the submitted manuscript. The full data were, however, given to the Food and Drug Administration (FDA) at about the same time that the article was published — and the data were on the FDA website. Shouldn’t the journal have picked up on these data and published a correction? Even if they didn’t at the time, shouldn’t they have done so as doubts began to be published about the safety of rofecoxib? And wasn’t it poor practice that only percentages of cardiovascular side-effects were given in the original report? Could it be that the editorial standards of the journal were conflicted in some way by the huge profits made from reprints of the original article?

In all my 25 years at the BMJ we were not involved in such dramatic happenings as the three cases I’ve described from the Lancet, the one from JAMA and the one from the New England Journal of Medicine. These are all cases where there has been worldwide impact from the studies accompanied by questions about the ethical behaviour of the journals. Neither the Kaiser study nor the cheating editorial had the same impact, but I want to tell just a few further stories from the BMJ to build my case that the wider world should be interested in the ethical behaviour of journals.

I thought this a few years ago as I spent most of a day traipsing from one television studio to another giving an account of a paper we had published on female sexual dysfunction (3). This was an unusual paper in that it was not research by scientists but rather research by an investigative journalist, Ray Moynihan. He argued in his paper that drug companies were playing a central part in defining sexual problems in women as a ‘disease’ with the implication that they might best be treated with drugs. The companies were, argued Moynihan, ‘disease mongering’. I found myself in some of my many interviews suggesting that ‘because drug companies were having problems creating new drugs they were turning their hands to creating new diseases’.

This story was covered by media right across the world and there can be little doubt that it got such wide coverage because it was published by ‘a prestigious medical journal’ (as journalists love to call journals when they want added weight for their stories). But should journals be publishing papers that use the methods of investigative journalism rather than the methods of epidemiology or molecular biology? Most editors of medical journals would probably answer no and certainly most journals don’t publish such pieces (although interestingly both Nature and Science, the two leading science journals, do — and it is these parts of their journals that are the best read, because everybody can understand them).

The paper on female sexual dysfunction was interesting in that it potentially affected the lives of most adults. Sex is a complex process and often unsatisfactory. Moynihan’s article described studies (funded by the drug industry and the main one published in JAMA (45)) that suggested that almost half of American women might have female sexual dysfunction, but they might be defined as dysfunctional if they sometimes didn’t want sex, didn’t enjoy it or found it uncomfortable. Should this be thought of as a disease? Should women seek treatment if they don’t want sex? Some of those who joined the debate said ‘Why not?’ Develop a female equivalent of Viagra and women can benefit from it as men have benefited from Viagra. Others drew analogies between sex and dancing: you need medical help to improve your dancing if you break your leg, but otherwise doctors and drugs have nothing to offer to improve your dancing. There were feminists on both sides of the argument.

The debate will rage on, but this is an example of medical journals — and drug companies — intruding into the lives of many. The BMJ did something similar when we ran an exercise on our website to identify ‘non-diseases’ (46). Over a hundred conditions were suggested and we then asked readers to vote for their top non-diseases. This caused outrage among some who argued that it was a cheap publicity stunt that mocked the suffering of many. I responded by arguing that it was primarily an exercise to alert people to the fact that diseases are not ‘out there’ like animal species waiting to be discovered but rather medical, social and even sometimes political constructs. We also wanted to point out that having your problem defined as a disease may not be the best way to deal with it.

One group who were particularly upset by this exercise were sufferers from chronic fatigue syndrome or myalgic encephalomyelitis (ME). (Even the name is disputed. Doctors prefer chronic fatigue syndrome and have produced an operational definition of it. Many patients prefer ME. In a spirit of conciliation I’ll now use ME.) They are an interesting group of patients who have in some sense been ‘at war’ with the medical establishment in general and medical journals in particular. They have entered the discourse that goes on in medical journals in a way that not many other patient groups have done. (Another group are those concerned with Munchausen-by-proxy.) Sufferers from ME think that their condition is not taken seriously by most doctors. In particular, they resent the contention of doctors that the problem has a psychological component and is not simply a physical condition (not that any condition, doctors argue, is ‘simply physical’).

Each year one of the ME organizations gives a prize for ‘the worst medical journal’. Usually it is won by the BMJ, sometimes tying with the Lancet. Some of those interested in ME have made complaints about me to various authorities. They saw the BMJ as pushing the line that the problem is psychological, using only advisers whom they despise (many of them psychiatrists), and publishing only research that supports our line. My line was that we don’t have a line. We took the best research that we could get and asked people who were recognized experts to write and review for us. We didn’t publish research because the results pleased us and we didn’t tell any of the experts what to write. Plus anybody could send us electronic letters and we posted all of those that were not obscene, libellous, incomprehensible, wholly unsubstantial or gave information on patients without their written consent. We posted and published on paper many contributions from people who were very critical of what we published on ME.

I am perhaps being disingenuous here. I can see that one view of ME — we might call it the orthodox view — did dominate in the BMJ and most (perhaps all) other major journals. The BMJ is the establishment. It favours particular methods. It has strong views that some sorts of evidence — for example, well done randomized trials — are superior to other sorts of evidence — for example, case reports. I often said — and repeat in this book — that the ‘BMJ is not in the truth business but the debate business’. I favour a postmodern view of the world, where there are many truths not one, but we didn’t practise that view consistently with the BMJ. Almost anything might go in electronic letters, but anything did not go in the main body of the journal.

My next example is strange. I read in the Independent newspaper of 10 June 2000 the headline ‘BMJ admits “lapses” after article wiped £30m off Scotia shares’ (47). Could a BMJ mistake really have such dramatic consequences? The story began with the BMJ publishing a very short article that suggested that a new anticancer drug that was being tested might cause skin burns in 40% of patients (48). The drug — temoporfin (Foscan) — accumulates in malignant tissues and then is activated by light to destroy the tissue. We published the article to alert readers to this possible side-effect. Medical journals face a difficult problem with such reports, which are usually based on a single case or a small series of patients. Journals know that they don’t get it right every time. Sometimes they publish reports of effects that turn out not to be ‘real’ and sometimes they reject reports that do turn out to be ‘real.’ We do know, however, that case reports in journals are an important means to identify adverse drug effects.

The central medical and scientific question was how common were serious burns. Another important question was whether or not it was the drug itself that had caused the high incidence of burns, the way it was given or some other explanation. But the major question for the shareholders of Scotia, the company that manufactured the drug, was whether or not the drug would get to market and produce a return on their investment.

The medical and scientific questions were not of great interest to general journalists because the drug was not even on the market. This was not a major public health issue. The financial journalists on the newspapers (not regular readers of the BMJ) could, however, be prompted to take an interest and this is what happened. Credit Suisse First Boston, market analysts, put out a release describing the BMJ article and wondering ‘what this report means for the approval, partnering and commercialization of Foscan’. ‘First impression,’ it continued, ‘is that this is highly negative.’ The release also said: ‘We have long been skeptical about the commercial value of Foscan.’ The share price of Scotia fell from 150p to 120p. Its highest price in the previous year was 230p, although it reached 800p in 1996 and dipped below 100p in early 1999. Share prices in biotech companies fluctuate greatly. Crucially the share price needed to reach 340p by March 2002 for a £50m bond issue to convert into shares.

Scotia responded by pointing out that the burns seemed to be much more common in this series of patients than in other series (49). It questioned how the drug had been given and threatened to sue the authors. The story was covered on the financial pages in seven newspapers.

The BMJ had not been blameless in all this. We failed to require the authors to include in the article the manufacturer’s data on the frequency of the side-effect. We did, however, post the information on our website the day after publication and we published letters from the company within a few days. Another problem was that the article did not include any statement on conflicts of interest from the authors. We failed to send them our standard form. The company tried hard to suggest that it was our ‘lapses’ that caused the problem rather than the fact that the drug had been associated with so many severe burns. After this episode the drug was denied a licence in both Europe and the United States, but later both jurisdictions did grant a licence.

This was in retrospect a storm in a teacup, but it generated great excitement at the time and at least suggested that medical journals could be so powerful as to wipe £30m off a company’s value with 150 words — that is, £200,000 a word. There are other examples of where publications in journals have had very dramatic effects on share prices, leading some to suggest that studies that might affect share prices, which are almost always to do with drugs, should be published in the Stock Exchange Bulletin rather than in a medical journal.

After I returned from Venice and before I left the BMJ I was embroiled in two of the biggest controversies of my time at the BMJ. One links to the previous story as it concerns an obituary of David Horrobin, the founder of Scotia (50). The obituary said: ‘The products [of Scotia] contained evening primrose oil, which may go down in history as the remedy for which there is no disease, and David Horrobin, Scotia’s former chief executive, may prove to be the greatest snake oil salesman of his age.’ It continued: ‘He often wrote about ethics, but his — or his company’s — research ethics were considered dubious.’ The obituary also made the point that Horrobin was unusually clever, charming, creative and charismatic, but Horrobin’s many friends were appalled. We received more than a hundred electronic letters condemning the obituary, and a complaint was made to the Press Complaints Commission. The complaint got as far as an adjudication, which happens with only about 1.5% of the 2000 complaints made each year. The commission decided that no further action was necessary — partly because we had apologized to the family for the distress we had caused (not for publishing the obituary).

This episode does not support my case that medical journals can have strong effects on people’s lives — because it doesn’t concern public health. But the case does raise important ethical questions. Can it be acceptable to speak ill of the dead? Is it fair to publish a defamatory piece a few days after somebody is dead when a libel action is no longer possible — because under English law you cannot libel the dead? If such a piece is to be published should it include evidence to support its assertions?

My other post-Venetian controversy was more directly relevant to this chapter. The BMJ published a study suggesting that passive smoking did not kill (51). The results did not please the antismoking lobby, of which the BMJ is a part, but the biggest problem with the study was that the authors were connected to the tobacco industry. The BMA, owners of the BMJ, put out a press release condemning the study as ‘flawed’, and the American Cancer Society, which some 40 years ago had started the study that was reported, said the same. Hundreds of rapid responses flooded into the BMJ’s website, most of them furious that the BMJ had published the study but some delighted that the BMJ had not bowed to ‘political correctness’.

The study reported on what had happened to more than 100,000 adults from California who were first studied in 1959. It found that those people who did not smoke but were married to partners who did were not more likely to die of coronary artery disease, lung cancer or chronic obstructive lung disease (all diseases caused by smoking) than those who were married to partners who did not. The main flaw in the study, according to critics, was that most of the population smoked most of the time in the late 1950s and the early 1960s, meaning that almost everybody was exposed to lots of smoke. But strengths of the study are its large size, its long follow up, and the fact that the outcome was death. The weakness of the study may also in some ways be a strength. A worry about studies conducted after the link between smoking and disease became fully apparent — in the early 1960s — is that the middle classes stopped smoking in large numbers. So people who lived with smokers were likely to be poorer, less well educated and of lower social class — possibly explaining their excess mortality compared with people living with others who did not smoke.

I must not be too defensive here — because it is certainly conventional wisdom, supported by many studies (many of them published in the BMJ), that passive smoking increases premature mortality by about one-third. Nevertheless, the editors and reviewers who reviewed the controversial paper thought that it was asking a question that wasn’t completely resolved. The study then made it through our peer review system. And we had already decided that we would publish studies linked with the tobacco industry — so there was no reason not to publish (52).

Some American journals have adopted policies of refusing to publish studies funded by the tobacco industry. Their argument is that the industry is thoroughly untrustworthy and deliberately tries to obfuscate the scientific record. Furthermore, publication in an academic journal gives the mendacious industry respectability that is undeserved. We decided that it would be going too far to assume that every study funded by the industry was a lie, and that it would be antiscientific to suppress systematically one source of research. We may be wrong and may have damaged public health by publishing the California study.

I could tell many more stories, but I hope that even the most sceptical reader will be convinced that medical journals can have extensive effects on ordinary people and that it is worth paying attention to their ethical behaviour.


References

  1. Wakefield AJ, Murch SH, Anthony A, Linnell J et al. Ileal-lymphoid-nodular hyperplasia, non-specific colitis and pervasive developmental disorder in children. Lancet 1998;351:637-41.
  2. Laumann E, Paik A, Rosen R. Sexual dysfunction in the United States: prevalence and predictors. JAMA 1999;281:537-44 (published erratum appears in JAMA 1999;281:1174).
  3. Moynihan R. The making of a disease: female sexual dysfunction. BMJ 2003;326:45-7.
  4. Jones AH, McLellan F. Ethical issues in biomedical publication. Baltimore: Johns Hopkins University Press, 2000.
  5. Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical epidemiology: a basic science for clinical medicine. London: Little, Brown, 1991.
  6. Haynes RB. Where’s the meat in clinical journals? ACP Journal Club 1993;119:A23-4.
  7. Altman DG. The scandal of poor medical research. BMJ 1994;308:283-4.
  8. Shaughnessy AF, Slawson DC, Bennett JH. Becoming an information master: a guidebook to the medical information jungle. J Fam Pract 1994;39:489-99.
  9. Bartrip P. Mirror of medicine: a history of the BMJ. Oxford: British Medical Journal and Oxford University Press, 1990.
  10. Chen RT, DeStefano F. Vaccine adverse events: causal or coincidental? Lancet 1998;351:611-12.
  11. Pobel D, Viel JF. Case-control study of leukaemia among young people near La Hague nuclear reprocessing plant: the environmental hypothesis revisited. BMJ 1997;314:101.
  12. Horton R. A statement by the editors of the Lancet. Lancet 2004;363:820-1.
  13. Murch SH, Anthony A, Casson DH et al. Retraction of an interpretation. Lancet 2004;363:750.
  14. Smith R. The discomfort of patient power. BMJ 2002;324:497-8.
  15. Antithrombotic Trialists’ Collaboration. Collaborative meta-analysis of randomised trials of antiplatelet therapy for prevention of death, myocardial infarction and stroke in high risk patients. BMJ 2002;324:71-86.
  16. Cleland JGF. For debate: Preventing atherosclerotic events with aspirin. BMJ 2002;324:103-5.
  17. Bagenal FS, Easton DF, Harris E et al. Survival of patients with breast cancer attending Bristol Cancer Help Centre. Lancet 1990;336:606-10.
  18. Fox R. Quoted in: Smith R. Charity Commission censures British cancer charities. BMJ 1994;308:155-6.
  19. Richards T. Death from complementary medicine. BMJ 1990;301:510.
  20. Goodare H. The scandal of poor medical research: sloppy use of literature often to blame. BMJ 1994;308:593.
  21. Bodmer W. Bristol Cancer Help Centre. Lancet 1990;336:1188.
  22. Budd JM, Sievert ME, Schultz TR. Phenomena of retraction. Reasons for retraction and citations to the publications. JAMA 1998;280:296-7.
  23. McVie G. Quoted in: Smith R. Charity Commission censures British cancer charities. BMJ 1994;308:155-6.
  24. Smith R. Charity Commission censures British cancer charities. BMJ 1994;308:155-6.
  25. Feachem RGA, Sekhri NK, White KL. Getting more for their dollar: a comparison of the NHS with California’s Kaiser Permanente. BMJ 2002;324:135-41.
  26. Himmelstein DU, Woolhandler S, David OS et al. Getting more for their dollar: Kaiser v the NHS. BMJ 2002;324:1332.
  27. Talbot-Smith A, Gnani S, Pollock A, Pereira Gray D. Questioning the claims from Kaiser. Br J Gen Pract 2004;54:415-21.
  28. Ham C, York N, Sutch S, Shaw A. Hospital bed utilisation in the NHS, Kaiser Permanente, and the US Medicare programme: analysis of routine data. BMJ 2003;327:1257-61.
  29. Sanders SA, Reinisch JM. Would you say you ‘had sex’ if…? JAMA 1999;281:275-7.
  30. Anonymous. It’s over, Debbie. JAMA 1988;259:272.
  31. Lundberg G. ‘It’s over, Debbie,’ and the euthanasia debate. JAMA 1988;259:2142-3.
  32. Smith R. Euthanasia: time for a royal commission. BMJ 1992;305:728-9.
  33. Doyal L, Doyal L. Why active euthanasia and physician assisted suicide should be legalised. BMJ 2001;323:1079-80.
  34. Emanuel EJ. Euthanasia: where The Netherlands leads will the world follow? BMJ 2001;322:1376-7.
  35. Angell M. The Supreme Court and physician-assisted suicide — the ultimate right. N Engl J Med 1997;336:50-3.
  36. Marshall VM. It’s almost over — more letters on Debbie. JAMA 1988;260:787.
  37. Smith R. Cheating at medical school. BMJ 2000;321:398.
  38. Davies S. Cheating at medical school. Summary of rapid responses. BMJ 2001;322:299.
  39. Ewen SWB, Pusztai A. Effect of diets containing genetically modified potatoes expressing Galanthus nivalis lectin on rat small intestine. Lancet 1999;354:1353-4.
  40. Horton R. Genetically modified foods: ‘absurd’ concern or welcome dialogue? Lancet 1999;354:1314-15.
  41. Kuiper HA, Noteborn HPJM, Peijnenburg AACM. Adequacy of methods for testing the safety of genetically modified foods. Lancet 1999;354:1315.
  42. Bombardier C, Laine L, Reicin A et al. Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis. N Engl J Med 2000;343:1520-8.
  43. Curfman GD, Morrissey S, Drazen JM. Expression of concern: Bombardier et al., ‘Comparison of Upper Gastrointestinal Toxicity of Rofecoxib and Naproxen in Patients with Rheumatoid Arthritis.’ N Engl J Med 2000;343:1520-8. N Engl J Med 2005;353:2813-4.
  44. Curfman GD, Morrissey S, Drazen JM. Expression of concern reaffirmed. N Engl J Med 2006;354:1193.
  45. Laumann E, Paik A, Rosen R. Sexual dysfunction in the United States: prevalence and predictors. JAMA 1999;281:537-44 (published erratum appears in JAMA 1999;281:1174).
  46. Smith R. In search of ‘non-disease.’ BMJ 2002;324:883-5.
  47. Hughes C. BMJ admits ‘lapses’ after article wiped £30m off Scotia shares. Independent 10 June 2000.
  48. Hettiaratchy S, Clarke J, Taubel J, Besa C. Burns after photodynamic therapy. BMJ 2000;320:1245.
  49. Bryce A. Burns after photodynamic therapy. Drug point gives misleading impression of incidence of burns with temoporfin (Foscan). BMJ 2000;320:1731.
  50. Richmond C. David Horrobin. BMJ 2003;326:885.
  51. Enstrom JE, Kabat GC. Environmental tobacco smoke and tobacco related mortality in a prospective study of Californians, 1960-98. BMJ 2003;326:1057-60.
  52. Roberts J, Smith R. Publishing research supported by the tobacco industry. BMJ 1996;312:133-4.
