Publishing Too Much and Nothing: Serious Problems, Not Just Nuisances

By Richard Smith, former editor of the BMJ | June 30, 2020

The following excerpt is reprinted with permission from The Trouble With Medical Journals (Taylor & Francis, 2006).

‘Redundant publication’ means republishing material that is closely related to material already published. There are many circumstances in which this is perfectly acceptable if the connections between the papers are made explicit. Often, however, researchers repeatedly publish closely related papers without making clear the connections. This might seem to be simply impolite and not a serious problem. Indeed, that’s how most academics view redundant publication, but I want to try and convince you that it’s an important problem, because it introduces a bias into medical evidence. The result may be that treatments seem more effective than they are, misleading doctors and patients. Bias also arises from failing to publish studies altogether, another common problem.

Academics, as I keep repeating, gain credit from publishing. The credit comes as much — and possibly more — from the quantity than from the quality of publication. There is thus a strong incentive to slice up studies into the smallest possible unit in order to maximize the number of publications that might result from any piece of research. This is known as ‘salami’ publication. You may also benefit from publishing the same material repeatedly.

You publish a case report of a patient with a new condition. Then you publish a series of three patients followed by another paper describing 20 patients. Next you might make a comparison with another group of patients or give an account of how the condition affects the kidneys, then the heart. Perhaps you participate in an international study of the patients. The Australian results are published in an Australian journal, the Brazilian results in a Brazilian journal, and so on for 15 countries. You, as a world authority and discoverer of the condition, are an author on all of the papers. You review all your studies, sometimes on your own initiative, sometimes at the invitation of editors. Very soon you can have dozens of publications, a professorship, an international reputation and invitations to international conferences in exotic places. (‘A successful scientist,’ a longstanding joke goes, ‘is one who converts data into airmiles.’) At each conference you give closely related papers that are published in supplements to journals.

When somebody attempts a systematic review of your work he or she will become very confused. Are the same patients being described repeatedly? What is new material and what is old material being recycled? The confusion is particularly profound if the author has obscured the links. The reviewer finds what he or she thinks is a new study but takes time to realize that it’s the same as a study reported in another journal. Worse, the studies are often not by one person but by several. Authors are trading authorship. Sometimes the same studies have the same authors, but often the authors appear in different combinations and different orders.

Medical evidence is in this way polluted. As I hope I’ve made clear in chapter 6, medical studies are hard to do and to interpret. Making sense of medical evidence — or as it’s pretentiously called, the medical literature — is hard if everything is optimal, but this pollution makes the job much harder.

I am perhaps being too cynical. A minority of researchers are wickedly and consciously inflating their publications through deception, but it requires both concentration and integrity to minimize rather than maximize the number of papers you publish from a given body of work. There is the ever-present pressure — from your head of department, the university, your own insecurity — to publish more, and there are what sound like good reasons for publishing more rather than fewer papers. Too much material in one paper will make it indigestible for readers. You need to reach different audiences with papers with different emphases in different journals. It would be impolite to your charming host at that conference in Rome to refuse to write a paper. Your lecture in Beijing had enough new material to merit a paper, and the new material would not make sense if you didn’t include a considerable amount of the material you had already published.

Another group with an interest in material being republished is the pharmaceutical industry. Research and marketing have become intertwined, as I will discuss further in chapter 16, and the publication of a trial favourable to a company’s product in a major journal is worth hundreds of thousands of dollars spent on advertising. Furthermore, big trials cost millions of dollars to perform. Companies would thus like to see as many publications as possible coming out of trials. Many trials are conducted in several countries at once. As well as publishing the overall results in an international journal it might seem reasonable to publish the German results in a German journal, the French in a French journal, and so on. The German results will have an author who is an acknowledged leader in Germany, prompting German doctors to pay close attention to and trust the work. Sometimes the German and French results will be combined. Sometimes there will be yet another combination. It might so happen that emphasis will be given to the more favourable results. Favourable comments by leading doctors and researchers are invaluable, and they thus receive many invitations to speak at conferences, some of them huge, which are mostly funded by the industry. The industry also funds supplements to journals to report material presented at the conferences. These reports commonly include recycled material.

Editors and journals tend to see themselves on the side of the angels in this vexed issue of redundant publication, but that is to oversimplify. Although some journals are overwhelmed with material, many are not — and are by no means unwilling to publish material closely related to material already published. Many journals are keen to have the ‘top experts’ write for them and don’t worry too much if the ‘top expert’s’ paper is not much different from the 50 he has already published. Supplements can be an important source of revenue and profit to journals and commonly contain recycled studies. The BMJ when I was editor was involved in republishing material. We had local editions of the BMJ in many countries and regions, including the United States, China and South Asia, and these reproduced material published in the weekly BMJ. We always, however, provided a reference to the original publication, and only the original publication is referenced in databases like Medline.

Editors and authors often tussle over whether papers are redundant or not. Editors see redundancy where authors see effective communication. Arguing over the degree of overlap and how much it matters can be fruitless; what is important is transparency. Ideally, when submitting a study authors should at the same time send editors copies of any related material, published and unpublished. In addition, the manuscript should make clear any links to other material, particularly published material. And this should be done not with a reference halfway through the discussion of a paper but with a clear statement at the beginning.

With such openness editors cannot accuse authors of misconduct. They may, however, decline to publish the paper submitted.

Often, however, authors are not open, and various studies have suggested that something like one-fifth of medical studies are redundant (177-180). In other words, redundant publication is common. It is the form of misconduct most commonly seen by the Committee on Publication Ethics. But does it matter? The world has tended to see it as sloppiness, a minor misdemeanour, but might it be more?

A group from Oxford have provided the most compelling evidence on how redundant publication can be misleading and potentially dangerous. They conducted a systematic review of the effectiveness of a drug called ondansetron in reducing the sickness that patients commonly experience after an operation (178). The group found 84 trials that included information on 11,980 patients. But when they looked closely they found that there were in reality only 70 trials and 8,645 patients. In other words, 17% of the trial reports were duplicates and 28% of the patient data had been duplicated. The Oxford group confirmed this duplication of results by going back to the original authors. The published papers did not make clear that trials had been published more than once. The duplication was covert.
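The duplication figures quoted above can be checked with a few lines of arithmetic. This is only a worked illustration using the numbers reported in the text, not the Oxford group's own analysis:

```python
# Figures quoted in the text for the ondansetron review:
reported_trials, reported_patients = 84, 11_980   # what the literature appeared to contain
true_trials, true_patients = 70, 8_645            # what actually existed after de-duplication

duplicate_trials = reported_trials - true_trials            # 14 duplicate reports
duplicated_patients = reported_patients - true_patients     # 3,335 patients counted twice

# Shares, expressed against the published (inflated) totals:
share_trials_duplicated = duplicate_trials / reported_trials
share_patient_data_duplicated = duplicated_patients / reported_patients

print(f"{share_trials_duplicated:.0%} of trial reports were duplicates")        # 17%
print(f"{share_patient_data_duplicated:.0%} of the patient data was duplicated")  # 28%
```

Note that both percentages are taken against the published totals; measured against the true totals the patient inflation would look even larger (about 39%).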

The reviewers had to work hard to spot the duplication. One trial had been conducted in several centres, which is very common for trials of drugs. The results from the overall trial had been published, but then researchers from four centres had published their part of the trial. These papers all had different authors, adding to the difficulty of spotting that they were the same trial. It’s easy to understand the temptation of both the authors and the pharmaceutical company to publish in this way. The authors get a publication to themselves, and the companies have the benefits of their drug publicized four times — in different journals and probably different countries.

In addition, four pairs of identical trials, the Oxford group found, were published by completely different authors without any common authorship. This has to be misconduct. I discussed in chapter 9 how authorship carries both credit and accountability. Readers and editors need to know who did the work. Clearly in these reports important information is missing. The work reported is the same but the authorship is completely different. This is deception.

Equally worrying was the Oxford group’s finding that some duplicate reports gave different numbers of patients or different patient characteristics from the original. In one trial the sex distribution differed between the two reports. We will all be inclined to think that this is sloppiness not misconduct, but such discrepancies are always worrying.

Thus far this study from Oxford had confirmed what we already knew — that redundant publication is common, that the redundant studies often don’t refer to each other, and that often the authorship is different. The group then went on to look at which studies were most likely to be duplicated, and — perhaps you’ve guessed already — it was the ones with the most positive results. The group presented the results as ‘number needed to treat’ — which means the number of patients you needed to treat in order to stop one patient from vomiting. (Clearly the lower the number the more effective the treatment. This is a measure that seems to be useful for both doctors and patients. Often dozens of patients need to be treated in order to prevent one death, heart attack, or whatever.) The number needed to treat for the trials that were not duplicated was 9.5, while it was 3.9 for the duplicated trials. In other words, the duplicated trials suggested that the drug was more than twice as effective. If all the trials were combined without duplication the number needed to treat was 6.4, whereas if reviewers had combined all the trials without spotting the duplication the apparent number needed to treat improved to 4.9. The effectiveness of the treatment was overestimated by one-quarter.
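The number-needed-to-treat arithmetic can be sketched in a few lines. The event rates in the first example are hypothetical; the NNTs of 6.4 and 4.9 are the ones quoted above:

```python
# NNT = 1 / absolute risk reduction (ARR), where ARR is the drop in the
# event rate (here, postoperative vomiting) from control to treatment.

def nnt(control_event_rate: float, treatment_event_rate: float) -> float:
    """Patients you must treat to prevent one extra adverse event."""
    arr = control_event_rate - treatment_event_rate
    if arr <= 0:
        raise ValueError("treatment shows no benefit over control")
    return 1.0 / arr

# Hypothetical example: vomiting falls from 60% to 40% of patients.
print(nnt(0.60, 0.40))  # 5.0 -- treat five patients to spare one

# The review's quoted figures: the duplicated literature made the drug
# look better than it was.
nnt_true, nnt_with_duplicates = 6.4, 4.9
overestimate = (nnt_true - nnt_with_duplicates) / nnt_true
print(f"apparent NNT reduced by {overestimate:.0%}")  # ~23%, roughly a quarter
```

A lower NNT reads as a more effective drug, so shrinking the apparent NNT from 6.4 to 4.9 is the "overestimated by one-quarter" in the text.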

Authors of systematic reviews — which, as I’ve discussed, are widely regarded as the best evidence on which to base decisions about treating patients — do not usually go to the lengths of the Oxford group to exclude results that are published more than once. It was a major undertaking to do so, made much more complicated by the lack of cross references and the change in authorship. Systematic reviews may thus be routinely misleading patients and doctors because of redundant publication, which is why redundant publication is serious.

Perversely, I believe, some people see this as a problem not of redundant publication but of systematic reviews. It’s important to recognize, however, that systematic reviews simply do systematically what reviewers and doctors do more haphazardly — synthesize evidence. The problem lies not with the systematic review but with the underlying evidence. The Oxford group illustrated this in its review by showing how various experts and a textbook had cited duplicate publications without recognizing that they were duplicates.

How can we respond to the problem of redundant publication? Perhaps the first response should be to cease regarding it as simply impoliteness. It is more — and may be very much more. Prevention must be next, producing and promoting codes of good practice — as, for example, the Committee on Publication Ethics (COPE) has done. Authors should be actively encouraged to send editors any other papers related to their submissions. This can be done through guidance to authors (which is famously unread) and through specific reminders at the time of submission. Reviewers may sometimes spot duplicate publication, and the best reviewers will do a search for related papers. The electronic world may make it easier to spot redundant publication, although it may also make it easier to publish redundantly — as outlets proliferate and it becomes ever easier to copy and transmit words and data.

What about punishment? Editors, like other groups, divide into hawks and doves. Some see redundant publication as a dreadful sin (sometimes more because it wastes their resources than because it distorts and pollutes medical evidence) and want redundant publishers punished by their employers, publicly shamed, and perhaps banned from submitting to their journal for some period. Certainly if redundant publication is detected after publication then ‘notices of redundant publication’ should appear in both journals — to alert readers of both journals to the redundancy. The figures suggest, however, that such a notice appears for perhaps one in a thousand cases. Redundant publication seems to be like speeding, so common a ‘crime’ as to be normal. Nevertheless, when I was at the BMJ if we identified a case we tended to bring it to the attention of heads of departments, deans or employers — largely to raise consciousness of the problem and its consequences.

I doubt, however, that we will make much progress while redundant publication is seen as a trivial issue. We will probably make even slower progress with a common sin of omission — simply not writing up and publishing studies. Iain Chalmers, one of the founders of the Cochrane Collaboration, has argued this is a form of misconduct, and slowly but surely he is being taken seriously (181). Again the problem is that medical evidence is biased, because ‘negative studies’ (studies that find that an intervention doesn’t work) usually are not published and the evidence is consistently biased towards making treatments seem more effective than they actually are (182, 183).

It is well established that negative studies are less likely to be published, and it’s becoming steadily clearer that this is not so much because journals reject them (as has been commonly supposed) but because authors don’t write them up and submit them. People have looked at research protocols approved by ethics committees, doctoral dissertations and abstracts presented at scientific meetings, and followed them up to see if they resulted in publications. Consistently, negative studies are less likely to be published (182-184).

These studies also show that authors are more likely to write up and submit positive studies. A large study from JAMA has now shown that the journal is just as likely to publish negative as positive studies (185). This was a study in just one journal and only on particular sorts of trials, but it fits with other evidence suggesting that the problem lies more with authors than editors.

Perplexingly, academics want nothing more than to be published. So paradoxically they may not be writing up and submitting studies because they don’t think that they will be able to get them published, even though that seems not to be the case.

Pharmaceutical companies, in contrast, might prefer not to have negative studies published, and some three-quarters of trials published in four of the major general journals (Annals of Internal Medicine, JAMA, Lancet and New England Journal of Medicine) are funded by the industry (186). (Interestingly it’s only one-third of those published in the BMJ.) The industry thus has a chance to be highly influential. I think it unlikely that big companies are actively suppressing negative studies, but they may well be less energetic in encouraging their writing up and submission. Many studies have shown that published papers sponsored by pharmaceutical companies are more likely to be positive than studies they have not sponsored (187, 188). This could be because editors are preferentially selecting positive papers by pharmaceutical companies, but this seems highly unlikely.

As many as one-half of trials reported in summary form are never published in full, and the bias introduced into medical evidence may be huge. Nobody can know the extent of this bias, but there are stories. Iain Chalmers quotes the work of RJ Simes, who found that published trials showed that a combination of drugs was better than a single drug for treating patients with advanced ovarian cancer. If unpublished trials were included then the combination was no longer better (181).

Chalmers also tells how failure to publish a study on how best to look after women about to give birth to twins led to an unnecessary delay in moving to the best management (181). Obstetricians were split 50:50 on whether these mothers should be routinely admitted to rest in bed before delivery. A study conducted in Zimbabwe in 1977 showed that the practice actually led to a worse outcome for mothers and babies. But the study wasn’t published until visitors to Zimbabwe learnt about it years later. Once published in the Lancet the study helped to lead to a change in policy across the world (189).

One response to this failing of people to publish is to raise awareness of the problem, and this has happened to some extent. I used to think of publication bias as a small, almost technical problem, but I’ve increasingly come to think of it as a serious problem — although I’m not sure exactly how serious.

In the late 1990s around 100 journals joined together to publicize an amnesty for unpublished trials (190). We urged people who had conducted trials and never published them to register that the trial had been conducted. Anybody doing a systematic review on the subject could then contact the authors for data. Many dozens of trials were registered, but they must constitute only a tiny fraction of all unpublished trials. This was more a publicity stunt to raise awareness of the problem than a solution to it.

A much more serious response is the creation of registers of trials underway. The hope is that eventually every trial that begins anywhere will be registered. There are now many registers and a register of registers. American law requires the registration of trials, and the International Committee of Medical Journal Editors now requires trials submitted to journals that follow its guidance to include a registration number (191).

These registers should allow the identification of trials that had been started but never published — and so counteract publication bias. They will also make it easier for doctors to encourage patients with problems where the best treatment is not clear to enter trials.

Those who conduct systematic reviews are well aware of the problem of publication bias, and various statistical techniques have been developed to try and identify missing studies. To identify what is not there is clearly a difficult problem, and no technique can be foolproof. It is even more difficult to try and adjust results to compensate for the missing evidence. Many would say it can’t really be done at all.
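One family of such techniques rests on the "funnel plot" intuition: absent bias, small (imprecise) studies should scatter symmetrically around the pooled effect, so a systematic association between effect size and imprecision is a warning sign of missing negative studies. The sketch below, with made-up data, shows the idea in its crudest form (real tests, such as Egger's regression, are more careful):

```python
# Crude funnel-plot asymmetry check: correlate each trial's effect size
# with its standard error. A strong positive correlation suggests that
# small studies with small (or negative) effects may be missing from
# the published record. The trial data here are entirely hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical published trials: (effect size, standard error).
effects  = [0.10, 0.15, 0.30, 0.45, 0.60]
std_errs = [0.05, 0.08, 0.15, 0.25, 0.35]

r = pearson(effects, std_errs)
print(f"effect size vs standard error correlation: {r:.2f}")
# Here the smallest, least precise trials report the biggest effects,
# which is suggestive of publication bias -- suggestive only, never proof.
```

As the surrounding text says, no such technique is foolproof: it can only flag a suspicious pattern in what was published, not recover what was never written up.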

I hope that I have convinced readers that publishing studies more than once and failing to publish are both potentially serious problems, not simply minor misdemeanours. The combination of the two — with positive results being published more than once and negative studies not being published at all — may be particularly dangerous. The result can be patients being given toxic and expensive treatments that are thought to work but which in reality don’t.

More From The Trouble With Medical Journals


References

  1. Wakefield AJ, Murch SH, Linnell AAJ et al. Ileal-lymphoid-nodular hyperplasia, non-specific colitis and pervasive developmental disorder in children. Lancet 1998;351:637-41.
  2. Laumann E, Paik A, Rosen R. Sexual dysfunction in the United States: prevalence and predictors. JAMA 1999;281:537-44 (published erratum appears in JAMA 1999;281:1174).
  3. Moynihan R. The making of a disease: female sexual dysfunction. BMJ 2003;326:45-7.
  4. Hudson A, McLellan F. Ethical issues in biomedical publication. Baltimore: Johns Hopkins University Press, 2000.
  5. Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical epidemiology: a basic science for clinical medicine. London: Little, Brown, 1991.
  6. Haynes RB. Where’s the meat in clinical journals? ACP Journal Club 1993;119:A23-4.
  7. Altman DG. The scandal of poor medical research. BMJ 1994;308:283-4.
  8. Shaughnessy AF, Slawson DC, Bennett JH. Becoming an information master: a guidebook to the medical information jungle. J Fam Pract 1994;39:489-99.
  9. Bartrip P. Mirror of medicine: a history of the BMJ. Oxford: British Medical Journal and Oxford University Press, 1990.
  10. Chen RT, DeStefano F. Vaccine adverse events: causal or coincidental? Lancet 1998;351:611-12.
  11. Pobel D, Viel JF. Case-control study of leukaemia among young people near La Hague nuclear reprocessing plant: the environmental hypothesis revisited. BMJ 1997;314:101.
  12. Horton R. A statement by the editors of the Lancet. Lancet 2004;363:820-1.
  13. Murch SH, Anthony A, Casson DH et al. Retraction of an interpretation. Lancet 2004;363:750.
  14. Smith R. The discomfort of patient power. BMJ 2002;324:497-8.
  15. Antithrombotic Trialists’ Collaboration. Collaborative meta-analysis of randomised trials of antiplatelet therapy for prevention of death, myocardial infarction and stroke in high risk patients. BMJ 2002;324:71-86.
  16. Cleland JGF. For debate: Preventing atherosclerotic events with aspirin. BMJ 2002;324:103-5.
  17. Bagenal FS, Easton DF, Harris E et al. Survival of patients with breast cancer attending Bristol Cancer Help Centre. Lancet 1990;336:606-10.
  18. Fox R. Quoted in: Smith R. Charity Commission censures British cancer charities. BMJ 1994;308:155-6.
  19. Richards T. Death from complementary medicine. BMJ 1990;301:510.
  20. Goodare H. The scandal of poor medical research: sloppy use of literature often to blame. BMJ 1994;308:593.
  21. Bodmer W. Bristol Cancer Help Centre. Lancet 1990;336:1188.
  22. Budd JM, Sievert ME, Schultz TR. Phenomena of retraction. Reasons for retraction and citations to the publications. JAMA 1998;280:296-7.
  23. McVie G. Quoted in: Smith R. Charity Commission censures British cancer charities. BMJ 1994;308:155-6.
  24. Smith R. Charity Commission censures British cancer charities. BMJ 1994;308:155-6.
  25. Feachem RGA, Sekhri NK, White KL. Getting more for their dollar: a comparison of the NHS with California’s Kaiser Permanente. BMJ 2002;324:135-41.
  26. Himmelstein DU, Woolhandler S, David DS et al. Getting more for their dollar: Kaiser v the NHS. BMJ 2002;324:1332.
  27. Talbot-Smith A, Gnani S, Pollock A, Pereira Gray D. Questioning the claims from Kaiser. Br J Gen Pract 2004;54:415-21.
  28. Ham C, York N, Sutch S, Shaw R. Hospital bed utilisation in the NHS, Kaiser Permanente, and the US Medicare programme: analysis of routine data. BMJ 2003;327:1257-61.
  29. Sanders SA, Reinisch JM. Would you say you ‘had sex’ If…? JAMA 1999;281:275-7.
  30. Anonymous. lfs over, Debbie. JAMA 1988;259:272.
  31. Lundberg G. ‘lfs over, Debbie,’ and the euthanasia debate. JAMA 1988;259:2142-3.
  32. Smith R. Euthanasia: time for a royal commission. BMJ 1992;305:728-9.
  33. Doyal L, Doyal L. Why active euthanasia and physician assisted suicide should be legalised. BMJ 2001;323:1079-80.
  34. Emanuel EJ. Euthanasia: where The Netherlands leads will the world follow? BMJ 2001;322:1376-7.
  35. Angell M. The Supreme Court and physician-assisted suicide-the ultimate right N Eng J Med 1997;336:50-3.
  36. Marshall VM. lfs almost over — more letters on Debbie. JAMA 1988;260:787.
  37. Smith R. Cheating at medical school. BMJ 2000;321:398.
  38. Davies S. Cheating at medical school. Summary of rapid responses. BMJ 2001;322:299.
  39. Ewen SWB, Pusztai A. Effects of diets containing genetically modified potatoes expressing Galanthus nivalis lactin on rat small intestine. Lancet 1999;354:1353-4.
  40. Horton R. Genetically modified foods: ‘absurd’ concern or welcome dialogue? Lancet 1999;354:1314-15.
  41. Kuiper HA, Noteborn HPJM, Peijnenburg AACM. Adequacy of methods for testing the safety of genetically modified foods. Lancet 1999;354:1315.
  42. Bombardier C, Laine L, Reicin A et al. Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis. N Eng J Med 2000;343:1520-8.
  43. Curfman GD, Morrissey S, Drazen JM. Expression of concern: Bombardier et al., ‘Comparison of Upper Gastrointestinal Toxicity of Rofecoxib and Naproxen in Patients with Rheumatoid Arthritis.’ N Eng J Med 2000;343:1520-8. N Eng J Med 2005;353:2813-4.
  44. Curfman GD, Morrissey S, Drazen JM. Expression of concern reaffirmed. N Eng J Med 2006;354: 1193.
  45. Laumann E, Paik A, Rosen R. Sexual dysfunction in the United States: prevalence and predictors. JAMA 1999;281:537-44 (published erratum appears in JAMA 1999;281:1174).
  46. Smith R. In search of ‘non-disease.’ BMJ 2002;324:883-5.
  47. Hughes C. BMJ admits ‘lapses’ after article wiped £30m off Scotia shares. Independent 10 June 2000.
  48. Hettiaratchy S, Clarke J, Taubel J, Besa C. Burns after photodynamic therapy. BMJ 2000;320:1245.
  49. Bryce R. Burns after photodynamic therapy. Drug point gives misleading impression of incidence of burns with temoporfin (Foscan). BMJ 2000;320:1731.
  50. Richmond C. David Horrobin. BMJ 2003;326:885.
  51. Enstrom JE, Kabat GC. Environmental tobacco smoke and tobacco related mortality in a prospective study of Californians, 1960-98. BMJ 2003;326:1057-60.
  52. Roberts J, Smith R. Publishing research supported by the tobacco industry. BMJ 1996;312:133-4.
  53. Lefanu WR. British periodicals of medicine 1640-1899. London: Wellcome Unit for the History of Medicine, 1984.
  54. Squire Sprigge S. The life and times of Thomas Wakley. London: Longmans, 1897.
  55. Bartrip PWJ. Themselves writ large: the BMA 183~1966. London: BMJ Books, 1996.
  56. Delamothe T. How political should a general medical journal be? BMJ 2002;325:1431-2.
  57. Gedalia A. Political motivation of a medical joumal [electronic response to Halileh and Hartling. Israeli-Palestinian conflict]. BMJ 2002. http://bmj.com/cgi/eletters/324173331361#20289 (accessed 10 Dec 2002).
  58. Marchetti P. How political should a general medical journal be? Medical journal is no place for politics. BMJ 2003;326:1431-32.
  59. Roberts I. The second gasoline war and how we can prevent the third. BMJ 2003;326:171.
  60. Roberts IG. How political should a general medical journal be? Medical journals may have had role in justifying war. BMJ 2003;326:820.
  61. Institute of Medicine. Crossing the quality chasm. Anew health system for the 21st century. Washington: National Academy Press, 2001.
  62. Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. Can Med Assoc J 1995;153:1423-31.
  63. Grimshaw JM, Russell IT. Effect of clinical guidelines on medical practice: a systematic review of rigorous evaluations. Lancet 1993;342:1317-22.
  64. Grol R. Beliefs and evidence in changing clinical practice. BMJ 1997;315:418-21.
  65. Smith R. What clinical information do doctors need? BMJ 1996;313:1062-8.
  66. Godlee F, Smith A, Goldman D. Clinical evidence. BMJ 1999;318:1570-1.
  67. Smith R. The BMJ: moving on. BMJ 2002;324:5-6.
  68. Milton J. Aeropagitica. World Wide Web: Amazon Press (digital download), 2003.
  69. Coulter A. The autonomous patient ending paternalism in medical care. London: Stationery Office Books, 2002.
  70. Muir Gray JA. The resourceful patient. Oxford: Rosetta Press, 2001.
  71. World Health Organization. Macroeconomics and health: investing in health for economic development. Report of the commission on macroeconomics and health. Geneva: WHO, 2001.
  72. Mullner M, Groves T. Making research papers in the BMJ more accessible. BMJ 2002;325:456.
  73. Godlee F, Jefferson T, eds. Peer review in health sciences, 2nd edn. London: BMJ Books, 2003.
  74. Reiman AS. Dealing with conflicts of interest. N Eng J Med 1984;310:1182-3.
  75. Hall D. Child protection: lessons from Victoria Climbié. BMJ 2003;326:293-4.
  76. McCombs ME, Shaw DL. The agenda setting function of mass media. Public Opin Q 1972;36:176-87.
  77. McCombs ME, Shaw DL. The evolution of agenda-setting research: twenty five years in the marketplace of ideas. J Commun 1993;43:58-67.
  78. Edelstein L. The Hippocratic oath: text, translation, and interpretation. Baltimore: Johns Hopkins Press, 1943.
  79. www.pbs.org/wgbhlnova/doctors/oath_modem.html (accessed 8 June 2003).
  80. Weatherall DJ. The inhumanity of medicine. BMJ 1994;309:1671-2.
  81. Smith R. Publishing information about patients. BMJ 1995;311:1240-1.
  82. Smith R. Informed consent: edging forwards (and backwards). BMJ 1998;316:949-51 .
  83. Calman K. The profession of medicine. BMJ 1994;309:1140-3.
  84. Smith R. Medicine’s core values. BMJ 1994;309:1247-8.
  85. Smith R. Misconduct in research: editors respond. BMJ 1997;315:201-2.
  86. McCall Smith A, Tonks A, Smith R. An ethics committee for the BMJBMJ 2000;321:720.
  87. Smith R. Medical editor lambasts journals and editors. BMJ 2001;323:651.
  88. Smith R, Rennie D. And now, evidence based editing. BMJ 1995;311:826.
  89. Weeks WB, Wallace AE. Readability of British and American medical prose at the start of the 21st century. BMJ 2002;325:1451-2.
  90. O’Donnell M. Evidence-based illiteracy: time to rescue ‘the literature’. Lancet 2000;355:489-91 .
  91. O’Donnell M. The toxic effect of language on medicine. J R Coli Physicians Lond 1995;29:525-9.
  92. Berwick D, Davidoff F, Hiatt H, Smith R. Refining and implementing the Tavistock principles for everybody in health care. BMJ 2001;323:616-20.
  93. Gaylin W. Faulty diagnosis. Why Clinton’s health-care plan won’t cure what ails us. Harpers 1993;October:57-64.
  94. Davidoff F. Reinecke RD. The 28th Amendment. Ann Intern Med 1999;130:692-4.
  95. Davies S. Obituary for David Horrobin: summary of rapid responses. BMJ 2003;326: 1089.
  96. Butler D. Medical journal under attack as dissenters seize AIDS platform. Nature 2003;426:215.
  97. Smith R. Milton and Galileo would back BMJ on free speech. Nature 2004;427:287.
  98. Carr EH. What is histoty? Harmondsworth: Penguin, 1990.
  99. Popper K. The logic of scientific discovery. London: Routledge, 2002.
  100. Kuhn T. The structure of scientific revolutions. London: Routledge, 1996.
  101. www.guardian.co.uklnewsroomlstory/0,11718,850815,00.html (accessed 14 June 2003).
  102. Davies S, Delamothe T. Revitalising rapid responses. BMJ 2005;330:1284.
  103. Morton V, Torgerson DJ. Effect of regression to the mean on decision making in health care. BMJ 2003;326:1 083-4.
  104. Horton R. Surgical research or comic opera: questions, but few answers. Lancet 1996;347:984-5.
  105. Pitches D, Burls A, Fry-Smith A. How to make a silk purse from a sow’s ear — a comprehensive review of strategies to optimise data for corrupt managers and incompetent clinicians. BMJ 2003;327:1436-9.
  106. Poloniecki J. Half of all doctors are below average. BMJ 1998;316:1734-6.
  107. Writing group for the Women’s Health Initiative Investigators. Risks and benefits of estrogen plus progestin in healthy postmenopausal women. JAMA 2002;288:321-33.
  108. Shumaker SA, Legault C, Thai L et al. Estrogen plus progestin and the incidence of dementia and mild cognitive impairment in postmenopausal women: the Women’s Health Initiative Memory Study: a randomized controlled trial. JAMA 2003;289:2651-62.
  109. Yusuf S, Collins R, Peto R. Why do we need some large, simple randomized trials? Stat Med 1984;3:409-22.
  110. Leibovici L. Effects of remote, retroactive intercessory prayer on outcomes in patients with bloodstream infection: randomised controlled trial. BMJ 2001;323:1450-1.
  111. Haynes RB, McKibbon A, Kanani R. Systematic review of randomised trials of interventions to assist patients to follow prescriptions for medications. Lancet 1996;348:383-6.
  112. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995;273:408-12.
  113. Altman DG, Schulz KF, Moher D et al., for the CONSORT Group. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med 2001;134:663-94.
  114. Moher D, Jones A, Lepage L; CONSORT Group (Consolidated Standards for Reporting of Trials). Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. JAMA 2001;285:1992-5.
  115. Garattini S, Bertele V, Li Bassi L. How can research ethics committees protect patients better? BMJ 2003;326:1199-201.
  116. Sackett DL, Oxman AD. HARLOT plc: an amalgamation of the world’s two oldest professions. BMJ 2003;327:1442-5.
  117. Ioannidis JPA. Why most published research findings are false. PLoS Med 2005;2:e124.
  118. Greenhalgh T. How to read a paper. London: BMJ Books, 1997.
  119. Sterne JAC, Davey Smith G. Sifting the evidence: what’s wrong with significance tests? BMJ 2001;322:226-31.
  120. Le Fanu J. The rise and fall of modern medicine. New York: Little, Brown, 1999.
  121. Lock S. A difficult balance: editorial peer review in medicine. London: Nuffield Provincial Hospitals Trust, 1985.
  122. Rennie D. Guarding the guardians: a conference on editorial peer review. JAMA 1986;256:2391-2.
  123. Martyn C. Slow tracking for BMJ papers. BMJ 2005;331:1551-2.
  124. Hwang WS, Roh SI, Lee BC et al. Patient-specific embryonic stem cells derived from human SCNT blastocysts. Science 2005;308:1777-83.
  125. Normile D, Vogel G, Holden C. Stem cells: cloning researcher says work is flawed but claims results stand. Science 2005;310:1886-7.
  126. Jefferson T, Alderson P, Wager E, Davidoff F. Effects of editorial peer review: a systematic review. JAMA 2002;287:2784-6.
  127. Godlee F, Gale CR, Martyn CN. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA 1998;280:237-40.
  128. Schroter S, Black N, Evans S et al. Effects of training on quality of peer review: randomised controlled trial. BMJ 2004;328:673.
  129. Peters D, Ceci S. Peer-review practices of psychological journals: the fate of submitted articles, submitted again. Behav Brain Sci 1982;5:187-255.
  130. McIntyre N, Popper K. The critical attitude in medicine: the need for a new ethics. BMJ 1983;287:1919-23.
  131. Horton R. Pardonable revisions and protocol reviews. Lancet 1997;349:6.
  132. Rennie D. Misconduct and journal peer review. In: Godlee F, Jefferson T, eds. Peer review in health sciences. London: BMJ Books, 1999.
  133. McNutt RA, Evans AT, Fletcher RH, Fletcher SW. The effects of blinding on the quality of peer review. A randomized trial. JAMA 1990;263:1371-6.
  134. Justice AC, Cho MK, Winker MA, Berlin JA, Rennie D, the PEER investigators. Does masking author identity improve peer review quality: a randomized controlled trial. JAMA 1998;280:240-2.
  135. van Rooyen S, Godlee F, Evans S et al. Effect of blinding and unmasking on the quality of peer review: a randomized trial. JAMA 1998;280:234-7.
  136. Fabiato A. Anonymity of reviewers. Cardiovasc Res 1994;28:1134-9.
  137. Fletcher RH, Fletcher SW, Fox R et al. Anonymity of reviewers. Cardiovasc Res 1994;28:1340-5.
  138. van Rooyen S, Godlee F, Evans S et al. Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial. BMJ 1999;318:23-7.
  139. Lock S. Research misconduct 1974-1990: an imperfect history. In: Lock S, Wells F, Farthing M, eds. Fraud and misconduct in biomedical research, 3rd edn. London: BMJ Books, 2001.
  140. Rennie D, Gunsalus CK. Regulations on scientific misconduct: lessons from the US experience. In: Lock S, Wells F, Farthing M, eds. Fraud and misconduct in biomedical research, 3rd edn. London: BMJ Books, 2001.
  141. Royal College of Obstetricians and Gynaecologists. Report of the independent committee of inquiry into the circumstances surrounding the publication of two articles in the British Journal of Obstetrics and Gynaecology in August 1994. London: RCOG, 1995.
  142. Lock S. Lessons from the Pearce affair: handling scientific fraud. BMJ 1995;310:1547.
  143. Pearce JM, Manyonda IT, Chamberlain GVP. Term delivery after intrauterine relocation of an ectopic pregnancy. Br J Obstet Gynaecol 1994;101:716-17.
  144. Pearce JM, Hamid RI. Randomised controlled trial of the use of human chorionic gonadotrophin in recurrent miscarriage associated with polycystic ovaries. Br J Obstet Gynaecol 1994;101:685-8.
  145. Wilmshurst P. Institutional corruption in medicine. BMJ 2002;325:1232-5.
  146. Smith R. What is research misconduct? In: Nimmo WS, ed. Joint Consensus Conference on Research Misconduct in Biomedical Research. J R Coll Phys Edin 2000;30 (Suppl 7):4-8.
  147. Integrity and misconduct in research. Report of the Commission on Research Integrity to the Secretary of Health and Human Services, the House Committee on Commerce, and the Senate Committee on Labor and Human Resources. 3 November 1995. gopher.faseb.org/opar/cri.html (accessed 10 July 2003).
  148. Office of Science and Technology Policy, Executive office of the President. Federal policy on research misconduct. Federal Register 6 December 2000, pp 76260-4. frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=2000_register&docid=00-30852-filed (accessed 10 July 2003).
  149. Nylenna M, Andersen D, Dahlquist G et al. on behalf of the National Committees on Scientific Dishonesty in the Nordic Countries. Handling of scientific dishonesty in the Nordic countries. Lancet 1999;354:57-61.
  150. Joint Consensus Conference on Misconduct in Biomedical Research. Consensus statement. 28 and 29 October 1999. www.rcpe.ac.uk/esd/consensus/misconduct_99.html (accessed 10 July 2003).
  151. Zuckerman H. Scientific elite: Nobel laureates in the United States. New York: Free Press, 1977.
  152. Rennie SC, Crosby JR. Are ‘tomorrow’s doctors’ honest? Questionnaire study exploring medical students’ attitudes and reported behaviour on academic misconduct. BMJ 2001;322:274-5.
  153. Lock S. Misconduct in medical research: does it exist in Britain? BMJ 1988;297:1531-5.
  154. Smith R. Draft code of conduct for medical editors. BMJ 2003;327:1010.
  155. Stoa-Birketvedt G. Effect of cimetidine suspension on appetite and weight in overweight subjects. BMJ 1993;306:1091-3.
  156. Rasmussen MH, Andersen T, Breum L et al. Cimetidine suspension as adjuvant to energy restricted diet in treating obesity. BMJ 1993;306:1093-6.
  157. Garrow J. Does cimetidine cause weight loss? BMJ 1993;306:1084.
  158. White C. Suspected research fraud: difficulties of getting at the truth. BMJ 2005;331:281-8.
  159. Smith R. Investigating the other studies of a possibly fraudulent author. BMJ 2005;331:288-91.
  160. Chandra RK. Effect of vitamin and trace-element supplementation on cognitive function in elderly subjects. Nutrition 2001;17:709-12.
  161. Chandra RK. Effect of vitamin and trace-element supplementation on immune responses and infection in elderly subjects. Lancet 1992;340:1124-7.
  162. Meguid M. Retraction of: Chandra RK. Nutrition 2001;17:709-12. Nutrition 2005;21:286.
  163. Carpenter RK, Roberts S, Sternberg S. Nutrition and immune function: a 1992 report. Lancet 2003;361:2247.
  164. Shapiro DW, Wenger NS, Shapiro MF. The contributions of authors to multiauthored biomedical research papers. JAMA 1994;271:438-42.
  165. Goodman N. Survey of fulfilment of criteria of authorship in published medical research. BMJ 1994;309:1482.
  166. Flanagin A, Carey LA, Fontanarosa PB et al. Prevalence of articles with honorary authors and ghost authors in peer-reviewed medical journals. JAMA 1998;280:222-4.
  167. Horton R. The signature of responsibility. Lancet 1997;350:5-6.
  168. International Committee of Medical Journal Editors. Uniform requirements for manuscripts submitted to biomedical journals: writing and editing for biomedical publication. www.icmje.org/ (accessed 15 April 2006).
  169. Bhopal R, Rankin J, McColl E et al. The vexed question of authorship: views of researchers in a British medical faculty. BMJ 1997;314:1009.
  170. Wilcox LJ. Authorship. The coin of the realm. The source of complaints. JAMA 1998;280:216-17.
  171. Eysenbach G. Medical students and scientific misconduct: survey among 229 students. www.bmj.com/cgi/eletters/322/7281/274#12443, 3 February 2001.
  172. Rennie D, Yank V, Emanuel L. When authorship fails: a proposal to make contributors accountable. JAMA 1997;278:579-85.
  173. Horton R. The hidden research paper. JAMA 2002;287:2775-8.
  174. MAST-I Group. Randomised controlled trial of streptokinase, aspirin, and combination of both in treatment of acute ischaemic stroke. Lancet 1995;346:1509-14.
  175. Tognoni G, Roncaglioni MC. Dissent: an alternative interpretation of MAST-I. Lancet 1995;346:1515.
  176. Docherty M, Smith R. The case for structuring the discussion of scientific papers. BMJ 1999;318:1224-5.
  177. Gotzsche PC. Multiple publication of reports of drug trials. Eur J Clin Pharmacol 1989;36:429-32.
  178. Waldron T. Is duplicate publishing on the increase? BMJ 1992;304:1029.
  179. Tramer MR, Reynolds DJM, Moore RA, McQuay HJ. Impact of covert duplicate publication on meta-analysis: a case study. BMJ 1997;315:635-40.
  180. Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B. Evidence b(i)ased medicine — selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications. BMJ 2003;326:1171-3.
  181. Chalmers I. Underreporting research is scientific misconduct. JAMA 1990;263:1405-8.
  182. Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA 1990;263:1385-9.
  183. Dickersin K, Min Yi. Publication bias: the problem that won’t go away. Ann N Y Acad Sci 1993;703:135-46; discussion 146-8.
  184. Egger M, Davey Smith G, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997;315:629-34.
  185. Olson CM, Rennie D, Cook D et al. Publication bias in editorial decision making. JAMA 2002;287:2825-8.
  186. Egger M, Bartlett C, Juni P. Are randomised controlled trials in the BMJ different? BMJ 2001;323:1253.
  187. Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ 2003;326:1167-70.
  188. Kjaergard LL, Als-Nielsen B. Association between competing interests and authors’ conclusions: epidemiological study of randomised clinical trials published in the BMJ. BMJ 2002;325:249.
  189. Saunders MC, Dick JS, Brown IM et al. The effects of hospital admission for bed rest on duration of twin pregnancy: a randomised trial. Lancet 1985;ii:793-5.
  190. Smith R, Roberts I. An amnesty for unpublished trials. BMJ 1997;315:622.
  191. De Angelis C, Drazen JM, Frizelle FA et al. Is this clinical trial fully registered? A statement from the International Committee of Medical Journal Editors.
