Prathap Tharyan
Mens Sana Monogr. 2012 Jan-Dec; 10(1): 158–180. doi: 10.4103/0973-1229.91426
PMCID: PMC3353596
Abstract
Scientific research aims to use reliable methods to produce generalizable new knowledge in order to understand the human condition and maximize human potential. The sanctity accorded to scientific research has been violated by numerous instances of research fraud, as well as deceptive and conflicted research that have seriously harmed people, subverted the evidence-base, wasted valuable resources, and undermined public trust. This deception by individuals has been fostered by the unrealistic expectations of society; facilitated by the complicity of institutions and organisations; and sanctioned by the inaction of supposed gate-keepers. Re-defining misconduct as occurring on a continuum from irresponsible to fraudulent is the first step in confronting this inconvenient truth. Implementing and evaluating multiple strategies targeting systems and individuals that promote the responsible conduct of research, rather than merely exposing serious instances of misconduct by individuals, is urgently required to restore faith in the aspirations, integrity, and results of scientific research.
Keywords: Fabrication, Falsification, Plagiarism, Research misconduct, Retraction, Scientific fraud, Scientific misconduct
Introduction
Scientific research in the health sciences is an endeavour that aims to use rigorous, objective, and explicit methods that limit the effects of bias, confounding, and chance in producing reliable and generalisable new knowledge, in order to understand the human condition and maximize human potential. The essence of scientific research, and of scientific discovery, is the search for truth: not mere adherence to methods or theories, but their use alongside serendipitous discoveries and explorations of alternative methods or theories. The scientific method has its roots in philosophy and in attempts to use deductive and inductive reasoning to reject superstitious explanations of natural phenomena. Science has therefore been accorded a sanctity that was previously the privilege of religion and philosophy, and scientists have traditionally been regarded as seekers of the truth.
The success of the scientific endeavour is largely based on trust in the competence and integrity of researchers and of all involved in its production, oversight, and dissemination. Numerous high-profile instances of scientific fraud and deceit in international health research have tarnished the image of scientists as purveyors of the truth, of the scientific method as unassailable, and of the mechanisms of governance of scientific research as impeccable. There is also a growing realization that research (or scientific) misconduct is but one of the ways that research evidence can deceive and lead to poor health outcomes when used to guide healthcare and health policy. The sobering reality is that the current research agenda is tainted by academic and financial conflicts that permeate the very foundations of what is considered research, what research should be done and funded, and how it should be conducted and reported, and by a culture that condones the many ways research is used to deceive (Tharyan, 2011[37]).
The responsibility for this sorry state of affairs lies in: a) the unrealistic expectations of society; b) the ambitions of institutions and individuals; c) the irresponsible behaviour, not only of researchers, but also the institutions that support research, and the supposed mentors of researchers; d) the inaction, or complicity, of funders of research; e) the ineptitude of those that approve and review research and all those involved in research governance; and f) the conflicts of interest of the journals that publish research.
Scientific misconduct that is reported and proven forms but the tip of the proverbial iceberg. This paper attempts to draw lessons from high-profile cases of exposed instances of scientific misconduct as well as from empirical research into misconduct and its impact, in order to inform approaches aimed at purging science of misconduct.
Scientific Truth Under Siege: Lessons Learned
Allegations of research misconduct have involved some of the most revered names in scientific research. Scientific legends accused of falsification include Gregor Mendel, who was accused by the statistician R.A. Fisher of falsifying data from his experiments with peas; Isaac Newton, accused of falsifying data to fit his hypotheses; and Louis Pasteur, accused of claiming a competitor’s vaccine as his own (Hamblin, 1981[14]). Famous scientists proven guilty of fraud include Paul Kammerer, a supporter of Lamarckism, whose claim that midwife toads, induced to copulate in water across many generations, developed nuptial pads as an acquired inherited trait was exposed in 1926, when the pads were found to have been painted with India ink. Ernst Haeckel (a former student of Rudolf Virchow) was found to have doctored research illustrations to support his discredited theory that an embryo retraces its evolutionary path in utero (Hamblin, 1981[14]). The elaborate deceit surrounding the discovery of the Piltdown man in 1912, and its subsequent exposé in 1953, is another instance of scientific fraud that rocked the world (Hamblin, 1981[14]).
There is no statute of limitations on reporting misconduct. Sir Cyril Burt, the respected English educational psychologist, was shown after his death in 1971 to have falsified research data in some of his studies on the heritability of IQ. Correlation coefficients of monozygotic and dizygotic twins’ IQ scores remained identical to the third decimal place across articles, even when new data were added, and two of his supposed collaborators did not exist (Hamblin, 1981[14]).
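The statistical implausibility of such identical coefficients is easy to demonstrate. The short Python simulation below is a minimal illustrative sketch (simulated values, not Burt’s data): re-estimating a correlation after adding new twin pairs virtually always shifts the estimate at the third decimal place.

```python
import numpy as np

# Simulated example (not Burt's data): estimate a twin-IQ-like
# correlation from an initial sample, then re-estimate after adding
# new pairs. The estimate virtually never stays identical to three
# decimal places, which is why Burt's unchanging coefficients
# (e.g., 0.771 across growing samples) were statistically implausible.
rng = np.random.default_rng(0)
rho, n_initial, n_added = 0.77, 30, 20

cov = [[1.0, rho], [rho, 1.0]]
pairs = rng.multivariate_normal([0.0, 0.0], cov, size=n_initial + n_added)

r_before = np.corrcoef(pairs[:n_initial, 0], pairs[:n_initial, 1])[0, 1]
r_after = np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]
print(f"r with {n_initial} pairs: {r_before:.3f}")
print(f"r with {n_initial + n_added} pairs: {r_after:.3f}")
```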
In 1986, Robert Gallo (Director, Institute of Human Virology, University of Maryland School of Medicine, Baltimore) received a second Lasker award (the American equivalent of the Nobel Prize) for pioneering work describing the role of the retrovirus now known as HIV-1 as the causative agent of the Acquired Immune Deficiency Syndrome (AIDS). However, the 2008 Nobel Prize in Physiology or Medicine was awarded to Luc Montagnier and Francoise Barre-Sinoussi (Institut Pasteur, France) for the discovery of HIV, citing their 1983 Science paper describing a retrovirus they called LAV (lymphadenopathy-associated virus), isolated from a patient at risk for AIDS, though the paper did not conclude definitively that LAV caused AIDS. The controversies surrounding Gallo’s claim in Science, in 1984, to have discovered a virus, HTLV-III, as the cause of AIDS led to his omission from the Nobel Prize. In December 1992, the Office of Research Integrity (ORI) declared that Gallo had deliberately misled in publications, and in a patent application, in claiming to have discovered HTLV-III. Despite Gallo’s protests, genetic sequencing showed that the virus grown on cell lines in his laboratory was the same as samples of French LAV that Gallo had obtained from Montagnier. Cell lines that Gallo claimed as his own were also shown to have actually come from Dr. Adi Gazdar’s laboratory, another NIH lab (Dingell, 1993[6]).
These are examples of beliefs in erroneous scientific theories driving fraudulent claims of scientific discoveries in order to further academic ambitions, or to discredit the work of other scientists.
Misconduct of various kinds is widespread and occurs in every country where research is conducted. Instances of high-profile research fraud, and evaluations of papers retracted in PubMed, reveal that of the three forms of research misconduct, falsification of data (manipulating research materials, equipment, images, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record) is by far the most common. Such practices are referred to merely as “questionable research practices” (QRP) by some (Fanelli, 2009[7]). Plagiarism and outright fabrication occur less often than falsification, though these are the practices that receive the most publicity when exposed.
Scientific misconduct is more common than acknowledged or reported. The number of papers retracted for fraud appears to have increased over the past decade, as have instances of misconduct reported to the US ORI, indicating either a genuine increase or greater vigilance and reporting. On average, 2% of scientists in surveys admit to having committed serious misconduct themselves, and up to 30% admit to “questionable research practices” (QRP) that actually amount to falsification. On the other hand, on average 14% of scientists admit to knowing of colleagues’ involvement in serious misconduct, while up to 72% admit knowledge of the occurrence of QRP among colleagues; between 30% and 50% of these instances were never reported (Fanelli, 2009[7]). This indicates that reported cases of research misconduct constitute only a fraction of what appears to be a widespread, often condoned or unreported, practice.
The majority of papers retracted for fabrication or falsification were by previous offenders. In 1980, John Long, assistant professor of pathology at Harvard Medical School, resigned after faked results of the molecular weight of immune complexes were detected in a research submission. His previous work establishing the first long-term cultures of cells from Hodgkin’s disease, which had provided strong evidence that it was a tumour of macrophages, was subsequently shown to derive from a brown-footed owl monkey of northern Colombia, and not from human cells (Hamblin, 1981[14]).
In the same year, Vijay Soman, a diabetes researcher at Yale University, was found guilty of fabrication, falsification, and plagiarism, and returned to India. He not only plagiarised text, formulae, and results from a rejected manuscript reviewed by his co-author, Philip Felig, but also fabricated large portions of the paper. Ten other manuscripts co-authored by Soman were retracted for fabrication and falsification (Altman and Melcher, 1983[2]).
A year later, John Darsee, a cardiology trainee at Harvard University and protégé of Eugene Braunwald, was investigated by the NIH, and multiple instances of fabrication and falsification were confirmed. Previous publications spanning his career at the University of Notre Dame, Emory University, and Harvard from 1966 to 1983 were also investigated, and over 100 fraudulent papers were retracted (Dingell, 1993[6]; Lafollette, 2000[18]; Altman and Melcher, 1983[2]; Smith, 2006[34]).
In 1983, Stephen Breuning, Professor at the University of Pittsburgh and an expert in mental retardation research, was investigated by the NIH. Eventually over 50 publications were declared fraudulent, and many, but not all, were retracted (Lock, 1988[20]; Lafollette, 2000[18]; Dingell, 1993[6]; Korpela, 2010[16]).
In 1985, Robert Slutsky (associate clinical professor, Department of Radiology, University of California, San Diego) resigned over fabricated data. Of 137 papers published over seven years (1978-1985), often at a rate of one paper every 10 days, 12 were judged definitely fraudulent and 48 questionable; many publications, though not all, were subsequently retracted (Lock, 1988[20]; Smith, 2006[34]).
In 1994, Malcolm Pearce (assistant editor, British Journal of Obstetrics and Gynaecology; senior lecturer, St George’s Hospital Medical School) published two papers in the BJOG. One, a case report of a patient who delivered a baby after the successful reimplantation of an ectopic pregnancy, received worldwide media coverage; the patient was found to be nonexistent. The other reported the results of a trial of treatment of recurrent miscarriage in 200 women with polycystic ovary syndrome, in which the number of women recruited was found to have been falsified. Four other papers by Pearce, two of them in the BMJ, were also retracted as fraudulent after investigations (Lock, 1995[21]; Smith, 2006[34]).
In 2003, the University of Vermont notified the editors of the Annals of Internal Medicine that a 1995 publication by Eric Poehlman, a former faculty member, on energy expenditure after menopause was one of three publications found fraudulent in internal university investigations. Subsequently, 10 other publications were retracted after investigations, leaving in doubt the veracity of 195 other papers indexed in PubMed as of 2005 (Sox and Rennie, 2006[31]).
Systematic surveys also reveal that fraud is usually not an isolated instance, and authors of fraudulent papers often have previous retractions for fraud. These papers are often reported in high-impact journals, often from single institutions, and usually have many co-authors, with varying degrees of complicity in the perpetration of misconduct (Steen, 2010[35]).
Research misconduct often begins early in a scientist’s career. In 2006, an enquiry commission determined that the 2005 publication in The Lancet by Jon Sudbø (dentist, consultant oncologist, and former medical researcher at The Radium Hospital, Oslo; and associate professor, University of Oslo, Norway), concluding that nonsteroidal anti-inflammatory drugs (NSAIDs) like ibuprofen diminish the risk of oral cancer in smokers, was based on 908 entirely fictitious patients supposedly recruited from a cancer patient database that had not yet opened. The enquiry commission subsequently deemed that 15 of 38 articles Sudbø had published since 1993 were fraudulent, including his doctoral dissertation (Nylenna and Horton, 2006[28]; Slesser and Qureshi, 2009[30]).
Apart from Sudbø, Darsee, Long, Pearce, Slutsky, and Soman, other scientists guilty of misconduct early in their careers include Robert Gullis, a postdoctoral biochemist from Germany, who in 1977 admitted to Nature that his published work on the concentration of cyclic guanosine monophosphate in neuroblastoma cells was based not on experiments but on figments of his imagination, driven by strong convictions about his ideas (Hamblin, 1981[14]; Lock, 1988[20]). Amitav Hajra, a promising graduate student working at the University of Michigan, and later at the National Human Genome Research Institute, had five papers published in 1995 and 1996 about a possible genetic cause of leukemia retracted by his supervisor, Francis Collins, when Hajra admitted to having fabricated the results (Bonetta, 2006[3]).
The majority of reported, and unreported but suspected, cases of misconduct involve junior and mid-level researchers, and it is likely that the more successful they are at getting away with misconduct, the more likely they are to repeat their offence, until fabricated or falsified high-profile research in high-impact journals leads to the exposure of their deceit.
Many co-authors who claim ignorance of fraud are guilty of complicity by accepting “gift authorship.” Some scientists, such as Robert Slutsky and Sir Cyril Burt, were guilty of inventing the names of several co-authors. Others, like Scott Reuben (former professor of anesthesiology and pain medicine, Tufts University, Boston; and chief of acute pain, Baystate Medical Center, Springfield, Massachusetts), forged the signatures of supposed co-authors over 13 years in 21 papers reporting the results of clinical trials that were never actually conducted (Marcovitch, 2011[24]).
However, in other instances, the complicity of co-authors was clearly evident. Philip Felig, co-author of Vijay Soman’s retracted paper, resigned as Chair of Medicine at Columbia. Though exonerated of direct involvement, he was found guilty of accepting “gift authorship” without verifying the integrity of the published data (Altman and Melcher, 1983[2]).
A paper by Thereza Imanishi-Kari (a researcher at Tufts University), co-authored by her collaborator, David Baltimore (Nobel Laureate, credited with discovering the enzyme reverse transcriptase), was retracted for falsification after congressional hearings and formal indictment by the NIH ORI. The charge of gift authorship led to David Baltimore’s resignation as President of Rockefeller University in 1991 (Dingell, 1993[6]; Lafollette, 2000[18]).
While co-authors may claim ignorance of falsified or fraudulent data, the Committee on Publication Ethics (COPE) holds them responsible if they affix their names to publications.
Jon Sudbø’s numerous co-authors were exonerated of complicity but cautioned about gift authorship (Slesser and Qureshi, 2009[30]). Malcolm Pearce’s co-author, Geoffrey Chamberlain, editor of the BJOG and president of the Royal College of Obstetricians and Gynaecologists, had to resign the editorship and the presidency after being found guilty of accepting gift authorship (Lock, 1995[21]; Smith, 2006[34]). Gerald Schatten, a co-author of the fraudulent 2005 Science paper by the Korean stem cell researcher Woo Suk Hwang, was found guilty by the University of Pittsburgh of misconduct for accepting gift authorship without verifying the data in the publication, and for insufficient involvement in the study (Bonetta, 2006[3]).
Many authors of fraudulent papers had been involved in other fraudulent or unethical activities. In 2005, Seoul National University declared that the 2004 and 2005 papers in Science by the South Korean stem cell researcher Woo Suk Hwang and his co-workers were fabricated. Hwang’s publications, claiming the first stem cell line produced from a cloned human embryo, and the creation of 11 stem cell lines that genetically matched people with spinal cord injury, diabetes, and an immune system disorder using only 185 eggs, were hailed at publication as seminal breakthroughs in stem cell research. Investigations revealed that nine of the 11 stem cell lines were faked, and the remaining two were doubtful. Hwang had also used far more than the 185 eggs claimed in his paper. Co-workers admitted to duplicating photographs of the cell lines, and many co-authors never actually saw evidence of these cell lines. Other revelations were that the eggs came from paid donors, an illegal practice in South Korea, and from female research staff, an unethical practice anywhere (Bonetta, 2006[3]; Lancet, 2006[19]; Slesser and Qureshi, 2009[30]).
Robert Gallo had previously been investigated for failing to report, in a Lancet publication or to the NIH, the deaths of two participants in an AIDS vaccine trial; Gallo claimed error and insufficient knowledge of procedure. He was also involved in investigations of financial impropriety involving two of his laboratory staff, though he was ultimately cleared of complicity (Dingell, 1993[6]).
In January 2010, Andrew Wakefield (former surgeon and researcher, Royal Free Hospital and Medical School, London) was indicted by the UK General Medical Council for dishonesty and the abuse of developmentally challenged children in research published in a 1998 Lancet case series with 12 co-authors. This controversial paper linked the measles, mumps, and rubella (MMR) vaccine to the development of “autistic enterocolitis”. Investigations by an independent journalist revealed that the pathology specimens were doctored and that none of the children had actually developed enterocolitis; data had been falsified to show a temporal link between MMR vaccination and behavioral problems. It was also shown that the children were subjected to unethical invasive investigations; that no approval had been granted by institutional research committees; and that Wakefield had undisclosed conflicts of interest related to a planned lawsuit against manufacturers of the MMR vaccine (Godlee et al., 2011[11]).
In November 2010, Anil Potti (associate professor, Department of Medicine and the Institute for Genome Sciences and Policy, Duke University, Durham, North Carolina) resigned after allegations of scientific misconduct led to large-scale investigations. What triggered the investigations was a press article in July 2010 in which he was accused of falsely claiming several academic awards, including the Rhodes scholarship, in resumes and applications, among them an application for a $729,000 grant awarded by the American Cancer Society. From 2006, the biostatisticians Keith Baggerly and Kevin Coombes of the MD Anderson Cancer Centre, Houston, had made many failed attempts to prove misconduct after they were unable, despite some 2,000 hours of effort, to reproduce experiments in the 2006 Nature Medicine and NEJM publications on “personalised medicine” by Potti and his supervisor, Joseph Nevins (director of the Center for Applied Genomics and Technology). Potti and Nevins claimed that they could predict the response to chemotherapy of individual patients with lung, breast, and ovarian cancer, using their techniques of gene expression arrays in cultures of cancer cells. Baggerly and Coombes detected several errors, incomplete data, falsifications, and fabrications, and reported insufficient responses and data from the Duke team and from the journals. However, the tenor of the investigations changed altogether after the July 2010 media report regarding Potti’s false claim of the Rhodes scholarship (The Economist, 2010[38]).
Exposing research misconduct is difficult. People who witness or suspect research fraud are reluctant to report their suspicions due to fear of retaliation, ridicule, or accusations of complicity. In the cases of Darsee, Long, Pearce, Slutsky, Imanishi-Kari, and Hwang, co-workers raised suspicions of fraud.
Journal peer reviewers were responsible for identifying as fraudulent the 2001 paper in Nutrition by Ranjit Kumar Chandra (a Canadian researcher, now in India) showing that elderly people randomized to physiological amounts of vitamins and trace elements had improved cognitive function compared with those given a placebo. The paper had earlier been rejected by the BMJ in peer review owing to doubts about fabrication. In 2005, it was declared fraudulent and retracted (Smith, 2005[32]).
Concerns during peer review also led eventually to the “expression of concern” published in the July 2005 issue of the BMJ about the 1992 BMJ paper by Ram B. Singh, a private practitioner from Moradabad, India, and co-authors, showing that patients randomized to a low-fat, fiber-rich diet had nearly half the risk of dying from any cause over a year’s follow-up compared with those on a reduced-fat diet alone (Smith and Godlee, 2005[33]).
Trial by media has also been resorted to, as occurred in 2006 when the Canadian Broadcasting Corporation aired serious doubts about several of Ranjit Chandra’s previous publications, including a 2003 study in the Lancet that had been cited more than 300 times (Smith, 2005[32]).
It took the assiduous work of an investigative journalist appointed by the BMJ to unravel the complicity inherent in the work of Ram B. Singh and co-workers, and document the difficulties in finding an agency in India to adjudicate suspicions of misconduct (White, 2005[40]).
Andrew Wakefield was also exposed by the efforts of another investigative journalist (Godlee et al., 2011[11]). In other instances, readers, funders of research, and regulators, have played their part in exposing fraudulent research. Scott Reuben was exposed by a routine audit that raised suspicions of fabrication (Marcovitch, 2011[24]).
Individual “whistle-blowers” often face considerable harassment and ridicule from their institutions, and even the media, and often have to resign or seek employment elsewhere. Robert Sprague, former mentor of Stephen Breuning, alleged fabrication and faced several reprisals during the NIH investigation, which dragged on for three years (Lafollette, 2000[18]). Margot O’Toole, a co-worker of Imanishi-Kari, was the first to allege misconduct; she was vilified by senior administrators throughout the lengthy enquiry, leading her eventually to resign and quit research altogether (Lafollette, 2000[18]).
Proving misconduct is also difficult. The Committee on Publication Ethics (COPE) considers it the obligation of journal editors to follow up on allegations of misconduct, and provides guidance on the methods to follow (COPE, 2011[5]); but journals have limited resources to investigate, have limited jurisdiction to take action other than publishing an “expression of concern,” and require national organisations or the host institution to prove misconduct and request that the fraudulent article be retracted.
The former editor of BMJ, Richard Smith, attempted for more than 10 years to investigate and publicly expose the paper by Ram B. Singh as fraudulent. Failing to find a legitimate authority in India to adjudicate, the BMJ commissioned and published in 2005 the results of a forensic-statistical investigation strongly indicating data fabrication (Al-Marzouki et al., 2005[1]; Smith and Godlee, 2005[33]; White, 2005[40]).
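The flavour of such forensic-statistical checks can be conveyed with a short sketch. The Python code below is a minimal illustration on simulated data, not the methods or code of Al-Marzouki et al.: it compares the variances of a baseline variable between randomized arms (which should differ only by chance under genuine randomization) and tests the terminal digits of recorded measurements for the digit preference that often betrays invented numbers. In practice, no single test is conclusive; it is a pattern of anomalies across many baseline variables that raises suspicion.

```python
# Minimal sketch (simulated data) of two red-flag tests of the kind
# used in forensic-statistical investigations of trial data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def variance_ratio_pvalue(arm_a, arm_b):
    """Two-sided F-test for equality of variances between two arms.
    Under genuine randomization, baseline variances differ only by
    chance, so p-values across many variables should be roughly
    uniform; a cluster of tiny p-values is a red flag."""
    f = np.var(arm_a, ddof=1) / np.var(arm_b, ddof=1)
    df_a, df_b = len(arm_a) - 1, len(arm_b) - 1
    p_one_sided = stats.f.sf(f, df_a, df_b) if f > 1 else stats.f.cdf(f, df_a, df_b)
    return min(1.0, 2 * p_one_sided)

def terminal_digit_pvalue(values):
    """Chi-square test that last digits are uniform on 0-9.
    Fabricated measurements often over-use 'favourite' digits."""
    digits = np.round(values).astype(int) % 10
    observed = np.bincount(digits, minlength=10)
    return stats.chisquare(observed).pvalue

# Simulated baseline variable (e.g. systolic BP) in two arms of n=100
control = rng.normal(140, 15, size=100)
treatment = rng.normal(140, 15, size=100)
print("variance-ratio p =", round(variance_ratio_pvalue(control, treatment), 3))

# A crudely fabricated arm: terminal digits over-use 0 and 5
faked = np.round(rng.normal(140, 15, size=100) / 5) * 5
print("terminal-digit p (genuine) =", round(terminal_digit_pvalue(control), 3))
print("terminal-digit p (faked)   =", round(terminal_digit_pvalue(faked), 3))
```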
In a similar case, in July 2005, Richard Horton, the Lancet editor, published an “expression of concern” and detailed the painstaking investigations and findings that indicted Dr. Singh of fabricating a previous Lancet paper (Horton, 2005[15]). These investigations reveal that researchers often do not respond to journal enquiries, claim to have lost data, deliberately thwart investigations, initiate legal proceedings, or provide insufficient information to conclude investigations.
Institutions often defend their employees, at least initially, as was seen in the cases involving Imanishi-Kari, Gallo, Hwang, and Wakefield. In these instances, investigation panels set up by host institutions were perceived to have conflicts of interest. Regulators and institutions have often minimised the serious nature of allegations, and justified, or condoned, irregularities.
After initial prevarications by Seoul National University, and national expressions of outrage, Hwang was eventually dismissed (along with five collaborators) and resigned prestigious appointments; he was convicted of embezzlement and the improper use of research funds; the two papers in Science were retracted; and the honours conferred on him were revoked (Bonetta, 2006[3]; Lancet, 2006[19]; Slesser and Qureshi, 2009[30]).
In 1974, William Summerlin (Skin cancer researcher, Sloan-Kettering Cancer Centre, New York) was found to have faked results of tissue culture experiments by artificially darkening skin implants in mice to imply the success of genetically incompatible skin transplant experiments (McBride, 1974[26]). Although he was eventually discredited, his fraud was attributed to supposed mental illness — a ploy used in the past to shield institutional lapses that lead to misconduct.
Ongoing investigations by the Institute of Medicine reveal that in the case of Anil Potti, Duke University was slow to respond, and did not provide the NIH inquiry panel adequate access to details of allegations; journals were reluctant to pursue allegations; Potti and Duke had financial conflicts of interest; and multiple lapses at Duke permitted this episode (The Economist, 2010[38]).
There are notable exceptions, as occurred with Malcolm Pearce, Jon Sudbø, and Scott Reuben, when prompt institutional and regulatory action followed notification of allegations of misconduct.
Investigating all the research publications of researchers found guilty of fraud is expensive and time-consuming. Considering that many researchers are repeat offenders, it is necessary to view as potentially fraudulent all previous research involving the author; but proving misconduct may be expensive and difficult, especially when the research was done a long time ago. Co-workers may not be available for questioning, may be unwilling to contribute information, may have insufficient recall of the details of events, or material may be unavailable or tampered with (Smith, 2006[34]).
Establishing who should investigate is difficult and varies across jurisdictions. Employers or the host institution, funders of research, regulatory bodies, and professional societies are the appropriate agencies to investigate and indict or clear researchers accused of misconduct; but unwillingness to act, competing interests, insufficient expertise or resources, and a lack of established mechanisms to respond speedily may subvert the process (Smith and Godlee, 2005[33]).
National mechanisms to deal with research fraud vary, and are insufficient or nonexistent in many countries. Research ethics ought to flow from good ethical practices in routine clinical care. Many countries, particularly in North America, Europe, and Australia, have established national and institutional regulatory mechanisms to ensure the ethical and scientific conduct of clinical practice and scientific research, and to provide education about, prevention of, and means of dealing with misconduct in both spheres of activity, with varying degrees of success.
The Indian Scene. Many countries, however, lack mechanisms to influence, regulate, and investigate fraud in health services and health research. For example, there is no national agency or mechanism that investigates and adjudicates on the conduct of scientists in India. The Medical Council of India is the body that ought to discipline erring doctors, but it has never seen this as its role, and it has no jurisdiction over nonmedical scientists. The Indian Council of Medical Research claimed, during the BMJ investigation of the Ram Singh paper, that it was unable to investigate research not funded by the ICMR. The Office of the Drugs Controller General of India does investigate the conduct of licensing applications made to the office, but it is unclear whether any instances of scientific misconduct have been detected or followed up; the Drugs and Cosmetics Act also lacks punitive provisions, thereby limiting the powers of this office. Unless the various draft bills pending in parliament to regulate medical establishments and clinical research are approved, wrongdoers will continue to offend and flourish. The fate of the current agitations over the anti-corruption Lokpal Bill in India will determine, to a large extent, the fate of pending legislation regarding scientists accused of misconduct.
In India, the National Human Rights Commission has the powers of a civil court and can summon records in the public domain. It can initiate investigations into allegations of violations of human rights or their abetment, or of negligence by a public servant in preventing such violations. If undertaking research on human subjects and falsifying the results, or publishing fraudulent research in the public domain that can harm others, is rightfully considered a violation of the human rights of numerous people, then the NHRC is the only agency in India with the mandate and power to investigate allegations of misconduct; but it would need to specifically include this function in its scope of activities.
Even then, the full cooperation of institutions and individuals involved would be required. Unfortunately, it is highly doubtful whether most such institutions in India, or in other resource constrained countries, have established processes or mechanisms, or the will, to deal with issues pertaining to breaches of research integrity. The situation becomes more complex when the accused is also part of the management, or the head, as in the case of Singh, of a private institution.
Many of the risk factors that set the stage for scientific misconduct to occur are systemic (Altman and Melcher, 1983[2]; Dingell, 1993[6]; Horton, 2005[15]; Bonetta, 2006[3]; Smith, 2006[34]; White, 2005[40]; Marcovitch, 2007[23]; Kumar, 2010[17]; Godlee et al., 2011[11]; Marcovitch, 2011[24]) and include:
- a) The unrealistic societal and academic expectations of the results of scientific research, leading to research environments that emphasise quantity rather than quality and integrity, and a competitive rather than collaborative ethos; and unrealistic pressures to publish for academic advancement, to secure competitive research grants, and to fulfil funding or institutional performance requirements.
- b) The dislike by scientists and journals for negative results, and findings that contradict established beliefs and expectations; and a bias in favor of novel findings and new products, instead of the “truth”.
- c) Inadequate attention to meticulous documentation and quality assurance, and a propensity on the part of supervisors and reviewers to be vigilant about errors and bias but less often about deception, falsification, or fabrication. There is also lax or nonexistent supervision of young researchers, especially where supervisors are negligent or over-committed. Many mentors neither set nor practice high standards of research integrity, but seem to value academic survival and shrewd financial arrangements.
- d) Senior researchers with strong convictions and monumental egos (academic conflicts of interest); and financial conflicts of interest and links with industry or the politics of science.
- e) Inadequate institutional arrangements for open, free, and frequent discussions of research ethics; and institutional review boards inadequately trained and qualified to assess the scientific as well as the ethical aspects of research, to responsibly monitor and audit research, and to ensure research governance.
- f) Lax attitudes to the responsibilities of authorship and widespread acceptance of “gift authorship”; inadequacies in the peer review process, in journal editorial policies and arrangements to prevent and deal with scientific misconduct, and to correct the scientific record when misconduct is proven.
- g) A variation of “gift authorship” that is more pernicious is “ghost authorship”, where pharmaceutical companies hire public relations firms who “ghost-write” articles, editorials, and commentaries under the names of eminent clinicians. This practice is quite common: in one survey of a cohort of 44 industry-initiated trials, there was evidence of ghost authorship in 75% of trials, rising to 91% when individuals who qualified for authorship but were mentioned only in the acknowledgements were included (Gotzsche et al., 2007[12]). The sad saga of the rosiglitazone (Avandia®) scandal illustrates how pernicious this practice is. Congressional investigations and independent evaluations of allegations that GlaxoSmithKline deliberately used strategies to minimize or misrepresent findings that Avandia® actually increased cardiovascular risk revealed a series of events that sought to downplay the cardiovascular risks of Avandia® in publications and in the position statements of academic organizations; conflicts of interest and financial ties with industry were either not disclosed by some authors or influenced the consensus statements of the organizations; and some publications were drafted by industry writers, though academics were listed as authors (Moynihan, 2010[27]). Independent, systematic enquiry also revealed strong links between publications supporting rosiglitazone and disclosed and undisclosed financial conflicts of interest with pharmaceutical companies (Wang et al., 2010[39]).
- h) Journal editorial policies regarding prospective trial registration; submission of manuscripts in accordance with international reporting guidelines, with the use of checklists to aid peer review; requiring trials to be preceded or justified by a well-conducted systematic review; and following the uniform requirements for manuscript submission of the International Committee of Medical Journal Editors regarding ethical and scientific reporting and the declaration of financial and other conflicts of interest – all these are publishing safeguards that are honoured more in the breach than in the observance (Tharyan, 2011[37]). Journal editorial leadership is also often compromised by financial and other conflicts of interest that dictate the acceptance of manuscripts, with an inordinate amount of attention paid to journal impact factors and the citation potential of submitted manuscripts, and to the revenue generated by sales of reprints, particularly of industry-sponsored trials (Lundh et al., 2010[22]). There is also inadequate attention paid to building a pool of trained and committed peer reviewers, enlisting the services of statisticians to check the veracity of data and statistical analyses, or otherwise fulfilling the obligations and responsibilities dictated by good publication ethics (Young, 2009[41]).
- i) The many inadequacies in current regulatory processes, as was evident in the controversies surrounding the delayed withdrawal of drugs such as rofecoxib and rosiglitazone, even though concerns about safety had been expressed many years earlier. Regulatory agencies appear unable to remain free of industry pressures; appear not to have required more robust evidence of efficacy and long-term safety before approving new drugs; and appear not to have acted expeditiously when concerns were raised about the safety of drugs they had approved (Cohen, 2010[4]; Graham and Gelperin, 2010[13]; Moynihan, 2010[27]). This has led to calls to review the functioning of regulatory agencies and to separate drug approval from post-marketing pharmacovigilance, in the interests of increased efficiency and reduced conflicts of interest (Garattini and Bertele, 2010[10]).
The Impact of Fraudulent Research
While science does eventually purge itself of unverified findings, considerable damage can and does occur in the interim. Over 240 citations of John Darsee’s fraudulent work existed by the time he was exposed, including chapters in influential cardiology textbooks (Lafollette, 2000[18]). The 1992 Ram Singh paper had already been cited 225 times in other research publications when the BMJ investigation commenced (White, 2005[40]).
Systematic enquiries reveal that fraudulent papers are difficult to differentiate from scientifically valid research; are often not retracted, or are retracted slowly, particularly when senior authors are involved; are difficult to readily identify as retracted; continue to be cited, often with the same frequency as un-retracted papers, for many years after retraction (even as long as 24 years later, as was seen with the retracted papers by Breuning); and often require separate alerts in journals and the media to reduce further citations (Lock, 1995[21]; Bonetta, 2006[3]; Korpela, 2010[16]; Steen, 2011[36]).
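One practical, if partial, aid is checking retraction status programmatically. The sketch below assumes NCBI’s public E-utilities API and PubMed’s “Retracted Publication” publication-type tag; the author query used is purely illustrative. Given how slowly and inconsistently retractions are applied, the absence of the tag is no guarantee of integrity.

```python
# Minimal sketch of checking retraction status via NCBI's public
# E-utilities API. PubMed tags retracted papers with the publication
# type "Retracted Publication", so a simple query can flag them,
# though absence of the tag does not prove a paper is sound.
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def retracted_pmids(query, retmax=20):
    """Return (count, PMIDs) of papers matching `query` that PubMed
    marks as retracted."""
    term = f'({query}) AND "Retracted Publication"[PT]'
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax}
    )
    with urllib.request.urlopen(f"{EUTILS}?{params}") as response:
        result = json.load(response)["esearchresult"]
    return int(result["count"]), result["idlist"]

# Illustrative author query only
count, pmids = retracted_pmids("Breuning S[Author]")
print(f"{count} retracted paper(s) found; PMIDs: {pmids}")
```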
While Mendel’s and Newton’s experiments may have been doctored, their theories and results have been validated. It is also generally agreed that Sir Cyril Burt’s falsified research did not cause harm to patients or to scientific theory (Hamblin, 1981[14]). But such is not always the case with fraudulent research.
Andrew Wakefield’s fraudulent article on the adverse effects of the MMR vaccine, the subsequent unbalanced media campaign, and the delays in settling the controversy are blamed for reduced immunisation rates in the UK and elsewhere, and for exposing numerous unimmunised children to measles and other vaccine-preventable diseases (Godlee et al., 2011[11]).
The fraudulent publications of Potti and Nevins led the National Cancer Institute to commission three clinical trials at Duke based on this research in 2006. In 2009, the concerns of NCI biostatistician Lisa McShane about the research underlying the trials (formed after 300-400 hours of effort to verify it) led to an NCI-appointed external inquiry; the three trials were halted, but were resumed after the inquiry cleared Duke. However, the more complete investigation that followed the scandal of Potti’s claimed Rhodes scholarship led, in 2010, to the retraction of four papers by Potti and Nevins (one in Nature Medicine, two in the NEJM, and one in the Journal of Clinical Oncology); and the three trials at Duke were terminated (Marcovitch, 2011[24]).
A conservative estimate of the influence of fraudulent research papers identified as retracted in PubMed between 2000 and 2010 revealed that 180 clinical research papers, which had enrolled 28,000 participants, were cited over 5,000 times. Moreover, over 400,000 participants were enrolled in 851 secondary studies that cited a retracted paper (Steen, 2011[36]).
A systematic review of the influence of Scott Reuben’s fraudulent research in other systematic reviews on pain management that cited his work revealed that some quantitative reviews (meta-analyses) reached erroneous conclusions due to inclusion of Reuben’s data, and qualitative reviews were particularly likely to be influenced by fraudulent data (Marret et al., 2009[25]).
Given the likelihood that many fraudsters are repeat offenders, and that their previous publications are not always investigated, it is worrying that these numbers reflect only a fraction of the actual number of fraudulent primary research publications. Also inestimable is the number of secondary publications that cite them, or that test the effects of supposedly effective and safe treatments on unsuspecting research subjects. Even if not evaluated in further research, the use of the results of fraudulent research may cause harm, or deny people the benefits of alternative, better-proven interventions.
Those found guilty of misconduct also pay a heavy price: Paul Kammerer killed himself barely two months after being exposed (Hamblin, 1981[14]). Many promising researchers, like Vijay Soman, John Darsee, Amitav Hajra, Anil Potti, Malcolm Pearce, Scott Reuben, and others, had their careers blighted by their follies. Many established researchers lose their jobs and prestige. Others, like Hwang and Poehlman, also faced charges of embezzlement due to the improper use of research funds. The fall from grace is long and heavy for all fraudulent researchers who are exposed, and though some, like Ram B. Singh, continue to publish, their research careers no longer have the trajectories, nor their publications the impact, that they once did.
In addition, considerable public and private financial resources, time, and effort are wasted by fraudulent research, and in investigating misconduct. Money, time, and effort are diverted from finding and using legitimate interventions. The hopes and trust of patients and carers, as well as of the unsuspecting friends, colleagues, and families of the perpetrators, are betrayed. Innocent co-workers and co-authors are implicated, and even if cleared of wrongdoing, are tainted by scandal or lose their jobs. The reputations of institutions are tarnished, and trust in the methods and integrity of scientific research is seriously damaged (Bonetta, 2006[3]).
Re-defining Research Misconduct is the First Step in Dealing with the Irresponsible Conduct and Reporting of Research
Research misconduct is currently defined, particularly in the US, as fabrication, falsification, or plagiarism that is proved by a preponderance of evidence to have been committed intentionally, knowingly, or recklessly, and that is seen as a significant departure from accepted research practices. This pragmatic definition is aimed at conclusively demonstrating devious intent and serious misconduct prior to taking legal action, and hence uses the language, methods, and yardsticks employed in the criminal justice system. The definition resulted from the political debates and pragmatic considerations arising from US Congressional sub-committee hearings into scientific misconduct in the 1980s, and was refined with use (Dingell, 1993[6]; Smith, 2006[34]; Lafollette, 2000[18]).
However, definitions used in parts of Europe dispense with the need to prove intent, and the even broader UK definition includes any unethical and unscientific conduct, at the expense of demonstrating “serious” or “significant” departures from norms, or the intent to deceive (Smith, 2006[34]).
It appears that perceptions of the seriousness of misconduct and its effects vary between different kinds of researchers; and while honest error can and does occur in research, many claimed instances of inadvertent omissions or commissions are possibly attempts at falsification, though proving malicious intent is often difficult. Condoning errors as inevitable serves only to increase laxity in research governance, and discourages everyone from increasing vigilance to prevent their occurrence (Dingell, 1993[6]; Nylenna and Simonsen, 2006[29]).
The broader definition captures more realistically the extent of the problem and could be used to develop strategies to promote integrity in research and clinical practice, rather than focus purely on defining the boundaries for initiating legal action envisaged in the retributive approach.
Incompetence or ignorance in following established norms and procedures in designing and conducting research, ensuring the quality of data, analysing, interpreting, reporting, and disseminating research results, or declaring academic and financial conflicts, is unethical, since the trust placed in researchers assumes competence to conduct research, knowledge of research methods, and the deployment of the highest ethical standards relevant to the context. Such research can result in the same deleterious effects on health outcomes, and on trust in the integrity of research, as research that is intentionally designed or distorted to deceive.
For example, empirical evidence, reviewed in Tharyan (2011[37]), indicates that inadequate methods used in trials to prevent or minimise the risk of bias (in selecting participants, particularly inadequate concealment of treatment allocation; in performing the trial; in detecting outcomes; in properly analysing and interpreting results; and in reporting the trial as performed) are associated with erroneous and unpredictable treatment effects, particularly when subjectively reported outcomes are used. In addition, trials are often deliberately designed to favor the experimental interventions in their choice of comparators: head-to-head comparisons of active interventions for a given condition are often avoided and comparisons are usually made against placebo; or the comparator is used at ineffective doses or schedules, is known to be less effective, or is used in toxic doses. Outcomes are also often carefully chosen in advance to ensure statistically significant results (such as rating scales used primarily in research and hardly ever in routine clinical practice, surrogate outcomes, and composite outcomes), at the expense of clinically relevant or clinically important outcomes, or outcomes important to patients. Studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported, particularly in industry-funded trials. Published reports are not always consistent with their protocols, in terms of outcomes as well as the analysis plan, and this again is determined by the significance of the results. Harms are very poorly reported in trials compared with results for efficacy, and are also often suppressed or minimised (Tharyan, 2011[37]).
Such studies usually reflect a clear intent to falsify, if not fabricate, research evidence, driven by various internal and external pressures, though this intent may be difficult to prove in public investigations (Steen, 2011[36]; Tharyan, 2011[37]).
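The deceptive power of one of these practices, selective outcome reporting, is easy to quantify. The simulation below is an illustrative sketch (not drawn from the cited reviews): it runs “null” trials in which the treatment has no effect, each measuring ten outcomes but reporting only the most significant; roughly 40% of such trials appear positive at p < 0.05, against the nominal 5% for a single pre-specified outcome.

```python
# Illustrative simulation of why selective outcome reporting deceives:
# if a trial measures many outcomes and reports only the most
# significant one, the chance of a spuriously "positive" trial far
# exceeds the nominal 5%, even when the treatment has no effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials, n_outcomes, n_per_arm = 10_000, 10, 50

false_positives = 0
for _ in range(n_trials):
    # Null treatment: both arms drawn from the same distribution
    a = rng.normal(0, 1, size=(n_outcomes, n_per_arm))
    b = rng.normal(0, 1, size=(n_outcomes, n_per_arm))
    p_values = stats.ttest_ind(a, b, axis=1).pvalue
    if p_values.min() < 0.05:  # report only the "best" outcome
        false_positives += 1

print(f"Spuriously positive trials: {false_positives / n_trials:.1%}")
# ~40% with 10 outcomes, versus the nominal 5% for a single outcome
```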
Many of the issues detailed above, particularly the use of surrogate outcomes and “cherry-picked” reporting of composite primary outcomes; unexplained attrition and inadequate analyses of those lost to follow-up; inadequate attention to confounders; studies underpowered to detect harms; selective reporting and the manipulation of data to present favorable risk-benefit estimates — contributed to misinformation in the industry-sponsored publications supposedly attesting to the safety of rosiglitazone (Freemantle, 2010a[8]).
Research misconduct should, therefore, be viewed as occurring on a continuum ranging from ignorance and incompetence at one end to intentional deceit at the other, with honest errors and differences of opinion occupying the lower rather than the upper extreme of the continuum (Nylenna and Simonsen, 2006[29]; Tharyan, 2011[37]). Narrower definitions aimed at initiating legal action are useful, but would also benefit from incorporating considerations of harm resulting from research misconduct in deciding culpability, and informing subsequent corrective or punitive actions, as is common in the criminal justice model, rather than only proving serious departures from (questionable) norms and the intent to deceive.
Individual as well as Systems-centered Approaches are Required to Prevent and Deal with Research Misconduct
Prevention of scientific misconduct may never succeed completely, due to human frailty and avarice, but could be reduced in frequency and magnitude if efforts were directed at individuals and the systems involved in the production, nurture and governance of research. These combined approaches also need to be evaluated to assess their success in improving the climate of responsibility and accountability in the conduct of research, and to identify barriers and facilitators (Nylenna and Simonsen, 2006[29]; Kumar, 2010[17]).
These approaches should target individuals involved in proposing, funding, designing, reviewing, educating, and mentoring, conducting, collaborating, supervising, monitoring, facilitating, governing and regulating research. Approaches should simultaneously aim to facilitate the development of systems and environments that foster openness, transparency, collaboration, discussion, education, facilitation, accountability, and that seek the “truth”, rather than those that exploit, pressurise, favor or disfavor unfairly, compete, ignore or collude with dubious practices, solely punish, or that profit from research by peddling unproven remedies at the expense of human suffering.
Concluding Remarks [See also Figure 1: Flowchart of Paper]
Figure 1. Flowchart of paper
The aspirations of scientific research, and the technological advances at our disposal, combined with the pressures of society’s rapacious appetite for longer lives, better health, miraculous remedies, and technology-driven solutions, result in research that is often done for reasons other than the welfare of patients, or the mitigation of suffering caused by disease, the ravages of advancing age, and the stresses of modern life.
While many scientists and researchers remain true to their calling and many industry sponsored research initiatives have provided life-saving drugs, lacunae in research governance exist that need to be addressed in order that research evidence can be better trusted. Genuine differences of opinion exist on how best this can be achieved (Freemantle, 2010b[9]). This article selectively highlights some known problems and reiterates suggestions regarding an overall approach that shifts the focus from the currently prevailing retributive approach aimed at individuals, to a broader, systems-based and facilitative approach.
Financial and other conflicts of interest can subvert the research agenda, but this requires the avarice and complicity of academia in order to succeed. Researchers navigate the continuum of scientific misconduct throughout their careers and need to guard against sliding down the slippery slope that leads inexorably to overt misconduct. The greater awareness and sharper vigilance in exposing scientific deceit, misconduct, and fraud that have been increasingly evident over the past decade should serve as a warning to all scientists and clinical researchers that the seductive allure of the academic or financial advantage seemingly offered by sloppy, conflicted, fraudulent, or biased research does not confer immunity against detection, shameful publicity, social and professional ostracism, and scientific exile.
The charge to young researchers, and to the institutional mechanisms that nurture and govern research, is that if health outcomes are to improve through scientific research, it is more important to do it right than to just do it.
Take home message
Scientific and research misconduct occurs on a continuum ranging from inexcusable ignorance, through sloppy methods and errors, to more calculated attempts to falsify and deceive, and the extreme and rarer instances of wholesale fabrication of data and the research itself.
Integrity in research depends on institutional safeguards and a facilitatory environment wherein the emphasis is on research integrity and quality over productivity, with mentors who foster respect for integrity in research methods even at the expense of academic or financial success. This requires a collaborative rather than competitive approach, and, above all, an appreciation of the purpose of health science research as outlined in the definition that introduced this article.
Questions that the Paper Raises
Can research integrity and adherence to ethical principles in research ever be achieved if little attention is paid to ethics and integrity in routine clinical care?
Should misconduct be decided by the degree of intent to deceive, or the harm (or the potential for harm) caused by the results of misleading (incompetent, biased, or conflicted) or fraudulent research? In other words, should intent define the occurrence of misconduct or merely the severity of the moral failing?
Are there warning signals that can alert one to the propensity for research misconduct in a research group or an individual researcher?
If the current emphasis on research productivity is to be reduced, and greater attention paid to quality and integrity, what mechanisms should replace the current methods of rewarding researchers with prolific publication histories?
What approaches can replace the current approach to research misconduct that is based on detecting fraudulent research (largely dependent on the courage and persistence of “whistle-blowers”), and punishing those proven to have committed misconduct?
What role do medical journals play in contributing to the climate that fosters research misconduct?
What safeguards can journal editors use to ensure that fraudulent or deceptive research is not published in, and does not continue to remain in, their journals?
About the Author
Prathap Tharyan, MD, MRCPsych, is Professor of Psychiatry at the Christian Medical College, Vellore, Tamil Nadu, India. He is also the Director of the South Asian Cochrane Network & Centre (http://www.cochrane-sacn.org), an independent centre of The Cochrane Collaboration (http://www.cochrane.org), an international organization of individuals and institutions that prepares, maintains, and disseminates the results of systematic reviews of interventions used in healthcare, and of diagnostic test accuracy. His work includes training and mentoring systematic review authors from South Asia to answer clinical questions of relevance to healthcare in the region, and attempting to work with policy makers to use reliable evidence to contextualize local health policy. He is also involved with improving the scientific and ethical quality of primary research, and the editorial leadership of medical journals that publish primary research, in India and countries in the South Asian region.
Footnotes
Conflict of interest: The author is a contributor to the Cochrane Collaboration and has received research funding, travel support and hospitality from many organizations that support evidence-informed healthcare. He is a salaried employee of the Christian Medical College, Vellore. His work is currently funded by the Indian Council of Medical Research (http://icmr.nic.in/) and the Effective Healthcare Research Consortium (http://www.liv.ac.uk/evidence/) through a programme grant from the UK Department for International Development (DFID). He declares no financial conflicts of interest.
Declaration
I declare that the material and perspectives in this paper, not attributed by citations to published work in the public domain documenting examples of scientific misconduct and the lessons learned, are my own opinions that do not necessarily reflect the views of any of my funders, or my employer. This paper is an invited elaboration of a blog-post “High crimes, deceit, and piracy in international health research” posted on October 14, 2011 at http://evidence-informed-musings.blogspot.com/. This manuscript has not been submitted for publication elsewhere.
CITATION: Tharyan P. Criminals in the Citadel and Deceit all Along the Watchtower: Irresponsibility, Fraud, and Complicity in the Search for Scientific Truth. Mens Sana Monogr 2012; 10: 158-80.
References
1. Al-Marzouki S, Evans S, Marshall T, Roberts I. Are these data real.? Statistical methods for the detection of data fabrication in clinical trials. BMJ. 2005;331:267–70. doi: 10.1136/bmj.331.7511.267. PMID: 16052019. [DOI] [PMC free article] [PubMed] [Google Scholar]
2. Altman L, Melcher L. Fraud in science. Br Med J (Clin Res Ed) 1983;286:2003–6. doi: 10.1136/bmj.286.6383.2003. PMID: 6409204. [DOI] [PMC free article] [PubMed] [Google Scholar]
3. Bonetta L. The aftermath of scientific fraud. Cell. 2006;124:873–5. doi: 10.1016/j.cell.2006.02.032. PMID: 16530031. [DOI] [PubMed] [Google Scholar]
4. Cohen D. Rosiglitazone: What went wrong? BMJ. 2010;341:c4848. doi: 10.1136/bmj.c4848. PMID: 20819889. [DOI] [PubMed] [Google Scholar]
5. Committee on Publication Ethics. Code of conduct and best practice guidelines for journal editors. 2011. [Last accessed on 19 Dec 2011]. Available from: http://publicationethics.org/files/Code%20of%20conduct%20for%20journal%20editors_0.pdf
6. Dingell JD. Shattuck Lecture-misconduct in medical research. N Engl J Med. 1993;328:1610–5. doi: 10.1056/NEJM199306033282207. PMID: 8487803. [DOI] [PubMed] [Google Scholar]
7. Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One. 2009;4:e5738. doi: 10.1371/journal.pone.0005738. PMID: 19478950. [DOI] [PMC free article] [PubMed] [Google Scholar]
8. Freemantle N. Commentary: What can we learn from the continuing regulatory focus on the thiazolidinediones? BMJ. 2010a;341:c4812. doi: 10.1136/bmj.c4812. PMID: 20819888. [DOI] [PubMed] [Google Scholar]
9. Freemantle N. Commentary: Journals must facilitate the dissemination and scrutiny of clinical research. BMJ. 2010b;341:c5397. doi: 10.1136/bmj.c5397. PMID: 20940214. [DOI] [PubMed] [Google Scholar]
10. Garattini S, Bertele V. Rosiglitazone and the need for a new drug safety agency. BMJ. 2010;341:c5506. doi: 10.1136/bmj.c5506. PMID: 20926483. [DOI] [PubMed] [Google Scholar]
11. Godlee F, Smith J, Marcovitch H. Wakefield’s article linking MMR vaccine and autism was fraudulent. BMJ. 2011;342:c7452. doi: 10.1136/bmj.c7452. PMID: 21209060. [DOI] [PubMed] [Google Scholar]
12. Gotzsche PC, Hrobjartsson A, Johansen HK, Haahr MT, Altman DG, Chan AW. Ghost authorship in industry-initiated randomised trials. PLoS Med. 2007;4:e19. doi: 10.1371/journal.pmed.0040019. PMID: 17227134. [DOI] [PMC free article] [PubMed] [Google Scholar]
13. Graham DJ, Gelperin K. FDA on rosiglitazone. More on advisory committee decision. BMJ. 2010;341:c4868. doi: 10.1136/bmj.c4868. PMID: 20823021. [DOI] [PubMed] [Google Scholar]
14. Hamblin TJ. Fake. Br Med J (Clin Res Ed) 1981;283:1671–4. doi: 10.1136/bmj.283.6307.1671. PMID: 6797607. [DOI] [PMC free article] [PubMed] [Google Scholar]
15. Horton R. Expression of concern: Indo-Mediterranean Diet Heart Study. Lancet. 2005;366:354–6. doi: 10.1016/S0140-6736(05)67006-7. PMID: 16054927. [DOI] [PubMed] [Google Scholar]
16. Korpela KM. How long does it take for the scientific literature to purge itself of fraudulent material? The Breuning case revisited. Curr Med Res Opin. 2010;26:843–7. doi: 10.1185/03007991003603804. PMID: 20136577. [DOI] [PubMed] [Google Scholar]
17. Kumar MN. A theoretical comparison of the models of prevention of research misconduct. Account Res. 2010;17:51–66. doi: 10.1080/08989621003641132. PMID: 20306348. [DOI] [PubMed] [Google Scholar]
18. Lafollette MC. The evolution of the “scientific misconduct” issue: An historical overview. Proc Soc Exp Biol Med. 2000;224:211–5. doi: 10.1177/153537020022400405. PMID: 10964254. [DOI] [PubMed] [Google Scholar]
19. Lancet. Writing a new ending for a story of scientific fraud. Lancet. 2006;367:1. doi: 10.1016/S0140-6736(06)67896-3. PMID: 16399129. [DOI] [PubMed] [Google Scholar]
20. Lock S. Fraud in medicine. Br Med J (Clin Res Ed) 1988;296:376–7. doi: 10.1136/bmj.296.6619.376. PMID: 3125906. [DOI] [PMC free article] [PubMed] [Google Scholar]
21. Lock S. Lessons from the Pearce affair: Handling scientific fraud. BMJ. 1995;310:1547–8. doi: 10.1136/bmj.310.6994.1547. PMID: 7787632. [DOI] [PMC free article] [PubMed] [Google Scholar]
22. Lundh A, Barbateskovic M, Hrobjartsson A, Gotzsche PC. Conflicts of interest at medical journals: The influence of industry-supported randomised trials on journal impact factors and revenue – cohort study. PLoS Med. 2010;7:e1000354. doi: 10.1371/journal.pmed.1000354. PMID: 21048986. [DOI] [PMC free article] [PubMed] [Google Scholar]
23. Marcovitch H. Misconduct by researchers and authors. Gac Sanit. 2007;21:492–9. doi: 10.1157/13112245. PMID: 18001665. [DOI] [PubMed] [Google Scholar]
24. Marcovitch H. Is research safe in their hands? BMJ. 2011;342:d284. doi: 10.1136/bmj.d284. PMID: 21248020. [DOI] [PubMed] [Google Scholar]
25. Marret E, Elia N, Dahl JB, McQuay HJ, Moiniche S, Moore RA, et al. Susceptibility to fraud in systematic reviews: Lessons from the Reuben case. Anesthesiology. 2009;111:1279–89. doi: 10.1097/ALN.0b013e3181c14c3d. PMID: 19934873. [DOI] [PubMed] [Google Scholar]
26. McBride G. The Sloan-Kettering affair: Could it have happened anywhere? JAMA. 1974;229:1391–9, 1402–5, 1409–10. doi: 10.1001/jama.229.11.1391. PMID: 4620913. [DOI] [PubMed] [Google Scholar]
27. Moynihan R. Rosiglitazone, marketing, and medical science. BMJ. 2010;340:c1848. doi: 10.1136/bmj.c1848. PMID: 20375091. [DOI] [PubMed] [Google Scholar]
28. Nylenna M, Horton R. Research misconduct: Learning the lessons. Lancet. 2006;368:1856. doi: 10.1016/S0140-6736(06)69757-2. PMID: 17126705. [DOI] [PubMed] [Google Scholar]
29. Nylenna M, Simonsen S. Scientific misconduct: A new approach to prevention. Lancet. 2006;367:1882–4. doi: 10.1016/S0140-6736(06)68821-1. PMID: 16765743. [DOI] [PubMed] [Google Scholar]
30. Slesser AA, Qureshi YA. The implications of fraud in medical and scientific research. World J Surg. 2009;33:2355–9. doi: 10.1007/s00268-009-0201-5. PMID: 19701662. [DOI] [PubMed] [Google Scholar]
31. Sox HC, Rennie D. Research misconduct, retraction, and cleansing the medical literature: Lessons from the Poehlman case. Ann Intern Med. 2006;144:609–13. doi: 10.7326/0003-4819-144-8-200604180-00123. PMID: 16522625. [DOI] [PubMed] [Google Scholar]
32. Smith R. Investigating the previous studies of a fraudulent author. BMJ. 2005;331:288–91. doi: 10.1136/bmj.331.7511.288. PMID: 16052023. [DOI] [PMC free article] [PubMed] [Google Scholar]
33. Smith J, Godlee F. Investigating allegations of scientific misconduct. BMJ. 2005;331:245–6. doi: 10.1136/bmj.331.7511.245. PMID: 16051990. [DOI] [PMC free article] [PubMed] [Google Scholar]
34. Smith R. Research misconduct: The poisoning of the well. J R Soc Med. 2006;99:232–7. doi: 10.1258/jrsm.99.5.232. PMID: 16672756. [DOI] [PMC free article] [PubMed] [Google Scholar]
35. Steen RG. Retractions in the scientific literature: Do authors deliberately commit research fraud? J Med Ethics. 2010;37:113–7. doi: 10.1136/jme.2010.038125. PMID: 21081306. [DOI] [PubMed] [Google Scholar]
36. Steen RG. Retractions in the medical literature: How many patients are put at risk by flawed research? J Med Ethics. 2011;37:688–92. doi: 10.1136/jme.2011.043133. PMID: 21586404. [DOI] [PubMed] [Google Scholar]
37. Tharyan P. Evidence-based medicine: Can the evidence be trusted? Indian J Med Ethics. 2011;8:201–7. doi: 10.20529/IJME.2011.081. PMID: 22106657. [DOI] [PubMed] [Google Scholar]
38. The Economist. Misconduct in science: An array of errors. The Economist. 2011 Sep 10. [Last accessed on 6 Dec 2011]. Available from: http://www.economist.com/node/21528593. [Google Scholar]
39. Wang AT, McCoy CP, Murad MH, Montori VM. Association between industry affiliation and position on cardiovascular risk with rosiglitazone: Cross sectional systematic review. BMJ. 2010;340:c1344. doi: 10.1136/bmj.c1344. PMID: 20299696. [DOI] [PMC free article] [PubMed] [Google Scholar]
40. White C. Suspected research fraud: Difficulties of getting at the truth. BMJ. 2005;331:281–8. doi: 10.1136/bmj.331.7511.281. PMID: 16052022. [DOI] [PMC free article] [PubMed] [Google Scholar]
41. Young SN. Bias in the research literature and conflict of interest: An issue for publishers, editors, reviewers and authors, and it is not just about the money. J Psychiatry Neurosci. 2009;34:412–7. PMID: 19949717. [PMC free article] [PubMed] [Google Scholar]