August 2017


Ethics in Research and Clinical Trials 

Scientific integrity and high standards are essential to the pharmaceutical sciences. 

By Mark Crawford

In recent years, there has been an upward trend in falsified results appearing in medical research papers—both willful fraud and unreliable, irreproducible research. “Research misconduct is certainly more prominent in the media these days,” states Peter Swaan, Ph.D., professor of pharmaceutical sciences at the University of Maryland in Baltimore and editor-in-chief of Pharmaceutical Research. “This does not mean that all scientists commit fraud. It is a small subsection of the scientific community. Fraudulent papers are like crime rates: they go up and down.” 

For example, the U.S. Department of Health and Human Services’ Office of Research Integrity (ORI) reported 13 cases of research fraud in 2015 and only seven in 2016. The online summaries are peppered with comments such as “respondent knowingly falsified and/or fabricated data and related images by alteration and/or reuse and/or relabeling of experimental data,” or “respondent reused and falsely relabeled Western blot gel images,” or “respondent falsified and/or fabricated data for experiments that were not performed.”1 

Other cases of ethics violations can be found online at Retraction Watch. 

“When fraud is detected after the results have been announced, the negative impact on the perception of the results of the trial in question, as well as on the public’s perception of the clinical trial enterprise itself, can be profound,” states Christopher McCurdy, Ph.D., professor of medicinal chemistry at the University of Florida in Gainesville. 

These impacts include heavy blows to the integrity of evidence-based medicine;2 critiques often focus on the conflicts of interest of the pharmaceutical companies that finance many clinical trials. Others are concerned about the shoddy quality of medical research, which some have described as a scandal.3 

“Even though publishers are finding more cases, we’re also seeing greater efforts to identify problems and clean up the published literature, and a broader awareness of the issues, across the publishing and research community,” says Maria Kowalczuk, a member of the Research Integrity team at Springer Nature, a global scientific publishing company. “We’re also seeing more creative ways to address these issues at source, such as registered reports, which can result in improvements both now and in the future.” 


Science is the pursuit of knowledge using a systematic methodology based on evidence. The hallmark of science is reproducibility. When results cannot be verified, the science behind the research becomes meaningless. The integrity and validation (rigor) of data/results are vital to moving projects forward and fundamental to “providing the public with safe and efficacious medications utilizing accurate and reproducible science,” says McCurdy. 

In contrast, if a flawed protocol becomes a basis for a clinical trial, “outcomes could even be catastrophic,” adds Kowalczuk.4 

Those who publish false or misleading data show tremendous disregard for their profession, as well as their colleagues. “Published data is relied upon by other researchers within the pharma industry, and by researchers as a whole,” says Ruchit Trivedi, associate director of formulation for Bioduro, a pharmaceutical contract research organization in San Diego, California. “Published results that are not accurate mislead other researchers and waste valuable time and funding, which is at a premium these days.” 

Ultimately, the greatest risk of falsified research data is that clinical decisions may be based upon that data, resulting in negative impacts on patient care. Not only can this harm patients, but it also delays by years the development and delivery of effective drugs to patients who are in desperate need of them.


A spectrum of behaviors and attitudes exists that compels some researchers to engage in questionable research practices, data omission, or downright fraud. It is often difficult to determine the exact reasons for falsifying data and whether individual cases are intentional or unintentional. 

“Sometimes I think it is due to arrogance—for example, they think they know the answer, but can’t prove it,” says Chris Bode, Ph.D., vice president of scientific and corporate communications for Absorption Systems in Exton, Pennsylvania. “Or it may have more to do with academic tenure, or a combination of these factors.” 

Financial pressures can contribute significantly to falsification or misrepresentation of clinical trial results. Funding is easier to get when articles with positive results are published in prestigious journals.

“Researchers falsify results to support their hypotheses, in most cases,” says Trivedi. “Negative results mean that their hypotheses are incorrect, which can hurt their funding chances at the academic level.” In fact, this is spelled out in a summary on the ORI website, where “the respondent engaged in research misconduct by falsifying and/or fabricating data in two publications. The respondent knowingly falsified and/or fabricated data and related text by altering the experimental data to support the experimental hypothesis.” 

Intense competition exists for funding—researchers who have been published in key journals have a significant advantage in winning research dollars. The “publish or perish” culture of scientific research has been cited as a factor that creates hyper-competition and misconduct, according to the Nuffield Council on Bioethics.5 

“In order to attract funding, you must convince your peers that you do significant and impactful work,” says Swaan. “Impact is measured by the journals you publish in and their impact factors. A journal with a high impact factor [Science, Nature, Cell, etc.] accepts less than one percent of submissions and also looks at news value. The more spectacular the data, the higher the chance of getting published. Hence, some people take shortcuts or commit fraud to get published there. Interestingly, the highest number of retractions takes place in these high-impact journals.”


The issue of reproducibility is a major concern for both the pharmaceutical industry and federal agencies, such as the National Institutes of Health (NIH). The main factors that affect reproducibility are deliberate falsification and nondeliberate lack of rigor. 

“Non-reproducible data is a greater time-and-money-waster than data that is deliberately falsified,” says Bode. “This is usually due to unreported differences in technique, reagents, reagent handling, and sometimes data analysis.”

Impaired reproducibility related to data analysis is often the result of a researcher’s deliberate act of data omission—using only the “good” data that confirms the hypothesis and leaving out the “bad” data that does not. Some researchers cut corners or falsify data because they “still believe that journals will only publish positive results,” says Stephanie Boughton, also a member of Springer Nature’s Research Integrity team. “While this is untrue—many journals publish so-called negative results—the perception may cause researchers to selectively report the more positive data in order to produce positive results.” 
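The statistical cost of this kind of selective reporting is easy to demonstrate. The sketch below is a hypothetical simulation (not drawn from any study discussed in this article): it runs many small experiments in which the true effect is zero, then “reports” only the runs that happen to look positive. The reported average shows a sizable effect that does not exist.

```python
import random
import statistics

random.seed(42)

def run_experiment(n=20):
    """Simulate one experiment measuring a treatment effect
    whose true value is zero (pure noise)."""
    return [random.gauss(0.0, 1.0) for _ in range(n)]

all_means, kept_means = [], []
for _ in range(1000):
    sample_mean = statistics.mean(run_experiment())
    all_means.append(sample_mean)
    if sample_mean > 0.2:          # keep only the "good" runs, omit the rest
        kept_means.append(sample_mean)

# The honest average hovers near zero; the selectively
# reported average suggests a real positive effect.
print(f"mean over all runs:      {statistics.mean(all_means):+.3f}")
print(f"mean over reported runs: {statistics.mean(kept_means):+.3f}")
```

The omitted runs are not “bad data”; they are the part of the evidence that keeps the average honest, which is why selective reporting biases the literature even when no single reported number is fabricated.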

It can be difficult to be ethical at all times; researchers can fall prey to “massaging” their data to fit their hypothesis. Younger scientists are especially vulnerable—for example, succumbing to the pressure that faculty members may put on students to bring them the right results. “I know of a case where the principal investigator kept sending a student back to the lab because she couldn’t believe that the data were real,” relates Swaan. “If you do that often enough, the student may just give you what you expect to see. This is a terrible way to advise or mentor students, but it happens, more often than you think.”


Much of the reported academic research is validated only internally (if at all), making it difficult for other researchers to reproduce the results. This also makes it extremely challenging for peer reviewers to know whether they are reviewing good data.

“Data integrity is always a concern for me because, as an editor, I never really know how good the data is,” says Robert Williams III, Ph.D., professor of molecular pharmaceutics and drug delivery at the University of Texas-Austin and editor-in-chief of AAPS PharmSciTech. “Therefore I must rely on a robust peer-review process to critically evaluate the data and results. I’m most comfortable choosing reviewers with a track record of thoroughness of their review; unfortunately I tend to overwork them, because I can rely on their opinion.”

Two key areas where editors can have an impact in assessing data are: 1) requiring replication of key experiments by the submitting lab before manuscripts are accepted for publication, and 2) specifying a more complete materials and methods section, perhaps as a supplemental section available online.

More industry and regulatory professionals are calling for researchers to make their raw data available for peer review and to share the data underlying the conclusions upon publication; this way the research community can verify it and build on it (for example, Force11).6 Some have suggested that another laboratory repeat the work to ensure reproducibility before the results can be published. “This may seem like a great idea, but it is not practical on many levels,” says McCurdy.

Many journals now publish only research where the underlying data are made available—for example, PLoS and Genome Biology. Journals such as Data Science Journal and Scientific Data are devoted to publishing datasets. Solutions are also being developed to enable controlled sharing of sensitive data.7 “Publishers are making an effort to facilitate data sharing and data deposition, and provide guidance and assistance to authors—for example, Elsevier and Springer Nature,” says Kowalczuk. “However, many researchers still resist sharing their data for various reasons. There is also a lot that can be done to incentivize researchers to share their data.”8,9 

It is increasingly difficult to find reviewers who can perform a complete and thorough review of a paper. This is due in part to a lack of qualified reviewers with formal training in peer review. “In addition, the large number of publication streams available today has increased the burden on reviewers to perform this service,” says McCurdy. “In other words, many journals are seeking reviews from those who have been established in a particular research area, so the amount of time this can involve is detrimental to the process.”

While peer reviewers can look for some of the signs of potential data falsification, peer review is not designed to identify data falsification or fabrication; it would be very difficult and time-consuming to add this role to the already weighed-down reviewer. “There is a large burden on researchers to act as peer reviewers, and they do so voluntarily,” says Boughton. “To expand their role is likely not feasible. Peer reviewers need to be able to trust that the researchers did what they say they did in order to confidently peer review the manuscript.”

“Yes, peer reviewers could do more,” adds Swaan, “but given the fact that we receive thousands of manuscripts, it becomes very hard to regulate. At the end of the day, we cannot and must not be a policing force, but rather help people question ‘strange data.’ This is already starting to become commonplace—when readers don’t trust the data in a paper, they often send a commentary to PubPeer. Type in Nature and see how many comments there are!”


Raising awareness of the problems surrounding falsified or misleading data and improving education in ethics and good practice are good ways for the pharmaceutical industry and its professional associations to tackle the serious issue of data fraud. In addition, ensuring that researchers are aware of and follow initiatives such as trial registration and publication of study protocols is also vital for reducing the frequency of data falsification or misrepresentation. “These help by creating a publicly available record of what the researchers intend to do before the study is started—for example, defining the primary and secondary outcomes, and what statistical tests will be conducted,” says Boughton. “This makes it much harder to falsify data when the researchers start to write up the research.” 

McCurdy indicates it would be beneficial if the industry would share more of its data about what has failed—not just in the clinic, but at all levels of the drug discovery and development process. This is starting to happen as new laws/requirements to share data take hold. “In fact,” says McCurdy, “NIH-funded projects must be made publicly available free of charge either immediately or after an embargo period. Increased access by the public and research communities to science publications could add additional pressure to report data accurately and validate results. That said, there will always be unethical researchers out there who are only interested in personal gain, and not the greater good.”

Transparency—for example, publishing the results of all trials and making patient-level data available, as well as independent verification—will make unethical researchers think twice about publishing falsified data. “In pharma,” notes Bode, “unintentionally misleading data can be caused by lack of target validation. With that in mind, I have long been a proponent of pre-competitive collaboration among different pharma companies which could, ideally, enable independent validation of therapeutic targets, for example.”

NIH has recently added rigor and reproducibility to its review process10 and also developed a required resource authentication plan for grant applications. “The NIH has worked with publishers to develop best practices for ensuring rigor and transparency,” says Andrew Coop, Ph.D., professor of pharmaceutical science and associate dean for academic affairs at the University of Maryland School of Pharmacy. “This is a wonderful approach to ensure that all research is sound and, whether intentional or not, that all data is reliable.”

Many funders, especially governmental/federal agencies, are including expectations of data sharing in the open-access mandates and policies they construct. “The goal is to make the results of publicly funded research publicly available, but data sharing also makes researchers more accountable and helps ensure integrity,” says Kowalczuk.

As collaboration among funding agencies, academia, and the private sector deepens, procedures for ensuring the quality of data in clinical trials, including but not limited to detection and treatment of data fraud, will also expand and mature. These procedures will be aided by the acceptance and use of advanced statistical tools.

“The development and use of cost-effective computer-based automated approaches for quality control and quality assurance will become even more important as the costs of clinical trials continue to rise,” write Stephen L. George and Marc Buyse in their 2015 Clinical Investigation article, “Data Fraud in Clinical Trials.” “Central statistical monitoring techniques will be more heavily used, since they have much to offer in ensuring the data integrity of clinical trials. While these techniques will be used primarily by trial sponsors to suggest remedial actions during the trial, they will also prove valuable to provide regulatory agencies with an overall assessment of the data quality in trials submitted as part of a marketing authorization.”11 

The authors note that these cost-effective central statistical monitoring procedures, when included as part of an overall data quality assurance program, are likely to detect data fraud and other data-quality issues at an early, treatable time point during the trial. Simply the knowledge that such procedures are in place may be enough to deter fraud.
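As a rough illustration of what one central statistical monitoring check can look like, the sketch below (hypothetical data and function names; real systems use far more robust, multivariate methods) flags any trial site whose mean measurement deviates sharply from the pooled mean of the other sites.

```python
import statistics

def flag_outlier_sites(site_data, z_threshold=5.0):
    """Flag trial sites whose mean measurement deviates from the pooled
    mean of all *other* sites by more than z_threshold standard errors.
    The threshold is deliberately conservative: a fabricated site inflates
    the pooled spread, nudging honest sites toward moderate z-scores."""
    flagged = []
    for site, values in site_data.items():
        others = [v for s, vals in site_data.items() if s != site for v in vals]
        pooled_mean = statistics.mean(others)
        pooled_sd = statistics.stdev(others)
        std_err = pooled_sd / len(values) ** 0.5  # std. error of this site's mean
        z = (statistics.mean(values) - pooled_mean) / std_err
        if abs(z) > z_threshold:
            flagged.append((site, round(z, 2)))
    return flagged

# Hypothetical systolic blood-pressure readings from three sites;
# site C's values are implausibly high and uniform.
sites = {
    "A": [120, 118, 125, 122, 119, 121, 124, 117],
    "B": [123, 119, 121, 126, 118, 120, 122, 125],
    "C": [140, 141, 140, 141, 140, 141, 140, 141],
}
print(flag_outlier_sites(sites))  # only site C is flagged
```

A fabricated series often shows too little variability as well as a shifted mean, which is why production monitoring tools also examine variances, digit preference, and correlations between variables rather than relying on any single summary statistic.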

In addition, “statistical assessment of data quality may prove quite useful during peer review,” they write. “It seems likely and desirable that in the future, journal editors will increasingly request access to the source data upon which claims are made, and that peer review will routinely include a statistical assessment of data quality.”



  1. Case Summaries webpage. The Office of Research Integrity website. Accessed June 13, 2017.
  2. You S. Perspective and future of evidence-based medicine. Stroke Vasc Neurol. 2016;e000032. doi:10.1136/svn-2016-000032.
  3. Smith R. Medical research—still a scandal. The BMJ Opinion. Published January 31, 2014. Accessed June 13, 2017.
  4. Callaway E, Butler D. Researchers question design of fatal French clinical trial. Nature. Published January 22, 2016. doi:10.1038/nature.2016.19221.
  5. Nuffield Council on Bioethics. The findings of a series of engagement activities exploring the culture of scientific research in the UK. Published December 2014. Accessed June 13, 2017.
  6. About Force11 webpage. Force11 website. Accessed June 13, 2017.
  7. Hrynaszkiewicz I, Khodiyar V, Hufton AL, Sansone S-A. Publishing descriptions of non-public clinical datasets: proposed guidance for researchers, repositories, editors and funding organisations. Res Integrity and Peer Rev. 2016;1(6). doi:10.1186/s41073-016-0015-6. Accessed June 13, 2017.
  8. Ferguson L. How and why researchers share data (and why they don’t). Wiley Exchanges: Discover the Future of Res. Published November 3, 2014. Accessed June 13, 2017.
  9. Van den Eynden V, Bishop L. Incentives and motivations for sharing research data, a researcher’s perspective. Knowledge Exchange. Published November 2014. Accessed June 13, 2017.
  10. Rigor and Reproducibility webpage. National Institutes of Health website. Accessed June 14, 2017.
  11. George SL, Buyse M. Data fraud in clinical trials. Clin Investig. 2015;5(2):161–173. doi:10.4155/cli.14.116.

Mark Crawford, a freelance writer based in Madison, Wis., specializes in science and technology.