
Do Financial Conflicts of Interest Bias Research? An Inquiry into the “Funding Effect” Hypothesis


Sheldon Krimsky

It is generally understood that the production of scientific knowledge is accompanied by quality controls designed to filter out errors and bias.  Unlike other ways of establishing belief, science is considered a self-correcting enterprise in which truth claims are kept open to new evidence.  No one doubts, however, that bias can enter into scientific work.  Sometimes the bias is built into the methodology, and sometimes its subtlety can elude even the most careful investigator.  And while protection against and response to bias have been a central part of the development of science, the connection between bias and conflicts of interest (COIs) first drew attention during the last two decades of the 20th century.  Social scientists began investigating the relationship between financial COIs and bias in the mid-1980s, when author disclosures of financial relationships were in their infancy.  The concept of a “funding effect” from COI was coined after a significant body of research found that study outcomes differed significantly between privately funded and publicly funded drug studies (1, 2).


Beginning in the mid-1980s, a group of new studies tested the hypothesis that the source of funding is correlated with the outcome of drug safety and efficacy research.  Bodil Als-Nielsen et al. tested the hypothesis that industry-sponsored drug trials tend to draw pro-industry conclusions. The authors found that “conclusions were significantly more likely to recommend the experimental drug as treatment of choice in trials funded by for-profit organizations alone compared with trials funded by non-profit organizations” (3, p. 924). The authors ruled out both the magnitude of the treatment effect and the occurrence of reported adverse events as explanations of the industry-favored outcomes.  They also noted that “trials funded by for-profit organizations had better methodological quality than trials funded by nonprofit organizations regarding allocation concealment and double blinding” (3, p. 925). The association they observed between funding and outcome held whether the sponsor’s contribution was minimal (provided the drug) or maximal (funded the study).  Bias can enter into any or all stages of a study: the methodology, the execution of the study, the interpretation of results, and the recommendations (whether the experimental drug is better than the existing drug). It is also possible that industry-funded studies, having been identified as being of higher quality, have gone through more internal (company-sponsored) study and analysis than one would expect of a non-profit organization.


John Yaphe et al. selected for their study randomized controlled trials (RCTs) of drugs or food products with therapeutic properties published between 1992 and 1994.  Of the 209 industry-funded studies, 181 (87%) had positive findings and 28 (13%) had negative findings, while of the 96 non-industry-funded studies, 62 (65%) had positive findings and 34 (35%) had negative findings.  The authors note that “the higher frequency of good outcomes in industry supported trials may stem from a decision to fund the testing of drugs at a more advanced stage of development” (4, p. 567).  The methodologies of these trials may also differ, since comparison of new drugs with a placebo may be more prevalent among industry-financed studies than among non-industry-financed studies.
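
As a rough illustration of the strength of this association, the counts quoted above can be folded into a crude, unadjusted odds ratio.  The short Python sketch below is only an arithmetic aid based on those published counts; it is not part of the Yaphe et al. analysis and makes no adjustment for trial quality or other covariates.

    # Crude odds ratio from the trial counts reported by Yaphe et al. (4).
    # Illustration only: a simple two-by-two tabulation, not a re-analysis
    # of the original trial-level data.
    industry_positive, industry_negative = 181, 28   # industry-funded trials
    other_positive, other_negative = 62, 34          # non-industry-funded trials

    odds_industry = industry_positive / industry_negative   # about 6.5
    odds_other = other_positive / other_negative            # about 1.8
    odds_ratio = odds_industry / odds_other

    print(f"Crude odds ratio (industry vs. non-industry): {odds_ratio:.1f}")
    # Prints roughly 3.5: in this simple tabulation, industry-funded trials
    # had about 3.5 times the odds of reporting a positive finding.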

 

Paula Rochon et al. investigated the relationship between reported drug performance and manufacturer association. All the trials in their study had a “manufacturer association,” because, as they reported, there was a scarcity of non-manufacturer-associated trials.  They therefore could not compare trials funded or supported by private companies with those funded or supported by non-profit organizations.  The results of the study showed that “the manufacturer-associated drug is always reported as being either superior to or comparable with the comparison drug” and that “these claims of superiority, especially with regard to side-effect profiles, are often not supported by trial data” (5, p. 158). Of course, it is logically possible that head-to-head testing of new versus old drugs always shows the new drug to be superior.

 

Marcia Angell explains the process with an illustration from statins, drugs that lower blood cholesterol levels: “There is little reason to think one is any better than another at comparable doses.  But to get a toehold in the market, me-too statins were sometimes tested for slightly different outcomes in slightly different kinds of patients, and then promoted as especially for those uses” (6, p. 81).  Also, some tests use different doses of the new drugs and compare them to lower doses of the old drugs.  This is corroborated by Rochon et al. in their study: “In the majority of cases in which the doses were not equivalent, the drug given at the higher dose was that of the supporting manufacturer” (5, p. 161).  The authors surmise that higher doses “bias the study results on efficacy in favor of the manufacturer-associated drug” (5, p. 161).  This illustrates that bias may enter into the “funding effect” in subtle and complex ways related to how the trial is organized.

 

Friedman and Richter investigated whether sources of funding could be correlated with reported findings.  The authors observed a “strong association between positive results and COI among all treatment studies” (7, p. 53).  Another notable finding was that the probability of reported negative results where an author had a COI was very low; in other words, the authors conclude, “the odds are extremely small that negative results would be published by authors with COI” (7, p. 53). For social scientists studying the funding effect, the issue is less one of bias within the reported studies than of bias from the failure to report negative studies, that is, from not having the complete scientific record.

 

Not all studies testing the hypothesis of an association between funding source and trial outcome or study quality reached positive findings.  Tammy Clifford et al. did not find a statistically significant association between reporting quality and funding source or between reporting quality and trial outcome (8).  Perhaps one conclusion can be drawn: of the 100 trials examined, 66% were funded in whole or in part by industry, and 67% favored the new therapy.  Thus, it appears that industry trials dominate and drive the advocacy of new drugs over old treatments, even before author conflicts of interest are considered.

 

The studies of funding effects in pharmaceutical products include many types of drugs in order to develop aggregate statistics.  By eliminating product variability, investigators of the funding effect can more precisely judge the possible linkage between the source of funding and outcome findings such as product quality, safety, or economic efficiency.  Deborah Barnes and Lisa Bero investigated whether review articles on the health effects of passive smoking reached conclusions that are correlated with the authors’ affiliations with the tobacco industry (9).  Since tobacco is a relatively homogeneous product, differences in outcome cannot be attributed to product variability or company pre-testing.  The authors found that 94% (29/31) of reviews by tobacco-industry-affiliated authors concluded that passive smoking is not harmful, compared with 13% (10/75) of reviews by authors without tobacco-industry affiliations. The influence of tobacco-industry affiliation on the finding of “safety of passive smoking” was very strong: “The odds that a review article with tobacco industry-affiliated authors would conclude that passive smoking is not harmful were 88.4 times higher than the odds for a review article with non-tobacco affiliated authors, when controlling for article quality, peer review status, article topic, and year of publication” (9, p. 1569). The authors reported that the “only factor that predicted a review article’s conclusion was whether its author was affiliated with the tobacco industry” (9, p. 1570).
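
The 88.4 figure is an adjusted estimate, but the raw counts quoted above point in the same direction.  As a rough check, the short Python sketch below computes the crude, unadjusted odds ratio from those counts alone; it is an arithmetic illustration, not a reproduction of the Barnes and Bero model.

    # Crude odds ratio from the raw review counts reported by Barnes and
    # Bero (9): 29 of 31 industry-affiliated reviews and 10 of 75
    # non-affiliated reviews concluded that passive smoking is not harmful.
    # Illustration only; the published 88.4 estimate additionally controls
    # for article quality, peer review status, topic, and year.
    aff_not_harmful, aff_harmful = 29, 31 - 29      # industry-affiliated reviews
    non_not_harmful, non_harmful = 10, 75 - 10      # non-affiliated reviews

    crude_odds_ratio = (aff_not_harmful / aff_harmful) / (non_not_harmful / non_harmful)
    print(f"Crude odds ratio: {crude_odds_ratio:.0f}")
    # Prints roughly 94, of the same order as the adjusted 88.4 reported
    # in the study.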

 

This analytical review of studies that investigate an association between funding source and study conclusions has revealed several important results.  First, there is sufficient evidence in drug efficacy and safety studies to conclude that the funding effect is real. Industry-sponsored trials are more likely than trials sponsored by non-profit organizations, including government agencies, to yield results that are consistent with the sponsor’s commercial interests.  Second, there is some circumstantial evidence that this effect arises from one of two possible causes: either the drugs sponsored by industry have gone through more internal testing, with less effective drugs screened out, or the methods used in industry-sponsored drug testing have a structural bias that makes positive outcomes more likely.

 

What I have shown in this paper is that the funding effect, namely a correlation between research outcome and funding source, is not definitive evidence of bias but rather prima facie evidence that bias may exist. Additional analysis of study methodology, interpretation of the data, and comparison of the products studied can resolve whether an observed funding effect is driven by scientific bias.

References

 

1. Krimsky, S. (2006). Publication bias, data ownership, and the funding effect in science: Threats to the integrity of biomedical research. In: Wagner, W., & Steinzor, R. (Eds.), Rescuing science from politics: Regulation and the distortion of scientific research. Cambridge: Cambridge University Press.

2. Krimsky, S. (2010). Combating the funding effect in science: What's beyond transparency? Stanford Law & Policy Review, 21, 101-123.

3. Als-Nielsen, B., Chen, W., Gluud, C., & Kjaergard, L. L. (2003). Association of funding and conclusions in randomized drug trials: A reflection of treatment effect or adverse events? JAMA, 290, 921-928.

4. Yaphe, J., Edman, R., Knishkowy, B., & Herman, J. (2001). The association between funding by commercial interests and study outcome in randomized controlled drug trials. Family Practice, 18, 565-568.

5. Rochon, P. A., Gurwitz, J. H., Simms, R. W., Fortin, P. R., Felson, D. T., Minaker, K. L., et al. (1994). A study of manufacturer-supported trials of nonsteroidal anti-inflammatory drugs in the treatment of arthritis. Archives of Internal Medicine, 154, 157-163.

6. Angell, M. (2004). The Truth About the Drug Companies. New York: Random House.

7. Friedman, L. S., & Richter, E. D. (2004). Relationship between conflicts of interest and research results. Journal of General Internal Medicine, 19, 51-56.

8. Clifford, T. J., Barrowman, N. J., & Moher, D. (2002). Funding source, trial outcome and reporting quality: Are they related? Results of a pilot study. BMC Health Services Research, 2, 18.

9. Barnes, D. E., & Bero, L. A. (1998). Why review articles on the health effects of passive smoking reach different conclusions. JAMA, 279, 1566-1570.