by Winifred S. Hayes, PhD, President and CEO, Hayes, Inc.
In my previous post, “Is industry-funded research biased?” I mentioned a 2010 manuscript published in the Journal of the American Medical Association (JAMA. 2010;303(20):2058-2064). Today, I’d like to look more closely at this analysis and talk about why its results are concerning.
This article, titled “‘Spin’ of statistical non-significance,” provides an analysis of randomized controlled trials (RCTs) published in December 2006 that reported statistically nonsignificant results for the primary outcome. The authors set out to determine the level of “spin,” or distortion, in the reporting of those nonsignificant results. Of 616 articles identified, 72 met the inclusion criteria and were included in the analysis.
What types of spin were the investigators looking for? Common spin techniques include downplaying nonsignificant results for the primary outcome while emphasizing statistically significant results found through other analyses, such as within-group comparisons, secondary outcomes, or subgroup analyses.
The results are interesting. Here are some of the highlights:
- The title was reported with spin in 13 (18%) articles.
- Spin was identified in the Results section of the abstract in 27 (37.5%) articles.
- Spin was identified in the Conclusions section of the abstract in 42 (58.3%) articles, and 17 (23.6%) of those conclusions focused only on treatment effectiveness.
- In the main text, spin was identified in the Results section of 21 (29.2%) articles, the Discussion section of 31 (43.1%), and the Conclusion section of 36 (50%).
- More than 40% of the articles had spin in at least two areas of the paper.
Keep in mind that these were all RCTs, considered to be the gold standard of evidence. Unfortunately, as the authors of this analysis concluded, “In this representative sample of RCTs published in 2006 with statistically non-significant primary outcomes, the reporting and interpretation of findings was frequently inconsistent with the results.”
Even though the results of these clinical trials did not show a statistically significant difference in the primary outcome between the treatments under investigation and the controls, the results were spun in such a way as to downplay that fact. Were the results spun favorably because of financial relationships or obligations to the research sponsor? It’s impossible to answer that question without intimate knowledge of how these publications were prepared. But it is cause for concern.
Pharmaceutical and medical device manufacturers know how to market. And because these entities often position their products as the “latest and greatest,” those products often gain mass adoption well before the clinical evidence supports it. As we’ve seen from this analysis, spin happens. And unless we are aware of it—and on the lookout for it—it can affect our uptake of new health technologies.
How often do we adopt new technologies because the results have been misinterpreted or spun to look better than they actually are? What can we do to prevent distortion and misinterpretation of trial results?