Written by Matthew Eland
Matthew Eland is an Editorial Assistant at Alpharmaxim Healthcare Communications
With governments around the world desperate to stop the coronavirus pandemic, there’s never been more scrutiny of the research and clinical community. Issues such as the retraction of papers from the Lancet and New England Journal of Medicine have highlighted existing problems in the scientific process, and anti-science voices are ready to capitalise on any mistakes.
With so much at stake – not only for the millions of people waiting for vaccines but also for the reputation of science more generally – it’s crucial that research integrity is upheld. As of November 2020, there were almost 3000 ongoing and completed clinical trials listed on the World Health Organization’s International Clinical Trials Registry Platform.1
With such a large volume of studies underway, what lessons can we learn to protect the integrity of clinical trials in the future?
The importance of data sharing
There’s long been an awareness that more needs to be done to improve the sharing of research data in public health. In fields such as genetics and physics, data sharing is more widely accepted.2
In medical disciplines, it’s less commonplace. Journal publishers have sought to address this need, and increase transparency and collaboration, through schemes such as Open Access and Dryad. Meanwhile, since the outbreak of COVID-19, some journals have called for registered reports and enabled free access for research on the virus.
There is evidence to suggest that data sharing is highly beneficial. One study found that, over the long run, authors who share their data are eventually rewarded with additional citations.3 It can also help avoid situations such as those faced by two major journals earlier in 2020.
In May, the Lancet published a study that associated the use of hydroxychloroquine with decreased in-hospital survival and an increased frequency of ventricular arrhythmias.4 Concerns were then raised by researchers in Australia, who noticed inconsistencies in the data provided by Surgisphere, a US data analytics company.5 It transpired that none of the co-authors had seen the data. The Lancet paper has since been retracted,6 as has a paper from the same authors, using the same data set, in the NEJM. The Lancet has now changed its editorial policies, to ensure that ‘more than one author’ will be obligated to access and verify any data associated with the manuscript and that editors will take data-sharing statements into account when making editorial decisions.7
In one sense, the incident proves that the system works: the research community noticed the inconsistencies and banded together to investigate false or misleading information. But the problem remains that this will be used by those with vested interests to illustrate what they perceive as flaws in the scientific process.
It’s essential, therefore, that steps are taken to ensure that data sharing in healthcare studies is normalised. The amendment of editorial policies is a start.
In the BMJ, Naci et al. recommend that all trial sponsors insist on sharing data rapidly through recognised platforms (such as www.iddo.org) and that sponsors’ transparency and data-sharing practices be monitored. They also note that academic institutions should treat data sharing as a key requirement when considering promotions and tenure.8 Industry also has a part to play: a 2019 study found that only 25% of large pharmaceutical companies met the criteria set out by an adapted data-sharing measure, built from the guidelines of 10 expert bodies.9
Data sharing allows researchers to make connections that others might not have spotted. With so many trials being conducted, it’s important that data are available to be analysed in subsequent meta-analyses, in order to judge comparative effectiveness. Naci et al. also advocate for ‘extensive co-ordination’ between researchers, assessors and producers of guidance to ensure that the evidence is translated into trustworthy guidelines. They recommend a number of measures to facilitate this, including minimising duplication across multiple groups and developing a consortium to ‘…prospectively design meta-analyses in collaboration with trialists to ensure timely availability of results’.8 In short: communication across all groups will be essential.
The pitfalls of hype
Naci et al. also note that ‘the research agenda seems to be partly driven by hype and anecdote rather than informativeness and social value’. This is partly due to intense public interest; each development in the hunt for a workable vaccine – including reports of adverse events in Phase 3 trials10 – is potentially front-page news.
Such unprecedented scrutiny and its associated hype mean that researchers and organisations should be wary of making bold claims in case they’re not borne out by the evidence, as this could erode public trust. In an interview with the New Scientist, psychologist Stuart Ritchie – whose recent book examines how issues such as fraud, p-hacking, unforced errors and hype have led to a ‘reproducibility crisis’ – says that science is ‘incremental and small scale and requires a new kind of intellectual humility’. He warns against treating it as ‘an endless march of exciting, flashy findings’.11 Science is better viewed as a steady accumulation of evidence and knowledge than as a series of headline-grabbing press releases.
This would avoid pitfalls such as the one faced by researchers at the University of Oxford working on the RECOVERY trial. After discovering that dexamethasone reduced deaths in patients with COVID-19 on ventilators, they announced the finding in a press release, rather than in a peer-reviewed journal alongside the relevant data.12
Martin Landray, deputy chief investigator of RECOVERY, defended the decision, arguing that it was in the public interest to release the results as soon as possible. He cited the ‘clear benefit’ for patients on ventilators, emphasising the high statistical significance and the fact that the data were recorded as part of a prespecified analysis (one of the recommendations for an ideal trial proposed by Naci et al.).13
Nevertheless, some scientists are worried about this febrile atmosphere, especially so soon after the Lancet/NEJM retractions. Naci et al. note that after a controversial, uncontrolled study evaluating the efficacy of antimalarial drugs, a ‘disproportionately large number of studies’ were launched to corroborate the findings, skewing the research agenda.14
This pressure to act – to produce headlines without doing due diligence, even when a global pandemic sits at the top of every news feed – should not lead to a drop in standards. A rush to publish will lead to retractions and could ultimately do more harm than good.
We should expect scientists to be much more open, but also more boring
Ultimately, these are the main takeaway points during this serious, large-scale public health emergency. All advances should be data driven and subject to proper due diligence. While there’s a temptation to shout each piece of positive news from the rooftops, we must not neglect the proper checks and balances. It’s for these reasons that the maxim ‘be more boring’ should be one for our time.
1. WHO. World Health Organization’s International Clinical Trials Registry Platform (WHO ICTRP). https://clinicaltrials.gov/ct2/who_table. Accessed 24 November 2020
2. Wellcome. Sharing research data to improve public health: full joint statement by funders of health research. https://wellcome.org/what-we-do/our-work/sharing-research-data-improve-public-health-full-joint-statement-funders-health. Accessed 24 November 2020
3. Christensen G, Dafoe A, Miguel E, et al. A study of the impact of data sharing on article citations using journal policies as a natural experiment. PloS One 2019;14(12):e0225883
4. Mehra MR, Desai SS, Ruschitzka F, et al. RETRACTED: Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis. Lancet 2020;S0140-6736(20)31180-6
5. The Guardian. Questions raised over hydroxychloroquine study which caused WHO to halt trials for Covid-19. 2020. https://www.theguardian.com/science/2020/may/28/questions-raised-over-hydroxychloroquine-study-which-caused-who-to-halt-trials-for-covid-19. Accessed 24 November 2020
6. Mehra MR, Ruschitzka F, Patel AN. Retraction—Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis. Lancet 2020;395(10240):1820
7. The Editors of the Lancet Group. Learning from a retraction. Lancet 2020;396(10257):1056
8. Naci H, Kesselheim AS, Røttingen JA, et al. Producing and using timely comparative evidence on drugs: lessons from clinical trials for covid-19. BMJ 2020;371:m3869
9. Miller J, Ross JS, Wilenzick M, et al. Sharing of clinical trial data and results reporting practices among large pharmaceutical companies: cross sectional descriptive study and pilot of a tool to improve company practices. BMJ 2019;366:l4217
10. BBC. Coronavirus: Oxford University vaccine trial paused after participant falls ill. 2020. https://www.bbc.co.uk/news/world-54082192. Accessed 24 November 2020
11. New Scientist. Stuart Ritchie interview: A deep rot is turning science into fiction. 2020. https://www.newscientist.com/article/mg24732961-100-stuart-ritchie-interview-a-deep-rot-is-turning-science-into-fiction/. Accessed 24 November 2020
12. Mahase E. Covid-19: Low dose steroid cuts death in ventilated patients by one third, trial finds. BMJ 2020;369:m2422
13. Wise J, Coombes R. Covid-19: the inside story of the RECOVERY trial. BMJ 2020;370:m2670
14. Gautret P, Lagier JC, Parola P, et al. Hydroxychloroquine and azithromycin as a treatment of COVID-19: results of an open-label non-randomized clinical trial. Int J Antimicrob Agents 2020;56(1):105949