Early Research on COVID-19 is Littered with Poor Methods and Low-quality Results — a Problem for Science That the Pandemic Exacerbated but Did Not Cause

Early in the COVID-19 pandemic, researchers flooded journals with studies on the then-novel coronavirus. Many journals streamlined their peer review process for COVID-19 papers while keeping acceptance rates relatively high. The assumption was that policymakers and the public would be able to pick out valid and useful research from the enormous volume of rapidly disseminated information.

However, in my review of 74 COVID-19 articles published in 2020 in the 15 highest-ranked generalist public health journals on Google Scholar, I found that many of these studies used poor-quality methods. Several other reviews of studies published in medical journals have likewise found that much early COVID-19 research relied on weak methods.

Some of these articles have been cited many times. For example, the most-cited public health publication on Google Scholar used data from a sample of 1,120 people, mostly well-educated young women, recruited via social media over three days. Findings based on a small, self-selected convenience sample like this cannot be generalized to a broader population. And since the researchers ran more than 500 analyses of the data, many of the statistically significant results are likely to be chance findings. Yet this study has been cited more than 11,000 times.
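To see why, consider the arithmetic of multiple comparisons: at the conventional p < 0.05 threshold, about 5% of tests run on pure noise will come up "significant," so 500 independent tests would be expected to yield roughly 25 spurious findings even if no real effects exist. The following sketch is a hypothetical illustration of that point, not a reconstruction of the study's actual analyses; the group sizes simply assume an even split of a 1,120-person sample.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_tests = 500      # roughly the number of analyses run in the cited study
alpha = 0.05       # conventional significance threshold
n_per_group = 560  # hypothetical even split of the 1,120-person sample

false_positives = 0
for _ in range(n_tests):
    # Draw both groups from the SAME distribution: no real effect exists.
    group_a = rng.normal(size=n_per_group)
    group_b = rng.normal(size=n_per_group)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < alpha:
        false_positives += 1

# Expect about n_tests * alpha = 25 "significant" results from noise alone.
print(f"'Significant' results from pure noise: {false_positives} of {n_tests}")
```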

A high citation count means that many other researchers have mentioned the article in their own work. But citation counts are not strongly related to research quality, because researchers and journals can game and manipulate these metrics. When low-quality research is frequently cited, the likelihood grows that poor evidence will be used to inform policy, further eroding public trust in science.

Methodology is important

I am a public health researcher with a long-standing interest in research quality and integrity. That interest stems from the belief that science has helped solve important social and public health problems. Unlike the anti-science movement, which spreads disinformation about successful public health measures such as vaccines, I believe rational criticism is fundamental to science.

The quality and integrity of research largely depend on its methods. Each type of research design must have certain features in order to provide valid and useful information.

For example, researchers have known for decades that studies of the effectiveness of an intervention require a control group to know whether any observed effects can be attributed to the intervention.

Systematic reviews, which bring together data from existing studies, should describe how the investigators identified which studies to include, assessed their quality and extracted the data, and their protocols should be preregistered. These features help ensure that the review covers all available evidence and makes clear to the reader what is worth paying attention to and what is not.

Certain types of studies, such as one-time surveys of convenience samples that are not representative of the target population, collect and analyze data in a way that does not allow researchers to determine whether a variable caused a particular outcome.

[Video: Conducting a Systematic Literature Review (https://www.youtube.com/embed/WUErib-fXV0)]

All research designs have standards that researchers can consult. But complying with those standards slows research down. Having a control group doubles the amount of data that must be collected, and identifying and thoroughly reviewing every study on a topic takes more time than superficially reviewing some of them. Representative samples are harder to generate than convenience samples, and collecting data at two points in time is more work than collecting it all at once.

Studies comparing COVID-19 articles with non-COVID-19 articles published in the same journals found that COVID-19 articles tended to use lower-quality methods and were less likely to adhere to reporting standards. COVID-19 papers also rarely included predetermined hypotheses or plans for how the data would be analyzed and the findings reported. This meant there were no safeguards against dredging the data for "statistically significant" results that could be selectively reported.

Such methodological issues were likely overlooked during the significantly shortened peer review process for COVID-19 papers. One study estimated the average time from submission to acceptance of 686 COVID-19 articles at 13 days, compared with 110 days for 539 pre-pandemic articles from the same journals. In my own research, I found that two online journals that published a very large number of methodologically weak COVID-19 articles had peer review processes of about three weeks.

Publish or perish culture

These quality control issues were already present before the COVID-19 pandemic. The pandemic has simply sent them into overdrive.

Journals tend to favor positive, “new” findings: that is, results that demonstrate a statistical relationship between variables and supposedly identify something previously unknown. Because the pandemic was in many ways new, it provided an opportunity for some researchers to make bold claims about how COVID-19 would spread, what its effects on mental health would be, how it could be prevented, and how the disease could be treated.

[Image: Many researchers feel pressure to publish articles in order to advance their careers. South_agency/E+ via Getty Images]

Academics have operated for decades under a "publish or perish" incentive system, in which the number of articles they publish figures into the metrics used to evaluate hiring, promotion and tenure. The flood of mixed-quality COVID-19 information gave researchers an opportunity to boost their publication counts and citation metrics as journals sought out and quickly reviewed COVID-19 articles, which were more likely to be cited than non-COVID articles.

Online publishing has also contributed to the deterioration of research quality. Traditional academic publishing was limited in the number of articles it could produce because journals were packaged as printed, physical documents, usually issued once a month. In contrast, some of today's online mega-journals publish thousands of articles per month. Low-quality studies rejected by reputable journals can still find a publisher willing to put them out for a fee.

Healthy criticism

Criticizing the quality of published research carries risks. It can be misinterpreted as adding fuel to the raging fire of anti-science. My answer is that a critical and rational approach to the production of knowledge is in fact fundamental to science itself and to the functioning of an open society capable of solving complex problems such as a global pandemic.

Publishing a large amount of misinformation disguised as science during a pandemic obscures true and useful knowledge. At worst, this could lead to poor public health practices and policies.

Science done right produces information that allows researchers and policymakers to better understand the world and test ideas about how to improve it. That means looking critically at the quality of a study's design, statistical methods, reproducibility and transparency, not at the number of times it has been cited or tweeted about.

Science depends on a slow, thoughtful, and rigorous approach to collecting, analyzing, and presenting data, especially if it wants to provide information for effective public health policy. Similarly, thoughtful and rigorous peer review is unlikely for papers that appear in print only three weeks after they are first submitted for review. Disciplines that reward quantity of research over quality are also less likely to protect scientific integrity during crises.

[Image: Rigorous science requires careful deliberation and attention, not haste. Montage/Stone via Getty Images]

Public health relies heavily on disciplines now facing replication crises, such as psychology, biomedical science and biology. It resembles these disciplines in its incentive structures, research designs and analytical methods, and in its inattention to transparent methods and replication. Much of the public health research on COVID-19 suggests the field suffers from similarly poor-quality methods.

By reexamining how it rewards its scientists and assesses their science, the discipline can better prepare for the next public health crisis.

This article is republished from The Conversation, an independent nonprofit organization providing facts and analysis to help you understand our complex world.

It was written by: Dennis M. Gorman, Texas A&M University.

Dennis M. Gorman does not work for, consult with, own stock in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

