Suboptimal reporting practices are widespread in preprints and published papers. One major barrier to improving reporting is the lack of an efficient way to provide authors with feedback. Members of the 'Automated Screening Working Group' have worked to address this problem by combining many screening tools into a single pipeline, called ScreenIT.

ScreenIT includes automated tools that check scientific papers for limitations sections, reporting of participants' sex, blinding, randomization, power calculations, ethics statements, retracted citations, common data visualization problems, and other factors. The tools use text mining, natural language processing, and computer vision algorithms. During the pandemic, we've used ScreenIT to automatically screen more than 17,000 bioRxiv and medRxiv COVID-19 preprints. Public reports are posted in hypothes.is and tweeted out.

This session will explore the use of automated screening to raise awareness about common reporting problems and help authors to improve their manuscripts. We'll provide an overview of the rationale for using automated screening, the strengths and limitations of this approach, and the ScreenIT pipeline structure. We'll also share results and lessons learned, including common reporting issues identified in COVID-19 preprints and author responses to the reports. Finally, we'll examine how author feedback has informed our efforts to improve ScreenIT, discuss some tools in development, and share plans for future meta-research using ScreenIT.
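To make the text-mining style of check concrete, here is a minimal, hypothetical sketch (not ScreenIT's actual code; the check names and patterns are invented for illustration) of how keyword-based screening for a few of the reporting items above might look:

```python
import re

# Hypothetical illustration only: simple keyword checks of the kind
# described above. Real tools use far more robust NLP, not bare regexes.
CHECKS = {
    "limitations_section": re.compile(r"\blimitations?\b", re.IGNORECASE),
    "ethics_statement": re.compile(
        r"\b(ethics (committee|approval)|institutional review board)\b",
        re.IGNORECASE,
    ),
    "randomization": re.compile(r"\brandomi[sz](ed|ation)\b", re.IGNORECASE),
}

def screen_text(text: str) -> dict:
    """Return, for each check, whether the manuscript text mentions it."""
    return {name: bool(pattern.search(text)) for name, pattern in CHECKS.items()}

report = screen_text(
    "Participants were randomized to two arms. The study was approved "
    "by the institutional review board. Limitations: small sample size."
)
```

A pipeline like the one described would run many such checks per preprint and assemble the results into a single feedback report for the authors.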