Creating an evidence base requires good research, but how can we know whether evidence is strong or weak … or even misleading? The process by which researchers conduct, document, and share their work is essential to winnowing out weak studies and to improving, honing, and disseminating strong ones. At the risk of taking the metaphor too far – can we make research so transparent that anyone can see right through it?
Recently, 23 people from the world of applied policy research joined us at CGD, along with our co-sponsor BITSS, to discuss this problem. In particular, we discussed proposals to improve the quality of evidence by making research – and the process of research – more transparent. We hosted the meeting to help us decide whether CGD should do some concerted work in this area and are hoping you will also send us your thoughts by commenting below.
Some of the ideas discussed at our roundtable, and previously at a BITSS Research Transparency Forum, included registering research plans, making data publicly available, and replicating studies.
Each of these ideas takes the conduct of research off a laptop and into the public domain. In this way, mistakes might be caught before they end up influencing macroeconomic policies or discouraging people from jogging. Journals might spot problems before deciding to publish. In fact, the discipline of publicly registering hypotheses and organizing data so that it is easy to post online helps researchers be more systematic and careful in conducting their studies. Furthermore, our new world of social media and crowdsourcing offers something researchers lacked before – the opportunity to improve the scientific process of inquiry by accelerating the pace of review and feedback.
But can we go too far? At the roundtable, people noted that exclusive use of data is one of the things that motivates researchers to collect new data. Openness of data, if not handled properly, can also violate respondents’ privacy. Replication studies are great when they’re done well and constructively, but problematic when they’re done poorly or merely to “score points.” Registering analysis plans can get in the way of learning if it means being inflexible even in the face of changing events.
Despite the qualms, participants agreed that we should move toward greater openness. The main question was how. There’s a role for journals in improving peer review, requiring that data be made available online upon publication, and agreeing to publish replication studies. Similarly, organizations that fund research – whether governments or private foundations – can give preference to proposals that commit to transparent research standards. We may need to set up third-party organizations to curate data and maintain registries. And of course, researchers and universities can adopt new norms and expectations about research, incorporate them into graduate training courses, and explicitly value transparency in personnel reviews.
So what do you think? If CGD chose to do more work on research transparency, where would it have the biggest impact? Should we try to convince funding agencies to require data sharing? Focus on open access to publications? Identify appropriate roles for third parties? Argue for governments to share surveys? Detail the techniques for curating data?
Your turn: Please tell us what your top priority would be.