Miloš Fišar: Transparency in publishing increases the credibility of research

24 Jun 2024 Jana Sosnová

Miloš Fišar in the experimental laboratory at the Faculty of Economics and Administration, Masaryk University | Photo: Jitka Janů

Together with colleagues from universities in Vienna, Paris and Dallas, Dr. Miloš Fišar has confirmed that applying the principles of open science makes sense. In collaboration with more than seven hundred volunteers, they verified the reproducibility of the results of nearly 500 studies. In May, Dr. Fišar received the Masaryk University Rector's Award for Outstanding Research Achievements for Young Scientists under 40 for this work.

Could you explain what reproducibility of research means?

Research is reproducible if its results can be duplicated using the same methods and procedures. This means that another researcher repeats the analysis with the original data, methods and analytical procedures to see whether they obtain the same results. For example, if a researcher conducts an economic experiment on contributions to public goods and finds that women are more altruistic than men, then anyone who analyses the same data using the same methods should arrive at the same results.

Another important concept is replicability, which checks whether findings hold across different contexts and whether they are influenced by the specific conditions of the original research. Using the previous example of an experiment on contributions to public goods, a replication occurs when someone conducts the same experiment with a different group of participants and reaches the same conclusion, i.e. that women are more altruistic than men in contributions to public goods.

Reproducibility and replicability both contribute to the credibility and reliability of scientific research. If a study is both reproducible and replicable, its results are not a one-off occurrence or an error, and they can be used to inform further research, public policy or practical action.

Why is reproducibility in particular so important?

On the one hand, it serves to verify published results and thus prevents errors in data processing and analysis; on the other hand, it helps ensure the scientific and moral integrity of researchers. It also supports follow-up research and can contribute to greater public confidence in science as a whole.

What specifically did you look at in the study?

We verified the reproducibility of almost 500 articles published in the journal Management Science, which on 1 June 2019 introduced a policy requiring authors to publish replication packages containing the data and analytical code used in their studies. In our analysis we focused specifically on articles published before and after the introduction of this policy.

Our research showed that the code and data disclosure policy significantly improved the reproducibility of research results, despite limitations caused by the unavailability of some data, for example due to proprietary rights or subscriptions. Of the articles for which data were available, 95% were fully or largely reproducible. If we also include articles with incomplete data, the overall reproducibility rate after the introduction of the new policy is about 68%, compared with only 55% in the period before the policy was introduced.

The study was not commissioned by the journal; you did it on your own initiative. What motivated you?

Together with my colleagues Ben Greiner and Christoph Huber from the Vienna University of Economics and Business, Elena Katok from the University of Texas at Dallas and Ali Ozkes from SKEMA Business School, I have been working as a Data & Code editor for Management Science since 2020. This means that we verify that the data and analysis code provided in replication packages are complete. Over the last four years, we have checked more than 1,000 packages in this way. Unfortunately, with such a large number of studies, it is not within our power to verify the reproducibility of each one individually. We saw an opportunity to examine reproducibility and to check whether introducing the policy had made any difference at all. We presented our idea to the Editor-in-Chief, who agreed on the condition that the resulting study would go through a full peer review process like any other journal article.

What role did volunteers play in your study?

The study is an example of crowdsourced research: a total of 733 volunteers took part in the project, each reviewing up to two replication packages. Without the volunteers, to whom I owe a huge “thank you”, this study would never have happened. Researchers at various stages of their careers were involved, collectively devoting more than 6,500 hours of their time to verifying the reproducibility of the studies. As a result, the study covered a large number of articles, and its conclusions are correspondingly robust.

Do you see this "community effort" as important for the future of science?

Yes, I think crowdsourcing is important. In the social sciences in particular, we have in recent years been talking about the so-called replication crisis, in which attempts to verify the conclusions of many influential studies fail. By involving a wide range of researchers and using a common methodology, it is possible to systematically replicate or reproduce published studies and thereby strengthen the integrity of research.

Can the principle of reproducibility be applied equally in quantitative and qualitative research?

The principle of reproducibility is key to both types of research. Reproducibility is mainly about sharing the complete material used in a particular article, project or, more generally, in research. In quantitative research, which was the subject of our study, this involves, for example, sharing data, analytical code or experimental instructions. Qualitative research, by contrast, is often based on the analysis of text or visual data, so there it is usually about increasing the transparency of the entire analytical process and providing comprehensive documentation of how the research was carried out.

What steps can be taken to increase the credibility of research in the social sciences and specifically in economics?

In my opinion, the key to increasing the credibility of research is to emphasise transparency and openness from the very beginning, even before any research starts. The first step should be preregistration of the research design, including the hypotheses, the methods and a preliminary plan for data analysis. This plan should be stored in a repository, so that it is clear what the researcher intends to investigate and how. It is also important to document the data collection itself and then share the collected data openly, so that anyone can verify the conclusions of the published research. I also consider the open publication of studies important, so that the general public can access research that is often publicly funded. Popularisation of science should be a complementary tool, even though we as scientists are often rather reticent about it.

Miloš Fišar is an experimental economist working at the Department of Public Economics and the MUEEL laboratory at Masaryk University. He also holds the position of Code & Data Associate Editor at the renowned Management Science journal. Previously, he worked at WU Vienna. His research focuses on behavioural and experimental economics, particularly on topics related to social preferences.
