"

14

What is Open Science?

Open Science refers to a movement that aims to make scientific research, data, and publications freely accessible to everyone, promoting transparency, collaboration, and reproducibility to increase the accessibility and impact of scientific knowledge.

Why do we need Open Science?

First, making research studies and related information freely accessible has direct practical benefits. For example, having this open-source textbook for your class probably saved you a lot of money. In the same spirit, making research findings freely accessible allows everyone, not only those with journal subscriptions, to benefit from the research.

Another reason we need Open Science concerns poor research practices of the past. A large-scale study attempted to replicate findings published in some of the most prominent journals in psychology. As the table below shows, only 36% of the findings from those top-tier journals were successfully replicated. This is concerning because it suggests that some of the findings we trusted and believed in may not reflect real effects.

Journal                                                                % Findings Replicated
Journal of Personality and Social Psychology: Social                   23
Journal of Experimental Psychology: Learning, Memory, and Cognition    48
Psychological Science, social articles                                 29
Psychological Science, cognitive articles                              53
Overall                                                                36

Table 3.8.1. Percentage of published findings successfully replicated, by journal

Some of the reasons behind this lack of replicability are questionable research practices, including:

Low Statistical Power:
Low statistical power occurs when a study includes too small a sample size to detect meaningful effects. For example, imagine a study that tracks the intelligence of 10 individuals from age 20 to 50 and concludes that people become more intelligent as they age. With only 10 participants, the sample is too small to draw reliable conclusions, and there’s a high chance that the result is due to random variation or a special case rather than a true trend. As a result, the findings cannot be generalized to the broader population.
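The intelligence example above can be illustrated with a small simulation. This is a sketch with made-up numbers, not data from any real study: we generate ages and test scores that are completely unrelated, and count how often a sample still shows a "large" correlation just by chance. With only 10 participants, spuriously large correlations are fairly common; with 100, they almost never occur.

```python
import random
import statistics

random.seed(0)

def pearson_r(xs, ys):
    # Standard Pearson correlation coefficient.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def share_of_large_correlations(n, trials=5000, cutoff=0.5):
    # How often does a sample of size n show |r| >= cutoff
    # when age and score are truly unrelated?
    hits = 0
    for _ in range(trials):
        ages = [random.uniform(20, 50) for _ in range(n)]
        scores = [random.gauss(100, 15) for _ in range(n)]  # independent of age
        if abs(pearson_r(ages, scores)) >= cutoff:
            hits += 1
    return hits / trials

print(share_of_large_correlations(10))   # small sample: spurious "effects" are common
print(share_of_large_correlations(100))  # larger sample: spurious "effects" are rare
```

The same random noise that a 10-person study might report as a real trend essentially disappears once the sample is large enough, which is why underpowered studies so often fail to replicate.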

P-Hacking:
P-hacking refers to manipulating data analysis in order to obtain statistically significant results, even when there is no real effect. This can involve practices like running multiple statistical tests and only reporting the ones that yield significant results. Why is this problematic? Imagine each test has a 5% chance of producing a false positive result. If you run two tests, there’s about a 10% chance of finding at least one false positive. If you run 20 tests, the likelihood of finding at least one significant result by chance alone becomes very high—even if no real effect exists. Unfortunately, it is not uncommon for researchers to conduct many tests but only report the significant ones, which misleads readers and distorts scientific understanding.

HARKing (Hypothesizing After Results are Known):
HARKing occurs when researchers formulate or change their hypothesis after seeing the results, and then present it as if it were their original prediction. For instance, suppose you initially hypothesize that eating ice cream before a test will improve performance. After collecting data, you find the opposite result—that performance worsens. Instead of reporting the original hypothesis, you revise it to say that you predicted the decline in performance. This practice is misleading, as it presents post-hoc reasoning as if it were an a priori prediction, giving a false impression of scientific foresight and increasing the risk of false conclusions.

Falsification of Results:
This refers to the deliberate fabrication or alteration of data to produce a desired outcome. It is one of the most serious violations of scientific integrity and can have wide-reaching consequences for public trust and future research.

Publication Bias:
Publication bias occurs when studies with significant results are more likely to be published than those with null or non-significant findings. This skews the body of published research and contributes to questionable practices like p-hacking, HARKing, and even falsification. For example, imagine 11 studies are conducted to examine whether eating ice cream before a test improves performance. Ten of these studies find no significant effect, while one does. Due to publication bias, only the one “positive” study gets published. As a result, readers—including the public and other researchers—may mistakenly believe there is strong evidence supporting the effect, when in fact the overall evidence suggests otherwise.
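A simulation can show how selective publication distorts the evidence. This is a hypothetical sketch, not a model of any real literature: every simulated study compares two groups drawn from the same distribution (so the true effect is exactly zero), but only "significant" studies get published. The published record then suggests a sizable effect that does not exist.

```python
import random

random.seed(1)

def run_study(n=30):
    # Two groups drawn from identical distributions: no true effect.
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(treated) / n - sum(control) / n
    se = (2 / n) ** 0.5            # standard error of the mean difference
    z = diff / se
    return diff, abs(z) > 1.96     # "significant" at roughly alpha = .05

all_effects, published = [], []
for _ in range(2000):
    d, significant = run_study()
    all_effects.append(d)
    if significant:                # publication bias: only significant results survive
        published.append(d)

print(sum(all_effects) / len(all_effects))                 # near zero: the truth
print(sum(abs(d) for d in published) / len(published))     # much larger: the published record
```

Averaged over all 2000 studies, the effect is essentially zero; averaged over only the published ones, it looks substantial. This is why a reader who sees only the published literature, like the one "positive" ice cream study above, can be badly misled.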

Toolboxes for Open Science

As defined above, Open Science refers to the movement aimed at making research studies and their findings openly accessible to everyone. There are several ways researchers can contribute to this effort and increase the transparency and accessibility of their work.

One approach is to share research materials, analysis code, and datasets publicly. This not only fosters collaboration but also makes it easier for other researchers to replicate and verify findings. Writing detailed methods sections is another important practice, as it ensures that others can accurately understand and reproduce the procedures used in the study.

Researchers can also engage in preregistration or submit a Registered Report. In both cases, the research plan, including hypotheses, study design, and analysis strategies, is documented and submitted before data collection begins. This helps prevent questionable practices like p-hacking and HARKing by holding researchers accountable to their original research questions and plans.

Finally, practices such as publishing preprints (early versions of papers shared before peer review) and choosing open access publishing options allow research findings to be shared freely with the broader community, without financial or technical barriers. These practices make research more inclusive and ensure that scientific knowledge is accessible to students, educators, practitioners, and the general public.