Hanne K. Sjølie

Normative bias in research

Sjølie H. K. (2024). Normative bias in research. Silva Fennica vol. 58 no. 3 article id 24034. https://doi.org/10.14214/sf.24034

Author Info
  • Sjølie, Inland Norway University of Applied Sciences, Faculty of Applied Ecology, Agricultural Sciences and Biotechnology, Department of Forestry and Wildlife Management, Campus Evenstad, Postboks 400, 2418 Elverum, Norway. E-mail sjoliehannek3@gmail.com

Received 31 May 2024 Accepted 31 May 2024 Published 3 June 2024


Creative Commons License CC BY-SA 4.0

Fraser et al. (2018) published a paper on the prevalence of questionable research practices (QRPs) within the ecological and evolutionary sciences. These include practices such as HARKing (Hypothesizing After Results are Known), p-hacking, and cherry-picking of results. In HARKing, researchers present unexpected findings that emerged during the research process as if they had been hypothesized from the beginning. P-hacking denotes the use of various analytical maneuvers that share the aim of maximizing significant results. Cherry-picking refers to selecting which results to present based on their significance. Through a survey of 807 researchers, Fraser et al. (2018) found that 42%–64% had used these practices.
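To see why p-hacking pays off, consider a minimal simulation sketch (my own illustration, not drawn from Fraser et al.'s survey): when a researcher tries many analyses on pure noise, the chance that at least one of them comes out 'significant' grows rapidly with the number of attempts. The function name and parameter values below are hypothetical choices for the illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def chance_of_false_positive(k, n=30, alpha=0.05, n_sims=5000):
    """Share of simulations in which at least one of k independent
    null comparisons on pure-noise data yields p < alpha."""
    hits = 0
    for _ in range(n_sims):
        for _ in range(k):
            a = rng.normal(size=n)
            b = rng.normal(size=n)  # same distribution: every null is true
            if stats.ttest_ind(a, b).pvalue < alpha:
                hits += 1
                break  # the 'hacker' stops at the first significant result
    return hits / n_sims

for k in (1, 5, 14):
    print(f"{k:2d} analyses tried -> P(at least one 'significant') "
          f"~ {chance_of_false_positive(k):.2f}")
# Roughly 0.05, 0.23 and 0.51: with enough attempts, noise alone
# all but guarantees a publishable-looking result.
```

The arithmetic behind the simulation is simply 1 − (1 − α)^k; each extra analysis is another lottery ticket for a spurious finding.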

These practices may be steered by, among other things, scientists’ normative biases. Normative biases may also affect which literature is cited and color the discussion, the conclusions, and the dissemination of findings to peers and to the public. This spring, we published a literature review in this Journal (Tange et al. 2024) that left me pondering normative biases. The paper synthesized the impacts on biodiversity of timber harvesting carried out with measures commonly taken in boreal forests to conserve biodiversity. Two thirds of the 178 results showed no negative effect of harvesting with these measures on biodiversity, compared to not harvesting.

This result was a surprise to us, but so was the way results were presented and discussed in some of the papers with unexpected findings. Unexpected findings here mean non-significant or even positive effects, i.e. that stands harvested with conservation measures had higher biodiversity than unharvested stands, contrary to the hypotheses. Such findings, even when they were significant and constituted a paper’s main results, were in some cases played down or veiled in the conclusions and abstract. Often, the lack of clearly stated hypotheses, or of conclusions about them, made the results hard to detect.

We also found that the tone of the discussion differed depending on the results. Unexpected findings were subject to considerable rationalization, with various possible explanations drawn up, and as a reader I was in some cases left with the feeling that the authors did not really believe their own findings.

Thinking critically about our own practice, how do we react to results that run counter to our expectations? Who could honestly say that they have, in every paper, analyzed and reanalyzed expected results with the same dedicated scrutiny as unexpected ones? Are we not just a little too happy and relieved when we have tidy, well-behaved results, ready to form the core of a neat narrative?

Within the neo-classical economics paradigm, in which much of my core field, forest economics, is rooted, research rests on the principal assumption of rational agents who maximize utility and profit. However, the circulating jokey phrase about ‘torturing the data until they confess’ (attributed to the British economist Ronald Coase) has a serious undertone, and p-hacking is widespread in economics (Brodeur et al. 2022, 2024), as in ecology (Kimmel et al. 2023). Across economics and ecology, what do we do if hypotheses in line with theory are not supported by the findings? Are we ready to suggest that the hypotheses be rejected, or would we instead rationalize the outcome?

Inherently, science is about questioning established theories and truths. Unexpected findings should be most welcome because of their potential to put forward new ideas and act as triggers that move science forward. As authors, reviewers, editors, and peers, we should embrace the idea that a paper without significant results, or with significant results in the unexpected direction, may be more important for science than another paper that reinforces established ideas.

It is probably as hard to convince an ecologist to reject the hypothesis that harvesting negatively impacts biodiversity as it is to make a neo-classical economist reject the idea of agents’ rational behavior. Even if many findings do not support them, the neo-classical theories have proved useful for explaining a wide variety of markets and agent behaviors. However, out of the criticism of their limitations in explaining behavior, the field of behavioral economics, which considers multiple socio-psychological factors in decision-making, emerged. Today, behavioral economics is mainstream, but it has taken decades since its modern dawn to reach this recognition (Geiger 2017).

What should the scientific community do with normative biases? First, scientists have to acknowledge that, as humans, we are biased by nature. Unconscious biases are the tricky ones, but every researcher has an obligation to try to be conscious of how our own biases could affect the research: how we set hypotheses, which literature we choose to cite, how we present and discuss results, which implications we draw, and how we ‘sell the story’. Even if neutrality is unattainable, it is an important ideal.

Open data is an important part of increasing credibility and reproducibility but is not enough. Reviewers and editors are central in several ways. Preregistering research designs and hypotheses may counteract confirmation biases (Nosek et al. 2018), but it comes at a cost in time, may hinder the creative process (McDermott 2022), and seems to fail in damping p-hacking (Brodeur et al. 2024). Editors and reviewers should ask not only for clearly laid out hypotheses but also for equally clear conclusions on the outcomes of testing them. Reviewers and editors should give space to papers based less on catchy results and more on sound scientific practice. Non-significant results, or results against established truths, can be a first step towards the development of new theories. At the end of the day, we as scientists should remind ourselves and our peers that our primary motivation should be to seek better understanding, which will necessarily lead to questioning established truths, and that those who go in front should be rewarded.

Hanne K. Sjølie
Subject Editor for Forest Economics and Policy

References

Brodeur A, Cook N, Heyes A (2022) We need to talk about Mechanical Turk: what 22,989 hypothesis tests tell us about publication bias and p-hacking in online experiments. IZA Discussion Paper 15478. https://doi.org/10.2139/ssrn.4188289.

Brodeur A, Cook N, Neisser C (2024) p-hacking, data type and data-sharing policy. Econ J 134: 985–1018. https://doi.org/10.1093/ej/uead104.

Fraser H, Parker T, Nakagawa S, Barnett A, Fidler F (2018) Questionable research practices in ecology and evolution. PLoS One 13, article id e0200303. https://doi.org/10.1371/journal.pone.0200303.

Geiger N (2017) The rise of behavioral economics: a quantitative assessment. Soc Sci Hist 41: 555–583. https://doi.org/10.1017/ssh.2017.17.

Kimmel K, Avolio ML, Ferraro PJ (2023) Empirical evidence of widespread exaggeration bias and selective reporting in ecology. Nat Ecol Evol 7: 1525–1536. https://doi.org/10.1038/s41559-023-02144-3.

McDermott R (2022) Breaking free: how preregistration hurts scholars and science. Polit Life Sci 41: 55–59. https://doi.org/10.1017/pls.2022.4.

Nosek BA, Ebersole CR, DeHaven AC, Mellor DT (2018) The preregistration revolution. Proc Natl Acad Sci 115: 2600–2606. https://doi.org/10.1073/pnas.1708274114.

Tange AC, Sjølie HK, Austrheim G (2024) Effectiveness of conservation measures to support biodiversity in boreal timber-production forests. Silva Fenn 58, article id 23057. https://doi.org/10.14214/sf.23057.

