
If reporting p-values is wrong, what is the alternative?

  • March 26, 2019 (updated April 17, 2019)
  • by Nicolas Robinson-Garcia

A recent comment in Nature calls for an end to the uncritical reporting of p-values as the main criterion for accepting or rejecting a hypothesis. The authors claim that p-value reporting fosters a dichotomous way of thinking which leads to misinterpretation of results. That is, a significant result does not mean there is an actual ‘difference’, nor does a lack of significance mean one should discard a potential difference.
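The point is easy to see in a quick simulation (a minimal sketch in Python, not taken from the comment itself; the sample sizes and effect size are illustrative assumptions): with a real underlying difference but a small sample, the test will frequently come out non-significant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two groups with a real difference in means (0.3 standard deviations),
# but only 20 observations each, so the test is underpowered.
a = rng.normal(loc=0.0, scale=1.0, size=20)
b = rng.normal(loc=0.3, scale=1.0, size=20)

result = stats.ttest_ind(a, b)
print(f"p = {result.pvalue:.3f}")  # frequently > 0.05 despite the real difference

# Repeating the experiment shows how often the real effect is "missed"
# at the conventional 0.05 threshold.
misses = sum(
    stats.ttest_ind(rng.normal(0, 1, 20), rng.normal(0.3, 1, 20)).pvalue > 0.05
    for _ in range(1000)
)
print(f"non-significant in {misses / 10:.0f}% of 1000 runs")
```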

“Don’t say statistically significant”

The main critique has little to do with the statistical use of p-values, and in fact, the authors of the comment do not suggest that p-values should be banned from research. Rather, it has to do with how they are interpreted: they ease the path toward straightforward answers instead of confronting uncertainty. Hence, the criticism does not so much concern the way things are done as the way things are said and shown.

As an alternative, they suggest talking about ‘compatibility intervals’ instead of confidence intervals, that is, concluding, for instance, that some results are compatible with hypothesis x or y. They also indicate that more importance should be given to the point estimate, as it is the most compatible value, and values near it are more plausible than those further away. Furthermore, the thresholds used in significance tests (usually 0.05) are arbitrary and often hard to justify. Finally, whether a result is significant or not might be due to the statistical assumptions made in the design of the model.
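Numerically nothing changes: a 95% compatibility interval is computed like a conventional confidence interval; the shift is in how the result is reported. A minimal sketch, assuming a simple two-group comparison and a normal approximation:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 50)  # control group (illustrative data)
b = rng.normal(0.3, 1.0, 50)  # treatment group

# Point estimate of the difference in means and its standard error
diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

# 95% interval, using the normal approximation (z = 1.96) for brevity
lo, hi = diff - 1.96 * se, diff + 1.96 * se

# Reporting emphasises compatibility, not a significant/non-significant
# verdict: every value in [lo, hi] is reasonably compatible with the data,
# and values near the point estimate are more plausible than those near
# the ends of the interval.
print(f"estimate = {diff:.2f}; values from {lo:.2f} to {hi:.2f} "
      f"are most compatible with the data")
```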

Searching for alternatives

If one of Nature’s pieces presented the problem, the other presented some alternatives. Here five statisticians offer some advice, and I was happy to see that one of them was J. Leek. I must confess I greatly admire the work of Jeff Leek and his group, and have followed their blog Simply Statistics for many years. Below I summarize some of their solutions:

  1. Use graphs to illustrate differences (preferably bars).
  2. Report in a non-misleading way.
  3. If theory and common sense go against a statistically significant result, question it.
  4. Be open about your data retrieval, processing and reporting practices.
  5. Report the false-positive risk (see the sketch after this list).
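On point 5, the false-positive risk is the probability that a ‘significant’ finding is in fact a false alarm, which depends on the prior plausibility of the effect and the power of the test. A rough textbook-style sketch (the prior and power below are illustrative assumptions, not values from the Nature pieces):

```python
# False-positive risk: P(no real effect | significant result), given a
# prior probability that the effect is real and the power of the test.

def false_positive_risk(alpha: float, power: float, prior_real: float) -> float:
    """Probability that a significant result is a false positive."""
    false_pos = alpha * (1 - prior_real)  # significant, but no real effect
    true_pos = power * prior_real         # significant, and the effect is real
    return false_pos / (false_pos + true_pos)

# With a 10% prior that the effect is real and 80% power, a result
# crossing the 0.05 threshold carries a surprisingly high risk:
print(f"{false_positive_risk(0.05, 0.80, 0.10):.0%}")  # ~36%
```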

Tags: p-values, statistics
