Lately, the odds have confronted me with what I thought was a topic I had left behind: university rankings. Having written my PhD thesis on university rankings, you can imagine how fed up I ended up with the whole business. Still, the fascination they provoke and the stimulating discussions they lead to (usually criticizing their use) trap me from time to time. While I thought these relapses were anecdotal, funnily enough, they have concentrated in the last 15 days. First came the announcement of the 2019 edition of the Leiden Ranking, which this year includes gender and Open Access indicators; I was involved in the development of the latter. Then, just a couple of days ago, our paper on Mining university rankings was finally accepted for publication in Research Evaluation, and we uploaded an OA version of the manuscript.
A recent comment in Nature calls for an end to the uncritical reporting of p-values as the main criterion for accepting or rejecting a hypothesis. The authors argue that p-value reporting fosters a dichotomous way of thinking that leads to misinterpretation of results: a significant result does not mean there is an actual ‘difference’, nor does a lack of significance mean one should discard a potential difference. Read more “If reporting p-values is wrong, what is the alternative?”
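To see why a non-significant p-value does not rule out a real difference, here is a minimal simulation sketch (not from the Nature comment; the sample sizes, effect size, and permutation-test helper are my own illustrative choices). It draws two small samples from populations whose means genuinely differ, then computes a two-sided permutation-test p-value for the difference in means; with only 10 observations per group, the test can easily fail to reach the conventional 0.05 threshold even though the underlying difference is real.

```python
import random
import statistics

random.seed(42)

def permutation_p_value(a, b, n_perm=2000):
    """Two-sided permutation-test p-value for a difference in means.

    Repeatedly shuffles the pooled observations and counts how often a
    random relabelling produces a mean difference at least as large as
    the observed one.
    """
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Two small samples from populations with a genuine difference in means
# (0.5 standard deviations apart), only 10 observations each.
group_a = [random.gauss(0.0, 1.0) for _ in range(10)]
group_b = [random.gauss(0.5, 1.0) for _ in range(10)]

p = permutation_p_value(group_a, group_b)
print(f"p = {p:.3f}")
```

With samples this small the test is underpowered, so a p-value above 0.05 here says more about the sample size than about the absence of an effect, which is exactly the misreading the comment warns against.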