851-0517-06L  Meta-science: Evaluating and Combining Scientific Evidence

Semester: Spring Semester 2017
Lecturers: M. van Assen
Periodicity: non-recurring course
Language of instruction: English

Abstract: This course provides tools and improves intuitions for evaluating and combining scientific evidence. After this course, participants will have a basic understanding of meta-analysis, publication bias, replication, traps of statistical intuition, questionable research practices, research misconduct, and properties of big data.
Objective: This course may be relevant to students and researchers of all disciplines who want to know more about (i) evaluating and interpreting scientific findings, and (ii) recognizing and possibly adjusting for problems when interpreting these findings.
One of the most crucial skills of scientists is to evaluate and interpret scientific findings. This is extremely challenging because of problems with how the system "science" currently functions. Most problematic are publication bias, the lack of replications, questionable research practices, and research misconduct.
Understanding basic statistics and meta-analysis is necessary for evaluating and interpreting scientific findings. Hence I start by recapitulating basic statistics (including hypothesis testing and power analysis) and explaining meta-analysis. I will illustrate power analysis and meta-analysis with easy-to-use programs.
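As a minimal illustration of the meta-analytic idea (a sketch, not one of the course's programs), a fixed-effect meta-analysis pools study estimates by weighting each study inversely to its sampling variance. The effect sizes and standard errors below are hypothetical:

```python
import math

# Hypothetical study results: effect size estimates with their standard errors
effects = [0.30, 0.15, 0.45, 0.20]
ses     = [0.10, 0.12, 0.15, 0.08]

# Fixed-effect meta-analysis: weight each study by 1/SE^2 (inverse variance)
weights   = [1 / se**2 for se in ses]
pooled    = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

Note that the pooled standard error is smaller than that of any single study, which is why combining studies yields more precise conclusions than any individual finding.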
A major problem of science is publication bias, i.e., mainly statistically significant findings are published. I will explain its (horrible) effects, the evidence for it, and ways to detect and correct for it, both in entire fields of science and in individual meta-analyses.
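One such horrible effect can be shown in a small simulation (my own illustrative sketch, with made-up parameters): when only significant results are published, the published effect sizes systematically overestimate the true effect.

```python
import math
import random
import statistics

random.seed(1)
TRUE_D, N, SIMS = 0.2, 20, 2000   # small true effect, small per-group samples

published = []
for _ in range(SIMS):
    # Two-group study: observed standardized effect is approximately
    # Normal(TRUE_D, sqrt(2/N))
    d_obs = random.gauss(TRUE_D, math.sqrt(2 / N))
    z = d_obs / math.sqrt(2 / N)
    if abs(z) > 1.96:             # "published" only if statistically significant
        published.append(d_obs)

print(f"true effect:            {TRUE_D}")
print(f"mean published effect:  {statistics.mean(published):.2f}")
print(f"proportion significant: {len(published) / SIMS:.2f}")
```

With these settings, the mean published effect is several times the true effect of 0.2, because only the studies that (by chance) observed a large effect cross the significance threshold.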
A second problem is the lack of replications, which hampers the correction of the scientific literature (i.e., false positives remain undetected). I will present evidence on the reproducibility of science in different fields, and how to assess reproducibility. I explain how to estimate effect size based on both an original statistically significant finding (which is likely biased because of publication bias) and a replication study.
What aggravates the problem of evaluating scientific findings is researchers' poor statistical intuition. I'll provide some examples showing that all people, from lay people to students and experts, misinterpret statistical results.
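One classic trap (a standard textbook illustration, not necessarily one of the course's examples): a statistically significant result is often taken to mean the effect is almost certainly real, but when true effects are rare this is far from the case. With a prior probability of 0.10 that a tested hypothesis is true, power 0.80, and alpha 0.05:

```python
prior, power, alpha = 0.10, 0.80, 0.05

# Positive predictive value: probability a significant result reflects a true effect
true_pos  = prior * power            # true effects correctly detected
false_pos = (1 - prior) * alpha      # null effects wrongly flagged as significant
ppv = true_pos / (true_pos + false_pos)
print(f"P(effect is real | significant) = {ppv:.2f}")
```

Here the answer is 0.64: roughly one in three significant findings would be a false positive, far from the "95% certain" that many readers intuit from p < .05.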
The third problem is questionable research practices. After explaining what questionable research practices are, I discuss their prevalence, their effects on scientific findings, and methods to detect them.
Fourth, I discuss research misconduct in the form of data fabrication and falsification. I'll discuss investigations of research misconduct, and methods to detect it, based on my experiences in the Stapel case.
As an application of problems in evaluating and interpreting scientific findings, I discuss the analysis of big data. Big data are booming in many sciences, such as genetics, medicine, neuroscience, and the social sciences, but intuitions about, and methods for, how these data should be analysed still lag behind.