Abstract
Problem: Evidence is routinely quantified by statistical methods such as p-values and Bayesian posterior probabilities, despite the lack of consensus about the meanings and implications of these approaches. A high level of confusion about these methods can be observed among students, researchers, and even professional statisticians. How can a constructivist view of mathematical models and reality help to resolve this confusion? Method: Considerations about the foundations of statistics and probability are revisited with a constructivist attitude that explores which ways of thinking about the modelled phenomena are implied by different approaches to probability modelling. Results: The understanding of the implications of probability modelling for the quantification of evidence can be greatly improved by accepting that whether models are “true” cannot be checked from the data; the use of models should instead be justified and critically discussed in terms of their implications for the thinking and communication of researchers. Implications: The paper lists useful questions that researchers can use as guidelines when deciding which approach and which model to choose, along with implications of using frequentist p-values or Bayesian posterior probabilities, which can help to address these questions. It is the responsibility of researchers, far too often ignored, to decide which model is chosen and what the evidence suggests, rather than letting the results decide themselves in an “objective way”.
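As an illustrative sketch (not part of the paper's argument), the two quantifications of evidence named above can already diverge on trivially simple data. The numbers below are assumptions chosen for illustration: 60 successes in 100 trials, a point null hypothesis p = 0.5 for the frequentist two-sided exact binomial p-value, and a uniform Beta(1, 1) prior for the Bayesian posterior probability that p exceeds 0.5.

```python
import math

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical data: 60 successes in 100 trials (illustration only)
n, k = 100, 60

# Frequentist: exact two-sided p-value under H0: p = 0.5,
# summing all outcomes no more likely than the observed one
p0 = 0.5
obs = binom_pmf(k, n, p0)
p_value = sum(binom_pmf(i, n, p0) for i in range(n + 1)
              if binom_pmf(i, n, p0) <= obs + 1e-12)

# Bayesian: posterior P(p > 0.5 | data) under a uniform Beta(1, 1) prior;
# the posterior is Beta(k + 1, n - k + 1), integrated here by midpoint rule
a, b = k + 1, n - k + 1
log_const = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)

def beta_pdf(x):
    return math.exp(log_const + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

m = 100_000
post = sum(beta_pdf((i + 0.5) / m) for i in range(m) if (i + 0.5) / m > 0.5) / m

print(f"two-sided p-value      = {p_value:.4f}")
print(f"P(p > 0.5 | data)      = {post:.4f}")
```

On these data the p-value sits near the conventional 0.05 threshold, while the posterior probability that p > 0.5 is far closer to 1: the two numbers answer different questions, which is one concrete face of the confusion the abstract describes.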