
  • Evidence for the Non-Evidenced: An Argument for Integrated Methods and Conceptual Discussion on What Needs to be Evidenced in Psychotherapy Research
  • Femke Truijens, Melissa Miléna De Smet, Reitske Meganck, and Mattias Desmet

With its focus on evidence, psychology has grown into a mature, professional, and scientifically supported practice over recent decades. In general, psychotherapy and psychological counselling have been shown to be more efficacious than simply waiting it out (Wampold & Imel, 2015), and a staggering 350 specific treatments have been scientifically supported as effective (Kazdin, 2015). Although evidence-based treatments seem to work about equally well, not all people benefit from them, and it often remains unclear why. This has raised the field-wide concern of what works for whom (Meganck et al., 2017) and sparked a wealth of research on specific ingredients, common factors, process factors, and intra- and inter-individual predictors (Cuijpers, Reijnders, & Huibers, 2019). Considering that about 10% of people in treatment deteriorate (Lambert, 2011) or experience treatment burdens (Demain et al., 2015), this shift towards more fine-grained and complex mechanisms is not just scientifically interesting but a clinical-ethical imperative.

In the first place, this calls for a joint thinking exercise on the philosophical and methodological foundations of both our science and our practice. It is not a question of whether but rather of which evidence we need where general evidence does not fit. Concretely, this calls for discussions such as the one between Truijens et al. (2021), McLeod (2021), and Kious (2021). Discussions like these have a tendency to become dichotomized: science versus practice, objective versus subjective, quantitative versus qualitative, aggregated versus individual-focused. The responses by McLeod and Kious represent different poles in the debate, as both provide thorough and nuanced arguments yet end up pointing in almost opposite directions—let's call it "statistical/methodological foundationalism" versus "person-centered/constructivist pragmatism." However, as the parties in the debate do share their clinical aims, the question is whether an either–or approach brings us any further. Especially considering the much-discussed research–practice gap (Iwakabe & Gazzola, 2009), we urge the field to look for common ground rather than for differences. Indeed, we can take the methodological foundation of evidence (see Kious, 2021) as our starting point and dig deeper into the local validity of epistemic assumptions that are easily taken for granted (see McLeod, 2021).

One prominent assumption in psychotherapy research is the assumption of generality. Using probability statistics, we study "the majority" based on mean behavior and outcome, which we assume to be distributed in an orderly manner. By prioritizing a statistics-based methodology, we inherently define "evidence" in a methodological rather than a conceptual way. For example, in the use of mean statistics the exception will in principle be regarded as an outlier or error. Consequently, we have limited means to study cases where the mean does not hold, such as people with more severe, chronic, or complex symptoms, or with a weaker response to evidence-based treatment protocols. This way, cases that are part of everyday clinical practice become "error" in research contexts.

This conceptualization-by-methodology has an impact on clinical practice and research epistemology. We tend to assume that if we design a study properly, use validated measures, and provide every participant with exactly the same procedure, the design is internally valid and will thus, in principle, suffice to yield the purported evidence (Kious, 2021). In Truijens et al. (2021), we discussed three vignettes showing that problems in the execution phase of research may pose substantial threats to the validity of evidence, regardless of the thoroughness of the trial's design (Meganck et al., 2017). Kious (2021) is quick to assume that these are mere exceptions, but as McLeod correctly argues, we frankly have no idea whether this reflects random or systematic error. By using cases instrumentally (Stiles, 2007), we did not argue that we have provided conclusive evidence, but rather that we need to gather such evidence to learn more about assumptions that are easily black-boxed by statistics. A truly evidence-based science and practice requires evidence for all its principles as well as for real-world variations, not just the ones that suit our method best.

Only when such...
