Perhaps my favourite class as a graduate student in anthropology was an elective taught by the chair of our department, an immensely affable man who, despite his stature in the discipline and successes as a scholar, was humble and approachable. The class was small, with only a handful of students sitting around a middling table in a cramped conference room—books, journals, artefacts, and dust taking up twice as much space as any of the rest of us. I was young and unsure of what I wanted from life. And I wasn’t sure I fit within the program, likely because of that “Marxist” disease that afflicts most rebellious youth (and maybe still lingers with me today?): “I don’t want to belong to any club that will accept me as a member” (Marx 1995, 321). So I enrolled in this seminar on the history of archaeology. Not exactly the most essential course every cultural anthropologist needs (that ought to have shown my adviser just how far into the liminal I was!), but it shaped in me a yet-unrecognized interest in concepts like logic, science, and modernity in an almost Foucauldian sense—not as an Archaeology of Knowledge (that training would come later) but, rather, through a “knowledge of archaeology.”

Though I didn’t know it at the time, the course tapped into a deep-seated curiosity about history and, in particular, the history of science and medicine. This perhaps should have been no surprise. From a young age, with seemingly only one exception (the works of Shakespeare, which I adore without worrying much about where they came from), my interests have tended to centre less on what a particular writer, scholar, or discipline has purported (though that remains important) than on how individuals or entities got that way—like “just-so stories” about who and what we claim are the movers and shakers of our world. It has made me more of an “armchair” anthropologist than a “real” one and provided me with just enough knowledge of philosophy and history and various other disciplines to make me “dangerous.” Perhaps I am still longing to linger in the liminal, but at least now it stems from conscious critical interest rather than a kneejerk rejection of authority.

Even twenty years later, I remember nearly every aspect of the class in clear detail, the Xeroxed readings still stuffed into files that have travelled across three states and through different twists in my academic journey, increasingly distanced from archaeology.[1] We covered the early “methods” of treasure hunters, who, Indiana Jones-like, sought the shiniest baubles with little regard for context, deeper knowledge, or the (true) preservation of heritage.[2] There was also the Franz Boas approach of amassing nearly uncountable “data of the most objective kind—material objects or texts”—so that, upon reaching some sort of critical mass, these artefacts “would speak for themselves” and offer a narrative history of a culture, living or dead (Wax 1956, 63). (Boas eventually departed from such museum-based work to focus instead on anthropology as an academic discipline and the study of behaviour, ideology, etc., in context [see, e.g., Cruikshank 1992].)[3] And then, of course, there was the rise of the “New Archaeology” in the mid-twentieth century and the intensive incorporation of scientific technologies and techniques, a logical-positivist pursuit of understanding peoples through their material remains.

But I also remember the day my professor, during critical discussions of this new processual and hypothetico-deductive approach, suggested I was “anti-science.” Whether he was serious or merely playing devil’s advocate, I balked immediately, protesting (perhaps too much?) that I loved science. That I thought of myself as a scientist. That it was science that had long since shaped my world view. But after a few decades of swooning over its techniques and various practitioners (and as a child thinking doctors surely must be the most knowledgeable people in any room!), the love affair was finally becoming one in which I was no longer blind to the limitations of the object of my affection. With science removed from its high pedestal, I was now beginning to see it and hold it and have a real relationship with it, imperfections and all. And I started to ask questions: What kind of knowledge is produced by these different approaches? What biases and blind spots do they generate, especially once high technology becomes involved? And—in Crichton-like Jurassic Park (1990) fashion—does our focus on the “how can we” tend to overshadow the “whether we should”? I had no name for my critique then, nor was I schooled enough to make an elegant argument, but my gut was reacting against the “scientism” that had expanded in the field of archaeology and elsewhere.

As Jean Pouilloux critically reflected two decades after the introduction of this processual archaeology:

“New archaeologists” thought that … [i]t is not enough to bring to an investigation our ingenuity, method and our whole arsenal of means of exploration, preservation and understanding. To begin with, we must ask the questions and define the problems. Research then has no other purpose but to reveal their value, or lack of it: a logical as well as an archaeological procedure. The monuments, formerly symbols, were transformed into arguments, with all the ambiguity implied by the term. In this way the new archaeology, called “theoretical archaeology” by Jean-Claude Gardin, was born. It was a paradoxical and precisely orientated step, enough to make our predecessors shudder, for it is obviously only possible in the light of a philosophical hypothesis on the evolution of human societies and, in a word, of an ideology (Pouilloux 1980, 312).

And thus began debate and tensions among archaeologists, whether they identified as “Traditional,” “Classical,” “New,” or, eventually, “Post-Processual.” Writing at the same time as Pouilloux, proponent Colin Renfrew described the New Archaeology this way:

For world archaeology, in the past two decades, has undergone a deep and I think fundamental transformation. Its task is now seen by many not simply as describing and in that sense “reconstructing” the past, in the belief that to know all is to understand all. Instead the archaeologist must use a lively and active mind, identifying the problems in human development which he can hope to solve. Archaeology thus, in its fundamental nature, has much in common with the sciences, or should I say the other sciences, which proceed by recognizing and then often solving problems, both great and small. It follows that most problems are best tackled in as wide and general an intellectual context as possible, that is, in a global context (Renfrew 1980, 293).

Not everyone saw this new “science”-based archaeology as beneficial or appropriate, although it brought the discipline, and our understanding of (pre)history, advantages as well as cause for concern (one positive being a less culturally and geographically myopic view of “civilization”). Renfrew, in the same speech at the Archaeological Institute of America, made plain the criticisms lobbed at it (while attempting to “head off at the pass” the ever-insightful and humorous Kent Flannery):

The so-called “New Archaeology” unfortunately has been treated in some quarters as a cult, and like any cult means many things to many people. Of course one man’s cult is another man’s heresy, and the developments in archaeological theory have been written off by many students of the Ancient World as a jargon-laden and woolly attempt to impose a mathematicist and scientistic straight-jacket on the humanism and liberal scholarship of the Great Tradition. There is of course plenty of evidence to support such a view, and the New Archaeology, ten years after, is now a house with many mansions, not all of them brilliantly illuminated. As Kent Flannery wrote in his entertaining review of the scene, “Archaeology with a Capital S”: “From a Southwestern colleague I learned last year that ‘as the population of a site increases, the number of storage pits will go up.’ I am afraid that these ‘laws’ will always elicit from me the response, ‘Leapin’ lizards, Mr. Science.’ Or as my colleague Robert Whallon once said, after reading one of these undeniable truths, ‘If this is the “New Archaeology” show me how to get back to the Renaissance’” (Renfrew 1980, 293–294).

Preparing this issue of the Journal of Bioethical Inquiry and its symposium on “Bioethics and Epistemic Scientism”—guest edited by Christopher Mayes, Claire Hooker, and Ian Kerridge—resurrected long-lost thoughts about that history of archaeology class I took so many years ago. And as I gazed back on papers I once read as part of class assignments and searched journal archives for others to help remind me what the debate and fuss were all about, it was intriguing to rediscover Renfrew’s own words. Why was it so important for him to “correct” himself to emphasize that archaeology not merely “has much in common with the sciences” but is, indeed, a member of that scientific class itself? Is it only science that “recogniz[es] and then often solv[es] problems” (Renfrew 1980, 293)? And what of Renfrew’s half-admission that the New Archaeology may have clad its experts in “a mathematicist and scientistic straight-jacket” (Renfrew 1980, 293)? How could I not have remembered seeing the term, the concept, laid out in front of me? Perhaps, when I was a young student contemplating the ramifications of a New Archaeology approach and grappling with whether this made me “anti-science,” I was just beginning in my own life to loosen that straightjacket’s hold. “There has been much talk about scientism of late,” Massimo Pigliucci notes in “Scientism and Pseudoscience: A Philosophical Commentary,” predicting that there will “be quite a bit more in the foreseeable future” (2015, ¶1; see also, e.g., Burnett 2015), but there it was, at least as far back as a third of a century ago, openly identified by Renfrew.

It is certainly not a new concept, not even when Renfrew was writing, just one that gathers attention at key points in many disciplines’ histories. It also has been seen “both as a term of ridicule and as a badge of honour” (Pigliucci 2015, ¶3). Its definition has varied, but the papers in this symposium that examine scientism within bioethics describe it as:

  • Something that “began as a denigratory label, used to point out instances of unwarranted aping of the natural sciences by the humanities … or of scientists attempting territorial advances into fields where they do not belong … or else unfairly dismissing the contributions of humanistic fields to human understanding” (Pigliucci 2015, ¶2).

  • Citing Susan Haack, an “uncritically deferential attitude towards science [and] an inability to see or an unwillingness to acknowledge its fallibility, its limitations, and its potential dangers” (Mayes and Thompson 2015, ¶5 under “Introduction”).

  • An “epistemological position that privileges the evidence, knowledge, and method of the natural sciences over (or to the exclusion of) other modes of inquiry” (Mayes, Hooker, and Kerridge 2015, ¶3).

This debate, which I remember so distinctly in relation to archaeology, goes to the heart of that epistemological question: how do we know what we know? And it makes us ask: what do we gain or lose by being scientistic? It seems that Boas, often called the “father” of American anthropology, struggled with this (see Wax 1956), and, sitting in that seminar, that is what I wanted to know as well. It wasn’t that I was keen to throw out the scientifically informed baby with the bathwater; I was just recognizing that the bathtub had room for more.

In some circumstances, however, we may have let these other children be bullied by the one. As Mayes, Hooker, and Kerridge write in their lead essay to the symposium: “Indeed, the history and philosophy of science has for more than a century insisted that the ‘inside’ of science—a messy, value-laden, emergent, trialled-and-errored, accidentally-collective network enterprise—bears little resemblance to its smooth, authoritative discursive claims on the ‘outside’” (Mayes, Hooker, and Kerridge 2015, ¶3). The caveat can likely be heard in many a bioethics course, public health class, or even medical training session. Medicine is as much art as science (or some combination of the two), we tell our students. Healthcare requires EQ (emotional intelligence) and not just IQ. There is a place in medical education for the laboratory sciences and the humanities—and emphasizing only one, eschewing the other, ignores important components in the practice of healing. This is vividly illustrated in a vignette from Arthur Kleinman’s (1988) The Illness Narratives about a woman named Mrs. Flowers and the curt dismissiveness of her doctor. It is a brief but memorable example I use in my health communication course and one that I have shared with hospital-based practitioners during workshops on cultural competence and communication. In the two-page doctor–patient encounter, Mrs. Flowers provides a rich primary account of her life and her illness and how the two intersect, while Dr. Richards filters out as ephemera anything that doesn’t fit his scientistic understanding of the disease (hypertension) and its “solution” (a change in prescription and enforcement of a low-salt diet). As Michel Foucault (1994) described in The Birth of the Clinic, Richards, with his well-honed medical regard, communicates directly with the disease, placing his patient in parentheses.

This question—what do we gain or lose by being scientistic—touches the very essence of bioethics, and true healthcare remains impossible whenever providers, educators, and policymakers fail to attend to it.

The question, of course, doesn’t belong to bioethics alone, but to any investigative pursuit or discipline of enquiry. I think of my archaeology course every fall when my health communication students and I start off the term with Malcolm Gladwell’s (2005) Blink and his discussion of the Getty Museum’s purchase of a purported ancient Greek kouros. Who is better at authenticating a find or identifying a forgery, Gladwell asks, the scientist or the connoisseur? It is not a simple answer, despite that “smooth authority” of science most of us have come to know. Both have their strengths and their foibles. Either can save the day or lead one into trouble. But surely there is room enough in the tub, as McHugh and Walker (2015) offer in “‘Personal Knowledge’ in Medicine and the Epistemic Shortcomings of Scientism.” Building on Michael Polanyi’s ideas, they “propose that knowledge can be described along two intersecting ‘dimensions’: the tacit–explicit and the particular–general” and suggest that “[t]hese dimensions supersede the familiar ‘objective–subjective’ dichotomy” (McHugh and Walker 2015, under “Abstract”), giving practitioners in any field an approach (as Gladwell does in more layman’s terms in Blink) that attends to multiple sources of knowledge and how they may be shaped by context, experience, and culture.

In the Getty example in Blink, it was the connoisseur who was more capable of answering the call, with multiple scientific tests only leading the museum astray. This was also the case with a self-portrait by Albrecht Dürer that, as Edward Dolnick (2008) recounts in The Forger’s Spell, had been loaned to the artist Abraham Küffner in 1799. Dürer had painted the work in 1500 on a wooden panel; Küffner sliced the panel in half vertically and forged a copy on the newly revealed surface—which still retained all markings of provenance on the back—keeping “the real Dürer for himself” (Dolnick 2008, 116). Dolnick and others (e.g., Salisbury and Sujo 2009) explain that attending to provenance or material authenticity is “a time-honored strategy” for forgers: “for mundane details like bills of sale and identification numbers on a frame often seem objective and authoritative in a way that a connoisseur’s opinion cannot” (Dolnick 2008, 115).

That is not to say that science cannot offer essential insight and act as a bulwark against human error and other shortcomings. Although art historian Cornelis Hofstede de Groot adamantly insisted in his 1925 True or False? Eye or Chemistry that “the only way to resolve questions of artistic authenticity was by relying on the connoisseur’s eye” and that “[s]cientific investigations were beside the point at best and misleading at worst” (Dolnick 2008, 114), there are many times when we put the cart before the horse:

So primed are we to see what we want to see (and to reject what runs counter to our hopes and expectations) that psychologists and economists have an entire vocabulary to describe the ways we mislead ourselves. “Confirmation bias” is the broad heading. The idea is that we tell ourselves we are making decisions based on the evidence, though in fact we skew the results by grabbing up welcome news without a second glance while subjecting unpleasant facts to endless testing (Dolnick 2008, 225).

Whether in bioethics or medicine, art or archaeology, “[s]cience teaches us to challenge our preconceptions” (Dolnick 2008, 226) and offers methodologies that generate productive knowledge. That said, “[t]here are some mistakes it takes a Ph.D. to make” (Daniel Moynihan cited in Dolnick 2008, 227).

I am sometimes still accused of being “anti-science” (particularly by colleagues who disparage the qualitative methods I employ or my collaborations with philosophers). If the appellation means always asking those epistemological questions and retaining a healthy level of scepticism, even of science itself, I’ll take it. After all, as Dolnick notes: “The ideal audience [for deception] knows a great deal about how the world works and, just as important, prides itself on that knowledge. Any magician would rather take on a roomful of physicists than five-year-olds” (Dolnick 2008, 228).