Despite the recent AI-supremacy hyperbole, even casual observers will have noticed how awful large language models (LLMs) can be at providing expertise. LLMs are no strangers to committing egregious sins against the basic principles of common sense. Their answers to our queries are often preachy, obnoxious, and misaligned with our dearly held values. They confabulate shamelessly, pretending to know things they have simply conjured up from the void. We have all experienced the dashed hopes of hunting in vain for that perfect paper for which ChatGPT has so helpfully provided a full reference, yet which is inexplicably absent from any catalogue. The “stochastic parrot” shall not be forgiven.

Except, what about the human experts? While there can be no doubt that many exceptionally skilled and knowledgeable individuals exist, the available data paints a rather mixed picture of expert performance. Systematic examination reveals expert judgement to be riddled with biases and random errors (Kahneman et al. 2021). This may be partly blamed on the inherent limitations of human reason and the frailties of human nature. But the institutional structures and social settings in which experts perform are also highly imperfect. These structures often contribute to conflicts of interest, reward unscrupulous behaviour, and deter experts from offering their most honest and helpful advice to the public. Many high-stakes contexts—like the provision of scientific expertise or expert policy advice—today seem unusually exposed to pressures which skew expert judgement (Oreskes and Conway 2010; McKie 2024). On top of that, human experts tend to be scarce, slow, and pricey.

Therefore, whatever justified grudge we—the social scientists, the philosophers, the policymakers—may hold against LLMs, we should give them a fair chance to prove their mettle against the human experts. If we don’t, it will happen anyway, dragged along by increasing public demand and market dynamics. There is already a growing number of studies demonstrating LLMs’ remarkable performance in many tasks, from translation and text summarisation to mental well-being advice. True, none of these areas is exempt from the machines’ annoying propensity to occasionally hallucinate or make silly—even dangerous—mistakes. And yes, the top human experts remain much smarter than the top LLMs, at least in their disciplinary niches. Nevertheless, we should resist the aloofness of the ivory tower and put ourselves in the shoes of an epistemically unprivileged layperson to assess LLMs’ advantages and disadvantages through a dispassionate lens.

As things stand, people have a hard time accessing expert knowledge. Every discovery of scientific fraud or advisory corruption makes the public more uncertain about whom to trust, undermining the reputation of established sources of information and exacerbating epistemic disorientation. Trust issues aside, even just finding and consulting an expert is often so costly that many try to make do with their own insights and common-sense intuitions. All this can easily go awry, and one may end up misled, exploited, or even drawn towards conspiracy theories. Access to expertise remains a major problem even in affluent societies, let alone in developing regions, where expert scarcity bites hardest at those most vulnerable.

Given these circumstances, let us appreciate the fact that LLMs are always ready to converse with anyone equipped with a smartphone and basic internet access. Many models are free of charge; some are open-source. Their use requires no specialised training. You just type—or talk—about anything, and they will answer. Sometimes the answer will be clever and relevant, sometimes quite the opposite. But they answer immediately and in a personalised manner, communicating on any topic for as long as you need and explaining whatever you desire in a language you understand (a huge advantage even relative to searching the internet). Yes, all of this has limits; all of this remains highly imperfect, at least for now. But perfection is not the appropriate benchmark. For many people, the alternative to expert advice from an LLM is no expert advice at all.

Moreover, there is the development trajectory to consider. The transformer architecture, the backbone of today’s LLMs, was introduced in 2017. For the first few years, LLMs could scarcely put together a coherent sentence in English. Today, we complain when their Czech grammar is slightly off or when they fail to grasp our complex meaning from a one-sentence prompt containing four typos. How much progress have human experts made since 2017 in becoming smarter, more capable, more versatile?

In all likelihood, LLMs will keep improving. They are here to stay and they will become increasingly integrated into our everyday digital tools, further depressing the cost of their use and reducing friction. As far as we can see, the real question is how to shape both the technology itself and its socio-institutional embedding to minimise the risks and maximise the benefits. Tech-bashing and sneering won’t do. Neither will excessive gatekeeping.

It is an understandable instinct for human experts—especially those who find themselves under increasing competitive pressure from the technology—to advocate limiting the use of LLMs to supervised settings for safety’s sake. That is, human experts wish to remain in charge, with the technology merely providing recommendations that can be overruled at will. For some settings, this may indeed be the optimal solution. However, as a client, one should beware of the self-serving element in such advocacy. If human experts remained in total charge, most of LLMs’ benefits in accessibility and availability would evaporate. The gatekeepers would benefit; the common users perhaps not so much. Moreover, at some capability level, human input becomes a net negative. In chess, there was a time when standalone AIs could be defeated by AI-assisted humans. Not anymore.

We believe that, here as elsewhere, experimentation is key. This means continuous experimentation with LLMs’ capabilities and their limits, but also experimentation with the institutional architecture that could accommodate the technology. To some extent, such experimentation can be orchestrated rigorously by social scientists and policymakers. However, given the speed of LLM proliferation and capability gains, it will also proceed spontaneously and haphazardly, attempted by individuals and groups all across the globe. This provides us with another opportunity to observe and learn, one that should not be wasted.

One way or another, it is high time we started thinking about how to sail along with the impending wave. It is still early days, but the wave is coming. Complaints about LLMs’ inconsistencies are often justified; arguments for human superiority over machines can be cogent; hopes that the current breakneck pace of capability gains will fizzle out or be reined in by government intervention are relatable. But all of these divert attention away from what actually needs to be done to prepare for the more-than-likely scenario of LLMs’ continued ascent.