1 Introduction

This paper provides the first comprehensive analysis of ethical issues raised by artificial intelligence (AI) in veterinary medicine for companion animals. AI—i.e. digital systems that perform tasks normally requiring human intelligence (Russell and Norvig 2021)—is poised to transform human medicine (Topol 2019; Wilson et al. 2021) and may prove equally transformative for veterinary medicine (Basran and Appleby 2022; WIRED Brand Lab 2022). Like human medical AI (Astromskė et al. 2021; Dalton-Brown 2020; Keskinbora 2019), veterinary AI raises important ethical issues. Although several papers touch on ethical aspects of veterinary AI (Appleby and Basran 2022; Ezanno et al. 2021; Steagall et al. 2021), including its implications for ‘livestock’ (Neethirajan 2021), a more detailed ethical evaluation of companion animal AI is wanting. Our analysis of AI’s ethical implications for companion animal medicine should interest ethicists, veterinarians, clinic owners, veterinary bodies and regulators, clients, technology developers and AI researchers.

Veterinary practice raises unique ethical issues that stem from the client–patient–practitioner relationship. Companion animals are potentially more exposed to harms from AI than are humans because they lack the same strong social, moral and legal status. For example, the law does not effectively protect animals from wrongful injury or from clients who seek unwarranted or unjustified ‘euthanasia’ (Favre 2016). These conditions are relevant to the ethics of veterinary AI. At the same time, medical AI raises its own distinctive ethical issues—issues like trust, data security and algorithmic transparency—which we also discuss in the veterinary context.

AI in veterinary medicine might be used for business purposes and hospital logistics like booking appointments. Technology that affects practitioner workflow could have ethical implications, as could other AI, such as language translation apps that enable communication with linguistically diverse clients. However, AI for triage, diagnosis, prognosis and treatment raises the most distinctive, complex and consequential ethical questions. We concentrate on AI for such medical decision-making.

Currently, AI enjoys massive public and private investment, propelled by stories of algorithms defeating Jeopardy! and Go champions (Mitchell 2019). Another indication of AI’s rapid ascent is the recent crop of large language models like ChatGPT and text-to-image generators, which demonstrate remarkable, though sometimes strange and biased, outputs (see Fig. 1). Yet most people are bewildered by the technical jargon of artificial neural networks, deep learning, computer vision, random forests and natural language processing (Waljee and Higgins 2010). Veterinary practitioners too may not always understand, for instance, the ways in which AI learns from data and autonomously updates its algorithms to draw inferences about previously unencountered data (e.g. from patient radiographs or medical records)—and this may create uncertainty about its use in healthcare.

Fig. 1 An image produced by OpenAI’s DALL-E program in response to the text prompt ‘a watercolor painting of veterinarians pondering the relationship between medicine, machine and animal’. Note how the AI presents veterinarians as middle-aged white men, an example of AI bias (image obtained via the open-access website https://openai.com/dall-e-2/)

This issue of trust in technology is important. To some degree, medical AI remains just as much an art as a science (Quinn et al. 2021b), and AI developers are only now exploring how to apply modern machine learning (ML) methods successfully in medicine. This involves experimenting with how data are collected and pre-processed, how AI models are applied and optimised, and how model performance is evaluated. Each step contains many nuances that could affect model operation in clinic settings and unintentionally harm patients and clients. While busy practitioners cannot be expected to understand all these nuances, they will increasingly need at least a basic understanding of the ethical risks and benefits of AI. This paper identifies and examines these ethical issues.

The paper runs as follows. Section 2 outlines medical AI in veterinary practice. Section 3 introduces ethical principles of AI, human medicine and veterinary medicine. Section 4 identifies and examines nine ethical issues raised by veterinary AI. Section 5 discusses important ethical norms in veterinary medicine and AI’s distinctive implications in that realm, as well as providing some practical guidance for AI’s use.

2 AI in veterinary medicine

Earlier medical AI involved knowledge-based systems, such as the 1970s program MYCIN (Barnett 1982; Schwartz et al. 1987). These ‘expert’ systems hard-coded rules elicited from medical experts in order to infer clinical diagnoses. However, they struggled with the inherent complexity of medical decision-making (Partridge 1987). Modern ML has proved more adept. ML models ingest vast amounts of data to ‘learn’ rules automatically, in the form of mathematical functions that relate predictor variables to target variables. One very successful type of ML, deep learning, employs so-called ‘deep neural networks’ (DNNs) (Bengio and LeCun 2007). DNNs have layers of processing units linked together in patterns somewhat like brain neurons (Russell and Norvig 2021). A DNN may contain from hundreds to billions of artificial neurons arranged across numerous layers.

AI today often involves ‘supervised’ machine learning, in which the samples used to train models are labelled. For example, an ML system may be trained on thousands or millions of biopsy images labelled as either cancerous or healthy tissue. Once trained, the model can be tested on new images to make predictions (e.g. about cancer) and can then be evaluated for diagnostic accuracy and compared with clinician performance. Ideally, the model is subjected to a clinical trial to establish efficacy and cost-effectiveness before being implemented in practice, where its effectiveness should continue to be assessed.
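For readers unfamiliar with this workflow, the following sketch shows, purely for illustration, what a minimal supervised-learning pipeline might look like in Python using the scikit-learn library. The data are synthetic stand-ins for labelled biopsy images, and no specific veterinary product or study is reproduced.

```python
# Minimal, hypothetical sketch of the supervised-learning workflow described above.
# The synthetic features stand in for measurements derived from labelled biopsy
# images; a real system would typically use a deep neural network on the pixels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 1,000 samples labelled 0 = healthy, 1 = cancerous.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.7, 0.3],
                           random_state=0)

# Hold out unseen samples to estimate how well the model generalises.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)      # 'learn' a function from predictors to labels
pred = model.predict(X_test)     # predict on previously unseen samples

# Sensitivity = recall on the diseased class; specificity = recall on the healthy class.
sensitivity = recall_score(y_test, pred, pos_label=1)
specificity = recall_score(y_test, pred, pos_label=0)
print(f"test-set sensitivity: {sensitivity:.2f}, specificity: {specificity:.2f}")
```

Note that the reported sensitivity and specificity describe performance only on the held-out test samples; as discussed below, such figures need not carry over to clinical practice.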

AI shows promise in veterinary medicine. For example, one ML algorithm for detecting canine hyperadrenocorticism had a sensitivity of 96.3% and a specificity of 97.2%, reportedly outperforming other screening methods (Reagan et al. 2020). Some models classify animal cancer, retinal atrophy, or colitis based on images (Zuraw and Aeffner 2021). Deep learning can be applied to detect faecal parasites (Nagamori et al. 2021) or identify canine cardiac enlargement (Li et al. 2020). Some models can outperform veterinary radiologists at certain tasks (Boissady et al. 2020), and others predict seizures in epileptic dogs from ambulatory intracranial sensors (Nejedly et al. 2019). AI might also improve veterinary surgery (Souza et al. 2021) and one day guide robotic veterinary surgeons (Esteva et al. 2019; Panesar et al. 2019). Natural language processing might usefully extract clinical information from patient records for analysis. Finally, there are direct-to-consumer AI products, such as one that predicts differential diagnoses for canine alopecia (Prevett 2019).

Potentially, some AI tools will be more accurate and faster than practitioners and cost-effective for clients. Perhaps, as some suggest, AI will bring “tremendous potential efficiencies and quality improvements in veterinary medicine” (Basran and Appleby 2022). But it also comes with risks and ethical concerns.

3 Principles in AI, medical and veterinary ethics

General AI ethics guidelines speak of ethical principles like transparency, accountability, data security, privacy, safety, fairness and environmental sustainability (Jobin et al. 2019). Many of these principles arise from the distinctive nature of AI and the special risks it creates. As we shall see, such AI ethics principles play a role in the ethics of veterinary AI. AI ethics also borrows from medical ethics (Mittelstadt 2019) and its four widely accepted bioethical principles: nonmaleficence (do no harm), beneficence (do good), respect for autonomy (respect a person’s ability to act on their own values and preferences), and justice (e.g. ensure fair distribution of medical resources) (Beauchamp and Childress 2001).

These medical ethics principles arguably apply in veterinary practice. For example, many would accept that veterinarians have responsibilities to promote patient wellbeing, to avoid harming patients and to respect the autonomy of clients. However, there are ethically relevant differences from human medicine that can affect how those principles apply (Desmond 2022). For example, human medical practice is mostly funded by large public or private insurance schemes, whereas veterinary medicine is mainly paid for ‘out of pocket’ by private individuals, who sometimes struggle to afford medical attention for their unwell animals (Springer et al. 2022). Consequently, some clients (and veterinarians) opt for cheaper and inferior diagnostics and treatment, and sometimes even for ‘economic euthanasia’ (Boller et al. 2020).

Obviously, animal patients cannot provide autonomous consent for medical interventions. Hence, companion animal medicine somewhat resembles paediatric medicine (and to some degree gerontology). Medical practitioners and Boards typically endorse an ethically patient-centred approach (Medical Board of Australia 2020) that prioritises significant patient interests over the interests of other parties like parents (Fleischman 2016). While most parents pursue their children’s best interests, paediatricians may override parental autonomy when parents refuse necessary interventions or urge harmful treatment for their children (Gillam 2016). While they respect parents’ interests, paediatricians see their primary duty as being to the patient.

Veterinary medicine has enjoyed comparatively less discussion—and agreement—about the right ethical principles to follow and how they should be interpreted (Beauchamp and Childress 2001; Desmond 2022). There is also disagreement about what constitutes wellbeing for animals (Coghlan and Parker 2023). This has important implications for veterinary AI. Nonetheless, in what immediately follows, we can generally assume that clients and practitioners seek the best for the animals and broadly agree on what that involves. Accordingly, veterinary practitioners will broadly follow principles of nonmaleficence (avoid and minimise harm) and beneficence (do good and provide benefit) regarding patients. Furthermore, veterinarians generally respect the autonomy of their clients. These principles inform our identification of nine ethical issues in veterinary AI.

4 Ethical issues raised by veterinary AI

The nine ethical issues we identify and explain below (Table 1) refer to situations that demand ethical judgement about AI. Such deliberation may involve moral values, principles and theories. Later, we will see how these ethical issues variously affect the three parties in the central patient–client–practitioner relationship (and occasionally parties beyond it).

Table 1 Nine ethical issues for veterinary AI with examples

4.1 Accuracy and reliability

Accurate and reliable AI in pathology, radiography, medicine and surgery could significantly benefit patients, including by eliminating certain human biases and misjudgements. Equally, inaccurate AI could harm patients (and clients) through misdiagnoses and poor treatment recommendations. Importantly, some AI tools may be accurate in terms of test set evaluation but unreliable in clinical practice. This may occur when the training and test sets are not representative of the intended real-world use case or contain biases. For example, an AI screening tool trained to recognise pneumonia from the audio of coughs obtained from hospitalised patients may be accurate for patients in hospital but inaccurate for outpatients (Quinn et al. 2021b). And veterinary AI developed in Northern Hemisphere contexts may be less reliable in Southern Hemisphere contexts.
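The sketch below illustrates this failure mode with entirely synthetic data and invented variable names (it does not model any cited study): a model learns to rely on a feature that predicts disease only in the setting where its training and test data were collected, so its strong test-set accuracy does not survive deployment in a different population.

```python
# Hypothetical sketch: a model that looks accurate on its own test set can
# degrade when the deployment population differs from the development setting.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_population(n, spurious=True):
    """Synthetic 'patients': one genuinely informative feature plus a second
    feature that is strongly associated with disease only in the development
    context (e.g. hospitalised animals), not in the deployment context."""
    y = rng.integers(0, 2, size=n)
    signal = y + rng.normal(scale=1.0, size=n)                # weakly informative everywhere
    context = (y + rng.normal(scale=0.2, size=n) if spurious  # highly predictive here only
               else rng.normal(scale=1.0, size=n))            # uninformative at deployment
    return np.column_stack([signal, context]), y

X_train, y_train = make_population(2000, spurious=True)    # development data
X_test, y_test = make_population(500, spurious=True)       # test set from the same setting
X_deploy, y_deploy = make_population(500, spurious=False)  # the intended real-world use case

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on matched test set:     ", accuracy_score(y_test, model.predict(X_test)))
print("accuracy on deployment population:", accuracy_score(y_deploy, model.predict(X_deploy)))
```

Crucially, the model’s own test set cannot reveal the problem, which is one reason why evaluation on data representative of the intended use case, and ideally prospective clinical study, matters.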

Even when AI is trained and evaluated on representative data and found to be accurate, this may not translate to improved clinical outcomes. Medical AI is frequently not well studied in this respect, despite the surrounding hype (Kim et al. 2019). Although randomised clinical trials are the gold standard in evidence-based medicine, a recent systematic review found that few medical AI studies use randomisation and that only 9 of 81 non-randomised studies were prospective (Nagendran et al. 2020). Some AI is flawed by design. For example, AI purporting to diagnose emotions from photos of human faces has been criticised because expressions do not always correlate with emotional states (Crawford 2021b). This problem may also afflict AI for diagnosing animals’ affective, pain, or welfare states (Jaiswal et al. 2020).

4.2 Overdiagnosis

Overdiagnosis involves diagnosing conditions that are harmless to the patient (Carter et al. 2015). For example, AI might identify harmless bone defects or ‘incidentalomas’ (Myers 1997). Overdiagnosis is a growing but frequently overlooked concern (McKenzie 2016), as it can generate unnecessary additional testing and treatment (Capurro et al. 2022). A significant cause of overdiagnosis is large screening programs of apparently healthy individuals (Woolf and Harris 2012). Veterinary AI might significantly increase overdiagnosis, and on a larger scale than before, including by encouraging more defensive medicine (Sonal Sekhar and Vyas 2013). Therefore, AI-based overdiagnosis should be recognised and minimised where possible.

4.3 Transparency

Transparency broadly refers to users’ knowledge of how an AI system arrived at its prediction (Castelvecchi 2016). In deep neural networks, the reasons underlying a model’s prediction can be intrinsically unknowable because of the model’s enormous complexity. Such algorithmically opaque models are dubbed ‘black boxes’. For some AI, a trade-off may arise between model performance and intelligibility. Transparency can also be reduced by for-profit companies that conceal their AI’s workings from users and competitors. Even when AI models are open source and available, busy practitioners may find it too onerous to seek out and digest such information.

Some believe that black-box AI is not problematic if it is accurate. After all, practitioners justifiably prescribe drugs with largely unknown mechanisms, and the use of opaque systems can likewise sometimes be justified. However, algorithmic opacity can hamper the detection of inaccuracies and biases in predictions. In contrast, interpretable AI can more readily be ‘caught out’ making mistakes, thereby aiding quality assurance and safety. Some therefore argue that medical black boxes should be avoided altogether (Rudin 2019), or else used only when equally accurate interpretable systems are unavailable (Quinn et al. 2021a) or when non-transparent systems are demonstrably and significantly superior.
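As a simple, hypothetical contrast to a black box, the sketch below shows an interpretable model whose basis for prediction can be directly inspected; the clinical feature names are invented for illustration only.

```python
# Hypothetical sketch: an interpretable model whose 'reasons' can be inspected
# and queried, unlike an opaque deep network. Feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["post_stim_cortisol", "alkaline_phosphatase", "age_years",
                 "body_condition_score"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))
# Invented labelling rule so the model has a pattern to learn.
y = (1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The learned coefficients show which inputs push a prediction up or down, so an
# implausible reliance on some feature can be 'caught out' and investigated.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>22}: {coef:+.2f}")
```

Post-hoc explanation tools for deep networks exist, but they approximate rather than reveal the model’s actual computation, which is partly why some authors prefer intrinsically interpretable models for high-stakes decisions (Rudin 2019).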

4.4 Data security

Data used to train AI can be private, sensitive and extensive. Data stored locally or on company servers might be leaked, sold on, or hacked. Malicious agents mounting adversarial attacks can even render AI systems unreliable (Kelly et al. 2019), while anonymised health data can sometimes be matched with other data to reidentify individuals (Culnane et al. 2017). Models themselves may be ‘attacked’ so that they yield personally identifiable data used during training, even after the original data have been deleted (Carlini et al. 2021). Veterinary-related data are not immune from these risks. Clients may thus have an interest in data security and in providing consent for reuse of their data, e.g. to further train AI tools.

4.5 Trust and distrust

Having trustworthy technology will be important if AI is to be beneficial (Parasuraman and Riley 1997). Unwarranted trust in AI can cause its misuse, while unwarranted distrust can cause disuse that deprives patients of benefits (Jacovi et al. 2021). For example, failure to employ in-house AI that saves time otherwise spent on external pathology processing could cause critical delays for sick animals. Distrust in AI by clients, perhaps exacerbated by troubling news stories or personal experiences, may even precipitate more general distrust in the veterinary profession. Distrust can grow around opaque systems (Ferrario et al. 2021), while excessive trust may result from ‘automation bias’ (Goddard et al. 2012). Conversely, humans sometimes wrongly ignore computer-based outputs, especially when outputs are obscure or prone to false alarms. AI companies may heavily promote their wares or even use medical AI to recommend their other products or tests, and veterinarians with investments in AI could face conflicts of interest. The veterinary profession should be aware of such commercial pressures and tactics that could influence clinical decision-making.

4.6 Autonomy of clients

Respect for the autonomy of human patients standardly requires obtaining their (or their guardians’) informed consent for interventions. This requires giving patients relevant information about the nature, risks and benefits of interventions (Beauchamp 2011). Plausibly, veterinary practitioners should similarly inform their clients of “the advantages, disadvantages and most likely outcomes for each [care] option; the possibilities of favourable and unfavourable outcomes; the likelihood that additional testing or treatment might be needed; the associated costs; and the strength of the supporting evidence” (Brown et al. 2021).

It has been argued that medical practitioners using medical AI should understand and convey its pitfalls to human patients (Geis et al. 2019). Respect for client autonomy at least prima facie requires that veterinarians explain to clients the broad nature, risks and benefits of chosen AI-based interventions, just as they do with other interventions. Furthermore, many clients may be ignorant, misinformed, or uncertain about AI, heightening the need to provide clear information about its pros and cons. For example, practitioners might need to explain how an AI tool can sometimes make misdiagnoses due to its training data, or that it has not yet been subjected to rigorous clinical testing.

Veterinarians must normally explain to clients the general basis of their diagnoses and prognoses in ways non-medical people can understand. With black-box AI, however, algorithmic opacity precludes client (and practitioner) understanding of the reasons behind the machine’s predictions or recommendations. That may not trouble some clients, but others may prefer transparent AI that provides such explanations (Quinn et al. 2021b).

4.7 Information overload and skill erosion

Some AI might also improve life for veterinarians. Partial outsourcing of cognition to trustworthy AI ‘assistants’ may ease workloads (Basran and Appleby 2022). Yet AI, which is a complex and ever-evolving technology, might also increase information overload for veterinarians who already endure high workplace stresses (Pohl et al. 2022). Not all technologies make our lives easier—consider the way that household appliances have not always reduced domestic labour mostly undertaken by women (Cowan 1983). A recent survey found that 70% of medical practitioners believed “digital health technologies will be a challenging burden” and that they lacked “time to learn the value of the technology or foster the belief in their ability to use it…ultimately taking time away from patient care rather than improving it” (Elsevier 2022, pp. 52, 84).

Gradual erosion of medical skills through machine reliance is another theoretical possibility (Mittelstadt and Floridi 2016). Some skill erosion may be overall beneficial, as when generalists refer complex patients to specialists for improved health outcomes (Brown et al. 2021), although that change has sometimes reduced accessibility to healthcare. However, over-reliance on fast and convenient intelligent decision support tools (Kempt et al. 2022) might in time weaken medical skills that veterinarians should retain.

4.8 Responsibility for AI-influenced outcomes

Accountability is an important idea in AI ethics because it can be unclear who is legally and ethically responsible for AI-generated harms. The difficulty of assigning or determining liability is called the ‘responsibility gap’ (Santoni de Sio and Mecacci 2021). Responsible parties could include engineers, companies, practitioners, professional organisations, regulatory bodies and clinic managers and owners. Until medical AI reaches a very high degree of reliability, there is reason to say that individual practitioners must remain ethically and professionally responsible for using it. This is especially important for non-transparent AI, where detection of harmful outputs can be more difficult.

4.9 Environmental effects

Although the environmental effects of healthcare generally (Lenzen et al. 2020), and of AI specifically, are often neglected, these harms can be considerable (Hagendorff 2021). Veterinary AI could contribute to AI’s overall environmental impact (Jones and West 2019). While veterinarians are rightly focused on their immediate patients’ wellbeing, there is a case for becoming more aware of veterinary medicine’s increasing environmental footprint (Koytcheva et al. 2021) and for seeking more sustainable AI tools where possible.

5 Veterinary AI and ethical responsibilities, risks and guidance

5.1 Role and responsibilities of practitioners

As we have shown, AI could have both positive and negative implications for patients, clients and practitioners. In companion animal medicine, the interests of these parties are often aligned: what benefits or harms patients often benefits or harms clients (and sometimes practitioners). Nonetheless, the interests and wishes of clients (and practitioners) and companion animal patients can sometimes conflict (Rosoff et al. 2018; Springer et al. 2021). This raises important ethical questions about veterinarians’ role and responsibilities (Kimera and Mlangwa 2015; Legood 2000; Magalhães-Sant’Ana et al. 2015; Moses 2018; Mullan and Quain 2017; Rollin 2006; Sandøe et al. 2015; Tannenbaum 1991; Yeates and Savulescu 2017; Yeates and Main 2010) and how they relate to AI.

While many veterinarians traditionally saw their primary obligations as being to the ‘owner’ of the animal rather than to the patient themselves (Rollin 2006), this profoundly human-centred view began to shift as societal attitudes to animals evolved and the profession began to appreciate the strength of human-animal relationships (Knesl et al. 2016; Serpell 1996). Nonetheless, veterinarians can still have different understandings of the strengths of their duties—differences which move to the forefront when the interests of patients and the wishes of clients or clinic managers conflict.

Most contemporary veterinarians would broadly claim to be advocates for their patients, yet ‘advocate’ admits of degrees. A strong patient advocate (Coghlan 2018) or ethically patient-centred practitioner is more determined to safeguard the patient’s interests and speak up on their behalf (Hernandez et al. 2018). While the patient-centred practitioner will not ignore clients’ perspectives and situations, such as economic insecurity (Brown et al. 2021), they will search hard for solutions that promote the patient’s important interests and they may sometimes refuse to go along with harmful requests from clients. Like paediatricians (Rollin 2006), patient-centred veterinarians prioritise beneficence and nonmaleficence towards the patient over, say, respect for client autonomy on those key occasions of conflict. They will also seek to safeguard patient interests when they receive pressure from other parties, such as peers or clinic managers, to act counter to their patients’ interests.

A veterinarian’s conception of their role and responsibilities could affect their behaviour toward AI. For example, some practitioners may more readily acquiesce to pressure from clients or clinic managers who are enthusiastic about AI and who urge the adoption of tools that lack rigorous scientific validation or that rely on uninterpretable and relatively risky ML models. A patient-centred practitioner, by contrast, would use veterinary AI in higher-stakes situations only when they had grounds to believe it would be of overall benefit to the patient.

Another example of how a practitioner’s ethical stance could influence their use of medical AI concerns the important ethical issue of euthanasia (Rollin 2006). Imagine that an AI system designed to make treatment recommendations for animals presents ‘euthanasia’ as an option for a patient who, despite their condition, could probably have a decent life with appropriate treatment. Although such treatment recommendations do not yet feature in AI, it is entirely conceivable that they will appear in some future veterinary AI.

If that happens, it is possible that the client (and veterinarian) could be influenced by an AI recommendation for euthanasia that is not ethically justified. While client-centred practitioners may agree to a client’s request for euthanasia based on an AI recommendation or option, an ethically patient-centred practitioner would strongly counsel the client to reject that aspect of the AI’s recommendation. The converse situation may occur when an AI recommends onerous and futile treatment for a dying patient who would thereby be made much worse off and so suffer what has been termed ‘dysthanasia’ (Clark and Dudzinski 2013; Quain et al. 2021). If future AI makes treatment recommendations as well as diagnoses, veterinarians will need to be aware of the potential for uncritical acceptance of such advice from machines.

5.2 Distinctive risks associated with veterinary AI

Some risk factors are distinctive or especially salient for veterinary AI and are worth highlighting. First and perhaps most importantly, companion animals, unlike humans, are classed as legal property and enjoy relatively few social and regulatory protections (Sunstein 2003). Moreover, our societies remain profoundly human-centred overall, typically affording little moral consideration to animals compared to humans (Singer 1995). This pronounced ethical anthropocentrism shows itself in the fact that AI ethics has largely neglected nonhuman animals (exceptions include Owe and Baum 2021; Singer and Tse 2022)—both directly as subjects of AI itself and indirectly as subjects of the environmental impacts of AI (Coghlan and Parker 2023).

Consequently, some AI developers and some veterinarians may devote less energy and care than they might to ensuring that AI promotes patients’ interests (and may have less legal impetus to do so). Furthermore, most veterinarians work in small businesses or corporate-run hospitals; this could potentially result in pressure to increase profit and client turnover, which may overtly or subtly affect patient care (Rosoff et al. 2018), such as by promoting unnecessary testing and treatment.

Second, being less regulated than human medicine, veterinary medicine potentially affords more opportunities for experimenting with cutting-edge yet relatively untested treatments. Indeed, one sometimes hears the view that AI might be ‘tested’ on animal patients before being used on human patients. Quain et al. (2021) argue that the freedom to pursue various kinds of advanced but experimental veterinary care, such as stem-cell treatment, can sometimes (though not always) pose extra risks to patients. AI is a promising cutting-edge technology, but misguided, faulty, or insufficiently tested AI likewise carries risks. AI can be used on animal patients without the testing and regulatory approval (e.g. by the Food and Drug Administration) that AI for human patients requires. Additionally, veterinary medicine has fewer resources for research into medical interventions and devices (Basran and Appleby 2022).

Third, data for training ML models are currently scarcer for animals than for humans, in both quality and quantity (Appleby and Basran 2022). Veterinary data records also lack the requirements for consistency and standardisation sometimes imposed on human medical records (Lustgarten et al. 2020). These factors might make it more difficult to develop and deploy effective and reliable ML models. (Note, however, that the relatively minimal legal regulation of animal health records could sometimes improve data access.) Although data scarcity can be overcome through data sharing agreements, such sharing also raises risks for the privacy of medical records.

5.3 Ethical guidance for AI developers, practitioners and veterinary bodies

As we noted, the ways in which practitioners approach AI depend partly on their ethical understanding of their role and responsibilities as veterinarians (as well as on their understanding of AI and their level of enthusiasm for it). Let us assume that practitioners, clinic owners and hospital managers generally prioritise the interests of patients or act in ethically patient-centred ways. Drawing on the above analysis, we suggest the ethical principles and goals listed in Table 2 for governing AI use in veterinary medicine. Alongside the principles and goals, recommendations and examples are provided.

Table 2 Ethical principles/goals, examples of not meeting principles/goals and corresponding recommendations regarding veterinary AI

6 Conclusion

Veterinary medicine is a socially valued profession that, like human medicine, is likely to be significantly affected by AI. In this paper, we showed that veterinary AI creates risks, benefits and ethical issues, some familiar from human medicine and some unique or distinctive. Ethical responses to veterinary AI can be influenced by views about practitioner roles and responsibilities. In general, contemporary veterinarians aim to practise nonmaleficence and beneficence towards patients and to respect client autonomy. However, these principles may be interpreted differently. For example, a strongly patient-centred practitioner who prioritises patients’ vital interests may refuse to use insufficiently tested or excessively risky medical AI even when clients or clinic owners or managers improperly demand it. Equally, the patient-centred practitioner might persuade uncertain or sceptical clients that sufficiently validated and trialled AI tools can significantly benefit patients.

To provide guidance on using veterinary AI, we identified the following principles and goals: nonmaleficence, beneficence, transparency, respect for client autonomy, data privacy, feasibility, accountability and environmental sustainability (Table 2). We strongly recommend that the veterinary profession not allow AI developers, AI companies and insurance providers to dictate the design and uses of AI without proper consideration of relevant concerns, risks and ethical values. The profession would also be wise to stay alert to commercial overhyping of AI and to potential exploitation of animals and clients. Ongoing conversations may need to occur between practitioners, veterinary organisations, insurance companies, AI vendors and AI experts that address the ethical issues we identified (Table 1). Finally, as veterinary AI progresses, veterinarians may need education about the ethical issues it raises so that they can adequately protect and benefit their animal patients and human clients. Such education may need to begin at university (Quinn and Coghlan 2021) and extend into continuing professional education.