Stronger legal and ethical safeguards needed as AI in health accelerates, warns WHO/Europe.
Artificial intelligence (AI) is already helping doctors spot diseases, reduce administrative tasks and communicate with patients across the WHO European Region. But who is responsible when an AI system makes a mistake or causes harm? A new report from WHO/Europe – Artificial Intelligence in Health: State of Readiness across the WHO European Region, which includes country profiles – warns that the rapid rise of AI in health care is happening without the basic legal safety nets needed to protect patients and health workers. Based on responses from 50 of the Region’s 53 Member States, it provides the first comprehensive regional picture of how AI in health is being adopted and regulated.
“AI is already a reality for millions of health workers and patients across the European Region,” said Dr Hans Henri P. Kluge, WHO Regional Director for Europe. “But without clear strategies, data privacy, legal guardrails and investment in AI literacy, we risk deepening inequities rather than reducing them.”
Impressive progress but persistent gaps
While nearly all countries in the European Region recognize the potential of AI to transform health care – from diagnostics to disease surveillance to personalized medicine – readiness remains uneven and fragmented. Only 4 countries (8%) have a dedicated national AI strategy for health, and a further 7 (14%) are developing one.
“We stand at a fork in the road,” said Dr Natasha Azzopardi-Muscat, Director of Health Systems, WHO/Europe. “Either AI will be used to improve people’s health and well-being, reduce the burden on our exhausted health workers and bring down health-care costs, or it could undermine patient safety, compromise privacy and entrench inequalities in care. The choice is ours.”
Some countries are already taking proactive steps: Estonia links electronic health records, insurance data and population databases into a unified platform that now supports AI tools; Finland has invested in AI training for health workers; and Spain is piloting AI for early disease detection in primary health care.
Legal uncertainty remains top barrier to adoption
Across the Region, regulation is struggling to keep pace with technology. Nearly 9 out of 10 countries (86%) say legal uncertainty is the primary barrier to AI adoption, and almost 8 out of 10 (78%) cite financial constraints as a major obstacle. Meanwhile, fewer than 1 in 10 countries (8%) have liability standards for AI in health, which determine who is responsible if an AI system makes an error or causes harm.
“Without clear legal standards, clinicians may be reluctant to rely on AI tools and patients may have no clear path for recourse if something goes wrong,” said Dr David Novillo Ortiz, Regional Advisor on Data, Artificial Intelligence and Digital Health. “That’s why WHO/Europe urges countries to clarify accountability, establish redress mechanisms for harm, and ensure that AI systems are tested for safety, fairness and real-world effectiveness before they reach patients.”
AI already in action – but investment lags behind
AI tools are increasingly present in the Region’s health systems. Thirty-two countries (64%) are already using AI-assisted diagnostics, especially in imaging and detection. Half of the countries in the Region (50%) have introduced AI chatbots for patient engagement and support, while 26 (52%) have identified priority areas for AI in health. However, only a quarter of countries have allocated funding to implement those priorities.
Asked about their top motivations for adopting AI in health, countries most frequently cited improving patient care (98%), reducing workforce pressures (92%) and increasing efficiency and productivity (90%).
Why this matters to you
For the general population, the use of AI in health care raises three core concerns: patient safety, fair access to care and digital privacy. When seeking medical care, people expect their doctor or nurse to be responsible and accountable for any mistakes, but AI changes that dynamic. AI relies on data to learn and make decisions. If that data is biased or incomplete, the AI’s decisions will be too, which could lead to missed diagnoses, incorrect treatments or unequal care.
The report urges countries to develop AI strategies that align with public health goals, invest in an AI-ready workforce, strengthen legal and ethical safeguards, engage the public transparently and improve cross-border data governance.
“AI is on the verge of revolutionizing health care, but its promise will only be realized if people and patients remain at the centre of every decision,” concluded Dr Kluge. “The choices we make now will determine whether AI empowers patients and health workers or leaves them behind.”