
The global AI healthcare market, valued at USD 29.01 billion in 2024, is projected to reach USD 504.17 billion by 2032. In Europe alone, the market is expected to grow from USD 7.92 billion in 2024 to USD 143.02 billion by 2033, a compound annual growth rate of roughly 38%.
This growing adoption underscores AI's potential across many areas of healthcare: it can improve the accuracy and early detection of diseases, support personalised treatment plans, streamline administrative tasks such as billing and scheduling, and improve hospital resource management through predictive analytics. In clinical practice, AI is already showing impact in areas such as early detection of sepsis and improved breast cancer screening.
As Antoine Tesnière, a professor of medicine and managing director of PariSanté Campus, noted in an interview with HIMSS TV: "AI is a true revolution for healthcare. AI tools allow us to understand that we will have super-precise, super-productive, super-preventive, super-personalised approaches in the very near future."
AI is advancing beyond merely helping clinicians make decisions. "The level of performance is approaching the human one as of today, but it will surpass the human level of performance, bringing new horizons for overall performance of healthcare," Tesnière said.
Critical challenges with AI
Despite growing enthusiasm, significant concerns remain. "Bias can impact clinical decision-making and patient care when we deploy algorithmic tools," said Dr. Jessica Morley, postdoctoral researcher at the Yale Digital Ethics Center, in a HIMSS TV interview. She points to current limitations, such as arrhythmia detection devices that typically perform worse on people with darker skin, and melanoma recognition algorithms that fail across diverse populations.
Morley also identifies a deeper systemic issue that she calls the "inverse data quality law": "Where you have the greatest need, you often have the lowest availability of high-quality data." This fundamental challenge means that creating equitable AI systems requires addressing both technical limitations and governance obstacles.
Despite these obstacles, Morley remains optimistic that the right approaches can overcome the current challenges. She believes innovations like secure data environments offer a viable path forward: "It is entirely possible to still protect individual patients' data and still leverage group-level population health benefits. You can have your cake and eat it, too," she said.
Balancing innovation and protection
To address these challenges, the European Union has established two landmark regulatory frameworks to ensure that healthcare AI development balances innovation with ethics, transparency and respect for fundamental rights.
The EU Data Act aims to improve access to data generated by connected medical devices, helping to create more diverse and representative datasets while reducing the risk of algorithmic bias. Meanwhile, the EU AI Act sets out clear requirements for high-risk AI systems in healthcare, introducing safeguards such as mandatory impact assessments, human oversight, explainable AI models and data verification.
Together, these frameworks seek to support an environment where healthcare AI can deliver precise and personalised care while maintaining trust, fairness and accountability.
Antoine Tesnière, managing director of PariSanté Campus, and Dr. Jessica Morley, postdoctoral researcher at the Yale Digital Ethics Center, will be speaking at HIMSS Europe 2025, taking place in Paris 10-12 June.