
A research team at Flinders University in Australia has demonstrated an AI-powered evaluation framework for testing how clinical AI tools perform in practice.
Called PROLIFERATE_AI, the evaluation tool builds on PROLIFERATE, a framework introduced by the Flinders University Caring Futures Institute in 2021 to ensure healthcare innovations are designed to address user needs and improve health outcomes.
HOW IT WORKS
The AI-powered evaluation framework focuses on the adoption, usability, and impact of AI tools within networks of people, technologies, and processes. It integrates user feedback and predictive modelling to optimise these technologies so they meet user needs, improve health outcomes, and enable sustainable practices.
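The study does not publish the framework's internal scoring model, but the general pattern of pairing structured user feedback with a predictive step can be illustrated with a minimal sketch. The feedback dimensions, Likert scale, and weights below are hypothetical stand-ins, not PROLIFERATE_AI's actual implementation.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class FeedbackEntry:
    """One clinician's structured feedback on an AI tool (assumed schema)."""
    role: str            # e.g. "consultant", "resident", "intern"
    comprehension: int   # 1-5 Likert: how well the user understands the tool
    usability: int       # 1-5 Likert: how easy the tool is to use
    adoption: int        # 1-5 Likert: how readily it fits the workflow

def dimension_scores(entries):
    """Aggregate raw feedback into per-dimension scores on a 0-1 scale."""
    return {
        "comprehension": mean(e.comprehension for e in entries) / 5,
        "usability": mean(e.usability for e in entries) / 5,
        "adoption": mean(e.adoption for e in entries) / 5,
    }

def predict_engagement(scores, weights=None):
    """Toy predictive step: a weighted blend of the dimension scores as an
    estimate of future engagement. The weights are illustrative only."""
    weights = weights or {"comprehension": 0.4, "usability": 0.35, "adoption": 0.25}
    return sum(scores[k] * w for k, w in weights.items())

feedback = [
    FeedbackEntry("consultant", 5, 4, 5),
    FeedbackEntry("resident", 3, 2, 3),
    FeedbackEntry("intern", 2, 2, 3),
]
scores = dimension_scores(feedback)
print(scores)
print(f"predicted engagement: {predict_engagement(scores):.2f}")
```

In a real deployment the predictive step would be a trained model rather than fixed weights; the point is that feedback collection and prediction feed a single optimisation loop.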
The research team first demonstrated the framework by assessing an AI tool used in 12 emergency departments across South Australia to help doctors diagnose cardiac conditions quickly and accurately. Their findings showed that less experienced clinicians, including residents and interns, encountered usability challenges that their more experienced peers did not. According to the study, published in the International Journal of Medical Informatics, this highlights the importance of "role-specific training, workflow integration, and interface enhancements to ensure the tool’s accessibility and effectiveness across diverse clinical roles."
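That finding lends itself to a simple stratified analysis: group usability ratings by clinical role and flag the roles that fall below an acceptance threshold. The scores and threshold below are illustrative assumptions, not data from the Flinders study.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (role, usability score) records on a 1-5 scale.
records = [
    ("consultant", 4.6), ("consultant", 4.2),
    ("registrar", 3.9), ("registrar", 4.1),
    ("resident", 2.8), ("resident", 3.1),
    ("intern", 2.5), ("intern", 2.9),
]

by_role = defaultdict(list)
for role, score in records:
    by_role[role].append(score)

# Flag roles whose mean usability falls below an assumed threshold,
# mirroring the study's observation that junior clinicians struggled more.
THRESHOLD = 3.5
for role, scores in by_role.items():
    avg = mean(scores)
    flag = "candidate for role-specific training" if avg < THRESHOLD else "ok"
    print(f"{role:10s} mean usability {avg:.2f} -> {flag}")
```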
Since this first application, PROLIFERATE_AI has been used to refine human-machine interactions, with an emphasis on the ethics of healthcare AI, helping address privacy and responsibility concerns when deploying AI in high-stakes settings such as emergency departments.
Late last year, a demonstration of the framework with CSIRO, Australia's national science agency, showed that it can model and predict user interactions with up to 95% accuracy, allowing organisations to adapt quickly to user needs and enhance outcomes.
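For reference, an interaction-prediction accuracy figure like this is simply the share of predicted user responses that match what users actually did. The labels below are fabricated placeholders; the CSIRO demonstration's data and model are not public.

```python
# Predicted vs observed user responses to an AI tool (made-up labels).
predicted = ["adopt", "adopt", "reject", "adopt", "reject",
             "adopt", "adopt", "adopt", "reject", "adopt"]
observed  = ["adopt", "adopt", "reject", "adopt", "adopt",
             "adopt", "adopt", "adopt", "reject", "adopt"]

# Accuracy = fraction of predictions that match observed behaviour.
matches = sum(p == o for p, o in zip(predicted, observed))
print(f"interaction-prediction accuracy: {matches / len(observed):.0%}")  # 90%
```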
PROLIFERATE_AI is now being applied to a large-scale project implementing non-pharmacological agitation management guidelines in intensive care units.
WHY IT MATTERS
"In order to understand if the AI systems are viable, we look at how easy they are to use, how well doctors and nurses adopt them, and how they impact patient care. It’s not just about making AI accurate; it’s about making sure it’s easy to understand, adaptable, and genuinely helpful for doctors and patients when it matters most," explained research lead Dr Maria Alejandra Pinero de Plaza.
THE LARGER TREND
Numerous frameworks, guidelines, recommendations, strategies, and policies on the development, adoption, and application of AI in critical industries, including healthcare, have been released over the past few years. These include the World Health Organization's Ethics and Governance of Artificial Intelligence for Health, the EU Artificial Intelligence Act, the OECD AI Principles, Australia's 8 AI Ethics Principles, and Singapore's Model AI Governance Framework.
Currently, the US Food and Drug Administration is finalising its marketing submission recommendations for developers of AI-powered medical devices.
Last year, the National Institute of Standards and Technology in the US released an open-source platform for assessing the data risks of AI and machine learning models, including those used in healthcare, while HITRUST introduced an assessment framework for mitigating risks in AI deployment.