Of Medicine and Machine Learning
How should regulators approach the challenge of regulating AI?
Angus Stewart, Stephanie Sutton | Quick Read
Following a “horizon scan” exercise, the International Coalition of Medicines Regulatory Authorities (ICMRA) has released a report to help regulators address the challenges that stem from the increasing use of artificial intelligence. In medicine, AI (including but not limited to statistical models, algorithms, and self-modifying systems) is expanding into numerous areas including preclinical development, clinical trial data analysis, pharmacovigilance, and even clinical use optimization. According to the report, “This range of [AI] applications brings with it regulatory challenges, including the transparency of the algorithms themselves and their meaning, as well as the risks of AI failures and the wider impact these would have on its uptake in pharmaceutical development and ultimately on patients’ health.”
The report uses hypothetical case studies to examine the challenges posed by different applications of AI, including in pharmacovigilance and the use of apps for monitoring patients, such as those used in clinical trials.
Recommendations for medicines regulators in the report include:
- Developing regulatory guidelines in a number of areas, including data provenance, reliability, transparency and understanding, the use of AI for pharmacovigilance purposes, and real-world performance monitoring.
- International standardization of good machine learning practices in biomedicine.
- Adopting a risk-based approach to AI assessment and regulation, which could benefit from collaboration through ICMRA. According to the report, “The scientific or clinical validation of AI use would require a sufficient level of understandability and regulatory access to the employed algorithms and underlying datasets.”
- Exploring the benefits of establishing a Qualified Person concept, with responsibility for overseeing AI compliance.
- Engaging with ethics committees and AI expert groups to understand the ethical issues related to the use of AI in medicines development.
The report also examines AI activities at various medicines regulators; for example, Health Canada is developing its first industry guidance on “locked” AI/machine-learning enabled medical devices and has conducted surveys with marketing authorization holders on current and potential use of AI in pharmacovigilance systems. Meanwhile, Swissmedic has launched a digital initiative called Swissmedic 4.0 that looks at digital transformation, including the use of AI to detect safety signals.