How AI is reshaping pharmacovigilance: key takeaways from the new CIOMS report

     

    For the first time, artificial intelligence (AI) is transitioning from concept to practical application in pharmacovigilance. The newly released 2025 CIOMS Working Group XIV report offers an internationally aligned framework for responsible AI use in drug-safety activities. It outlines emerging opportunities, highlights potential risks, and provides direction for regulators, life science organizations, technology developers and healthcare professionals.

    Let's take a look inside.

     

    Why AI matters for pharmacovigilance

     

    Pharmacovigilance has grown more complex over the past two decades. Adverse event reporting continues to rise, while safety teams must analyse a wider range of data sources than ever before—spontaneous reports, clinical trial data, scientific literature, social-media content, and real-world evidence. Many activities are still performed manually, making speed and consistency increasingly difficult to maintain.

    AI is already helping address these challenges.

    Current applications include:

    • Case processing, including medical coding, translation, and extraction of key information

    • Duplicate detection across large safety databases

    • Signal detection, supported by automated screening and prioritisation

    • Triage tools that route high-priority cases to specialist reviewers

    • Search and summarisation, using early implementations of large language models (LLMs)

    Some capabilities are still in early development, but many are now integrated into routine pharmacovigilance workflows.
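To make the signal-detection item above concrete, the sketch below computes the proportional reporting ratio (PRR), a standard disproportionality statistic used to screen spontaneous-report databases for drug–event signals. The report counts and the screening rule (PRR ≥ 2 with at least 3 cases, a commonly cited threshold) are illustrative, not taken from the CIOMS report.

```python
def prr(a, b, c, d):
    """Proportional reporting ratio for one drug-event pair.

    a: reports with the drug AND the event of interest
    b: reports with the drug, without the event
    c: reports with the event, without the drug
    d: reports with neither the drug nor the event
    """
    return (a / (a + b)) / (c / (c + d))


# Hypothetical counts from a spontaneous-reporting database
a, b, c, d = 12, 488, 150, 49350

score = prr(a, b, c, d)
# A common screening rule: PRR >= 2 with at least 3 co-reported cases
is_signal = score >= 2 and a >= 3
```

In a real pipeline this calculation would run across every drug–event combination in the database, with flagged pairs routed to a safety scientist for review rather than acted on automatically.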

     



    I discussed the convergence of pharmacovigilance and AI at ISoP's Intelligent Automation Summit in late 2025.

     

    FURTHER READING: Check out our pharmacovigilance software

     

    7 principles for responsible AI in pharmacovigilance

     

    To ensure responsible and sustainable adoption, CIOMS provides a set of guiding principles rather than detailed technical instructions. These principles are meant to endure as AI technologies continue to evolve.

     

    1. A risk-based approach

     

    Oversight should match the level of risk. Key factors include how much an AI system influences decisions and the consequences of incorrect outputs. High-stakes systems require rigorous controls, while lower-risk applications with meaningful human oversight call for proportionate measures.

     

    2. Human oversight

     

    Human accountability remains essential. CIOMS distinguishes between:

    • Human-in-the-loop systems where people make the final decision

    • Human-on-the-loop systems where automation plays a larger role but is continuously monitored

    Organisations should plan for changes in roles and skill requirements as AI adoption grows.

     

    3. Validity & robustness

     

    AI systems must perform reliably in real-world settings. This requires representative testing, appropriate data, transparency around limitations, and monitoring performance across relevant subgroups. Because rare events are central to pharmacovigilance, evaluation datasets often need targeted enrichment.
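Monitoring performance across relevant subgroups, as this principle recommends, can be as simple as stratifying an evaluation set and computing the same metric per stratum. The sketch below measures recall of a case-triage model per (hypothetical) age group; the data, labels, and subgroup names are made up for illustration.

```python
from collections import defaultdict


def recall_by_subgroup(records):
    """Recall of a triage model per subgroup.

    records: iterable of (subgroup, true_label, predicted_label) tuples,
    with labels 1 for 'serious case' and 0 otherwise.
    """
    tp = defaultdict(int)  # true positives per subgroup
    fn = defaultdict(int)  # false negatives per subgroup
    for group, truth, pred in records:
        if truth == 1:
            if pred == 1:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}


# Hypothetical evaluation data: (subgroup, truth, prediction)
data = [
    ("adult", 1, 1), ("adult", 1, 1), ("adult", 1, 0), ("adult", 0, 0),
    ("elderly", 1, 1), ("elderly", 1, 0), ("elderly", 1, 0), ("elderly", 0, 1),
]
results = recall_by_subgroup(data)
```

A large gap between subgroups (here, adults versus elderly patients) is exactly the kind of finding that would trigger the targeted enrichment of evaluation data the report calls for.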

     

    4. Transparency

     

    Stakeholders should be able to understand how AI systems operate. Transparency includes clear descriptions of purpose, data use, expected outputs, and known limitations. Explainability methods can support trust, although they provide plausible reasoning rather than a precise view of model behaviour.

     

    5. Data privacy

     

    AI introduces new privacy considerations, especially with LLMs that interact with sensitive health information. Responsible use requires privacy-by-design practices, data-protection impact assessments, and strong governance for linked datasets. As privacy regulations evolve worldwide, pharmacovigilance systems must remain adaptable.

     

    6. Fairness & equity

     

    Bias can enter through training data, model design, or deployment. To minimise inequities, datasets should reflect the populations who use the medicines, and model performance should be evaluated across relevant subgroups. Many fairness challenges originate from gaps in reference data, highlighting the need for early detection and mitigation.

     

    7. Governance & accountability

     

    Clear ownership and robust documentation are essential. Effective governance includes version control, traceability, ongoing monitoring, and defined responsibilities throughout the AI lifecycle. Governance frameworks should evolve alongside technology and regulatory expectations.

     

    What’s next for AI in drug safety work?

     

    The CIOMS report points to several developments that will shape the future of pharmacovigilance:

    • Near real-time safety monitoring, supported by integrated data environments

    • Earlier application of AI during drug development, shifting attention from post-approval detection to earlier prediction

    • New clinical use cases, including AI-supported diagnosis of drug-related conditions

    • Hybrid decision-making models, where responsibility is shared between humans and AI systems

    • Greater expectations for transparency, privacy, and auditability

     

    AI continues to reshape how critical life science work gets done.

    We can't wait to see what 2026 brings!