
AI: Upcoming regulation for pharma and medical devices

Artificial intelligence (AI) is revolutionizing industries such as pharmaceuticals and medical devices.

Regulatory agencies and organizations recognize the potential of AI in improving health outcomes. AI offers many possibilities by helping to improve processes, develop novel approaches, and transform data in ways that were not possible before.

Many organizations provide or use AI-based products or services, but are security, safety, trust, ethics, fairness, and transparency adequately addressed?

Not completely.

It is essential to address these concerns so that ethics, human rights, transparency, and accountability are upheld in the development and deployment of AI for health.

Organizations must be aware of their responsibilities and take the necessary steps to use AI responsibly and safely. Governments must create and enforce laws that ensure AI is used ethically and responsibly.

Regulatory efforts are underway to address the responsible use of AI in pharma. However, as AI technologies become more widely used, regulatory agencies find it increasingly difficult to keep up.

This article reviews the upcoming regulation for pharma and medical devices that is intended to ensure patient safety and protect data privacy and security.  

 

Serious challenges of using AI

To ensure that manufacturers' AI models comply with existing legislation, AI guidance must address serious challenges and caveats. These include:

[Infographic: the challenges of using AI in Life Sciences organizations | Scilife]

 

Ethics in data collection

Using artificial intelligence to handle sensitive medical data raises privacy and security concerns. AI may also undermine individuals' autonomy and dehumanize patient care.

When making automated decisions based on personal data, manufacturers must comply with the General Data Protection Regulation (GDPR) in Europe and provide meaningful information about the logic involved in any automated decision taken by an AI algorithm.
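What "meaningful information about the logic involved" looks like in practice is not prescribed. As a minimal sketch, one approach is to store a human-readable decision record alongside every automated output; the model name, fields, and example values below are purely illustrative assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Illustrative audit record for an automated decision (GDPR transparency)."""
    model_name: str
    model_version: str
    inputs: dict        # features the algorithm actually used
    output: str         # the automated decision or prediction
    top_factors: list   # plain-language drivers of the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example: documenting a triage recommendation
record = DecisionRecord(
    model_name="triage-classifier",
    model_version="2.1.0",
    inputs={"age": 64, "systolic_bp": 150, "hba1c": 7.9},
    output="refer to specialist",
    top_factors=["elevated HbA1c", "elevated systolic blood pressure"],
)
print(record)
```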

The power granted to AI in the medical field could translate into life-threatening decisions, such as incorrect diagnoses, treatments, or interventions, among many others.

The principle of autonomy requires that AI does not undermine human autonomy. This means that individuals must retain control over their medical decisions and actions in healthcare systems.


Cybersecurity threats

The integration of AI in pharma and medical devices raises concerns about the vulnerability of devices to cyberattacks. Privacy breaches can have serious financial and legal consequences.

Companies and healthcare providers must protect this data with strong security measures, train their staff to handle it properly, and keep patient information secure and confidential.

Their systems must be regularly monitored and patched to make certain that any vulnerabilities are addressed quickly.
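As a concrete example of one such safeguard, the minimal sketch below encrypts a patient record at rest using the Fernet recipe from the widely used Python cryptography package. Proper key management (a secrets manager or hardware security module) is assumed and out of scope here:

```python
from cryptography.fernet import Fernet

# In production the key would come from a secrets manager or KMS,
# never generated ad hoc or stored alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

patient_record = b'{"patient_id": "P-0042", "diagnosis": "hypertension"}'

token = cipher.encrypt(patient_record)   # ciphertext, safe to store
restored = cipher.decrypt(token)         # recoverable only with the key

assert restored == patient_record
```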


Lack of transparency

Many AI algorithms operate as black boxes, making it difficult to understand how they come to their conclusions.

Without transparency, patients and healthcare professionals cannot trust, hold accountable, or explain the medical decisions these systems inform.
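Post-hoc feature attribution is one common way to shed light on a black-box model. The sketch below is a minimal illustration of the idea using scikit-learn's permutation importance on synthetic data, not a complete explainability pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for clinical tabular data
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature degrade the model's score?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```

Features whose shuffling hurts performance most are the ones driving the model's conclusions, giving clinicians at least a partial window into the algorithm's reasoning.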


Risk of bias

The quality, distribution, and integrity of the input data determine the output of AI algorithms and, in turn, how well a device performs.

As a result, it is essential to identify and address any biases that may be present within the input data to ensure safe and effective AI applications.
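A simple first step, sketched below, is to compare model performance across patient subgroups before deployment. The DataFrame columns and values are hypothetical:

```python
import pandas as pd

# Hypothetical evaluation data: true labels, predictions, and a
# demographic attribute for each patient
df = pd.DataFrame({
    "sex":        ["F", "F", "M", "M", "F", "M", "F", "M"],
    "true_label": [1, 0, 1, 1, 1, 0, 0, 1],
    "predicted":  [1, 0, 0, 1, 1, 0, 0, 0],
})

# Accuracy per subgroup: a large gap flags a potential bias problem
accuracy_by_group = (
    df.assign(correct=df["true_label"] == df["predicted"])
      .groupby("sex")["correct"]
      .mean()
)
print(accuracy_by_group)
```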


Potential for misuse

Using medical AI tools incorrectly can lead to wrong medical assessments and decision-making, which could result in harm to patients.

Several factors can contribute to this misuse, such as the limited involvement of clinicians and patients in the development of AI, a lack of training in medical AI among healthcare professionals, and the proliferation of easily accessible online and mobile AI solutions without sufficient explanation and information.


Regulatory framework and standards

Advances in AI are outpacing regulations. To adapt existing regulations to AI innovations, regulatory authorities should develop clear guidelines and collaborate with stakeholders in the industry.



General initiatives in Artificial Intelligence

 

ISO/IEC 42001:2023

To ensure organizations stay ahead of the curve, the International Organization for Standardization (ISO) published ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system in December 2023.

The standard specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations.

The ISO/IEC 42001:2023 standard is the first of its kind in the world. It takes into account the challenges AI poses, such as ethical considerations, transparency, and continuous learning. The standard sets out a structured way to manage risks and opportunities associated with using, developing, monitoring, and providing products or services that utilize AI, balancing innovation with governance.

 

WHO Regulatory Considerations on Artificial Intelligence for Health

In October 2023, the World Health Organization (WHO) published a paper titled Regulatory Considerations on Artificial Intelligence for Health. It outlines key regulatory considerations for AI for health, emphasizing the importance of establishing safety and effectiveness, making suitable systems available, and fostering dialogue about using AI as a positive tool.

A list of 18 regulatory considerations was provided in the document as a resource for regulators. These considerations fall under six broad categories:

  1. Documentation and transparency
  2. Risk management
  3. Intended use and validation
  4. Data quality
  5. Privacy and data protection
  6. Engagement and collaboration

The purpose of these categories is to guide governments and regulatory authorities in developing new guidance on AI or adapting existing guidance so that AI is effectively regulated to maximize its potential in healthcare while minimizing its risks.



AI regulations in pharma and medical devices

Artificial Intelligence in pharma consists of algorithms, models, and techniques that are incorporated into computer systems to automate tasks and make decisions and predictions.

Data analytics is one of the key AI applications in regulatory compliance. Pharma and medical device companies generate enormous volumes of data, which cannot be managed with traditional methods.

With the help of AI algorithms, large volumes of data can be analyzed more efficiently, patterns can be identified, and meaningful insights can be drawn.

In comparison to humans, AI-driven tools provide a faster and more accurate way of scanning and analyzing data.
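As one illustration of such AI-driven analysis, the minimal sketch below flags out-of-trend manufacturing batches with an Isolation Forest. The process variables and data are synthetic assumptions for the example:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic batch records: [temperature (°C), pH, yield (%)]
batches = rng.normal(loc=[25.0, 7.0, 92.0], scale=[0.5, 0.1, 1.5], size=(200, 3))
batches[-1] = [28.5, 6.2, 80.0]  # inject an out-of-trend batch

model = IsolationForest(contamination=0.01, random_state=0).fit(batches)
flags = model.predict(batches)   # -1 = anomaly, 1 = normal

print("Flagged batches:", np.where(flags == -1)[0])
```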

However, the industry's regulators acknowledge that AI presents both challenges and opportunities, and they are making ongoing efforts to promote its responsible use.

Let’s discuss the current status of AI regulations in the EU and the US.

 

Current status in the EU

As part of the Big Data Workplan 2022-2025, the Big Data Steering Group of the European Medicines Agency (EMA) published a draft reflection paper on the use of AI in the medicinal product lifecycle in July 2023. The paper outlines current thinking on how artificial intelligence can be used to help develop, regulate, and use safe and effective medicines for humans and animals. Through this paper, the EMA opened a dialogue with developers, academics, and other regulators to ensure that patients and animals can benefit from these innovations.

Afterward, in December 2023, the EMA and the Heads of Medicines Agencies (HMA) published an Artificial Intelligence workplan 2023-2028 to create a regulatory system that takes advantage of the capabilities of AI while adequately addressing uncertainty and mitigating risks.

Lastly, in December 2023, the European Parliament and Council reached a political agreement on the European Union's Artificial Intelligence Act ("EU AI Act"). The AI Act is a milestone in AI regulation that sets the stage for other countries and regions to develop their own legislation.

With this initiative, the European medicines regulatory network remains at the forefront of utilizing AI in medicines regulation.



Upcoming regulations in the EU

After the publication of the EU AI Act in December 2023, the next steps involve its implementation in phases and the final enforcement that will balance innovation (the use of AI) with regulation.

Organizations will have to audit their artificial intelligence systems to determine their risk category (unacceptable risk, high risk, limited risk, and minimal risk) and make sure they meet the requirements.

[Infographic: how to assess AI risks | Scilife]
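As a rough sketch of what such an internal audit step might look like in code, the helper below maps a system's attributes to the Act's four tiers. The questions and mapping are a deliberate simplification for illustration, not the Act's actual classification logic or legal advice:

```python
def classify_ai_risk(
    uses_prohibited_practice: bool,   # e.g., social scoring
    is_safety_component: bool,        # e.g., a medical device function
    interacts_with_users: bool,       # e.g., a chatbot or generated content
) -> str:
    """Illustrative mapping to the EU AI Act's four risk tiers."""
    if uses_prohibited_practice:
        return "unacceptable risk (banned)"
    if is_safety_component:
        return "high risk (strict requirements)"
    if interacts_with_users:
        return "limited risk (transparency obligations)"
    return "minimal risk (no specific obligations)"

# A diagnostic-support algorithm would typically land in the high-risk tier
print(classify_ai_risk(False, True, True))
```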

They will also have to be prepared to take immediate corrective actions in case of non-compliance and report any serious incidents to the designated national authorities.

Enforcing the EU AI Act will be the member states' responsibility, involving collaboration between national supervisory authorities and the proposed European Artificial Intelligence Board (EAIB). National authorities will handle the day-to-day application and enforcement of the act, including ensuring compliance, conducting checks, and imposing penalties, while the EAIB will provide guidance and advice on the AI Act.

As a result, it will be necessary to establish mechanisms for monitoring AI systems and conducting compliance checks regularly, especially those that pose a high risk.
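A common building block for this kind of ongoing monitoring is a statistical check that live input data still resembles the data the model was validated on. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy; the data and alert threshold are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

reference = rng.normal(loc=0.0, scale=1.0, size=1000)  # validation-time data
live = rng.normal(loc=0.4, scale=1.0, size=1000)       # shifted production data

statistic, p_value = ks_2samp(reference, live)

# Illustrative threshold: a tiny p-value suggests the input
# distribution has drifted and the model should be re-validated
if p_value < 0.01:
    print(f"Drift detected (p = {p_value:.2e}); trigger a compliance review")
else:
    print("No significant drift detected")
```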

The ban on unacceptable-risk AI systems will become binding six months after the EU AI Act enters into force, while transparency obligations and risk assessments will become binding 12 months after entry into force.

Future changes to the EU AI Act are likely. Organizations will need to stay on top of these changes and be flexible in compliance. In addition to regularly reviewing AI systems against current regulations, organizations will need to maintain open channels with regulators and invest in AI governance training.

The EU AI Act will become fully applicable by 2026, when member states will enact the new rules at the national level; specific provisions, however, will apply as early as six months after the act's approval.

Fines will range from 7.5 million euros ($8 million) or 1.5% of global annual turnover up to 35 million euros or 7% of global annual turnover, depending on the severity of the violation and the size of the company involved.


Current status in the US

At the time of writing, no comprehensive AI law has been adopted in the US. However, US regulatory authorities have already issued some guidelines and frameworks for using AI in the medical field.

Let’s look at the current situation in the medical device sector:

In January 2019, the FDA published “Developing a Software Precertification Program: A Working Model,” which proposes more streamlined and efficient regulatory oversight of software-based medical devices from manufacturers who have demonstrated a robust culture of quality and excellence.

In April 2019, the FDA published a discussion paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback” that describes the FDA’s approach to premarket review for artificial intelligence and machine learning-driven software modifications. Based on these recommendations, SaMDs can be developed, evaluated, and validated using real-world performance monitoring, algorithm change protocols, and quality assurance measures.

In January 2021, the FDA published the Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, reaffirming its commitment to supporting innovative work in the regulation of medical device software and other technologies, such as AI/ML. The plan outlines five goals:

  1. Tailored regulatory framework for AI/ML-based SaMD
  2. Good Machine Learning Practice (GMLP)
  3. A patient-centered approach incorporating transparency for users
  4. Regulatory science methods related to algorithm bias & robustness
  5. Real-World Performance (RWP)

This Action Plan indicates the FDA's intentions to update the proposed regulatory framework presented in the discussion paper on AI/ML-based SaMD, including through the issuance of a draft guidance.

Now let’s look at the current situation of artificial intelligence in pharma and biotech:

In 2023, the FDA released two discussion papers on AI use:

  1. “Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products”
  2. “Artificial Intelligence in Drug Manufacturing”

The FDA is expected to consider stakeholder input on these papers when developing future regulations.

At the federal level, the US administration issued an Executive Order on AI in October 2023, directing many US executive departments and agencies to adhere to eight principles and to evaluate the safety, security, and associated risks of AI technology. The order also required organizations to test and report on their AI systems.

Also noteworthy is the Cybersecurity Modernization Action Plan, released to enhance the FDA's ability to protect sensitive information and improve its cybersecurity capabilities. The FDA is committed to ensuring that drugs are safe and effective while facilitating innovations, such as AI, in their development.



Upcoming regulations in the US

No specific regulatory guidance is applicable yet. However, the FDA has invited the public to comment on the discussion papers it has published and is engaging with stakeholders to better understand the implications of AI-based technologies for the development of drugs and medical devices.

The agency plans to finalize its regulatory framework in the near future.

 

 


Conclusion

The integration of artificial intelligence in the pharmaceutical and medical devices sectors presents unprecedented opportunities but demands careful consideration of ethical, security, and transparency concerns.

Regulations such as the EU AI Act, along with US efforts, are crucial to balancing innovation with governance. The landscape is evolving rapidly, which calls for proactive collaboration, compliance auditing, and continuous learning.

AI's transformation of healthcare is imminent, and realizing its potential requires a shared commitment to responsible adoption for the benefit of individuals and global well-being.

Discover how Scilife smart QMS can be your best ally to surf the wave of AI innovation in Life Sciences.