
In our past training session, “Making AI work for you: everyday wins in QARA”, life sciences expert Martin King taught us how to use AI in quality management to enhance quality processes.
The webinar was aimed at regulatory affairs and quality assurance professionals, who learned how to integrate AI into QARA (Quality Assurance and Regulatory Affairs) workflows and create their own digital twin to enhance their quality tasks, with real-world examples of AI usage in medical and pharma.
Why is it important to comply with the EU AI Act?
Before we even start digging into how to use AI in quality management, it is worth answering a more fundamental question: why is it important to comply with the EU AI Act?
Martin started off by displaying an AI Literacy Statement to comply with the EU AI Act, as he had used AI to help him create the presentation. He had also given permission to his digital twin to use his face and his voice.
“Even though many companies don't use this, it is obligatory under the EU AI Act.”
AI affects many areas of our lives, from using face recognition to feed us personalized ads, all the way to being used in life-saving medical diagnostics and treatments. It is therefore very important to have legislation to regulate its usage, making sure that it is always used in a positive, safe, and beneficial way, protecting end user privacy and wellbeing.
Officially designated as Regulation (EU) 2024/1689, the EU AI Act is the first comprehensive framework designed to regulate the use of AI in Europe. Its aim is to address risks associated with AI, ensuring its safe and transparent use and development.
The EU AI Act comprises three risk categories:
- #1 Unacceptable risk: Systems and apps that create an unacceptable risk, such as government-run social scoring of the kind used in China, are banned in the EU.
- #2 High risk: High-risk apps, such as CV-scanning tools that rank job applicants, are subject to specific legal checks.
- #3 Limited and minimal risk: Applications not classified as high-risk or unacceptable fall into limited or minimal risk categories. Limited risk applications must meet transparency obligations. Minimal risk applications are not subjected to specific regulatory requirements.
Using AI in quality assurance: about EU AI Act enforcement
The EU AI Act officially entered into force on August 1, 2024. However, its various provisions are being rolled out over 36 months.
Some initial obligations, like those related to prohibited AI practices and AI literacy, began on February 2, 2025. Other provisions, such as those concerning general-purpose AI models and high-risk AI systems, will be enforced later, with deadlines extending into 2026 and 2027.
Recommended learning: Don’t miss these upcoming AI regulations for pharma and medical devices.
Fines applied to the misuse of AI in life sciences
On August 2, 2025, a major phase of the EU AI Act’s enforcement begins, bringing significant regulatory implications. One of the most critical aspects is the introduction of substantial penalties for non-compliance. Companies that engage in prohibited AI practices can face fines of up to €35,000,000 or 7% of their global annual turnover. These are not just theoretical risks; they are enforceable and will apply across the EU.
For other types of violations that are still considered serious, but not explicitly prohibited, the penalties are lower but still considerable: up to €15,000,000 or 3% of the company’s global turnover. Regardless of which tier the violation falls under, these fines represent a serious financial and reputational risk for any organization.
Legal aspects of AI in quality management and regulatory affairs
As Martin continued explaining in his talk on implementing AI in quality management, AI doesn’t inherently cause harm, but it can introduce hazards that may lead to harm.
It therefore requires a risk-based approach, one that differs from those typically applied in conventional regulatory affairs and risk management.
In AI systems, “hazards” refer to potential issues or failures that might occur, and “p” values refer to the probability of a hazard occurring. Separately, the probability that the hazard leads to harm must also be evaluated, depending on the specific hazard.
This is an important aspect of using AI in quality assurance in the life sciences, since the medical and pharmaceutical sectors typically fall under high-risk AI categories, especially under the EU AI Act. Even tasks that seem administrative, such as compliance checking or evaluation, are considered high-risk due to the sensitivity and consequences involved.
This classification demands rigorous transparency, risk management, human oversight, and, in the highly regulated context of life sciences, AI-specific regulations.
ISO 14971 for medical device risk management
As an example of legislation around AI in quality management, let’s look at AI regulations for medical device quality management systems.
When assessed within the ISO 14971 framework, AI-specific considerations are addressed in the accompanying document, ISO/TR 34971.
ISO 14971 is the main international standard for applying risk management to medical devices, providing a structured process for manufacturers to:
- Identify hazards
- Estimate and evaluate risks
- Implement risk controls
- Monitor the effectiveness of those controls
Within this framework, risk is often defined as the combination of:
- The probability of a hazardous situation occurring (e.g., system failure or misclassification),
- The probability that the hazard will lead to harm, and
- The severity of that harm.
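The three factors above can be combined in a simple quantitative sketch. This is illustrative only (real ISO 14971 assessments typically use qualitative rating scales, and the 1–5 severity scale here is an assumption):

```python
def risk_score(p_hazardous_situation: float,
               p_harm_given_hazard: float,
               severity: int) -> float:
    """Combine the three risk factors listed above into a single score.

    p_hazardous_situation: probability the hazardous situation occurs
    p_harm_given_hazard:   probability that the hazard leads to harm
    severity:              severity of that harm on an assumed 1-5 scale
    """
    p_harm = p_hazardous_situation * p_harm_given_hazard  # overall probability of harm
    return p_harm * severity

# Example: a misclassification occurs in 1% of cases, leads to harm
# in 10% of those, and the harm is rated 4 out of 5.
score = risk_score(0.01, 0.10, 4)
print(round(score, 6))  # → 0.004
```

Note that the severity term is what distinguishes this from a plain probability calculation: two hazards with the same likelihood can still demand very different risk controls.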
Under Annex III of the EU AI Act, AI systems used in medical devices and pharmaceuticals are explicitly classified as high-risk, particularly those influencing diagnosis, treatment decisions, or compliance processes.
Note that AI models, especially those capable of continuous learning, must be monitored and reassessed after deployment.
This reflects the view that risk management is a continuous lifecycle activity, not a one-off assessment, in line with both ISO/TR 34971 and the EU AI Act’s requirement for ongoing oversight.
ISO/TR 34971 for AI-specific guidance
ISO/TR 34971 is a technical report that complements ISO 14971 by offering guidance on its application to machine learning and AI-enabled medical devices.
It addresses specific AI challenges such as:
- Continuous learning systems (non-fixed behavior over time)
- Lack of explainability (black-box models)
- Probabilistic outcomes and scoring (instead of deterministic behavior)
- Dynamic risk profiles that change post-deployment
Understanding GPT: The foundation of AI in quality management
Popular AI GPTs include ChatGPT, Google Gemini and DeepSeek, all of which can be used to leverage the power of AI in quality management. The acronym GPT stands for “Generative Pretrained Transformer” and refers to a specific methodology:
- It is generative, meaning that it can create something new, like text, images, code, etc.
- It is pretrained: it has context and has received information in advance, from a vast data pool.
- It is a transformer, turning complex language tasks (prompts) into a specific output.
“That last part is what makes us think of AI as ‘intelligent’, whilst in reality it is a very clever pattern recognition system: it needs the context, the surrounding information, in order to match it to all the information it has received in advance. Then it can give feedback, in an almost human-like way.”
Martin King, Regulatory Affairs & Quality Assurance expert
The GPT task analysis model explained
AI GPTs work through task analysis, which can be broken down into four components: perception, cognition, action, and feedback.
Martin gives us an example: climbing a mountain. You perceive the terrain (perception), decide on a route (cognition), take a step (action), and adjust based on what happens (feedback).
These four steps form a loop that helps guide our behavior and adjust our actions based on results.
In a similar manner, GPTs interpret inputs, such as questions or prompts. However, they lack human cognition; they simply generate an action in response to a prompt.
What’s missing is a feedback loop. GPTs generate an output but don’t evaluate or adapt it. It’s the user’s role to judge whether the answer is good or bad and adapt it as needed.
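The division of labor described above can be sketched in a few lines of Python. The functions here are stand-ins, not real GPT calls; the point is the structure, with the feedback step belonging to the human:

```python
def perceive(prompt: str) -> str:
    # Perception: the model receives the prompt as raw input.
    return prompt.strip()

def act(perceived: str) -> str:
    # Action: a placeholder for the model's generated output.
    return f"Draft answer for: {perceived}"

def human_feedback(output: str, acceptable: bool) -> str:
    # Feedback: GPTs do not evaluate their own output; the user
    # judges it and either accepts it or requests a revision.
    return output if acceptable else "REVISE: " + output

draft = act(perceive("  Summarize clause 7.5.6 of ISO 13485  "))
final = human_feedback(draft, acceptable=False)
print(final)  # the human, not the model, closes the loop
```

Nothing here adapts on its own: until `human_feedback` is called, the output is just a draft, which is exactly the gap the text describes.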
Use of AI in life sciences: How can AI help in quality management?
Maintaining high standards of quality in life sciences is key, which is why many QA professionals have started wondering how to use AI in quality management as a way to revolutionize their organizations’ quality processes.
So, how can AI help in quality management? Here are a few examples:
By using AI algorithms and AI-powered data collection, organizations can streamline testing, inspections and other essential quality management procedures. By analyzing vast amounts of data instantly, AI enables real-time decision making, and reduces manual effort.
AI-enhanced monitoring systems also make continuous improvement achievable by easily identifying patterns, trends, anomalies, deviations, and improvement opportunities. By using machine learning algorithms and historical data, organizations can also implement preventative measures. Regulatory tasks can also be improved with AI-powered contextual mapping.
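To make the pattern-and-deviation idea concrete, here is a minimal sketch (an illustration, not a tool from the webinar) that flags readings deviating from the batch mean by more than a chosen number of standard deviations:

```python
from statistics import mean, stdev

def flag_deviations(measurements, threshold=2.0):
    """Flag measurements that deviate from the batch mean by more
    than `threshold` standard deviations."""
    m, s = mean(measurements), stdev(measurements)
    return [x for x in measurements if abs(x - m) > threshold * s]

# Seal-strength readings (illustrative numbers only)
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 13.5, 10.1]
print(flag_deviations(readings))  # → [13.5]
```

Production-grade monitoring would of course use richer models than a z-score rule, but the principle is the same: let the system surface the anomaly, then let a human decide what it means.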
As a result of the practices mentioned above, here are some benefits of using AI in regulatory affairs and quality management:
- Improved efficiency: a faster and automated way to implement and revise quality processes, leading to decreased time and costs without compromising quality.
- Enhanced quality control: by automating processes, uncovering blind spots, enabling continuous monitoring, and improving employee training.
- Faster time to market: by eliminating disconnected manual processes and reducing errors and blocking issues, AI minimizes unnecessary delays to market.
An example of AI in quality management: AI-powered FMEA generation based on ISO 13485
In this article, we’ll highlight one use case that Martin brought up during the presentation as an example of AI in quality management: using AI to support the creation of FMEAs (Failure Mode and Effects Analyses) aligned with ISO 13485.
Clause 7.5.6 of ISO 13485 covers the validation of production processes; Martin’s example involves validating a sealing process to ensure consistent results. In a real scenario, a QA professional may have 100+ rows in an FMEA to complete, each needing risk evaluation and mitigation suggestions.
Here’s how the process can be made faster with GPT:
This enables rapid iteration and provides a solid human-AI collaboration model: the AI suggests; the human reviews and validates.
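As a rough sketch of this collaboration model (the exact workflow from the webinar is not reproduced here, and the field names below are illustrative assumptions), each incomplete FMEA row can be turned into a structured prompt for a GPT, with the professional reviewing every suggestion before it enters the document:

```python
# Illustrative sketch: turning an incomplete FMEA row into a GPT prompt.
# The prompt wording and field names are assumptions, not Martin's exact workflow.

FMEA_PROMPT = (
    "You are assisting with an FMEA aligned with ISO 13485 (clause 7.5.6, "
    "process validation). For the failure mode below, suggest a likely "
    "effect, a possible cause, and one mitigation. Mark every suggestion "
    "as a draft requiring human review.\n\n"
    "Process: {process}\nFailure mode: {failure_mode}"
)

def build_fmea_prompt(process: str, failure_mode: str) -> str:
    """Format one FMEA row into a prompt ready to send to a GPT."""
    return FMEA_PROMPT.format(process=process, failure_mode=failure_mode)

prompt = build_fmea_prompt("Tray sealing", "Incomplete seal along edge")
print(prompt)
# The returned text would be sent to a GPT; the QA professional then
# reviews and validates each suggested row before accepting it.
```

Batching a hundred rows through a template like this is where the speed-up comes from, while the review step keeps the human firmly in charge of what ends up in the FMEA.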
What is a digital twin, and how can you make your own?
Are you wondering how to make your own digital twin? A digital twin is a virtual representation of a real-world object or system.
It mirrors something that exists physically, allowing us to simulate, monitor, and interact with it in a digital environment. This concept is not new. In fact, NASA has been using digital twins for decades.
In a production environment, we can build a digital model of an entire production line. By feeding real-time data into the model, we can monitor performance, spot trends, and proactively address deviations.
Essentially, digital twins take statistical process control to the next level: automated, real-time, and dynamic.
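To make the statistical-process-control analogy concrete, here is a minimal sketch (an illustration, not a tool from the webinar) of a digital model that checks incoming readings against control limits derived from historical baseline data:

```python
from statistics import mean, stdev

class LineTwin:
    """Toy digital model of one production-line parameter.

    Control limits are set at mean ± 3 standard deviations of the
    historical baseline, as in a classic Shewhart control chart.
    """
    def __init__(self, baseline):
        m, s = mean(baseline), stdev(baseline)
        self.lower, self.upper = m - 3 * s, m + 3 * s

    def check(self, reading):
        # Returns True if the reading is within control limits.
        return self.lower <= reading <= self.upper

twin = LineTwin([10.0, 10.1, 9.9, 10.05, 9.95, 10.0])
print(twin.check(10.02))  # in control → True
print(twin.check(11.0))   # deviation → False
```

A real digital twin would model many interacting parameters and update continuously from live sensor data, but the core loop is the same: compare reality against the model and act on the difference.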
“On a more personal scale, this is how I’ve made my own digital twin. I’ve used digital twins to enhance communication and training using a tool called AI Studios”, explains Martin.
FAQs about AI in quality management
Do you have any tips on choosing the most suitable AI tool?
How do I validate an adaptable AI system used within a QMS?
What are the IT security tools for protecting company privacy when using AI?
What key points should QA professionals consider when using AI in their tasks?
Final thoughts on using AI in quality management
AI in quality management is becoming a powerful tool for quality professionals, boosting speed, accuracy, and efficiency across tasks like compliance, risk detection, and testing. In areas such as medical diagnostics, it can even support life-saving decisions.
However, to successfully reap the benefits of using AI in regulatory affairs and quality management, human oversight is essential. AI can assist, but expert judgment is needed to validate and guide its output.
The same goes for technologies like digital twins, which enhance training and communication but rely on human insight to be effective. Lastly, in regulated fields like pharma and med tech, aligning with compliance standards is critical, not just to avoid legal risks, but to ensure patient safety, reinforcing the indispensable role of human responsibility.