AI in Quality and Compliance

Life Science Consultant

05/21/2025

Artificial Intelligence (AI) is transforming the way regulated industries approach Quality Assurance (QA) and Compliance. This article lays out the foundational concepts of AI, including machine learning (ML) and deep learning (DL), and explores their application in quality management systems (QMS). It highlights how AI can be fine-tuned to align with company-specific SOPs and regulatory frameworks, addresses key integration challenges, and outlines the strategic value KVALITO brings in delivering validated, intelligent automation solutions in GxP environments.

Background

In today’s digital era, the technological landscape isn’t merely shifting—it’s advancing at a pace that’s both thrilling and occasionally intimidating. This rapid transformation is fueled by breakthroughs, discoveries, and synergies, with Artificial Intelligence (AI) emerging as a pivotal driver.

As AI technology progresses, this evolutionary trajectory is set to accelerate exponentially, thanks to the vast swathes of data AI can tap into. Furthermore, its intrinsic ability to learn and refine itself—anchored in deep learning principles—underscores its potency.

Currently, only a narrow scope of tasks lies beyond AI’s reach, and with ongoing technological advancements, these boundaries are poised to expand continually.

While the evolution of this technology may be daunting for some, history shows that groundbreaking advancements bring fresh opportunities, enhanced lifestyles, and new roles. Embracing and adapting our personal and professional lives to these changes is key. Progress is inexorable; we can either prepare to harness its benefits or risk being left behind.

 

Understanding the Foundations of AI
 1. What are AI, ML, and DL?

AI is the discipline that seeks to create machines capable of mimicking human cognitive functions, such as problem-solving or decision-making. It is a broad domain of computer science focused on creating systems capable of performing tasks that normally require human intelligence. These tasks can range from simple ones, like recognizing patterns, to complex ones, like composing music.

AI is the umbrella term for making machines “smart”; it encompasses several subsets, such as Machine Learning (ML), Deep Learning (DL), and Neural Networks. Let’s dive deeper into each of these terms and how they relate:

  • Machine Learning (ML) is a subset of AI. It involves creating algorithms that allow systems to learn from data. Instead of being explicitly programmed to perform a task, an ML system uses data to build a model that can make predictions or decisions (see the short sketch after Figure 1).
  • Deep Learning (DL) is a subfield of ML (alongside approaches like supervised and unsupervised learning). The main difference is the use of neural networks with three or more layers. These networks attempt to simulate the behaviour of the human brain, allowing them to “learn” from large amounts of data. While a neural network with a single layer can make approximate predictions, additional hidden layers can refine accuracy.
  • Neural Networks are foundational to DL. They consist of interconnected nodes or “neurons” that process data in layers. Inspired by the human brain, they are crucial for enabling deep learning models to recognize patterns and make decisions from large datasets.

Here is a diagram to visually show the relationship between these concepts:

Figure 1: Diagram of AI structure, Copyright KVALITO Consulting Group 2025
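
To make the ML definition above concrete, here is a minimal, illustrative scikit-learn sketch: nothing about what makes a document risky is explicitly programmed; the model learns a decision rule from labeled examples. The features, labels, and review scenario are invented for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Toy labeled data: [document length, number of deviations] -> needs QA review?
X = [[100, 0], [250, 1], [900, 4], [1200, 6]]
y = [0, 0, 1, 1]  # 1 = flag for review

# The decision rule is learned from the data, not explicitly programmed.
model = LogisticRegression().fit(X, y)
print(model.predict([[800, 3]]))  # likely [1] (flag for review)
```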

2. How AI Learns
  • Basic training, often referred to as pre-training, involves training a machine learning model on a large and general dataset. This process enables the model to learn broad patterns, features, and representations. It gives the model a strong foundation by capturing generic features from diverse data.
  • Fine-tuning takes a pre-trained model and further trains it on a smaller, more specific dataset. This adapts the general features learned during basic training to the specific requirements of a particular task, improving performance on specialized datasets (see the sketch below).
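
For illustration, the following sketch (in PyTorch, assumed here as the framework) freezes a hypothetical pre-trained base network and trains only a small task-specific head on a toy stand-in for the specialized dataset. It is a minimal sketch of the fine-tuning idea, not a production recipe.

```python
import torch
import torch.nn as nn

# Hypothetical pre-trained base network (in practice, loaded from a checkpoint).
base = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())

# Freeze the generic features learned during pre-training.
for param in base.parameters():
    param.requires_grad = False

# New task-specific head, e.g., classifying documents into 4 QMS categories.
head = nn.Linear(64, 4)
model = nn.Sequential(base, head)

# Only the head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy stand-in for the small, domain-specific dataset.
x = torch.randn(32, 128)        # 32 samples, 128 features each
y = torch.randint(0, 4, (32,))  # 32 labels in {0..3}

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```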

 

3. Neural Networks: The Learning Engine

The scientific foundation of neural networks is inspired by the functioning of neurons in the human brain. However, despite the terminology and inspiration, artificial neural networks are simplified mathematical models and do not represent the full complexity of biological neurons.

The perceptron (also called a node or neuron today) is one of the foundational structures in neural networks. It takes multiple inputs, multiplies them by specific weights, sums up the products, and then passes the result through an activation function to produce an output:

y = f(w₁x₁ + w₂x₂ + … + wₙxₙ + b)

where:

xᵢ : the inputs.

wᵢ : the weights.

b : the bias, which shifts the activation threshold so the node can produce a non-zero output even when all inputs are zero.

f : the activation function (e.g., step function, sigmoid)

The node “trains” by adjusting weights and biases in response to errors in predictions. The goal is to find a combination of weights that allows the model to make accurate predictions. Initially, weights are set randomly and then updated based on errors through an iterative process. A single node can only separate data that is linearly separable, i.e., data that can be divided using a single straight line (in two dimensions). This means it cannot handle non-linear or complex problems, like the XOR gate (Exclusive OR logic gate).
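
A minimal NumPy sketch of this training loop, assuming a step activation and the classic perceptron update rule, is shown below: trained on the AND gate the node converges, while on XOR it cannot, precisely because XOR is not linearly separable. It is a didactic toy, not a production implementation.

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Single node: step activation over a weighted sum plus bias."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])  # weights start random
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0  # step activation
            error = target - pred
            w += lr * error * xi  # adjust weights in response to errors
            b += lr * error       # adjust bias in response to errors
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])  # linearly separable: learnable
y_xor = np.array([0, 1, 1, 0])  # not linearly separable: a single node fails

for name, y in [("AND", y_and), ("XOR", y_xor)]:
    w, b = train_perceptron(X, y)
    preds = [(1 if x @ w + b > 0 else 0) for x in X]
    print(name, "predictions:", preds, "targets:", list(y))
```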

However, by combining multiple nodes into multi-layer neural networks, non-linear problems can be addressed. This concept led to the development of multi-layer feedforward neural networks. For deeper insights into the difference between linear and non-linear networks, see Appendix 1.

But how, practically, can an AI generate a complete answer to a given question using the above principles? To answer this, we propose a simplified, practical example in Appendix 1.

Figure 2: Deep Neural Network made of several layers, Shutterstock-Illustration ID 767322589

Practical Applications in Quality Assurance and Compliance

In today’s pharmaceutical landscape, significant investment is made in Computer System Validation (CSV) to ensure quality. While governing documents such as SOPs and Working Practices (WPs) exist, individual interpretation can lead to inconsistency.

A well-trained and validated AI, aligned with company standards, could become a valuable reference. Business teams could consult it for guidance, enhancing consistency and compliance.

  • AI can recognize patterns, suggest compliant alternatives, classify systems, and propose optimal deliverables.
  • Initially, a QA review might still be required—but AI will streamline processes and improve quality.
  • Some AI systems can already populate Excel sheets, generate images, and write code.

A tailored AI trained on a company’s QMS—its SOPs, templates, and forms—could autonomously draft documentation. It would interpret business input and translate it into complete, compliant documents.

With rigorous validation, the AI could become as trustworthy as a human QA expert.

Architecture:

  • Base AI: Pre-trained foundation model (e.g., OpenAI models)
  • Fine-tuning: Industry standards (GAMP 5, ISPE, 21 CFR, EudraLex)
  • RAG (Retrieval-Augmented Generation): Company-specific QMS content (SOPs, templates, forms), as sketched below
  • Ongoing: Regular updates based on evolving regulations
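
As a rough sketch of the RAG layer above, the snippet below retrieves the QMS chunks most similar to a user’s question and prepends them to the prompt. TF-IDF similarity stands in for learned embeddings, and the SOP snippets and the commented-out ask_llm call are hypothetical placeholders, not real company documents or a real API.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical chunks extracted from a company QMS (SOPs, templates, forms).
sop_chunks = [
    "SOP-001: All computerized systems must be validated per GAMP 5 categories.",
    "SOP-014: Deviations must be logged and assessed for CAPA within 5 days.",
    "TPL-003: The Validation Plan template requires a system risk classification.",
]

# TF-IDF stands in here for a learned embedding model.
vectorizer = TfidfVectorizer()
chunk_vectors = vectorizer.fit_transform(sop_chunks)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k QMS chunks most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, chunk_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [sop_chunks[i] for i in top]

question = "Which template do I use for a validation plan?"
context = "\n".join(retrieve(question))

# The retrieved context grounds the model's answer in company procedures.
prompt = f"Answer using only this QMS context:\n{context}\n\nQuestion: {question}"
# answer = ask_llm(prompt)  # hypothetical call to the fine-tuned base model
print(prompt)
```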

Eventually, different modules trained on other business areas may be added, forming a 360° solution.

 

Challenges of AI Integration

Integrating AI into business domains can pose various challenges:

  • Fear of Job Displacement: Many workers fear that AI will replace their jobs. This concern is not entirely unfounded, as automation has historically displaced certain job functions, but every major technological advance has also created new roles, professions, and opportunities, ultimately employing an ever-larger population.
  • Mistrust in Automation: There’s a general scepticism about letting machines make decisions, especially when those decisions have significant consequences.
  • Regulatory and Compliance Challenges: In regulated industries (such as healthcare, finance, and security), any tool or system, including AI, must meet stringent standards and be approved by regulatory bodies. Its performance must be thoroughly evaluated and kept under control.
  • Data Privacy and Ethics: As AI systems often require large datasets, there are concerns about data privacy and how data is used. Regulations like the GDPR in Europe have strict requirements for data usage and protection.
  • Initial Investment: Deploying AI solutions might require significant initial investment in technology, expertise, and data. This cost is nonetheless expected to be offset by reducing the staff effort currently dedicated to the activities that will be automated.
  • Continuous Learning Requirement: The field of AI is rapidly evolving. Businesses need to invest in continuous learning and training to keep up.
  • Collaboration between AI and Humans: It’s essential to design workflows where AI augments human tasks rather than simply replacing them.
  • AI Ethics: There are broader concerns about the ethical implications of AI, including surveillance, decision-making autonomy, and more.

 

AI Use Cases in Validation and Quality

Roles Supported:
  • Validation Manager
  • QA & Compliance Advisor
  • Project Manager
  • Validation SME (VSME)
Core Validation Processes:

AI can support or automate tasks across the full CSV and CQV lifecycle:

  • CQV Planning & Strategy
  • Validation Master Plan (VMP) development
  • GAP Assessment and remediation tracking
  • cGMP Reviews for new or updated systems
  • User Requirement Specification (URS)
  • Validation Plan (VP)
  • Functional/Configuration/Design Specifications (FS/CS/DS)
  • Risk Assessments (e.g., system-level, FMEA, hazard analysis)
  • Factory & Site Acceptance Testing (FAT/SAT)
  • Design Qualification (DQ)
  • Installation, Operational, Performance Qualification (IQ/OQ/PQ)
  • Traceability Matrix generation and updates
  • Test and Validation Report drafting
  • Quality Live Agent
Quality Risk Management & Process Optimization:
  • Process Standardization across systems and projects
  • Continuous Improvement insights based on historical validation data
  • Process Validation lifecycle support
QMS & Data Integrity:
  • Company QMS Review and Optimization
  • Data Integrity Assessments (e.g., 21 CFR Part 11, Annex 11)
  • Cross-check of SOP alignment and consistency
Supporting QA and Compliance Activities
  • SOP Generation, Updates, and Review
  • Corrective and Preventive Action (CAPA) Management
  • Training Program Support (tracking and content generation)
  • Regulatory Assessment & Change Impact Analysis
  • Vendor Qualification & Management
  • Audit Preparation & Documentation Retrieval 

Materials for Training Compliance AI

International Quality Standards (General QMS and Risk Management)
  • ISO 9001:2015 – Quality Management Systems
  • ISO 13485:2016 – Medical Devices — Quality management systems — Requirements for regulatory purposes
  • ISO 14971:2019 – Medical Devices — Application of risk management to medical devices
  • ISO/IEC 27001:2013 – Information Security Management

 

Industry Guidelines and Best Practices
  • ASTM E2500 – Standard Guide for Specification, Design, and Verification of Pharmaceutical and Biopharmaceutical Manufacturing Systems and Equipment
  • ISPE Baseline Guide: Commissioning and Qualification (2nd Edition)
  • ISPE Good Practice Guide: Applied Risk Management for Commissioning and Qualification
  • ISPE: A Risk-Based Approach to GxP Compliant Laboratory Computerized Systems
  • GAMP 5 – A Risk-Based Approach to Compliant GxP Computerized Systems (ISPE)

 

Regulatory Requirements
  • 21 CFR 210 – Current Good Manufacturing Practice (CGMP) Regulations
  • 21 CFR 211 – Current Good Manufacturing Practice for Finished Pharmaceuticals
  • 21 CFR 600 – Biological Products: General
  • 21 CFR Part 11 – Electronic Records, Electronic Signatures
  • EudraLex Volume 4 – EU Good Manufacturing Practice (GMP) guidelines

 

Pharmacopoeias and Global Guidance
  • US Pharmacopeia
  • European Pharmacopeia
  • ICH Q9 – Quality Risk Management

 

Value Proposition from KVALITO

KVALITO can support organizations through:

  • Providing pre-trained AI models fine-tuned on industry standards and regulations
  • Implementing RAG to embed company-specific procedures
  • Validating AI performance within your infrastructure
  • Offering hypercare and post-deployment support
  • Ensuring continuous updates based on evolving standards

Our approach bridges compliance, innovation, and operational excellence.

 

Appendix 1: How AI Thinks: Inside a Neural Network

At the foundation of neural networks is the ability to discern and classify data and, through a probabilistic calculation, provide an answer to a problem, question, or situation that is presented. In simpler terms, the network assigns a weight and a bias to every received input and generates an output by applying an activation function to the weighted sum computed by each perceptron:

y = f(w₁x₁ + w₂x₂ + … + wₙxₙ + b), where f is the activation function, like step, sigmoid, ReLU, etc.

The output of the activation function is exactly the node’s output. This can either determine the answer—if it’s the output of the last layer of the neural network—or serve as input for another or several other nodes if it’s an output from the first layer or any hidden (intermediate) layer.

Practically speaking, every word that forms a question (or any other data introduced into the network as input, depending on what the AI is analyzing) is converted into numerical vectors. These vectors are representations of the “language” on which the model has been trained. For instance, the question, “What is a cat?” can be broken down as:

What -> [0.1; 0.3]

is -> [0.2; -0.1]

a -> [-0.1; 0.4]

cat -> [0.9; -0.5]

? -> [0.0; 0.1]

It is important to note that the values of these word vectors are established during the AI’s training and are referenced from a table. Thus, the AI will always have these reference vectors for these words. As previously described, each of these inputs (each vector) is processed by the node, which assigns a weight vector to each input.

Now, focusing only on the “cat” input [0.9; -0.5], the perceptron might have a related weight vector like [0.5; -0.2]. The weighted sum calculated by the perceptron is then:

z = (0.9×0.5) + (−0.5×−0.2) + 0.1 = 0.45 + 0.1 + 0.1 = 0.65, where 0.1 is the attributed bias.

This weighted sum then becomes the argument of the activation function. Using, as an example, a sigmoid activation function,

f(z) = 1 / (1 + e^(−z))

the generated output (y) would be approximately 0.657. This process is repeated for every word that makes up the input question to obtain an output value for the entire sentence. At this point, the model consults a probability table, derived from its training, to find the most probable next value—for example, 0.7—which corresponds to the word “IS” in the table. This output, consisting of the input plus the new word “IS”, is then used as input for another node in the network’s subsequent layer. In this manner, another word contributing to the most probable answer is added. By the end of this process, the final layer will produce the answer: “A cat is a mammal.”
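
The arithmetic of this walkthrough can be checked with a few lines of Python, reusing the toy embedding table, the example weight vector, and the bias from the text; these values are illustrative only, not real model parameters.

```python
import math

# Toy embedding table established during training (values from the text).
embeddings = {
    "What": [0.1, 0.3], "is": [0.2, -0.1], "a": [-0.1, 0.4],
    "cat": [0.9, -0.5], "?": [0.0, 0.1],
}

x = embeddings["cat"]  # focus on the "cat" input, as in the text
w = [0.5, -0.2]        # the node's weight vector for this input
b = 0.1                # the attributed bias

# Weighted sum: z = x1*w1 + x2*w2 + b = 0.45 + 0.1 + 0.1 = 0.65
z = sum(xi * wi for xi, wi in zip(x, w)) + b

# Sigmoid activation: y = 1 / (1 + e^(-z)) ≈ 0.657
y = 1 / (1 + math.exp(-z))
print(z, round(y, 3))
```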

Of course, this process is highly intricate, involving many iterations and mathematical procedures, and complex AI systems do not work on single words but rather on sets of sentences; still, the basic working principle is roughly as described.

Linear and Non-linear Models and Complex Systems

What precisely is the role of the activation function, and what’s the significance of having linear or non-linear activation functions?

Due to the inherent properties of linear functions, using linear activation functions will always result in a linear output, regardless of how many layers the model has. This makes multi-layered neural networks somewhat redundant.

A linear activation function can only classify data separable by a straight line (in two dimensions) or a plane (in three dimensions). It might suffice to discern between a fruit and an animal, but in more complex situations, where some factors overlap between the two (or more) categories being analyzed, clear classification isn’t possible. For example, it’s not possible to classify a grape relative to an apple and a banana based on the input “Which fruit is crispy and sweet?”, because both apples and grapes are crispy, while grapes and bananas are sweet. This overlapping trait makes classification with linear functions impossible.

To illustrate this with another example, consider a frog. If we try to classify animals based solely on their habitat, the frog poses a unique challenge. While frogs start their lives in water, they later transition to semi-aquatic or terrestrial habitats as adults. A linear model might struggle to classify the frog properly since it can’t be strictly placed in a single habitat category based on just one trait. This shows the need for more complex, non-linear models to grasp such nuances.

Considering two linear functions, f(x) = ax + b and g(x) = cx + d, the combined function becomes:

g(f(x)) = c(ax + b) + d = (ca)x + (cb + d)

which is still a linear function.

A non-linear function, like the sigmoid, can “warp” the data space, facilitating better classification. Iterating through several non-linear functions can emulate an even more complex function that can classify and discern intricate relations between data. Given the sigmoid as a non-linear function,

σ(x) = 1 / (1 + e^(−x))

applying it to the output of other sigmoid functions can produce compositions such as σ(w₂·σ(w₁x + b₁) + b₂), which remain non-linear and can represent intricate patterns in the data.

For AI to process complex systems, non-linear functions are essential. They allow the system to “learn” patterns that aren’t immediately obvious or are intertwined in intricate ways, making classification more nuanced and adaptable.

Figure 3: Case A shows a linear separation, which cannot be achieved in Case B, where only a non-linear curve can efficiently separate (classify) the data

Introducing non-linearity into a system, such as an AI model, results in a complex system. The term “complex” shouldn’t be mistaken for “complicated” or “chaotic”.

  • A complicated system is one made up of many linear components. Though it might be challenging to decipher, it is ultimately comprehensible given enough analysis.
  • Complex systems are characterized by non-linear interactions among their components, where small changes in input can lead to disproportionately large or unexpected outputs. These interactions give rise to emergent behaviour—patterns or functions that cannot be predicted by analysing individual parts in isolation. While such systems often appear unpredictable or random, many are fundamentally deterministic, meaning their evolution is governed by underlying rules, even if outcomes are highly sensitive to initial conditions (as in chaos theory). The probabilistic nature of quantum mechanics has raised philosophical questions about determinism at the smallest scales, but in most practical contexts, complex systems are modelled classically and are not directly affected by quantum uncertainty. Furthermore, many complex systems are adaptive, capable of evolving in response to external stimuli or internal changes.
  • Chaotic systems, however, are primarily characterized by their sensitivity to initial conditions. Even minute differences at the start can drastically alter the system’s trajectory, a phenomenon popularly known as the “butterfly effect”. While predicting their long-term behaviour is arduous due to this sensitivity, short-term predictions can still be made.

Like complex systems, many chaotic systems — at least those modelled in classical physics — are deterministic. They evolve according to fixed rules without involving inherent randomness. However, their extreme sensitivity to initial conditions makes accurate long-term prediction practically impossible, since even minute measurement errors or rounding can rapidly lead to divergent outcomes. While this unpredictability is not due to randomness in the system itself, some have debated whether quantum-level uncertainty might set a fundamental limit to prediction. In most real-world chaotic systems, however, such quantum effects are negligible.

In essence, both complex and chaotic systems challenge our predictive capacities. However, complexity is typically tied to the unpredictable interactions and adaptability of system components, whereas chaos is associated with initial condition sensitivity and long-term unpredictability.

The main learning mechanism for a neural network is backpropagation, an optimization algorithm that minimizes the prediction error by updating the weights in the direction opposite to the gradient. It works by calculating the gradient of the loss function with respect to each weight via the chain rule.
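
A compact NumPy sketch of backpropagation, assuming a small two-layer sigmoid network trained by plain gradient descent on squared error, ties the appendix together: the hidden layer makes XOR learnable, and each weight is pushed opposite to its gradient. Layer sizes and learning rate are arbitrary didactic choices.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# One hidden layer makes the non-linearly-separable XOR problem learnable.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 1.0
for _ in range(5000):
    # Forward pass through both sigmoid layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: chain rule on squared error, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Update each weight opposite to its gradient.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

final = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(final.round(2).ravel())  # approaches [0, 1, 1, 0]
```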

References

  1. GAMP® 5: A Risk-Based Approach to Compliant GxP Computerized Systems, 2nd Edition. International Society for Pharmaceutical Engineering, 2022.
  2. Baseline® Guide: Commissioning and Qualification, 2nd Edition. International Society for Pharmaceutical Engineering, 2019.
  3. Good Practice Guide: Applied Risk Management for Commissioning and Qualification. International Society for Pharmaceutical Engineering, 2011.
  4. Good Practice Guide: GxP Compliant Laboratory Computerized Systems. International Society for Pharmaceutical Engineering, 2012.
  5. ASTM International. ASTM E2500-13: Standard Guide for Specification, Design, and Verification of Pharmaceutical and Biopharmaceutical Manufacturing Systems and Equipment. ASTM International, 2013.
  6. International Organization for Standardization (ISO). ISO 9001:2015 – Quality Management Systems – Requirements. ISO, Geneva, 2015.
  7. International Organization for Standardization (ISO). ISO 13485:2016 – Medical Devices – Quality Management Systems – Requirements for Regulatory Purposes. ISO, Geneva, 2016.
  8. International Organization for Standardization (ISO). ISO 14971:2019 – Medical Devices – Application of Risk Management to Medical Devices. ISO, Geneva, 2019.
  9. International Organization for Standardization (ISO). ISO/IEC 27001:2013 – Information Security Management Systems – Requirements. ISO, Geneva, 2013.
  10. U.S. Food and Drug Administration (FDA). Title 21 Code of Federal Regulations, Parts 11, 210, 211, and 600. U.S. Government Publishing Office.
  11. European Commission. EudraLex – Volume 4 – Good Manufacturing Practice (GMP) Guidelines.
  12. ICH Q9: Quality Risk Management. International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), 2005.
  13. United States Pharmacopeia (USP). United States Pharmacopeia and National Formulary (USP–NF). The United States Pharmacopeial Convention.
  14. European Directorate for the Quality of Medicines & HealthCare (EDQM). European Pharmacopoeia. Council of Europe.

Any website you visit may use cookies to store or retrieve personal information about you. Data stored or retrieved may be about you, your preferences, or your device, and it is used for the purposes specified in the cookies section below. When you visit this website, KVALITO AG is the data controller for your data processed through our cookies. Furthermore, some of the cookies we use are from (and controlled by) third-party companies, such as Google Analytics, YouTube or Linked in Analytics, Instagram, for example. They provide us with web analytics and insight into our sites. You can accept or decline cookies based on your preferences by defining each cookie category. Accepting cookies activates the functionalities described in the cookies category while refusing cookies disables such functionalities. In addition, you set which types of cookies you accept or not, and you can withdraw your consent at any time by changing your preferences in our cookie consent manager. To learn more and change our default settings, click on the various category headings. For more information, please see our Cookies Policy.