AMA (Ask Me Anything) Webinar 

Quality Systems, Computer System Validation (CSV), and Data Integrity with Yves Dène

Agenda

00:02:50
Yves Dène introduces himself

00:05:30
Q&A

01:00:00
Closure

An interactive Q&A webinar with Yves Dène, Knowledge Manager at QbD, quality systems expert, and lead computer system validation consultant with a demonstrated history of working in the pharmaceutical industry.

Yves answers all your questions about how to achieve an embedded quality system and ensure data integrity in Life Sciences.

Q&A's from the session

Q: How to handle conflict between 21 CFR Part 11 and GDPR?

A: As an employee at a company, you perform tasks for your employer, and normally you will have signed a contract before you start working there.

And then we have to look at the types of data, the privacy data we're talking about. The actions you perform in your role at the company are not subject to GDPR. So if I sign, let's say, a validation plan today, I cannot ask to be removed from that validation plan when I leave the company.

What will you find there? My name, a date and a signature. That's not GDPR data. However, everything that is related to your personal data, for example your date of birth, your social security number, your credit card number, etc., you are entitled to have removed.

So it's a bit of a dual situation. There is often a misconception here: what I do for the company itself is not GDPR data, and it does not legally have to be removed.

The second scenario is the clinical trials area, where you have clinical data. That is, of course, a more complex issue.

There are quite a few experts in that domain, because it can often be a grey area. But if you take part in a clinical trial, you start by signing a consent form, and that document stipulates in detail which GDPR aspects apply and what you are signing up for. If you're part of a clinical trial, you should never forget this.

You might be administered experimental drugs that could have side effects in the longer term, and it's in your own interest that they keep at least your name on file in case you have been administered a certain experimental drug. I would say that the legislation around clinical trials prevails over the GDPR, although you can ask for all non-essential data to be removed.

Check this article about "How to handle conflict between 21 CFR Part 11 and GDPR" for more information.

 

Q: What are the major risks with a cloud-based system versus an in-house built solution?

A: Of course there are always risks; whether you host the system on site or at a supplier, the risks are there all the time. But indeed they may be different. Now, if you go to a Software as a Service provider, there are two types.

There are vendors that specifically direct their entire program, or part of their product portfolio, to pharma, and there are those who do not. With the first type it will of course be a bit easier, because they know how to work with life sciences companies: they know which specific compliance tasks have to be fulfilled to follow certain regulations.

Sometimes you need something new in your area and you will have to step out and go to a partner that is not specifically into life sciences. Then it is key, when you draw up the contract and create the service level agreement, to make sure that all the required aspects of working together are covered in both the contract and the service level agreement.

That's the shift: from having your applications on site, where you need an IT infrastructure department or service to take care of all your security aspects, to going to a cloud provider. Today, when we do a validation effort where a cloud provider or Software as a Service is involved, we usually start with vendor selection and a vendor assessment.

From a software development perspective, you shouldn't forget the effort it takes for a company to build its own tools.

The question would be: what is the core business of the company? Is it really building your own software tools? Probably it's not. Finding the right partner for that might be less risky than trying to do it in house. Even if you're a medical device company with a very decent or even great development team building the software part of your medical device, you probably don't want to pull that team away from developing your product to do a side project of building whatever internal tool the company might need.

You should probably keep them focused on the product they are building. Also, there are often people who can develop software solutions, but it is typically a department that needs to make sure that everything keeps running. If they do it anyway, what can happen is that somebody builds something within the company, that person leaves, and you lose the know-how of what was built.

It's tricky because you're typically not a software company. So the tool is built by a few people, and then somehow its development is not continued. A tool needs to be built according to user requirements, but then it enters a whole life cycle: it needs upgrades.

It needs to be revalidated. It takes a whole effort to keep software up to date. So that is definitely something you should add to your assessment of whether building in-house is the right choice for your company.

 

Q: Are any CSV tests necessary to repeat once a cloud-based system is in production?

A: Yes. But you can ask the same question for in-house built systems as well, and the answer will differ. If you talk about CSV tests, there are several layers of testing.

If you look at the traditional terminology, we speak about an installation qualification. There used to be an operational qualification, which in practice meant the technical testing. And because I don't like the term performance qualification, these days we tend to speak much more of user acceptance testing.

These are three types of testing. The first one, installation qualification, is quite important. You will usually execute it at least twice: once before you start testing in the validation environment and, if everything ends successfully, again in your production environment.

It proves that all the settings and the technical setup you tested in the validation environment are reproduced in the production environment, meaning that the evidence coming from your tests remains valid.

Secondly, you have the technical testing. I would never do any technical testing in the production environment.

There is a big debate about user acceptance testing: should you repeat it in production? If you ask me: no. What do you need to do in your production environment, whether it's local or cloud-based? You need to repeat your IQ. Part of the IQ is to check that all the settings are correct and, if you have migrated data or uploaded initial data to start your application with, to verify that it is there. What I would not do is repeat user acceptance testing.

Why? Because you would be creating dummy data in your production environment, and it would either stay there or you would have to remove it. Either way, it creates risks.
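As a minimal illustration of that IQ repetition, here is a Python sketch (the setting names and values are invented, not taken from any real system) that compares the settings captured during the IQ in the validation environment against those found in production and flags any discrepancy:

```python
# Hypothetical sketch: verify that the production environment reproduces the
# technical setup that was qualified in the validation environment.

def compare_iq_settings(validation: dict, production: dict) -> list:
    """Return a list of discrepancies between two environment snapshots."""
    discrepancies = []
    for key in sorted(set(validation) | set(production)):
        v, p = validation.get(key), production.get(key)
        if v != p:
            discrepancies.append(f"{key}: validation={v!r}, production={p!r}")
    return discrepancies

validation_env = {"app_version": "4.2.1", "audit_trail": "enabled", "timezone": "UTC"}
production_env = {"app_version": "4.2.1", "audit_trail": "enabled", "timezone": "CET"}

for issue in compare_iq_settings(validation_env, production_env):
    print(issue)  # flags the timezone mismatch for investigation
```

An empty result would support the claim that the evidence from the validation environment carries over to production.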

 

Q: How does Scilife perform Computer System Validation?

A: What we do is a full GAMP 5 validation, from the validation plan to the validation summary report and everything in between. We write the URSs, the functional specs, the design specs, the PQTs. And we execute everything as if it were a custom-built application for ourselves.

That whole validation documentation package is offered to our customers, who can then use it as a basis to do the last validation steps on their end. So we really try to take away much of the effort and cost from our customers, as opposed to having each customer validate on their own. That would be duplicated effort, because they are all using the same application in the cloud, obviously.

So it's much more efficient if we do that initial validation on our end and offer you the full validation documentation package, and you can then do the last steps on your end. We can provide support if you are struggling with these last steps.

We have different modules, specific solutions within the platform with very strict quality standards: classic quality management, but also other modules. Every module has its own version and is validated separately. So they are really different applications that integrate with each other, but under the same roof.

Why? Because not everybody uses the same modules. You can select just the modules that are interesting for your company: some customers have three or four modules, others use six or seven. If we had one version for the whole Scilife software suite, then even when a module you are not using is upgraded, you would have to do validation steps that make no sense for your company.

That's why we have developed the whole system vertically, in different modules. Every module has its own version and is validated separately, so you only have to do additional validation steps when we upgrade that specific module.
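The per-module versioning idea can be sketched in a few lines of Python. This is an illustrative sketch only; the module names and version numbers are invented and do not reflect Scilife's actual product catalog:

```python
# Hypothetical sketch: after a release, only modules whose installed version
# differs from the last validated version need additional validation steps.

def modules_to_revalidate(validated: dict, installed: dict) -> list:
    """Modules whose installed version no longer matches the validated one."""
    return [module for module, version in installed.items()
            if validated.get(module) != version]

last_validated = {"Documents": "3.1", "Trainings": "2.4", "CAPA": "5.0"}
now_installed  = {"Documents": "3.1", "Trainings": "2.5", "CAPA": "5.0"}

print(modules_to_revalidate(last_validated, now_installed))  # ['Trainings']
```

With a single monolithic version, every upgrade would put the whole suite in that list; versioning per module keeps the revalidation scope to what actually changed.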

 

Q: When data and configuration changes are transferred from test to validation to production environments, is this process validated in Scilife?

A: We indeed provide our customers with three environments: a test environment, a validation environment, and a production environment.

The test environment is meant for really testing the features: you can create as many users and items as you want and just test workflows. The validation environment is meant for properly doing the last validation steps in a separate environment, so that you don't pollute your production environment with dummy data.

We always advise using the validation environment only for those last validation steps. The data is then uploaded or configured directly in production. There is normally no transfer of data from validation to production, so we do not need to document or validate that process, because we advise against doing it.

We offer the possibility to import all your QMS data from any legacy system, or, if you have it in Word documents and Excel, to convert it and upload it into our production environment. We of course do that for our customers.

So we need to automate that process; it can involve thousands of documents or data entries. There, we obviously do document the process of importing that data into production.

 

Q: What is Computer Software Assurance (CSA)?

A: I would briefly like to go back to the beginnings of computerized system validation. CSV really came into existence in the late eighties and nineties, but it became something of actual value when 21 CFR Part 11 was published, which was, if I'm not mistaken, in August 1997. Until that date, life science companies in the United States had been using computerized systems without being required to validate them.

So CSV became mandatory at that time. In the beginning it was all quite unclear, and companies interpreted it as "we have to test everything; we will never stop putting documentation together." At the time, companies were creating closets full of paper binders, because you could not keep the records electronically: you couldn't sign them electronically.

I've been in companies where there were warehouses full of validation binders. During those years there was already a tendency to move in a direction I particularly like: pragmatism. You have to be pragmatic; otherwise you will focus all your efforts on writing documents.

In the past you had to write down every line of every test you were going to do. Now they say you don't have to do that anymore: look at it risk-based, and do exploratory, unscripted testing for those areas where full scripted testing may not be required.

There is an unfortunate side effect: the guidance is still not official. The FDA has not published it yet. The last rumors were that it was going to be published this year, so they still have two months to do that.

Of course, we had the pandemic that slowed down many things.

There is another rumor that they want to update 21 CFR Part 11 first. They have been saying they were going to update Part 11 since, I think, 2004. It's been on the table, ready to be republished as a new version, and it never happens, for several reasons.

I am anxiously waiting to see the guidance being published.

In the meantime, we can use pragmatism. I've been doing that for the last 20 years. But of course you still need to strike a balance between documentation and pragmatic approaches.

CSA will not really replace CSV. It will reshape it and it will give us more tools. CSV and CSA are not that different.

It's mainly the level of documentation, with CSA employing a more risk-based approach. The validation plan and reports will still be there. So the differences are not that huge, but CSA will help us reduce the burden of creating documents just for the sake of documents, and allow us to work more efficiently.

 

Q: How to validate software based on AI and machine learning?

A: It is a huge challenge. If you have an artificial intelligence application, the first thing you need to establish is the intended use and the impact on product quality, patient safety, data integrity... the traditional things you look at when you do a risk assessment. The higher the risk, the more you need to think about what could possibly happen.

If I just let my application do the thinking for me and the risks are not high, you can have an approach where you allow a lot through the application. If the risks get higher, you will need to build in evaluation moments.

Let's say the AI has done its work in a test environment for about a month. Then you will need to run a validation cycle before using it in production. That's an approach you could use now.

The official guidance around using artificial intelligence is also only just emerging. The governmental bodies are thinking about it too: how can we use it?

Because, as mentioned before, this is part of our always-evolving world, which moves quicker and quicker and brings big added value. So we need to be sure that we can use this.

There is one more thing. With artificial intelligence, but also with other software, we have seen a paradigm shift from the traditional way of developing to new ways of developing; I'm thinking of the term "agile development". This is something you can carry through into validation, and there are also strategies for implementing validation while making use of agile development.

 

Q: How to implement data integrity at the organization level?

A: If you want to be fully aligned with the principles of data integrity, first of all you need to lift it to the highest level of your company. With CSV, you have certain applications that have to be regulated. Data integrity, by contrast, is company-wide. It's really a mindset that has to live in the head of every person working for the company.

For example, suppose I go into the office tomorrow and somebody asks me, "I did this test last week, can you please sign it with last Friday's date?" That sounds trivial and quick to do, but it's a huge data integrity infringement. It's backdating, and you cannot do that.

It really has to be in the company mindset, and the program has to be sponsored by the highest management within the company. Then, of course, create a strategy: look at the different tools, which applications you use, what the processes are. It's an entire program that you will need to initiate.

And it's not something where you can say "I'll start next week and it will be ready by the beginning of next year." It's quite an effort, depending of course on the company size and the reasons why you want to implement data integrity.

It’s not a small task, that's for sure.

Check the International Society for Pharmaceutical Engineering (ISPE) for guidance documents about data integrity.

 

Q: Are most companies using specific software for their CSV, or are they still using a traditional manual process to create their documentation?

A: Ideally it should be part of your document management system, where you can keep your documents within that protected environment and have functionality such as electronic signatures. I would only advise using a tool that is really intended to store validation documentation.

Scilife has a validation module on its roadmap, in which you define your validation project. We found that CSV documents reduce to three types of documents:

- Static documents, such as a validation plan or a validation summary report, where you just enter text.

- Documents that we call linked items. You have a document with static text, your URS, where you explain something about the system, and then you create URS items. Those URS items need to be linked to your FS items, to your DS items, to your UATs, and so on: these are the linked items. They are no longer created in a Word document but within the validation module. There is a form where you can keep adding items, and each item is automatically identified by a unique identifier within the system. Then, when you create your FS, you can link it to your URS; when you create your UAT, you can link it to your FS and your URS. This generates your traceability matrix.

- The third type of document is the checklist. Sometimes you not only need to link items, but also execute them.

Once you have finished all that, one click on a button generates all your PDFs with your linked items, referencing the IDs as if you had typed them. But you're essentially working in a web-based system, which is much more user-friendly, much lower risk, and much more efficient.
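The linked-items idea can be sketched in Python. This is a hedged illustration only: the item IDs and the link structure below are hypothetical, not Scilife's actual data model. Each URS item links down to FS items and each FS item to UAT tests, and following the links yields the rows of a traceability matrix:

```python
# Hypothetical link structure: requirement -> functional spec -> acceptance test.
links = {
    "URS-001": ["FS-001", "FS-002"],  # one requirement covered by two functional specs
    "FS-001":  ["UAT-001"],
    "FS-002":  ["UAT-002"],
}

def trace(item_id: str) -> list:
    """Follow links from one item down to the test level, one row per path."""
    rows = []
    for child in links.get(item_id, []):
        deeper = trace(child)
        rows.extend([item_id + " -> " + r for r in deeper] if deeper
                    else [item_id + " -> " + child])
    return rows

for row in trace("URS-001"):
    print(row)
# URS-001 -> FS-001 -> UAT-001
# URS-001 -> FS-002 -> UAT-002
```

A requirement with no path down to a test would produce no row, which is exactly the gap a traceability matrix is meant to expose.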

 

Q: Are planned deviations acceptable?

A: We don't like the term planned deviation, but there may be situations where it is the only solution. Once again, what should be in the back of everyone's mind: approach it risk-based and document it. Be very clear and rational about why you are doing a planned deviation; it might be the least bad solution there is.

And in that case, if you do the risk assessment, if you document it clearly, and if all stakeholders agree with the way forward, I don't see why not.

Anyway, we should avoid it.

 

Q: Data-driven decision-making in quality systems

A: If you're in a quality environment, you will have many KPIs, and the way to calculate those KPIs is from data. For example: how long are CAPAs open? How long are they overdue?

Only by doing that can you find the root causes behind them, and that will help you.
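To make the CAPA example concrete, here is a minimal Python sketch (the CAPA records and dates are invented for illustration) computing the two KPIs mentioned above: how long each CAPA has been open, and by how many days it is overdue:

```python
from datetime import date

# Invented sample records: each CAPA has an opening date and a due date.
capas = [
    {"id": "CAPA-01", "opened": date(2022, 6, 1),  "due": date(2022, 9, 1)},
    {"id": "CAPA-02", "opened": date(2022, 9, 15), "due": date(2022, 12, 1)},
]

def capa_kpis(capas: list, today: date) -> dict:
    """Per-CAPA days open, and days overdue (0 if not yet past due)."""
    return {
        c["id"]: {
            "days_open": (today - c["opened"]).days,
            "days_overdue": max((today - c["due"]).days, 0),
        }
        for c in capas
    }

print(capa_kpis(capas, today=date(2022, 10, 15)))
```

Aggregating these per-record numbers (averages, trends over time) is what turns raw QMS data into a KPI you can act on.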

If you have a system that provides you with a lot of data, not just primary data but also more intelligent data from within the system, it will help you a lot. And it's something that we see, and that we are asked for.

We also notice that more and more customers want to link their Power BI solution directly to our database. We offer ways to do that, and it also triggered us to completely revamp our KPI module.

We use the QuickSight technology, which is native to Amazon AWS, to generate graphs and provide insights into the Scilife data within this KPI module. We also needed to give our customers more insight into their QMS data. It's definitely an important trend to follow: make sure you are looking at the right KPIs, getting the data in real time, and letting it support your decisions.

 
