Why Is Data Integrity Important When Transferring Data in a Clinical Trial?

Mar 28, 2022 | Blog Of The Day, Industry Focus, Regulatory Focus

This piece is an extract from the upcoming ADAMAS white paper, “Making the Model: Solving the Data Integrity Challenge of Decentralized Clinical Trials”.

Data integrity is an essential element of any data handling task. It goes without saying that if data isn’t accurate from the start, your results won’t be accurate. As the capabilities for collecting, storing, manipulating, and analyzing data have expanded, ensuring that data is properly validated has become more critical than ever. The process of transferring data from one device to another can introduce small, silent corruptions into the database. Thus, to protect the integrity of the trial data and keep the trial Inspection Ready, there must be a validation effort that ensures any (and all) data transfers are successful, accurate, reliable, and completed without omissions or errors.
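As an illustration only (not a description of the white paper’s methodology), one common way to detect silent corruption during a file transfer is to compare cryptographic checksums of the file before and after it moves. The sketch below assumes hypothetical file paths and a CSV export; any real transfer specification would name its own artifacts.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the extract as exported from the source system,
# and the copy that arrived at the receiving system.
source_file = Path("lab_results_export.csv")
received_file = Path("incoming/lab_results_export.csv")

source_hash = sha256_of(source_file)
received_hash = sha256_of(received_file)

if source_hash != received_hash:
    raise ValueError(f"Transfer verification failed: {source_hash} != {received_hash}")

# Recording the matching hash alongside the transfer provides evidence
# that the file arrived bit-for-bit identical to what was sent.
print("Checksums match; transfer verified:", source_hash)
```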

FDA and EMA, among others, are particularly interested in maintaining data integrity in clinical trials so that the safety and efficacy of experimental therapeutics can be evaluated accurately. Regulators classify a data transfer as a change control and therefore require sponsors to demonstrate that, at every step, the integrity of data systems remains properly managed. The ongoing challenge is how to evidence, in documentation, that this management has actually been performed.

Good documentation practices, recordkeeping, and data integrity are vital elements of sponsors’, CROs’, and vendors’ quality assurance systems and are critical to meeting regulatory requirements. Furthermore, data integrity plays a crucial role in protecting patients and the public, with regulators and health agencies increasingly stressing the importance of the topic in guidance, citations, and public comments.

 

But How Do You Evidence Proper Data Transfer and Handling?

In one survey, 87% of respondents said that they used up to 10 different data sources within a single clinical trial [5]. Each transfer from those inputs and data sources should be validated, with clear evidence for regulators, at the point where they all come together into a single combined data set.
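As a hypothetical sketch of what that validation can look like in practice, the snippet below reconciles subject/visit record counts from each source extract against the pooled data set. The file names and the key columns (SUBJID, VISIT) are assumptions for illustration, not part of the cited survey or of any particular system.

```python
import csv
from collections import Counter

def subject_visit_keys(path: str) -> Counter:
    """Count records per (subject, visit) key in a delimited extract."""
    with open(path, newline="") as fh:
        reader = csv.DictReader(fh)
        return Counter((row["SUBJID"], row["VISIT"]) for row in reader)

# Hypothetical extracts: one per source system, plus the pooled data set.
source_extracts = ["edc_vitals.csv", "central_lab.csv", "epro_diary.csv"]
pooled = subject_visit_keys("pooled_clinical_db.csv")

expected = Counter()
for extract in source_extracts:
    expected.update(subject_visit_keys(extract))

missing = expected - pooled      # present in a source but absent from the pool
unexpected = pooled - expected   # present in the pool with no source counterpart

print(f"Records missing after pooling: {sum(missing.values())}")
print(f"Unexpected records in pool:    {sum(unexpected.values())}")
```

A zero in both totals is the kind of concrete, reproducible evidence of a complete transfer that can be filed with the study documentation.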

The challenge is that data collection and management can be difficult in a setup that combines different applications to collect and manage participant data. Each of those software systems has its own owner, vendor, and software development lifecycle, meaning that data management can easily become disjointed.

Many organizations use a spreadsheet as an inventory of systems to keep track of their data compliance, but even compiling information at this level can be complicated. Because each system sits with a different owner and vendor, data sources become more than just siloed: as the participants and sites of decentralized clinical trials (DCCTs) become more geographically distant and disparate, communication between the systems grows increasingly complicated.
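For illustration, such an inventory can also be kept as structured, version-controlled data rather than a free-form spreadsheet. The fields and entries below are hypothetical examples of what is typically tracked, not a prescribed template.

```python
from dataclasses import dataclass, asdict
import csv

@dataclass
class SystemEntry:
    """One row of a hypothetical system inventory for a decentralized trial."""
    system: str              # e.g. EDC, eCOA, wearable platform
    vendor: str
    business_owner: str
    validation_status: str   # e.g. "OQ/PQ complete", "UAT pending"
    data_transferred_to: str # downstream system receiving its data

inventory = [
    SystemEntry("EDC", "Vendor A", "Data Management", "OQ/PQ complete", "Clinical database"),
    SystemEntry("eCOA", "Vendor B", "Clinical Operations", "UAT pending", "Clinical database"),
    SystemEntry("Wearable platform", "Vendor C", "Digital Health", "OQ/PQ complete", "Clinical database"),
]

# Write the inventory out so it can be reviewed, diffed, and version-controlled.
with open("system_inventory.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=list(asdict(inventory[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(entry) for entry in inventory)
```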

As part of the established expectations of a regulatory inspection, each software system requires its own documented installation qualification (IQ), core validation (OQ), and performance qualification (PQ/UAT), as well as the continued maintenance of proper system change control and its supporting documentation.

Recently, to meet this need to evidence data management across multiple systems, Data Management Plans (DMPs), Project Management Plans (PMPs), and extended procedures defined in the governing Quality Management System (QMS) have been reinvented. Study-specific Standard Operating Procedures (SOPs) attempt to govern the compilation of all data collected from these many systems. Where many organizations struggle, however, is in documenting the reconciliation of data across all systems into the single, consolidated clinical database and its back-up solution (a simple sketch of such a reconciliation record follows the questions below). This leads many to ask:

  • For a large decentralized clinical trial, what is the single model of data management, handling, storage, and transfer that satisfies compliance?
  • How do you manage this model across all software systems?
  • Where is the measure of data integrity that documents, for the study as a whole, adherence to data privacy, data security, data protection, storage, and transfer requirements?
  • And how do you evidence the quality and accuracy of the data and the effectiveness of query management?

In summary, can a large decentralized clinical trial operate under one unified validation process and a single governing procedure that manages the clinical trial data collected by all systems?
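By way of illustration only (and not a description of the validation process introduced below), one way to document such a reconciliation is to generate a machine-readable log entry per transfer that captures the record counts and checksums of every source extract and of the consolidated database. The file names, the identity field, and the simple additive count rule in this sketch are assumptions; the reconciliation rule in a real study follows its data management plan.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def file_fingerprint(path: Path) -> dict:
    """Record count (header excluded) and SHA-256 hash for a delimited extract."""
    data = path.read_bytes()
    return {
        "file": path.name,
        "sha256": hashlib.sha256(data).hexdigest(),
        "records": max(data.count(b"\n") - 1, 0),
    }

# Hypothetical inputs: extracts from each source system and the consolidated export.
sources = [Path("edc_export.csv"), Path("ecoa_export.csv"), Path("central_lab.csv")]
consolidated = Path("consolidated_clinical_db.csv")

log_entry = {
    "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    "performed_by": "data.manager@example.org",  # placeholder identity
    "sources": [file_fingerprint(p) for p in sources],
    "consolidated": file_fingerprint(consolidated),
}

# Simplistic rule for this sketch: consolidated count equals the sum of source counts.
log_entry["counts_reconcile"] = (
    sum(s["records"] for s in log_entry["sources"]) == log_entry["consolidated"]["records"]
)

# Append to a study-level reconciliation log that can be presented at inspection.
with open("reconciliation_log.jsonl", "a") as log:
    log.write(json.dumps(log_entry) + "\n")
```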

 

To reduce this associated data integrity risk, a new validation process has been developed that evidences the accuracy and reliability of the complete, compiled clinical database and its back-up solution. When automated, this process exceeds regulatory compliance requirements and ensures the DCCT is Inspection Ready. The new validation methodology will be presented in the upcoming ADAMAS white paper, “Making the Model: Solving the Data Integrity Challenge of Decentralized Clinical Trials”.

 

Sources:

[5] Challenges and Opportunities in Clinical Data Management. Oracle. https://blogs.oracle.com/health-sciences/post/challenges-and-opportunities-in-clinical-data-management. Published 2018. Accessed September 7, 2021.
