Overview
The Data Consistency: SEND Datasets and the Study Report Project was formed at PHUSE CSS 2016 and tasked with identifying and addressing potential inconsistencies between the study report and SEND v3.0 datasets. Team members identified scenarios that could lead to inherent differences between SEND datasets and study reports. The team compiled a listing of the potential inconsistencies identified, along with associated recommendations, which is provided in this paper.
Problem Statement
SEND datasets are intended to reflect the study design and individual data listings of a nonclinical report. However, differences between the report and the SEND datasets can occur. These differences generally arise because the processes and systems used to collect and report raw data differ from those used to generate SEND datasets. Many companies, CROs, and vendors are creating parallel paths for the generation of SEND datasets and the data that go into study reports. Consequently, there are, and will continue to be, inconsistencies between study reports and SEND datasets. The effort required to understand, correct, and eliminate differences is still under evaluation and may be significant. Feedback from the agency based on actual SEND submissions may improve our understanding and may affect the initial recommendations provided here. During PHUSE discussions, it became apparent that best practices were needed to help SEND implementers decide what methods should be used to address these differences.
Background/Scope
The implementation of SEND dataset creation processes (e.g., tools, methods) varies from organization to organization. The Data Consistency Project determined that one of the main challenges in implementation is the potential for inconsistencies between the study report and the SEND datasets. The list of potential inconsistencies was developed to determine the impact of the SEND datasets differing from the study report. This list is based on sponsor experience as well as scenarios that surfaced through the FDA Fit for Use Pilot. It should be noted that this list may not be exhaustive. After the scenarios were evaluated, the team developed recommendations to address those inconsistencies. Recommendations provided in this paper may require additional feedback from the FDA.
Table of Discrepancies
SEND dataset and study report discrepancies were categorized by impact. Low impact discrepancies are expected to have minimal effect on study data interpretation and conclusions. High impact discrepancies could lead to misinterpretation of conclusions drawn from reviews of SEND datasets versus those presented in study reports and/or to data collection process changes by sponsors and CROs. It should be noted that the impact assessment assigned to each scenario is based on the Project's assessment at the time the scenario was considered. These assessments are from the sponsor's perspective, not the reviewer's, and may be adjusted as feedback from the regulatory agency(ies) is provided. Conditions unique to an organization or differences in severity could change the impact assessment. Based on this assessment, the team has identified the recommendations described in the next section.
Table 1. Low Impact Discrepancies – Explain discrepancy in the Nonclinical Study Data Reviewer’s Guide (nSDRG)
Scenario # | Description | SEND Datasets | Study Report | Impact
1 | Some pretest data may be present in the SEND datasets but not in the study report. | Pretest data may be present, since some systems are unable to filter out these data. | Pretest data may or may not be present, or only a subset is present. | Low
2 | Subjects are scheduled for toxicokinetic collection; in-life data for these subjects are collected but not reported. | Data (e.g., body weights, clinical signs) for these subjects may be present. | Data for these subjects may not be included, or only a portion is included. | Low
3 | Baseline bioanalytical pretest data (e.g., when provided by a bioanalytical CRO) are included for animals that were not ultimately randomized onto the study. | Data may not be present. | Data may be present in an appendix with the bioanalytical data. | Low
4 | Differences exist in the presentation of significant figures between the SEND datasets and the study report. | | | Low
5 | Data are collected for investigative purposes or veterinary assessments (e.g., body temperature). | Data may be present if collected in the data capture system. | Data may not be present. | Low
6 | Unscheduled veterinary assessments may be included in the study report, but not in the SEND datasets. | Data will not be present. | May be explained in the study report. | Low
7 | Sentinel animals included as part of carcinogenicity studies could be included in the trial design domains. | May or may not be present. | Will not be present. | Low
8 | In the study report the age at time of receipt is frequently listed; in the DM domain the required data reference age at start of dose. | One of the following variables will be present: AGE = exact value, AGETXT = range. Both refer to age at start of dose. | Age at time of receipt or age at time of dose will be reported. | Low
9 | Instruments or equipment collect more test-related data than are required by the protocol. | It may or may not be possible to filter out test-related data not being reported. | Not all test-related data are reported. | Low
10 | Animals are "swapped" or substituted out prior to or after the first dose (e.g., replacement animals). | There are a variety of processes for replacing animals assigned to a study; depending on the tool or processes used, data may or may not be present. | Data may or may not be present; however, an explanation of the replacement will likely be present. If the replacement occurs after the first dose, the explanation should be clear enough to avoid questions. | Low
11 | Food in/out is collected daily but reported weekly. | Value will be represented as daily consumption. | Value will be represented as weekly consumption. | Low
12 | Food consumption is collected per cage in the data collection system but reported by animal in the report tables. The SEND reporting system is then likely to report by cage unless manual intervention occurs. | Food consumption data may be reported by cage; e.g., two animals per cage result in a total food consumption of 100 g/day. A POOLID would have been created for the two animals. | Food consumption data may be reported by animal; e.g., each of those two animals consumes 50 g/animal/day. The study report should describe the reporting logic. | Low
13 | Numeric ophthalmic data (results of a microscopic (e.g., slit lamp) ophthalmic examination) are present in the study report but not in the SEND datasets. | May not be present. | May be present. | Low
14 | Dermal data are present in the study report, but not in the SEND datasets. | May not be present. | May be present. | Low
15 | Values for body weight gain are represented differently between the SEND dataset and the study report. | Values will be reported interval to interval, but intervals may vary. | Values could be reported cumulatively or in another customized way. The report may include additional cumulative gains (e.g., from start to termination or for the recovery phase), or may contain no body-weight gain data. | Low: body-weight gains over any interval can be calculated from BW data if needed by reviewers.
16 | CLSTRESC is less detailed than the study report.* | SENDIG has no single (standard) representation for CL data. | Report format may vary by testing facility and/or data collection/reporting system. | Low


*From the Technical Conformance Guide: Clinical Observations (CL) Domain:
Only Findings should be provided in CL; ensure that Events and Interventions are not included. Sponsors should ensure that the standardization of findings in CLSTRESC closely adheres to the SENDIG. The information in CLTEST and CLSTRESC, along with CLLOC and CLSEV when appropriate, should be structured to permit grouping of similar findings and thus support the creation of scientifically interpretable incidence tables. Differences between the representation in CL and the presentation of Clinical Observations in the Study Report which impact traceability to the extent that terms or counts in incidence tables created from CL cannot be easily reconciled to those in the Study Report should be mentioned in the nSDRG.
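To illustrate the kind of grouping the TCG describes, the following minimal sketch (plain Python, with hypothetical CL-style records; not a prescribed SENDIG algorithm) counts each finding once per animal per dose group, the way an incidence table would:

```python
# Minimal sketch: building an incidence table from CL-like records.
# The records and grouping keys are illustrative assumptions, not a
# prescribed SENDIG algorithm.
from collections import defaultdict

cl_records = [  # hypothetical CL rows: (USUBJID, dose group, CLSTRESC, CLSEV)
    ("A001", "Control", "HYPERSALIVATION", "MILD"),
    ("A002", "Control", "HYPERSALIVATION", "MODERATE"),
    ("A101", "High",    "HYPERSALIVATION", "MILD"),
    ("A101", "High",    "TREMORS",         "SEVERE"),
]

# Count each finding once per animal per group, as an incidence table would.
seen = set()
incidence = defaultdict(int)
for usubjid, group, finding, _sev in cl_records:
    if (usubjid, group, finding) not in seen:
        seen.add((usubjid, group, finding))
        incidence[(group, finding)] += 1

for (group, finding), n in sorted(incidence.items()):
    print(f"{group:8s} {finding:20s} n={n}")
```

If the terms or counts produced this way cannot be easily reconciled to the report's incidence tables, that is exactly the situation the TCG says should be mentioned in the nSDRG.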
Table 2. High Impact Discrepancies – Reconcile if possible; if not, explain the discrepancy in the nSDRG.
Scenario # | Description | SEND Dataset | Study Report | Impact
1 | Numeric LB (and PC) values with leading characters (>, <, etc.) or text such as BLQ or BLOQ are represented differently in the SEND datasets and the study report. | LBORRES = BLOQ or LBORRES = <1 will result in a null value for LBSTRESN; the numeric value is carried in SUPPLB. | "<", ">" and the numeric value will be present for purposes of calculation. | High: the numeric value used in study reporting should be reflected in SUPPLB or SUPPPC; if it is not, this should be clearly addressed in the nSDRG. Longer term, it would be beneficial to have controlled terminology for "below limit of quantification" populated in --STRESC, with the limit value in --LLOQ and the unit in --STRESU applying to both. **See the table below for the recommendation.
2 | Correlations made in the study report are not reflected in RELREC (e.g., data are collected in two different systems). | Relationships between data points may not be present in RELREC. | Relationships between data points will be present. There may be correlations in the study report that are not in the pathology system because they were made after data collection and capture (e.g., as part of the discussion section). | High/Low
3 | Data points labeled with "Day 0" may be present when "Day 1" is expected in the SEND dataset. | Day 1 is expected, but Day 0 may be present. | Day 0 will be reported. | High: will exist until systems change.
4 | Textual differences (controlled terminology) exist between reports and SEND datasets. | Uses standard controlled terminology. | May or may not use standard controlled terminology. If the differences are impactful to data interpretation, it is recommended that they be listed in the nSDRG. | Low/High
5 | Modifiers for pathology data may or may not be listed in the study report but should be in the SEND dataset (MA/MI domains). | Modifiers will be listed in SUPP-- and/or --STRESC, in addition to --ORRES, depending on how data are collected and how base processes are defined by the sponsor. | Modifiers may or may not be listed as part of the base finding, which may lead to differences in incidence counts. The --STRESC value must be scientifically meaningful but may not match the value in the incidence tables. | High
6 | Nomenclature of severity may differ between the SEND dataset and the study report. | Will be present as a controlled term because it is mapped to Controlled Terminology (CT). | Severity will be listed as defined by the sponsor. | High
**When --STRESC is non-numeric:
Acceptable | STRESC | LLOQ | STRESU (applies to STRESC and LLOQ)
yes | <LLOQ | | ug/uL
yes | >ULOQ | | ug/uL
yes (preferred) | BLQ | 200 | ug/uL
no | BLQ | 200 |
no | <200 | 200 |
yes | <200 | 200 | ug/uL
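To make the preferred representation above concrete, here is a minimal Python sketch; the helper function is hypothetical, but the field handling follows the table: BLQ in LBSTRESC, the limit in LBLLOQ, the shared unit in LBSTRESU, and a null LBSTRESN.

```python
# Hypothetical helper: derive SEND-style standardized fields from a raw
# lab result with a "<" qualifier, following the preferred row of the
# table above (BLQ in STRESC, limit in LLOQ, shared unit in STRESU).
def standardize_result(lborres: str, unit: str) -> dict:
    rec = {"LBORRES": lborres, "LBORRESU": unit,
           "LBSTRESC": lborres, "LBSTRESN": None,
           "LBSTRESU": unit, "LBLLOQ": None}
    if lborres.startswith("<"):             # below the limit of quantification
        rec["LBSTRESC"] = "BLQ"             # non-numeric standardized result
        rec["LBLLOQ"] = float(lborres[1:])  # limit value carried in LLOQ
    else:
        rec["LBSTRESN"] = float(lborres)    # plain numeric result
    return rec

print(standardize_result("<200", "ug/uL"))
# {'LBORRES': '<200', ..., 'LBSTRESC': 'BLQ', 'LBSTRESN': None, 'LBLLOQ': 200.0}
print(standardize_result("3.5", "ug/uL"))
```

The numeric value used for reporting calculations would still be reflected in SUPPLB or SUPPPC, as the impact column in Table 2 recommends.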
Recommendations

There are, and will continue to be, inconsistencies between SEND datasets and study reports, for the various reasons described above. It is recommended that any data inconsistency be highlighted in the Nonclinical Study Data Reviewer's Guide (nSDRG). This is necessary for the reviewer to understand the context in which the SEND data are provided. While it is expected that addressing inconsistencies in the nSDRG will be sufficient during the early implementation of SEND, it does not preclude seeking technical solutions for them.

Low Impact Discrepancies: Resolution Is Not Necessary or Data Inconsistency Is Acceptable
Data inconsistencies that are deemed low impact would have minimal or no effect on a reviewer's interpretation of the SEND datasets versus the study report. Despite this minimal impact on interpretation, the discrepancies should still be called out in the nSDRG.

High Impact Discrepancies: Resolution Is Necessary or Data Inconsistency Should Be Resolved
Data inconsistencies that are deemed high impact could lead to differing interpretations of the SEND dataset versus the report for a given study. For these types of inconsistencies, it is recommended that, wherever and whenever possible, the sponsor reconcile differences between datasets and study reports using the tool(s) or manual processes available to them. It is understood that data reconciliation processes must be sustainable so as not to put an undue burden on industry. It is here that vendors providing the technical tools can have the most impact; technical vendors are encouraged to mature their products to help industry reconcile these high impact scenarios.
Additionally, sponsors and CROs need to consider moving toward collecting data in SEND format wherever possible, including changing lexicons/libraries to use controlled terminology instead of mapping, collecting all data electronically, and setting up studies in ways that will allow consistent, automated population of trial design domains.
Some discrepancies worth addressing specifically are:

  1. Severity mapping must be called out in the nSDRG if the severity scale used in reporting differs from the SEND standard [a]. An update to the Technical Conformance Guide (TCG) may address this issue. Sponsors are encouraged to stay current with TCG updates.
  2. If reports use legacy terminology whose relationship to the standard is not evident (e.g., in the LB domain), the differences could be impactful to data interpretation. It is recommended that these differences be represented in a study-specific mapping table in the nSDRG that lists each report parameter name mapped to the corresponding SEND standard term. See the example below.
Example
SEND Dataset | Report
Chemokine (C-X-C Motif) Ligand 1 | Growth Regulated Factor
Basophils | Absolute Basophils
Basophils/Leukocytes | Basophils
SGOT/Serum Glutamic Oxaloacetic Transaminase | AST/Aspartate Aminotransferase
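A study-specific mapping like the one above can also be applied mechanically during dataset generation. The sketch below (a hypothetical Python dictionary and terms, mirroring the example rows) replaces legacy report names with SEND standard terms and flags anything left unmapped for documentation in the nSDRG.

```python
# Hypothetical study-specific mapping from legacy report terms to SEND
# standard terms, mirroring the example table above.
LEGACY_TO_SEND = {
    "Growth Regulated Factor": "Chemokine (C-X-C Motif) Ligand 1",
    "Absolute Basophils": "Basophils",
    "Basophils": "Basophils/Leukocytes",
}

def map_lbtest(legacy_name: str) -> str:
    """Return the SEND term, or the legacy name unchanged if unmapped."""
    return LEGACY_TO_SEND.get(legacy_name, legacy_name)

report_terms = ["Growth Regulated Factor", "Absolute Basophils", "Glucose"]
for term in report_terms:
    flag = "" if term in LEGACY_TO_SEND else "  <-- unmapped, document in nSDRG"
    print(f"{term!r} -> {map_lbtest(term)!r}{flag}")
```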

FDA reviewers may seek to replicate the incidence tables in the study report with SEND data. It is recommended that sponsors evaluate their MA/MI SEND data for the potential to do this and, within reason, adjust lexicons or processes to enable it.


[a] https://wiki.cdisc.org/pages/viewpage.action?pageId=31313279

No Resolution Immediately Available or Data Inconsistency Resolution is Not Possible (Model Limits)

There are inconsistencies that cannot be resolved by the sponsor at this time, namely those due to model limits; for example:

  • Data points labeled with “Day 0” may be present when in the SEND dataset “Day 1” is expected
  • Values for body weight gain are represented differently between SEND dataset and study report.
  • Endpoints that are reported but not modeled.

As with other differences, an explanation should be given in the nSDRG. It is also recommended that these types of discrepancies be addressed in a near-future update to the SEND guidance documents, e.g., the Technical Conformance Guide, the SEND Implementation Guide, and the companion Confirmed Data Endpoints document.

Outstanding Issues/Next Steps

For many of the report/dataset consistency issues identified by our team, we were able to make recommendations. It was recognized that an immediate-term solution would be needed (typically, referencing the anomaly in the nSDRG) while documentation and collection practices, as well as collection and reporting systems, adapt to the requirement for electronic data submission in a standardized format. Still, there were a few items where the team felt that direct feedback from the FDA would provide a definitive direction for industry. These items are categorized and summarized below. An open conversation with a panel of FDA reviewers would be useful in order to develop a go-forward recommendation.

ISSUE 1: Pretest data may or may not be present in the report and datasets in equal amounts.

BACKGROUND: At this stage, most systems that output SEND data take an "all or nothing" approach to records. On the other hand, most systems that output tables for study reports allow parsing of records so that data tables in reports can present different groupings of data points based on relevance to the individual study. Therefore, if pretest data are collected, they will likely be output into the datasets and more than likely left out of the data tables in reports. Some SEND output systems allow the "extra" data to be removed and some do not; in all cases, manual effort is needed to make the datasets and reports match. The Fit for Use pilot had one comment from a reviewer indicating that referencing this difference in the nSDRG was sufficient.

QUESTION: Is referencing this difference in the nSDRG sufficient or is there an expectation that the datasets will have the same number of records as the report related to pretest/acclimation data?
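If a sponsor did choose to align the datasets with the report, the filtering itself can be simple. The sketch below (hypothetical Python records) assumes each record carries a SEND-style study day (--DY), with pretest days negative, and keeps only on-study records:

```python
# Minimal sketch: dropping pretest records from a findings domain when a
# sponsor chooses to align the datasets with the report. Assumes each
# record carries a SEND-style study day (--DY) and that pretest days are
# negative (SEND has no Day 0).
records = [  # hypothetical body weight rows: (USUBJID, BWDY, BWSTRESN)
    ("A001", -7, 180.0),  # acclimation weight, not in the report
    ("A001",  1, 195.0),
    ("A001",  8, 210.0),
]

on_study = [r for r in records if r[1] >= 1]  # keep Day 1 onward
print(on_study)  # pretest row removed
```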

ISSUE 2: There are often rounding differences when looking at raw data vs study reports vs datasets.

BACKGROUND: Formatting applied in reporting may display fewer significant digits than the underlying data provided in the SEND datasets, which leads to apparent rounding differences.

QUESTION: Is it acceptable to simply state in the nSDRG that the rounding differences exist?
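For illustration, the Python sketch below shows how the same collected value can appear differently in the two artifacts: the dataset carries the captured precision while the report displays a rounded value (the helper and the three-significant-figure choice are assumptions for the example):

```python
# Worked illustration: the dataset carries the collected precision while
# the report displays fewer significant digits, creating an apparent
# mismatch even though both come from the same raw value.
from math import floor, log10

def round_sig(x: float, sig: int) -> float:
    """Round x to `sig` significant digits (illustrative helper)."""
    if x == 0:
        return 0.0
    return round(x, sig - int(floor(log10(abs(x)))) - 1)

collected = 0.046872                # precision as captured; carried in the dataset
reported = round_sig(collected, 3)  # report might display only 3 significant figures
print(collected, reported)          # 0.046872 0.0469
```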

ISSUE 3: Data/metadata for permissible or occasionally expected variables may be in the report but not in the datasets.

BACKGROUND: There may be method or qualifier variables (typically permissible) that are covered in the report and/or recorded somewhere, but are not collected in the data collection system in a way that the system can output into SEND (e.g., LBMETHOD); including this metadata requires manual intervention.

QUESTION: In what scenarios is it acceptable for permissible variables included in the study report (e.g., LBMETHOD) not to be submitted electronically, considering the effort that manual intervention would require?

ISSUE 4: What is the expectation with regard to presentation of data related to replaced animals?

BACKGROUND: Each organization seems to have a different process for replacing animals on a study. Most processes are summarized in the study protocol and are time bound, based on length of study and species, and at least slightly influenced by collection system functionality.

QUESTION: Is the inclusion of replaced animal data meaningful to a reviewer? Is there a difference as to timing (i.e., before first dose/after first dose)? Is explanation in nSDRG sufficient?

ISSUE 5: There may not be RELREC information in the dataset if two different vendors collected the data in two separate systems. Correlations may be described in the study report in a table, but a RELREC dataset may not have been created.

BACKGROUND: There are occasions when, for instance, necropsy data are collected at the test facility and sent to a separate test site for microscopic examination. Correlations between MA and MI data are typical records within the RELREC domain. However, when the data are collected in different systems at different sites, the correlations that might otherwise occur within a collection system are no longer available.

QUESTION: Does the FDA expect manual correlation of RELREC data for MA/MI? For other domains?
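For reference, a manually built RELREC pairing might look like the following sketch (hypothetical Python records; the matching rule of same animal and same specimen is an illustrative assumption, since true MA/MI correlation is a pathologist's judgment):

```python
# Minimal sketch of manually built RELREC rows linking a macroscopic (MA)
# finding to its microscopic (MI) correlate when the two were collected in
# separate systems. Matching on animal + specimen is an illustrative
# assumption; real correlation is a pathologist's call.
ma = [("STUDY01", "A001", "MASEQ", "3", "LIVER")]  # (study, animal, idvar, seq, spec)
mi = [("STUDY01", "A001", "MISEQ", "7", "LIVER")]

relrec = []
for sid, subj, ma_idvar, ma_seq, spec in ma:
    for sid2, subj2, mi_idvar, mi_seq, spec2 in mi:
        if (sid, subj, spec) == (sid2, subj2, spec2):
            relid = f"{subj}-{spec}"  # shared identifier tying the pair together
            relrec.append((sid, "MA", subj, ma_idvar, ma_seq, relid))
            relrec.append((sid, "MI", subj, mi_idvar, mi_seq, relid))

cols = ["STUDYID", "RDOMAIN", "USUBJID", "IDVAR", "IDVARVAL", "RELID"]
for row in relrec:
    print(dict(zip(cols, row)))
```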

ISSUE 6: The body weight intervals reported in study listings may not reflect the intervals in the BG dataset. The study report may give the total gain for the study, or may show cumulative body-weight gains at each interval, while the SEND datasets may contain weekly intervals.

BACKGROUND: Most SEND tools generate the BG dataset based on the intervals between scheduled weight collections. The study reports or listings can reflect body weight differences over any interval, the most common being between the first and last body weights collected, an interval not reflected in BG by most tools.

QUESTION: Why must BG be submitted when the BW data would allow FDA, with Janus, to analyze the data as desired? How important is it that the intervals in the SEND dataset match those in the study report exactly?
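As background for the first question, the sketch below (hypothetical Python data) shows how body-weight gain over any interval, weekly or start-to-termination, can be derived directly from BW records:

```python
# Minimal sketch: body-weight gain over any interval can be derived from
# the BW records themselves (hypothetical rows for one animal), which is
# why exact alignment of BG intervals with the report may matter less.
bw = {  # study day -> body weight (g)
    1: 195.0, 8: 210.0, 15: 222.0, 22: 229.0,
}

def gain(start_day: int, end_day: int) -> float:
    """Body-weight gain between any two collection days."""
    return bw[end_day] - bw[start_day]

print(gain(1, 8))   # weekly interval, as a BG dataset might carry
print(gain(1, 22))  # start-to-termination gain, as a report might show
```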

Conclusion 

Differences between CDISC Standard for Exchange of Nonclinical Data (SEND) datasets and the study report are likely, and they should be listed and explained in the Nonclinical Study Data Reviewer's Guide (nSDRG) to help facilitate the review process. There are three primary reasons for differences between the SEND datasets and the study report.

First, most sponsors have existing, mature LIMS systems whose data standards are not reflective of SEND Controlled Terminology. Unless and until the source systems adopt SEND Controlled Terminology at data collection, there will be differences between SEND datasets and study reports. The burden for sponsors to incorporate controlled terminology into mature systems may be too significant to reconcile.

Second, the tools and programs used to generate the study report’s tables and listings are usually separate from the tools and programs used to generate SEND datasets.

This is further complicated when multiple systems are used to collect, analyze, and report nonclinical study data, and the data must be merged into SEND domains; this is more complex than the document-level merge of tables from multiple systems into one final study report.


Finally, many of the SEND generation tools are still in their early stages of development and have functional limitations, particularly around data filtering, which introduces differences between the study report and the SEND datasets. The tools and programs used to produce the study report's tables and listings typically provide an opportunity to exclude data at a fairly granular level for reporting purposes. The SEND tools may not provide the ability to filter these data at the same level of granularity. As the SEND tools mature, the ability to filter the data included in the SEND datasets to better align with the reported data should improve, reducing these differences.

If all of the differences outlined above are properly explained in the nSDRG, none should impact the ability to leverage the SEND data in the submission review process. As processes and tools continue to mature through continued practice of generating SEND packages, these differences should diminish. Another way to eliminate the potential for differences between the SEND datasets and the study report's tables and listings is to eliminate one of the artifacts, specifically some or all of the data listings. The value of the data listings in the review process has diminished since the introduction of the SEND datasets, which provide the same data in a format more easily reviewed with available tools. We support moving in this direction but understand it will take time and confidence in SEND-related processes for both sponsors and regulatory agencies.
