To the Editor:

We appreciate the opportunity to discuss and clarify our findings [1]. Although the letter authors’ work complements our own [2,3,4,5], the objectives and methods differ. For example, in Feinberg et al., Response Evaluation Criteria in Solid Tumors (RECIST) was applied to real-world imaging data submitted in case report forms by compensated physicians. In contrast, our objective was to capture real-world progression (rwP) for tens of thousands of patients from existing, passively collected electronic health record (EHR) data without additional processing by the treating clinician (see our follow-up publication [6]).

RECIST assessment is a resource-intensive process that requires detailed documentation (e.g., initial categorization of all lesions as measurable or non-measurable and identification of target lesions) and specific conditions (e.g., imaging obtained at pre-specified intervals with comparable modalities) that do not prevail in the real world [7]. In our first experiment with 26 patients, we employed a step-wise approach to evaluate whether RECIST could be applied retrospectively to passively collected real-world data (RWD) from EHRs. Although 58% of the charts had radiology reports with descriptions potentially appropriate for RECIST assessment, these descriptions were not necessarily sufficient. Only 31% of charts documented a direct comparison of all measured lesions between two time points (23% of charts for non-measured lesions), and no charts explicitly documented target lesions. We therefore concluded that the existing, passively collected EHR data available for our unselected cohort were insufficiently documented to apply RECIST criteria. As the authors point out, active collection of data from EHRs or independent evaluation of raw images may yield different results.

In our second experiment, we tested three alternative approaches for capturing progression from existing EHR documentation, which differed in the documents abstractors reviewed. In the radiology-anchored approach, abstractors captured only progression documented in existing radiology reports (regardless of how the radiologist made the determination) and did not make an independent assessment or independently apply RECIST to information in passively collected imaging reports. In the clinician-anchored approach, abstractors captured only progression events documented in the clinician’s note (regardless of how the determination was made). The combination approach captured progression events from both sources. Additional work is needed to evaluate the generalizability of the clinician-anchored approach for abstracting rwP at scale from existing documentation for patients with other solid tumors.

In response to the authors’ request for clarification about the two experimental sub-groups: the study was conducted in two stages, and patients were drawn as two independent random samples from the cohort of patients meeting the inclusion/exclusion criteria. By chance, one patient overlapped between the two sub-groups.

The clinical and research solutions proposed by the authors (RECIST training for radiologists, retrospective review of raw imaging, and adaptation of EHR workflows) must be weighed against their cost and time requirements and their incremental clinical utility (e.g., is a formal RECIST assessment always necessary when the cancer is generally stable?). We developed an approach to capture rwP in a commonly encountered RWD context: large volumes of static, passively collected data in which imaging results are available only in unstructured reports and re-contact is not possible. We look forward to the continuing evolution of the RWD field for the benefit of patients through the contributions and collaborations of many stakeholders, including academic researchers, regulators, industry partners, and patients.

Sandra Griffith and Rebecca Miksad (on behalf of the co-authors).