Using Computer Assistance to Bridge the Gap Between Metastatic Tumor Response Evaluation in Clinical Trials and Clinical Practice

By Brian C. Allen, MD, and Andrew D. Smith, MD, PhD

Article Highlights

  • In clinical trials, metastatic tumor response is almost always evaluated according to RECIST 1.1, but it’s almost never used in clinical practice.
  • RECIST 1.1 requires communication and coordination that must continue over time and remain unbroken, which is impractical in daily clinical practice. This generally leads to a subjective assessment of tumor response, both by the radiologist and the oncologist. Is there a way for us to objectively evaluate tumor response in clinical practice?
  • Computer-assisted tumor response evaluation (CARE) could be the answer. CARE prevents target lesion selection errors, data transfer errors, mathematical computational errors, and categorical response selection errors. It can improve the efficiency of the tumor response evaluation process, longitudinally track data, automate objective response categorization according to RECIST 1.1 and/or a variety of other objective tumor response criteria, and track a variety of quantitative tumor imaging metrics.

In most oncologic clinical trials and in routine clinical practice, serial CT imaging is used to monitor metastatic disease response to a variety of systemic therapies. CT imaging is widely available and contains quantitative information, particularly with respect to measuring changes in tumor size. However, there are many differences in how tumor size and response are assessed in clinical trials compared with clinical practice.

In Clinical Trials

In clinical trials, metastatic tumor response is almost always evaluated according to objective imaging criteria. Response Evaluation Criteria in Solid Tumors (RECIST) 1.1 forms a common language for objectively defining metastatic tumor response and is the most commonly used set of tumor response criteria. How target lesions are selected, measured, and followed over time is precisely defined by RECIST 1.1. Briefly, unidimensional measurements of up to five target lesions are summed at each time point, and the percent change relative to the start of therapy or to the lowest tumor burden is calculated and combined with information on the presence or absence of new metastases to determine objective tumor response. The same target lesions are tracked over time, and progressive disease (PD) is defined as the development of new metastases, an increase in total tumor burden of 20% compared with the lowest tumor burden over the course of therapy, or unequivocal progression of nontarget lesions.
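The arithmetic behind these categories can be made concrete. The following is a minimal sketch (not any trial system's actual software) using the RECIST 1.1 thresholds: partial response is at least a 30% decrease from baseline, and progressive disease is at least a 20% increase from the nadir with an absolute increase of at least 5 mm.

```python
# Sketch of RECIST 1.1 response categorization from the sum of
# target-lesion diameters at each time point. Function name and
# input format are illustrative.

def recist_response(target_sums_mm, new_metastases=False):
    """Classify the latest time point given the sum of target-lesion
    diameters (mm) at each exam, baseline first."""
    baseline = target_sums_mm[0]
    current = target_sums_mm[-1]
    nadir = min(target_sums_mm)  # lowest tumor burden over therapy

    if new_metastases:
        return "PD"
    # PD: >= 20% increase from nadir AND >= 5 mm absolute increase
    if nadir > 0 and current >= nadir * 1.20 and current - nadir >= 5:
        return "PD"
    if current == 0:
        return "CR"  # complete response: all target lesions resolved
    if current <= baseline * 0.70:
        return "PR"  # partial response: >= 30% decrease from baseline
    return "SD"      # stable disease

# Example: baseline 100 mm, shrinks to 60 mm, regrows to 80 mm.
# 80 mm is a 33% (+20 mm) increase from the 60 mm nadir, so PD is
# declared even though the tumor burden is still below baseline.
print(recist_response([100, 60, 80]))  # -> PD
```

Note that PD is judged against the nadir rather than the immediate prior exam, which is exactly the longitudinal context most radiologists lack in clinical practice.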

Objective response according to RECIST 1.1 is critically important, as this determines objective response rate (ORR) and progression-free survival (PFS), which are major endpoints in many clinical trials. Furthermore, PD according to RECIST 1.1 determines the end of therapy for patients in clinical trials, which influences overall survival (OS) on both an individual level and a study-wide level. These study endpoints are used to determine which systemic therapies will receive U.S. Food and Drug Administration (FDA) approval.

To perform RECIST 1.1 in clinical trials, longitudinal data are manually captured in case report forms and are entered into electronic data capture systems. As RECIST 1.1 is somewhat burdensome to perform and the use of multiple different radiologists leads to inconsistencies in tumor assessment, some phase III clinical trials are required to utilize central review of images. Most commonly, two readers independently interpret all images according to RECIST 1.1, and disagreements are resolved by an adjudicator (a third independent opinion). The central review of images is meant to further improve standardization and reduce errors and inconsistencies.

Anyone outside of our field would, therefore, naturally think that we would use similar methods to objectively evaluate tumor response in clinical practice. Specifically, one would think that we would use RECIST 1.1 to reduce subjectivity and utilize methods that improve consistency and reduce errors. Yet, this is almost never done.

In Clinical Practice

In clinical practice, there are few rules and little standardization in the reporting of oncologic studies. Although most radiologists are familiar with RECIST, few have in-depth knowledge, and the choice of target lesions in clinical practice does not typically follow RECIST 1.1 criteria, particularly with respect to the total number of target lesions, the number of target lesions per organ, and the measurement of lymph nodes. Most radiologists report bi-dimensional tumor measurements of several index lesions on the initial scan and on follow-up imaging, and reporting of oncologic studies is streamlined for efficiency. Information regarding the treatment start date and which response criteria are best for the particular tumor and therapy is frequently unavailable because of a lack of communication between the oncologist and radiologist.

Despite access to electronic medical records, voice recognition software, and digital image viewers with a variety of measurement tools, most of the steps in tumor response assessment are performed manually. Target lesion selection, measurements, and data transfer to the electronic medical records are all performed manually, and the radiologist does not sum “target lesion” measurements to determine total tumor burden or calculate percent changes in tumor burden in a longitudinal fashion. The situation is further complicated in that studies are being read by a variety of radiologists, frequently using free-form reporting styles that are prone to variability. Most radiologists compare the current examination to the immediate prior examination and don’t have information on the lowest tumor burden or when it occurred, making it impractical to utilize RECIST 1.1. In addition, radiologists may choose new “target lesions,” omit prior target lesions, or re-measure lesions on the prior examination.

RECIST 1.1 requires communication and coordination that must continue over time and remain unbroken. It is simply impractical to perform RECIST 1.1 in daily clinical practice. This generally leads to a subjective assessment of tumor response, both by the radiologist and the oncologist. However, a subjective approach is never used in clinical trials. Is there a way for us to objectively evaluate tumor response in clinical practice?

Yes. Computers are used throughout industry and medicine to improve processes, and they absolutely can be used to improve tumor response assessment.

Computer-Assisted Tumor Response Evaluation

Most major imaging vendors are developing software solutions to improve tumor response evaluation. These software solutions can be referred to as computer-assisted tumor response evaluation (CARE). Each vendor has a different form of CARE, but the following is how it would ideally work.

A baseline CT imaging study from a patient with metastatic disease is flagged for objective tumor response assessment via CARE. The radiologist opens the study within the cloud-based CARE platform, and target lesions are selected and measured (with each vendor using a slightly different method). The CARE platform includes all of the rules of RECIST 1.1 and many other objective response criteria. If the radiologist makes an error and deviates from RECIST 1.1 (e.g., selects more than two target lesions in one organ system or selects a lymph node that measures < 1.5 cm on the baseline exam), the system identifies the error in real time and prompts the radiologist to correct the error before moving forward. After all target lesions are selected and nontarget lesions are identified, the imaging metric data are automatically extracted and exported to a database and optionally into the electronic medical record before or after oncologist review. The total tumor burden is automatically calculated and stored. The key images and all tumor metrics are available for review by the oncologist in a format that best suits them (e.g., tables and/or graphs).
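The real-time checks described above are simple rule validations. A minimal sketch follows, assuming lesion records with organ and lymph-node fields; the record layout and function name are illustrative, not any vendor's actual API. The rules are from RECIST 1.1: at most five target lesions, at most two per organ, and a target lymph node must measure at least 15 mm in short axis at baseline.

```python
# Sketch of the baseline rule checks a CARE platform might run
# before letting the radiologist move forward. Illustrative only.
from collections import Counter

def baseline_errors(target_lesions):
    """Return a list of RECIST 1.1 rule violations to show the
    radiologist in real time."""
    errors = []
    if len(target_lesions) > 5:
        errors.append("more than 5 target lesions selected")
    per_organ = Counter(lesion["organ"] for lesion in target_lesions)
    for organ, n in per_organ.items():
        if n > 2:
            errors.append(f"more than 2 target lesions in {organ}")
    for lesion in target_lesions:
        if lesion.get("is_lymph_node") and lesion["short_axis_mm"] < 15:
            errors.append(
                f"lymph node in {lesion['organ']} measures < 1.5 cm short axis")
    return errors

lesions = [
    {"organ": "liver", "is_lymph_node": False},
    {"organ": "liver", "is_lymph_node": False},
    {"organ": "liver", "is_lymph_node": False},
    {"organ": "mediastinum", "is_lymph_node": True, "short_axis_mm": 12},
]
for e in baseline_errors(lesions):
    print(e)
```

Surfacing these violations at the moment of lesion selection, rather than at central review months later, is the core of the error-prevention argument for CARE.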

Depending on how the target lesions are selected, the CARE platform may extract additional quantitative metrics beyond unidimensional tumor length. Other quantitative metrics may include mean attenuation (i.e., density), CT texture, the vascular tumor burden (VTB), the percentage of necrosis, etc. Changes in these metrics may overcome known limitations of RECIST 1.1. For example, antiangiogenic agents devascularize tumors, and the VTB may more accurately reflect metastatic tumor response than tumor length measurements.

With CARE, the oncologist can prospectively or retrospectively enter the start date of therapy for future assessments to compare with the baseline examination. The start date is no longer needed up front, as the computer can adjust and perform the necessary calculations once it is entered. When subsequent CT examinations are encountered, the above tumor assessment process is repeated. The radiologist is directed to measure the same target lesions and records information on nontarget lesion response and the presence or absence of new metastases. All of this information is automatically extracted and stored, and the computer can automatically derive objective response per RECIST 1.1 or any number of different response criteria (e.g., immune-related RECIST, Choi criteria) by comparison with tumor burden at prior time points.
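The longitudinal bookkeeping this paragraph describes can be sketched as follows, assuming the platform stores each exam's target-lesion diameters; the function and field names are illustrative, not any vendor's API.

```python
# Sketch of longitudinal tumor-burden tracking: for each exam, sum
# the target-lesion diameters and report percent change from baseline
# and from the nadir (the quantities RECIST 1.1 response is based on).

def burden_history(exams_mm):
    """exams_mm: per-exam lists of target-lesion diameters (mm),
    baseline first. Returns one summary row per exam."""
    rows = []
    for i, lesions in enumerate(exams_mm):
        total = sum(lesions)
        if i == 0:
            baseline = total
            nadir = total  # nadir = lowest burden seen so far
        rows.append({
            "exam": i,
            "total_mm": total,
            "pct_from_baseline": round(100 * (total - baseline) / baseline, 1),
            "pct_from_nadir": round(100 * (total - nadir) / nadir, 1),
        })
        nadir = min(nadir, total)
    return rows

# Three target lesions shrink on therapy, then partially regrow.
for row in burden_history([[40, 30, 30], [25, 20, 15], [30, 28, 22]]):
    print(row)
```

Because the nadir is carried forward automatically, the comparison is always against the lowest tumor burden on study, not merely the immediate prior exam, which is the step that manual clinical-practice workflows routinely skip.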

In review, CARE prevents target lesion selection errors, data transfer errors, mathematical computational errors, and categorical response selection errors. CARE can improve the efficiency of the tumor response evaluation process, longitudinally track data, automate objective response categorization according to RECIST 1.1 and/or a variety of other objective tumor response criteria, and track a variety of quantitative tumor imaging metrics. CARE can provide key images and data to the oncologist in a customized format that can then be archived in the electronic medical records, and it can be used to improve communication with the patient about their individual tumor response. Picture the oncologist discussing the treatment with their patient using key images and colored graphical displays that illustrate longitudinal tumor response. Finally, CARE can be used on a departmental or institutional level for practice quality improvement by evaluating outcomes and practice patterns.

As metastatic tumor response evaluation defines critical endpoints in oncologic patient care, methods that reduce errors, reduce time of evaluation, and improve documentation and communication could lead to needed advancements in clinical trials and clinical care. CARE may be the future of objective tumor response evaluation and has the potential to improve clinical practice in oncology. You can learn more about CARE research by reviewing Abstract 432 from the 2017 Genitourinary Cancers Symposium, in which the authors compare CARE with the standard of care (i.e., manual methods) in a multi-institutional study and show how common errors in tumor response evaluation are eliminated and overall efficiency is improved using CARE.

About the Authors: Dr. Allen is assistant professor of radiology with the Duke University Medical Center. Dr. Smith is associate professor in radiology/physiology and director of radiology research with the University of Mississippi Medical Center.