Ensuring endoscopic competence is a priority for GI professional societies in order to assure safe, technically successful examinations and clinically relevant outcomes. Enforcing standards is of the utmost importance when a paradigm shift occurs, as with the Resect and Discard (R&D) strategy for diminutive polyps, which aims to avoid the time and expense of retrieving and histologically examining diminutive polyps that have little or no malignant potential. Although both the American and the European Societies for Gastrointestinal Endoscopy (ASGE and ESGE) endorse real-time endoscopic prediction of the histology of diminutive colorectal polyps, these endorsements rest on high-quality standards that must be achieved, including adequate training and monitoring of the involved endoscopists.

In this issue of Digestive Diseases and Sciences, Garcia-Alonso et al. [1] address the fundamental issues involved in training for optical diagnosis, i.e., the histological prediction of diminutive colonic polyps. The authors evaluated whether a single ex vivo image-based session, followed by self-directed training on 100 lesions with feedback after each diagnosis, enabled trainees to reach the performance thresholds recommended by the ASGE for < 7 mm polyps. The study involved medical students, untrained in either endoscopy or narrow-band imaging (NBI), whose progress was assessed with the learning curve cumulative summation (LC-CUSUM) method in order to identify when they reached competency. Diagnostic accuracy was subsequently evaluated over a 9-month period using 180 images of diminutive polyps captured during routine colonoscopies. Most evaluators failed to reach any of the thresholds recommended in the Preservation and Incorporation of Valuable Endoscopic Innovations (PIVI) statement of the ASGE [2]. Nevertheless, prediction accuracy improved over time for all of the raters.

Although numerous training modalities have been explored, uncertainty still exists with regard to the best teaching method for optical diagnosis and how best to certify the acquisition of competency. Indeed, even though the NBI International Colorectal Endoscopic (NICE) criteria have been validated through a systematic process to differentiate adenomatous from hyperplastic polyps, optimal teaching methods for predicting colorectal polyp histology with NBI using these criteria have not been developed.

The first reports on NBI characterization of diminutive polyps used a consistent methodology, suggesting that optical diagnosis could be effectively taught by simple computer-based training modules consisting of still polyp images without active feedback [3], or by short didactic teaching sessions, delivered mainly to trainees. The authors tested the efficacy of a teaching model on an unbiased and naïve population, namely medical students, who were assumed to have little knowledge of the optical diagnosis of diminutive polyps from endoscopic and chromoendoscopic images. All of these studies were performed in academic centers, but their results were confirmed in a group of academic and community-based gastroenterologists without prior NBI experience, who underwent a 20-min, computer-based teaching module [4].

Nevertheless, these promising ex vivo results were not replicated in vivo; indeed, as the authors emphasize, the excellent performance achieved by experts in academic referral centers with an interest in novel imaging technologies has not been fully reproduced by non-experts in the community setting, which represents the greatest barrier to translating the R&D strategy into routine practice. The short training sessions typical of ex vivo studies, performed before in vivo imaging, were insufficient to improve the diagnostic performance of optical diagnosis, suggesting that this methodology is not as easy to learn as was initially thought.

More recently, many other teaching methodologies have been tested, with consistently positive results. Allen et al. [5] randomized trainees to in-classroom training (a single live teaching session with an NBI expert) versus self-directed training (the same educational tool with recorded audio). After a test, the former received live feedback, whereas the latter received automated audio feedback. The study showed no difference in the accuracy of NBI characterization of polyps between the in-class and self-directed learning groups, suggesting that both training methods can be used to educate medical trainees in the NICE criteria and stressing the importance of monitoring and feedback, whether live or automated.

Improving optical diagnosis performance in community settings depends on sustained, structured interventions. In a previous report from our group, the accuracy of NBI characterization of diminutive polyps and the agreement between endoscopically and histologically based surveillance recommendations were well below the 90% threshold recommended by the ASGE. In order to improve performance, periodic monitoring and auditing with active feedback on NBI characterization of diminutive polyps was incorporated into our internal colonoscopy quality assurance program, with consequent attainment of both PIVI thresholds for NBI predictions [6]. Accordingly, the variability in performance could be attributed to the lack of rigorous training, performance measurement, and periodic performance feedback, suggesting that comprehensive training programs with feedback should be widely implemented and validated among community gastroenterologists in order to achieve the PIVI thresholds. This fascinating hypothesis has been indirectly confirmed by the Small Hyperplastic and Adenomatous Reliability Protocol (SHARP) trial [7], which also included endoscopists from the community setting. All endoscopists were required to undergo formal training with a post-training examination before participating in the study; the study produced excellent results in terms of both NBI accuracy and the correctness of surveillance recommendations.
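Stated concretely, the two PIVI benchmarks (a negative predictive value of at least 90% for adenomatous histology and at least 90% agreement between endoscopy-based and pathology-based surveillance intervals) reduce to two simple proportions. The following sketch shows how such an audit might be computed; all of the numbers are synthetic illustrations, not data from any of the studies discussed.

```python
# Illustrative check of the two ASGE PIVI performance thresholds (>= 90%)
# for optical diagnosis of diminutive polyps. All numbers are synthetic.

def npv(true_negatives: int, false_negatives: int) -> float:
    """NPV = TN / (TN + FN) among polyps predicted non-adenomatous."""
    return true_negatives / (true_negatives + false_negatives)

def interval_agreement(endoscopy_intervals, pathology_intervals) -> float:
    """Fraction of cases where the two surveillance recommendations match."""
    matches = sum(e == p for e, p in zip(endoscopy_intervals, pathology_intervals))
    return matches / len(endoscopy_intervals)

PIVI_THRESHOLD = 0.90

# Hypothetical audit of 100 polyps predicted non-adenomatous: 93 truly
# non-adenomatous, 7 adenomas missed by optical diagnosis.
print(npv(93, 7) >= PIVI_THRESHOLD)  # True: 0.93 meets the 90% bar

# Hypothetical surveillance recommendations (years) for five patients.
endo = [5, 10, 5, 3, 10]
path = [5, 10, 5, 5, 10]
print(interval_agreement(endo, path) >= PIVI_THRESHOLD)  # False: 4/5 = 80%
```

The point of tracking both proportions separately is that an endoscopist can meet one threshold while failing the other, which is why audits such as the one described above report them individually.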

A very recent study has further confirmed and strengthened these assumptions regarding the importance of both formalized initial training and continuous monitoring with feedback. In a study by Vleugels et al. [8], endoscopists first underwent image-based interactive training, followed by an image-based post-training test and, finally, a plenary discussion of all polyps included in the test. Endoscopists who failed the first test had two further attempts to pass. After completing the image-based training, endoscopists had to pass a real-time test in their own routine practice, again with three attempts allowed. They were then asked to continue applying optical diagnosis in their daily practice and were randomized either to receive a personal 3-monthly report on the outcome of their high-confidence predictions (feedback group) or to be monitored and receive feedback only at the end of the study (observation group). The study showed that a highly selected and accredited group of endoscopists could achieve and maintain both PIVI thresholds over 1 year, independently of whether regular interim feedback was provided. Interestingly, in this study the LC-CUSUM analysis identified when endoscopists became so proficient in achieving the PIVI thresholds that further monitoring of performance was not required. Although in the article by Garcia-Alonso et al. [1] the LC-CUSUM was not able to identify competent endoscopists, since those achieving theoretical competency during the training phase did not reach the PIVI thresholds in the accuracy evaluation phase, the authors are to be complimented for choosing a quantitative measure for competency certification. Indeed, the LC-CUSUM produces a learning curve for each endoscopist, which through monitoring can provide information on that endoscopist's learning in a continuous, step-by-step fashion.
When this learning curve crosses a predefined performance threshold, the endoscopist is deemed either to have achieved sufficient competency, no longer requiring monitoring, or to require retraining; if neither performance threshold is crossed, continued observation is required, since competency has not been demonstrated.
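The decision rule just described can be made concrete. Below is a minimal sketch of an LC-CUSUM competency monitor in its standard formulation: each case contributes a log-likelihood-ratio weight, the cumulative score is truncated at zero, and competency is signalled once a decision limit h is crossed. The error rates (p0, p1) and the limit h shown are purely illustrative assumptions, not the parameters used in the studies discussed here.

```python
# Sketch of an LC-CUSUM monitor for competency in optical diagnosis.
# p0 = inadequate (null) error rate, p1 = adequate error rate (p1 < p0),
# h = decision limit. All three values below are illustrative only.
import math

def lc_cusum(outcomes, p0=0.25, p1=0.10, h=2.0):
    """outcomes: iterable of 0 (correct diagnosis) or 1 (diagnostic error).
    Returns the 1-based case index at which competency is signalled,
    or None if the decision limit h is never reached."""
    w_fail = math.log(p1 / p0)                 # negative: errors pull the score down
    w_success = math.log((1 - p1) / (1 - p0))  # positive: successes push it up
    s = 0.0
    for i, x in enumerate(outcomes, start=1):
        s = max(0.0, s + (w_fail if x else w_success))  # truncate at zero
        if s >= h:
            return i  # competency achieved at case i; monitoring can stop
    return None  # competency not demonstrated; continue observation

# A trainee correct on every case signals competency quickly;
# a trainee erring on every case never does.
print(lc_cusum([0] * 30))  # 11
print(lc_cusum([1] * 30))  # None
```

Because the score resets at zero rather than accumulating credit indefinitely, a run of early successes cannot mask later deterioration, which is precisely the property that makes the method suitable for continuous, case-by-case monitoring of learning.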

What other options could become available in the near future to close the gap between academic and community endoscopists and thereby make the R&D strategy successful? Two very recent studies have shown promising results for artificial intelligence-based models in reducing inter-observer variability in endoscopic polyp interpretation. In a study by Byrne et al. [9], an artificial intelligence model trained on endoscopic video was able to differentiate diminutive adenomas from hyperplastic polyps with high accuracy when high-confidence predictions were made. In a study by Chen et al. [10], the authors developed and tested a computer-aided diagnosis system based on a deep neural network (DNN-CAD) to analyze NBI images and predict the histology of diminutive colorectal polyps, comparing its predictions with those of endoscopists. DNN-CAD discriminated polyps with > 90% accuracy, whereas fewer than half of the novice endoscopists involved in the study achieved the recommended accuracy thresholds.
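Both the AI studies and the PIVI framework share one evaluation convention worth making explicit: performance is reported only on high-confidence predictions, with low-confidence cases deferred to histology. A minimal sketch of that evaluation follows; the confidence cut-off and the example predictions are hypothetical illustrations, not data from either study.

```python
# Sketch of the "high-confidence only" evaluation used in optical
# diagnosis, whether by endoscopists or by an AI model: predictions
# below a confidence cut-off are deferred to histology, and accuracy
# is reported only on the remaining high-confidence calls.
# The cut-off and the sample predictions are illustrative assumptions.

def high_confidence_accuracy(preds, cutoff=0.9):
    """preds: list of (predicted_label, true_label, confidence) tuples.
    Returns (accuracy on high-confidence calls, fraction of calls kept)."""
    kept = [(p, t) for p, t, c in preds if c >= cutoff]
    if not kept:
        return None, 0.0  # no high-confidence calls to evaluate
    correct = sum(p == t for p, t in kept)
    return correct / len(kept), len(kept) / len(preds)

preds = [
    ("adenoma", "adenoma", 0.97),
    ("hyperplastic", "hyperplastic", 0.95),
    ("adenoma", "hyperplastic", 0.55),  # low confidence: deferred to histology
    ("adenoma", "adenoma", 0.92),
]
acc, kept = high_confidence_accuracy(preds)
print(acc, kept)  # 1.0 0.75
```

The trade-off this exposes is central to the R&D debate: a stricter cut-off raises accuracy on the evaluated subset but shrinks the fraction of polyps to which the strategy, with its attendant cost savings, can actually be applied.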

These studies suggest a bright future for a wide application of the R&D strategy; nonetheless, while awaiting future developments, rigorous training and credentialing represent the cornerstone for safely implementing optical diagnosis in clinical practice.