Takuya Takayama, Tsubasa Uto, Taiki Tsuge, Yusuke Kondo, Hironobu Tampo, Mayumi Chiba, Toshikatsu Kaburaki, Yasuo Yanagi, Hidenori Takahashi
Sensors 25(18), September 19, 2025, peer-reviewed, lead author
Retinal breaks are critical lesions that can cause retinal detachment and vision loss if not detected and treated early. Automated, accurate delineation of retinal breaks in ultra-widefield (UWF) fundus images remains challenging. In this study, we developed and validated a deep learning segmentation model based on the PraNet architecture to localize retinal breaks in break-positive cases. We trained and evaluated the model on a dataset comprising 34,867 UWF images from 8,083 cases. Performance was assessed using image-level segmentation metrics, including accuracy, precision, recall, Intersection over Union (IoU), Dice score, and centroid distance score. The model achieved an accuracy of 0.996, a precision of 0.635, a recall of 0.756, an IoU of 0.539, a Dice score of 0.652, and a centroid distance score of 0.081. To our knowledge, this is the first study to present pixel-level segmentation of retinal breaks in UWF images using deep learning. The proposed PraNet-based model showed high accuracy and robust segmentation performance, highlighting its potential for clinical application.
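For reference, the overlap metrics reported above have standard definitions: for a predicted mask P and ground-truth mask G, IoU = |P ∩ G| / |P ∪ G| and Dice = 2|P ∩ G| / (|P| + |G|). The short Python sketch below computes them from binary masks with NumPy; it is illustrative only, not the authors' evaluation code, and the centroid distance normalization (by the image diagonal) is an assumption, since the paper's exact definition is not restated here.

    import numpy as np

    def iou_and_dice(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
        # IoU and Dice between two binary masks of identical shape.
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        total = pred.sum() + gt.sum()
        iou = inter / union if union > 0 else 1.0    # both masks empty -> perfect agreement
        dice = 2 * inter / total if total > 0 else 1.0
        return float(iou), float(dice)

    def centroid_distance(pred: np.ndarray, gt: np.ndarray) -> float:
        # Euclidean distance between mask centroids, normalized by the image
        # diagonal (assumed normalization); requires both masks to be non-empty.
        def centroid(mask: np.ndarray) -> np.ndarray:
            ys, xs = np.nonzero(mask)
            return np.array([ys.mean(), xs.mean()])
        diag = float(np.hypot(*pred.shape))
        return float(np.linalg.norm(centroid(pred) - centroid(gt)) / diag)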