Abstract
Objective: The aim of this study was to investigate automated feature detection, segmentation, and quantification of common findings in periapical radiographs (PRs) by using deep learning (DL)–based computer vision techniques.

Study Design: Caries, alveolar bone recession, and interradicular radiolucencies were labeled on 206 digital PRs by 3 specialists (2 oral pathologists and 1 endodontist). The PRs were divided into “Training and Validation” and “Test” data sets consisting of 176 and 30 PRs, respectively. Multiple transformations of image data were used as input to deep neural networks during training. Outcomes of existing and purpose-built DL architectures were compared to identify the most suitable architecture for automated analysis.

Results: The U-Net architecture and its variant significantly outperformed Xnet and SegNet in all metrics. The overall best performing architecture on the validation data set was “U-Net+Densenet121” (mean intersection over union [mIoU] = 0.501; Dice coefficient = 0.569). Performance of all architectures degraded on the “Test” data set; “U-Net” delivered the best performance (mIoU = 0.402; Dice coefficient = 0.453). Interradicular radiolucencies were the most difficult to segment.

Conclusions: DL has potential for automated analysis of PRs but warrants further research. Among existing off-the-shelf architectures, U-Net and its variants delivered the best performance. Further performance gains can be obtained via purpose-built architectures and a larger multicentric cohort.
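For readers unfamiliar with the two reported segmentation metrics, the following is a minimal sketch (not the authors' code) of how the Dice coefficient and mean intersection over union (mIoU) are typically computed per class on integer-labeled masks. The class indices, mask shapes, and function name are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of per-class Dice and mIoU on label masks.
import numpy as np

def per_class_scores(pred, truth, num_classes):
    """Return (mIoU, mean Dice) averaged over classes present in either mask."""
    ious, dices = [], []
    for c in range(num_classes):
        p = (pred == c)
        t = (truth == c)
        if not p.any() and not t.any():
            continue  # class absent from both masks; skip it
        intersection = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append(intersection / union)
        dices.append(2 * intersection / (p.sum() + t.sum()))
    return float(np.mean(ious)), float(np.mean(dices))

# Example with hypothetical 4-class masks (0 = background, 1 = caries,
# 2 = bone recession, 3 = interradicular radiolucency).
rng = np.random.default_rng(0)
pred = rng.integers(0, 4, size=(256, 256))
truth = rng.integers(0, 4, size=(256, 256))
print(per_class_scores(pred, truth, num_classes=4))
```

Under this convention, Dice weights the overlap against the sum of the two mask areas, whereas IoU weights it against their union, so Dice is always at least as large as IoU for the same prediction, consistent with the reported pairs (e.g., mIoU = 0.501 vs. Dice = 0.569).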
Original language | English |
---|---|
Pages (from-to) | 711-720 |
Number of pages | 10 |
Journal | Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology |
Volume | 131 |
Issue number | 6 |
Early online date | 16 Sept 2020 |
DOIs | |
Publication status | Published - Jun 2021 |
Bibliographical note
Funding Information:The work was supported by the University of Jeddah, Saudi Arabia (UJ-20-097-DR), and by a gift of $20,000 from Amazon Web Services.
Copyright © 2021. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/