
Diagnosis of nasal bone fractures and clinical application through artificial intelligence-based analysis of radiographic images

Alternative Title
Diagnosis of nasal bone fracture and clinical application with nasal bone X-ray using artificial intelligence
Abstract
Introduction
For the diagnosis of nasal bone fractures, imaging modalities play an important role. Among these, X-ray imaging is easy to perform, carries a low risk of radiation exposure, and has a relatively high sensitivity of over 80%. However, it is difficult for inexperienced physicians to diagnose fractures from X-ray images alone. Recently, the field of medical imaging diagnosis has been changing as cutting-edge technologies are combined with conventional methods. Within this trajectory of transformation, artificial intelligence plays a pivotal role in improving the precision and efficiency of diagnosis. In this study, our objective is to devise a method for enhancing the accuracy and sensitivity of nasal bone fracture diagnosis on X-ray images by employing a pre-training technique.

Methods
Data set: The dataset used in this study comprises a total of 8,116 lateral cephalogram images obtained through radiographic imaging of 4,058 patients with suspected nasal bone fractures between January 2010 and March 2023. The dataset was partitioned by random sampling into training, validation, and test sets at a ratio of 7:1:2.
Image cropping: Using an internally developed landmark-setting program, we establish reference points and demarcate intermediate points between these landmarks. Each image is then cropped based on these reference and intermediate points.
Training architecture:
(a) Pre-training subject, fracture line segmentation: The region of interest (ROI) corresponding to the nasal bone fracture line was labeled by a skilled clinical expert. The 360 images used for pre-training were drawn exclusively from the training and validation splits of the full dataset, to avoid biased assessment of nasal bone fracture classification.
(b) Training subject, fracture classification: The model architecture proposed by Nam et al. (2022) was established as a baseline using ImageNet pretrained weights. In this study, the encoder of the EfficientNet backbone pretrained on the segmentation task was transferred to (i.e., used to initialize the weights of) the fracture classification model.
(c) Training subject, fracture classification: Following Nam et al. (2022), image views were systematically altered and fed into two distinct classification models.

Results
For the quantitative evaluation of the model, sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve (AUC) were used. Sensitivity, specificity, and accuracy were calculated by determining the threshold with the largest difference between the true positive rate and the false positive rate on the tuning set and applying it as the optimal threshold for the test set (a sketch of this step is shown below). The optimal thresholds were 0.288 for the baseline (Nam et al., 2022) and 0.667 for our method. For a 256x256 image size, the baseline model showed an accuracy of 70.1%, sensitivity of 63.2%, and specificity of 77.5%, while our method achieved an accuracy of 72.4%, sensitivity of 65.2%, and specificity of 80.0%. For a 448x448 image size, the baseline model showed an accuracy of 72.9%, sensitivity of 71.1%, and specificity of 74.7%, whereas our method showed an accuracy of 73.0%, sensitivity of 64.0%, and specificity of 82.5%. Comparing the baseline with our method, our method achieved higher specificity, but sensitivity and AUC were relatively lower.
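The following is a minimal sketch of the threshold-selection step described above: picking the cutoff that maximizes the difference between the true positive rate and the false positive rate (Youden's J statistic) on the tuning set, then applying it to the test set. It is written with NumPy and scikit-learn; the array names `val_labels`, `val_scores`, `test_labels`, and `test_scores` are illustrative assumptions, not names taken from the thesis.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, confusion_matrix

def select_threshold(labels, scores):
    """Pick the probability cutoff maximizing TPR - FPR (Youden's J) on the tuning set."""
    fpr, tpr, thresholds = roc_curve(labels, scores)
    return thresholds[np.argmax(tpr - fpr)]

def evaluate(labels, scores, threshold):
    """Apply a fixed threshold and report sensitivity, specificity, accuracy, and AUC."""
    preds = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "auc": roc_auc_score(labels, scores),
    }

# threshold = select_threshold(val_labels, val_scores)      # tuning (validation) set
# metrics = evaluate(test_labels, test_scores, threshold)   # held-out test set
```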
Additionally, Grad-CAM was used to visualize the regions of interest for fracture prediction in the images (a sketch of this visualization follows the conclusion). Grad-CAM analysis showed strong attention on the nasal bone region in both true positive and false positive images.

Conclusion
This study derived a method that enables the diagnosis of nasal bone fracture patients from radiographic examinations alone, so that less experienced physicians can make the diagnosis more easily. In contrast to prior research, applying a model pre-trained with fracture line labeling demonstrated high accuracy, sensitivity, and specificity at smaller image sizes.
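Below is a minimal sketch of a Grad-CAM visualization like the one described above, written with plain PyTorch hooks rather than the specific tooling used in the study, which the abstract does not name. The trained classifier `model`, the chosen convolutional `target_layer`, and the input tensor `image` (shape C x H x W) are all placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer):
    """Return a normalized heatmap of the regions driving the fracture prediction."""
    activations, gradients = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))
    try:
        score = model(image.unsqueeze(0))[0].max()        # logit of the predicted class
        model.zero_grad()
        score.backward()
        weights = gradients[0].mean(dim=(2, 3), keepdim=True)   # channel-wise importance
        cam = F.relu((weights * activations[0]).sum(dim=1))     # weighted activation map
        cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                            mode="bilinear", align_corners=False)
        return (cam / (cam.max() + 1e-8)).squeeze().detach()
    finally:
        h1.remove()
        h2.remove()

# heatmap = grad_cam(model, image, target_layer)  # overlay on the X-ray to inspect attention
```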
Author(s)
민재청
Issued Date
2024
Awarded Date
2024-02
Type
Dissertation
URI
https://oak.ulsan.ac.kr/handle/2021.oak/13027
http://ulsan.dcollection.net/common/orgView/200000729563
Alternative Author(s)
MIN JAE CHUNG
Affiliation
University of Ulsan
Department
Department of Medicine, Graduate School
Advisor
최종우
Degree
Master
Publisher
Department of Medicine, Graduate School, University of Ulsan
Language
kor
Rights
Theses of the University of Ulsan are protected by copyright.
Appears in Collections:
Medicine > 1. Theses (Master)
Access and License
  • Access status: Open