
A Study on Image Segmentation Using Deep Learning in Medical Images

Abstract
Medical image segmentation plays a critical role in computer-aided diagnosis, image quantification, and surgical planning: it identifies the pixels of homogeneous regions such as organs and lesions and provides important information about their shapes and volumes. However, it is one of the most difficult and tedious tasks for humans to perform consistently. A substantial body of research has therefore proposed various semi-automatic or automatic segmentation methods, which rely mainly on conventional image processing and machine learning. These methods, however, can be vulnerable to variations in image acquisition, anatomy, and disease. Because of these limitations of conventional segmentation methods, many researchers continue to seek more robust approaches to medical image segmentation.
In recent years, deep learning models have been widely applied and popularized in computer vision, and this success has quickly carried over to medical imaging. In particular, deep learning has achieved a leap in precision and in robustness to anatomical and disease variation. Several deep convolutional neural network (CNN) architectures have been proposed, such as the residual network (ResNet), the Visual Geometry Group network (VGG), the fully convolutional network (FCN), and U-Net. These models provide not only state-of-the-art performance for image classification, segmentation, object detection, and tracking, but also a new perspective on image processing. At present, deep learning can therefore assist radiologists and surgeons by segmenting various anatomic structures and multiple abnormalities in computed tomography (CT) or magnetic resonance imaging (MRI) images as a reference.
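The core idea shared by FCN and U-Net, an encoder that reduces resolution and a decoder that restores it, fused through skip connections, can be illustrated with a minimal NumPy sketch. This is not a trained network: the pooling/upsampling operations stand in for learned convolutions, and the final channel averaging is a placeholder for the model's prediction head.

```python
import numpy as np

def downsample(x):
    # 2x2 max pooling: halves spatial resolution (encoder path).
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x):
    # Nearest-neighbour upsampling: doubles spatial resolution (decoder path).
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_unet_pass(image):
    # Encoder: keep the high-resolution feature map for the skip connection.
    skip = image
    bottleneck = downsample(image)
    # Decoder: upsample and fuse with the skip connection, as U-Net does
    # via channel concatenation (here: simple channel stacking).
    decoded = upsample(bottleneck)
    fused = np.stack([decoded, skip], axis=0)  # shape (2, H, W): two "channels"
    # A real model applies learned convolutions; we just average the channels
    # and threshold to produce a binary segmentation mask.
    return (fused.mean(axis=0) > 0.5).astype(np.uint8)

mask = toy_unet_pass(np.random.rand(8, 8))
print(mask.shape)  # (8, 8): the output mask matches the input resolution
```

The skip connection is what lets such architectures recover sharp object boundaries that would otherwise be lost in the low-resolution bottleneck.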
In this research, we conducted various experiments to find and evaluate deep learning based semantic segmentation models that are adequate for medical images, judged by their accuracy in the clinical context. The aim of this study was two-fold: 1) identifying and/or developing a deep learning based semantic segmentation model, and the properties of an imaging modality, that are adequate for the clinical context; and 2) solving specific tasks, including smart labeling with humans in the loop, fine-tuning models with different label levels on imbalanced datasets, and comparing deep learning with human segmentation where these models are developed and applied. To meet these objectives, we proposed fully automatic segmentation networks built from various kinds of CNN models, accounting for organ-, image modality-, and image reconstruction-specific variations. To this end, segmentation of glioblastoma and acute stroke infarct in brain MRI, of the mandible and maxillary sinus in cone-beam computed tomography (CBCT), of the breast and other tissues in MRI, and of pancreatic cancer in contrast-enhanced CT were all performed in actual clinical settings. In general, for thicker slices, 2D semantic segmentation performs better. In addition, robust segmentation is sensitive to pre-processing, requiring image normalization and various augmentations. Because modern graphics processing units (GPUs) lack the memory for full-volume 3D semantic segmentation, cascaded or patch-based semantic segmentation gives better results. Anatomic variation can be learned readily by semantic segmentation, but the disease variation of cancer is hard to learn; size-invariant semantic segmentation therefore remains an important issue in medical image segmentation. Variation in contrast agent uptake can also degrade the overall performance of semantic segmentation.
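The patch-based workaround for GPU memory limits described above can be sketched as follows: normalize the volume, tile it into cubes, segment each cube, and stitch the predictions back into a full-size mask. The `fake_model` function here is a placeholder (a fixed intensity threshold) standing in for a trained 3D CNN, and the non-overlapping tiling is a simplification; real pipelines typically use overlapping patches with blended borders.

```python
import numpy as np

def normalize(volume):
    # Z-score intensity normalization, a typical pre-processing step.
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def fake_model(patch):
    # Placeholder for a trained 3D CNN: thresholds normalized intensity.
    return (patch > 0.5).astype(np.uint8)

def patch_based_segmentation(volume, patch=16):
    # Tile the volume into non-overlapping cubes small enough to fit in
    # GPU memory, segment each cube, and write the prediction back into
    # a full-size output mask.
    out = np.zeros(volume.shape, dtype=np.uint8)
    d, h, w = volume.shape
    for z in range(0, d, patch):
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                cube = volume[z:z + patch, y:y + patch, x:x + patch]
                out[z:z + patch, y:y + patch, x:x + patch] = fake_model(cube)
    return out

vol = np.random.rand(32, 32, 32)
mask = patch_based_segmentation(normalize(vol))
print(mask.shape)  # (32, 32, 32): full-size mask reassembled from patches
```

A cascaded variant follows the same pattern but first runs a coarse low-resolution model to locate the organ, then applies the patch-based model only inside that region of interest.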
In multi-center evaluation, subtle variations, including differences in vendors' imaging protocols and high noise levels at different centers, can make it difficult to train robust semantic segmentation models. Furthermore, because labeling for semantic segmentation is very tedious and time-consuming, deep learning based smart labeling is needed.
Based on these issues, we developed and evaluated various applications of semantic segmentation in medical images, including smart labeling, robust radiomics analysis, disease pattern segmentation, and automated segmentation.
We conclude that adequate deep learning based semantic segmentation of medical images can improve segmentation quality, which is helpful for computer-aided diagnosis (CAD), image quantification, and surgical planning in actual clinical settings. Medical image segmentation and its applications may thus provide practical utility to many physicians and patients without requiring expertise in sectional anatomy.
Author(s)
함성원
Issued Date
2021
Awarded Date
2021-02
Type
Dissertation
Keyword
smart labeling; cascaded convolutional neural network; convolutional neural network; deep learning; magnetic resonance imaging; computed tomography; semantic segmentation
URI
https://oak.ulsan.ac.kr/handle/2021.oak/5903
http://ulsan.dcollection.net/common/orgView/200000367611
Alternative Author(s)
Ham, Sungwon
Affiliation
University of Ulsan
Department
Graduate School, Department of Medicine (Biomedical Engineering Major)
Advisor
김호성, 김남국
Degree
Doctor
Publisher
Graduate School, University of Ulsan, Department of Medicine (Biomedical Engineering Major)
Language
eng
Rights
Theses of the University of Ulsan are protected by copyright.
Appears in Collections:
Mechanical Engineering > 2. Theses (Ph.D)

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.