
A deep convolutional neural network-based method for generating template-based individual-brain PET volumes of interest without spatial normalization in an Alzheimer's disease mouse model

Abstract
Purpose Although skull-stripping and brain region segmentation are essential for precise quantitative analysis of positron emission tomography (PET) images of mouse brains, a unified deep learning-based solution, particularly one that replaces spatial normalization, has remained a challenging problem in image processing. In addition, routine preclinical and clinical PET images are not always accompanied by corresponding MR images and the relevant VOIs. In this study, we propose a deep learning-based approach to resolve these issues. We generated both skull-stripping masks and individual-brain-specific volumes of interest (VOIs: cortex, hippocampus, striatum, thalamus, and cerebellum) using inverse spatial normalization (iSN) and deep convolutional neural network (deep CNN) models. Furthermore, building on these methods, we generated the target VOIs directly from 18F-fluorodeoxyglucose PET (18F-FDG PET) images, using only PET images without MR.
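For illustration, the core of the iSN step can be sketched as warping template-space VOI labels into an individual brain space with a precomputed inverse deformation field. The function name, array layout, and the use of SciPy below are illustrative assumptions, not the implementation used in this thesis:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def inverse_normalize_voi(template_voi, inverse_field):
        """Warp a template-space VOI label map into an individual brain space.

        template_voi  : (X, Y, Z) integer label map defined in template space.
        inverse_field : (3, Xi, Yi, Zi) array giving, for every voxel of the
                        individual image grid, its corresponding template-space
                        coordinate (the inverse of the subject-to-template warp).
        """
        coords = inverse_field.reshape(3, -1)
        # order=0 (nearest neighbour) keeps the integer VOI labels intact.
        warped = map_coordinates(template_voi, coords, order=0,
                                 mode="constant", cval=0)
        return warped.reshape(inverse_field.shape[1:])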
Materials and methods We applied our methods to a mouse model of Alzheimer’s disease carrying mutated amyloid precursor protein and presenilin-1 genes. Eighteen mice underwent T2-weighted MRI and 18F-FDG PET scans twice, before and after the administration of human immunoglobulin or antibody-based treatments. Manually traced brain masks and iSN-based target VOIs served as labels for training the CNN. For model performance evaluation, the Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), sensitivity (SEN), specificity (SPE), and positive predictive value (PPV) between the deep learning label masks and the deep learning-generated masks were evaluated, and the correlations of mean counts and standardized uptake value ratios (SUVRs) obtained with each mask (the DL-generated mask, the DL label mask, i.e., the inversely normalized template (ground-truth) VOIs, and the template-based ground-truth VOIs) were compared.
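As a reference for the evaluation metrics named above, a minimal NumPy/SciPy sketch of DSC, ASSD, SEN, SPE, and PPV between two binary masks could look like the following; the tooling and function name are assumptions, as the thesis abstract does not specify its evaluation code:

    import numpy as np
    from scipy.ndimage import binary_erosion, distance_transform_edt

    def segmentation_metrics(pred, label, spacing=(1.0, 1.0, 1.0)):
        """Overlap and surface-distance metrics between two binary 3-D masks."""
        pred, label = pred.astype(bool), label.astype(bool)
        tp = np.count_nonzero(pred & label)
        fp = np.count_nonzero(pred & ~label)
        fn = np.count_nonzero(~pred & label)
        tn = np.count_nonzero(~pred & ~label)

        dsc = 2 * tp / (2 * tp + fp + fn)   # Dice similarity coefficient
        sen = tp / (tp + fn)                # sensitivity
        spe = tn / (tn + fp)                # specificity
        ppv = tp / (tp + fp)                # positive predictive value

        # ASSD: mean distance from each surface voxel of one mask to the
        # nearest surface voxel of the other, averaged symmetrically.
        surf_p = pred & ~binary_erosion(pred)
        surf_l = label & ~binary_erosion(label)
        d_to_l = distance_transform_edt(~surf_l, sampling=spacing)
        d_to_p = distance_transform_edt(~surf_p, sampling=spacing)
        assd = (d_to_l[surf_p].sum() + d_to_p[surf_l].sum()) / (
            np.count_nonzero(surf_p) + np.count_nonzero(surf_l))

        return {"DSC": dsc, "ASSD": assd, "SEN": sen, "SPE": spe, "PPV": ppv}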
Results On visual assessment, our deep CNN-based method not only successfully generated brain parenchymal tissue masks and target VOIs from MR images, but also successfully generated target VOIs from PET images alone. In addition, the mean counts and SUVRs obtained in the target VOIs (i.e., cortex, hippocampus, striatum, thalamus, and cerebellum) showed significant concordance correlations (CCC > 0.97, P < 0.001) for both methods, i.e., using MR images or using PET images alone.
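The concordance correlation coefficient (CCC) reported above is Lin's coefficient, which penalizes both poor correlation and systematic bias between paired measurements. A small sketch (function name assumed) for paired quantification values:

    import numpy as np

    def lin_ccc(x, y):
        """Lin's concordance correlation coefficient for paired measurements,
        e.g., SUVRs from DL-generated VOIs vs. ground-truth VOIs."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        mx, my = x.mean(), y.mean()
        cov = ((x - mx) * (y - my)).mean()      # population covariance
        return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

A CCC of 1 indicates perfect agreement; unlike Pearson's r, the CCC drops when one method is systematically offset or scaled relative to the other.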
Conclusion We propose a unified deep CNN-based model that generates mouse brain parenchymal masks and inversely normalized VOI (iVOI) templates in individual brain spaces without laborious skull-stripping and spatial normalization. We further implemented a new deep CNN-based model that generates iVOI templates from PET images alone. Our methods yielded quantification results concordant with the ground-truth method, i.e., spatial normalization followed by VOI template-based quantification, in terms of mean counts and SUVRs in the target VOIs. In conclusion, we established a novel deep learning-based method for MR template-based VOI generation without spatial normalization for PET image quantification.
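For completeness, SUVR-based quantification with the generated VOIs reduces to a ratio of mean uptake values. The helpers below are a sketch; the choice of reference region (here a cerebellar mask, common in mouse 18F-FDG studies) is an assumption, as the abstract does not state which reference region was used:

    import numpy as np

    def mean_count(pet, voi_mask):
        """Mean PET counts within a binary VOI mask."""
        return pet[voi_mask.astype(bool)].mean()

    def suvr(pet, target_voi, reference_voi):
        """Standardized uptake value ratio: mean target-VOI uptake normalized
        by the reference-region uptake (e.g., cerebellum; assumed here)."""
        return mean_count(pet, target_voi) / mean_count(pet, reference_voi)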
Author(s)
서승연
Issued Date
2022
Awarded Date
2022-08
Type
dissertation
Keyword
mouse brain; deep convolutional neural network (CNN); inverse spatial normalization (iSN); template-based volume of interest (VOI)
URI
https://oak.ulsan.ac.kr/handle/2021.oak/9882
http://ulsan.dcollection.net/common/orgView/200000640677
Alternative Author(s)
Seung Yeon Seo
Affiliation
University of Ulsan
Department
Graduate School, Department of Medical Science, Biomedical Engineering Major
Advisor
주세경
오정수
Degree
Master
Publisher
University of Ulsan, Graduate School, Department of Medical Science, Biomedical Engineering Major
Language
eng
Rights
Theses of the University of Ulsan are protected by copyright.
Appears in Collections:
Medical Engineering > 1. Theses(Master)
Access and License
  • Access type: Open