Extension and optimization of the perceptron convergence algorithm

Abstract
Artificial neural networks have been used in diverse areas and have played an important role. However, there are few results on the mathematical analysis of neural networks. In particular, as far as we know, there is no theoretical approach to constructing an order of training data that accelerates the convergence speed of neural network algorithms.

For such a construction, we consider the single-layer perceptron convergence algorithm and develop new convergence algorithms for different structures of the perceptron, together with their convergence proofs.

Based on the convergence proofs, we present an order of training data that accelerates the convergence speed. Finally, we provide numerical examples of our extended convergence theorems and of the proposed order of training data.
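The abstract takes the classical single-layer perceptron convergence algorithm as its starting point. As a minimal sketch, the standard Rosenblatt update rule is shown below; the dissertation's extended algorithms and its specific data-ordering scheme are not reproduced here, and the function and data names are illustrative only.

```python
# Minimal sketch of the classical single-layer perceptron convergence
# algorithm (Rosenblatt's update rule). The extended algorithms and the
# training-data ordering studied in the dissertation are not shown.
def perceptron_train(samples, max_epochs=1000):
    """samples: list of (x, y) with x a tuple of floats and y in {-1, +1}."""
    n = len(samples[0][0])
    w = [0.0] * n  # weight vector
    b = 0.0        # bias
    for _ in range(max_epochs):
        errors = 0
        # Note: the order in which samples are visited changes how many
        # updates are needed -- the acceleration idea studied in the thesis.
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified (or on the boundary)
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                errors += 1
        if errors == 0:  # converged: every sample is classified correctly
            return w, b
    return w, b

# Linearly separable toy data: label +1 iff x1 + x2 >= 2
data = [((0.0, 0.0), -1), ((1.0, 0.0), -1), ((0.0, 1.0), -1),
        ((1.0, 1.0), 1), ((2.0, 1.0), 1), ((1.0, 2.0), 1)]
w, b = perceptron_train(data)
```

For linearly separable data such as the toy set above, the perceptron convergence theorem guarantees that this loop terminates after finitely many updates.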
Author(s)
알모마니 레잇 모하마드 이사
Issued Date
2021
Awarded Date
2021-08
Type
Dissertation
URI
https://oak.ulsan.ac.kr/handle/2021.oak/5710
http://ulsan.dcollection.net/common/orgView/200000501538
Alternative Author(s)
ALMOMANI LAITH MOHAMMAD ISSA
Affiliation
University of Ulsan
Department
Department of Mathematics, Graduate School
Advisor
Prof. Sang-Mok Choo
Degree
Doctor
Publisher
Department of Mathematics, Graduate School, University of Ulsan
Language
eng
Appears in Collections:
Mathematics > 2. Theses (Ph.D)
