Extension and optimization of the perceptron convergence algorithm
- Abstract
- Artificial neural networks have been applied in diverse areas and play an important role. However, there are few results on the mathematical analysis of neural networks. In particular, as far as we know, there is no theoretical approach to constructing an order of training data that accelerates the convergence speed of neural network algorithms.
For such a construction, we consider the single-layer perceptron convergence algorithm and develop new convergence algorithms for different structures of the perceptron, together with their convergence proofs.
Based on these proofs, we present an order of training data that accelerates the convergence speed. Finally, we provide numerical examples of our extended convergence theorems and the proposed order of training data.
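For reference, the classic single-layer perceptron convergence algorithm that the abstract builds on can be sketched as follows. This is a minimal illustration of the standard update rule (cycle through the samples and, on each misclassification, set w ← w + yᵢxᵢ for labels in {−1, +1}); the function name and toy data are illustrative, not taken from the dissertation.

```python
import numpy as np

def perceptron_train(X, y, max_epochs=100):
    """Standard perceptron convergence algorithm: repeat over the
    training data, updating the weights only on misclassified samples,
    until an epoch passes with no mistakes (labels must be in {-1, +1})."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:  # misclassified or on the boundary
                w += yi * xi             # perceptron update rule
                mistakes += 1
        if mistakes == 0:  # converged: every sample classified correctly
            break
    return w

# Linearly separable toy data; the bias is folded in as a constant feature.
X = [[1, 2, 1], [2, 3, 1], [-1, -2, 1], [-2, -1, 1]]
y = [1, 1, -1, -1]
w = perceptron_train(X, y)
```

By the perceptron convergence theorem, this loop terminates in finitely many updates whenever the data are linearly separable; the order in which samples are visited affects how many updates are needed, which is the quantity the dissertation's training-data ordering aims to reduce.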
- Author(s)
- 알모마니 레잇 모하마드 이사
- Issued Date
- 2021
- Awarded Date
- 2021-08
- Type
- Dissertation
- URI
- https://oak.ulsan.ac.kr/handle/2021.oak/5710
http://ulsan.dcollection.net/common/orgView/200000501538
- Access and License
-
- File List
-
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.