A-SDLM: an asynchronous Stochastic Learning Algorithm for fast distributed learning
We propose an asynchronous version of a stochastic second-order optimization algorithm for parallel distributed learning. Our proposed algorithm, namely Asynchronous Stochastic Diagonal Levenberg-Marquardt (A-SDLM), contains only a single hyper-parameter (i.e. the learning rate) while still retaining it...
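The abstract describes a stochastic diagonal Levenberg-Marquardt (SDLM) style update, in which each parameter's gradient is scaled by a damped estimate of the corresponding diagonal Hessian entry. A minimal sketch of that update rule, assuming a standard SDLM formulation (the damping constant `mu` and the toy quadratic objective below are illustrative assumptions, not details taken from the paper):

```python
def sdlm_step(w, grad, diag_hess, lr=0.1, mu=1e-3):
    """One SDLM-style update: scale each gradient component by the
    inverse of the (damped) diagonal Hessian estimate."""
    return [wi - lr * gi / (hi + mu) for wi, gi, hi in zip(w, grad, diag_hess)]

# Toy quadratic f(w) = sum(c_i * w_i^2): grad_i = 2*c_i*w_i, hess_ii = 2*c_i.
# The per-coordinate curvature scaling makes progress roughly uniform
# across coordinates despite the badly scaled objective.
c = [1.0, 10.0]
w = [1.0, 1.0]
for _ in range(50):
    grad = [2 * ci * wi for ci, wi in zip(c, w)]
    hess = [2 * ci for ci in c]
    w = sdlm_step(w, grad, hess)

loss = sum(ci * wi * wi for ci, wi in zip(c, w))
```

Because the curvature term absorbs the per-coordinate scale, a single learning rate suffices, which is consistent with the abstract's claim of a single hyper-parameter.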
Main Authors: Hani, M. K., Liew, S. S.
Format: Conference or Workshop Item
Published: 2015
Online Access: http://eprints.utm.my/id/eprint/59161/
Similar Items
- Distributed B-SDLM: accelerating the training convergence of deep neural networks through parallelism
  by: Liew, S. S., et al.
  Published: (2016)
- An optimized second order stochastic learning algorithm for neural network training
  by: Liew, S. S., et al.
  Published: (2016)
- An Asynchronous Distributed Dynamic Channel Assignment Scheme for Dense WLANs
  by: Drieberg, Micheal, et al.
  Published: (2008)
- Advances in Particle Swarm Algorithms in Asynchronous, Discrete and Multi-Objective Optimization
  by: Zuwairie, Ibrahim
  Published: (2014)
- Fast and efficient sequential learning algorithms using direct-link RBF networks
  by: Asirvadam, Vijanth Sagayan, et al.
  Published: (2003)