A-SDLM: an asynchronous Stochastic Learning Algorithm for fast distributed learning
We propose an asynchronous version of a stochastic second-order optimization algorithm for parallel distributed learning. Our proposed algorithm, namely Asynchronous Stochastic Diagonal Levenberg-Marquardt (A-SDLM), contains only a single hyper-parameter (i.e., the learning rate) while still retaining it...
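The abstract builds on the classic stochastic diagonal Levenberg-Marquardt (SDLM) rule, in which each parameter's step is scaled by a damped estimate of the diagonal Hessian. As a hedged illustration only (the function and parameter names below are ours, not the paper's API, and the paper's single-hyper-parameter variant may differ), a minimal sketch of that base update is:

```python
import numpy as np

def sdlm_step(w, grad, diag_hess, lr=0.01, damping=1e-2):
    """One stochastic diagonal Levenberg-Marquardt (SDLM) update.

    Each parameter gets an individual step size, scaled by the inverse
    of a damped diagonal Hessian estimate. This is the classic SDLM rule
    that A-SDLM builds on; names here are illustrative assumptions.
    """
    return w - lr * grad / (np.abs(diag_hess) + damping)

# Toy example: L(w) = 0.5 * ||w||^2 has gradient w and diagonal Hessian 1.
w = np.array([2.0, -1.0])
for _ in range(100):
    grad = w                      # dL/dw
    diag_h = np.ones_like(w)      # exact diagonal Hessian for this loss
    w = sdlm_step(w, grad, diag_h, lr=0.5)
print(w)  # converges toward the minimum at [0, 0]
```

In an asynchronous distributed setting, multiple workers would compute `grad` and `diag_hess` on their own mini-batches and apply such updates to a shared parameter vector without waiting for each other.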
Main Authors: Hani, M. K.; Liew, S. S.
Format: Conference or Workshop Item
Published: 2015
Online Access: http://eprints.utm.my/id/eprint/59161/
Similar Items
- Distributed B-SDLM: accelerating the training convergence of deep neural networks through parallelism
  Authors: Liew, S. S., et al.
  Published: (2016)
- An optimized second order stochastic learning algorithm for neural network training
  Authors: Liew, S. S., et al.
  Published: (2016)
- An Asynchronous Distributed Dynamic Channel Assignment Scheme for Dense WLANs
  Authors: Drieberg, Micheal, et al.
  Published: (2008)
- Advances in Particle Swarm Algorithms in Asynchronous, Discrete and Multi-Objective Optimization
  Authors: Zuwairie, Ibrahim
  Published: (2014)
- Fast and efficient sequential learning algorithms using direct-link RBF networks
  Authors: Asirvadam, Vijanth Sagayan, et al.
  Published: (2003)