tiny_dnn 1.0.0
A header-only, dependency-free deep learning framework in C++11

adaptive gradient method

#include <optimizer.h>


Public Member Functions

  void update (const vec_t &dW, vec_t &W, bool parallelize)

Public Member Functions inherited from tiny_dnn::stateful_optimizer< 1 >

  void reset () override

Public Member Functions inherited from tiny_dnn::optimizer

  optimizer (const optimizer &)=default
  optimizer (optimizer &&)=default
  optimizer & operator= (const optimizer &)=default
  optimizer & operator= (optimizer &&)=default

Public Attributes

  float_t alpha

Additional Inherited Members

Protected Member Functions inherited from tiny_dnn::stateful_optimizer< 1 >

  vec_t & get (const vec_t &key)

Protected Attributes inherited from tiny_dnn::stateful_optimizer< 1 >

  std::unordered_map< const vec_t *, vec_t > E_ [N]
adaptive gradient method

J. Duchi, E. Hazan and Y. Singer, "Adaptive subgradient methods for online learning and stochastic optimization," The Journal of Machine Learning Research, pp. 2121-2159, 2011.

Implements tiny_dnn::optimizer.