netket.optimizer.Momentum

netket.optimizer.Momentum(learning_rate, beta=0.9, nesterov=False)

Momentum-based optimizer. The momentum update incorporates an exponentially weighted moving average over previous gradients to speed up descent (Qian, N., 1999). The momentum vector \(\mathbf{m}\) is initialized to zero. Given a stochastic estimate of the gradient of the cost function \(G(\mathbf{p})\), the updates for the parameter \(p_k\) and the corresponding component of the momentum \(m_k\) are
\[\begin{split}m^\prime_k &= \beta m_k + (1-\beta)G_k(\mathbf{p})\\
p^\prime_k &= p_k - \eta m^\prime_k\end{split}\]
Parameters
- learning_rate – The learning rate \(\eta\).
- beta – The momentum exponent \(\beta\) (default 0.9).
- nesterov – Whether to apply the Nesterov momentum correction (default False).
Examples
Momentum optimizer.
>>> from netket.optimizer import Momentum
>>> op = Momentum(learning_rate=0.01)
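For reference, the update equations above can be reproduced in a few lines of NumPy. The following standalone sketch (the helper momentum_step is illustrative only and not part of the NetKet API) performs a single momentum step, with the momentum vector initialized to zero as in the description above.

>>> import numpy as np
>>> def momentum_step(params, grad, m, learning_rate=0.01, beta=0.9):
...     # m' = beta * m + (1 - beta) * G(p): moving average of gradients
...     m = beta * m + (1 - beta) * grad
...     # p' = p - eta * m': descend along the smoothed gradient
...     return params - learning_rate * m, m
>>> params, m = np.array([1.0, -2.0]), np.zeros(2)
>>> grad = np.array([0.5, 0.1])  # stochastic gradient estimate G(p)
>>> params, m = momentum_step(params, grad, m)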