# Momentum

Momentum-based optimizer. The momentum update incorporates an exponentially weighted moving average over previous gradients to speed up descent (Qian, 1999). The momentum vector $\mathbf{m}$ is initialized to zero. Given a stochastic estimate of the gradient of the cost function $G(\mathbf{p})$, the updates for the parameter $p_k$ and the corresponding component of the momentum $m_k$ are

$$m_k \rightarrow \beta m_k + (1 - \beta) G_k(\mathbf{p}),$$

$$p_k \rightarrow p_k - \eta m_k,$$

where $\eta$ is the learning rate and $\beta$ is the momentum exponential decay rate.
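For concreteness, here is a minimal NumPy sketch of this update rule. It is illustrative only, not NetKet's internal implementation; the function name `momentum_step` and the array-based interface are assumptions for this example.

```python
import numpy as np

def momentum_step(params, grad, m, learning_rate=0.001, beta=0.9):
    """One momentum update step (illustrative sketch, not NetKet internals).

    params, grad, and m are NumPy arrays of the same shape; m holds the
    exponentially weighted moving average of past gradients.
    """
    # Update the moving average of the gradient: m <- beta*m + (1-beta)*G(p)
    m = beta * m + (1.0 - beta) * grad
    # Descend along the averaged gradient: p <- p - eta*m
    params = params - learning_rate * m
    return params, m

# Usage: the momentum vector starts at zero, as stated above.
p = np.array([1.0, -2.0])
m = np.zeros_like(p)
g = np.array([0.5, 0.1])  # stochastic gradient estimate G(p)
p, m = momentum_step(p, g, m, learning_rate=0.01)
```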

## Class Constructor

Constructs a new Momentum optimizer.

| Argument        | Type            | Description                                              |
|-----------------|-----------------|----------------------------------------------------------|
| `learning_rate` | `float = 0.001` | The learning rate $\eta$.                                |
| `beta`          | `float = 0.9`   | Momentum exponential decay rate; should be in $[0, 1]$.  |

### Examples

Construct a Momentum optimizer with a custom learning rate:

>>> from netket.optimizer import Momentum
>>> op = Momentum(learning_rate=0.01)



## Class Methods

### reset

Resets the internal state of the optimizer; in particular, the momentum vector $\mathbf{m}$ is cleared back to zero.
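A minimal usage example; calling `reset` with no arguments is an assumption here, since the method's signature is not shown in this section:

>>> from netket.optimizer import Momentum
>>> op = Momentum(learning_rate=0.01)
>>> op.reset()  # clears the accumulated momentum (assumed zero-argument call)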