Momentum

Momentum-based optimizer. The momentum update incorporates an exponentially weighted moving average over previous gradients to speed up descent (Qian, N., 1999). The momentum vector $\mathbf{m}$ is initialized to zero. Given a stochastic estimate $G(\mathbf{p})$ of the gradient of the cost function, the updates for the parameter $p_k$ and the corresponding component $m_k$ of the momentum are

$$m'_k = \beta\, m_k + (1 - \beta)\, G_k(\mathbf{p}), \qquad p'_k = p_k - \eta\, m'_k,$$

where $\eta$ is the learning rate and $\beta$ the exponential decay rate.
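A minimal standalone sketch of this update rule in plain NumPy (for illustration only; this is not NetKet's internal implementation, and the class name MomentumSketch is hypothetical):

import numpy as np

class MomentumSketch:
    """Illustrative momentum optimizer following the update rule above."""

    def __init__(self, learning_rate=0.001, beta=0.9):
        self.eta = learning_rate
        self.beta = beta
        self.m = None  # momentum vector; created as zeros on first update

    def update(self, grad, params):
        if self.m is None:
            self.m = np.zeros_like(params)
        # m'_k = beta * m_k + (1 - beta) * G_k(p)
        self.m = self.beta * self.m + (1.0 - self.beta) * grad
        # p'_k = p_k - eta * m'_k
        return params - self.eta * self.m

    def reset(self):
        # Discard the accumulated momentum.
        self.m = None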

Class Constructor

Constructs a new Momentum optimizer.

Argument        Type            Description
learning_rate   float = 0.001   The learning rate $\eta$.
beta            float = 0.9     Momentum exponential decay rate $\beta$; should be in $[0, 1]$.

Examples

A simple Momentum optimizer:

>>> from netket.optimizer import Momentum
>>> op = Momentum(learning_rate=0.01)
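
The decay rate $\beta$ can also be passed explicitly (shown here with its default value):

>>> op = Momentum(learning_rate=0.01, beta=0.9)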

Class Methods

reset

Member function that resets the internal state of the optimizer (the accumulated momentum vector).
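
For example, to clear the accumulated momentum between independent optimization runs (reusing the op instance from the example above; assuming reset takes no arguments):

>>> op.reset()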