netket.optimizer.Momentum

class netket.optimizer.Momentum

Momentum-based Optimizer. The momentum update incorporates an exponentially weighted moving average over previous gradients to speed up descent (Qian, 1999). The momentum vector \(\mathbf{m}\) is initialized to zero. Given a stochastic estimate of the gradient of the cost function \(G(\mathbf{p})\), the updates for the parameter \(p_k\) and the corresponding component of the momentum \(m_k\) are

\[\begin{split}m^\prime_k &= \beta m_k + (1-\beta)G_k(\mathbf{p})\\ p^\prime_k &= p_k - \eta m^\prime_k\end{split}\]
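For illustration, a minimal NumPy sketch of a single step of this rule (a hypothetical helper, not the NetKet implementation):

>>> import numpy as np
>>> def momentum_step(param, grad, m, eta=0.001, beta=0.9):
...     m = beta * m + (1 - beta) * grad   # m'_k = beta * m_k + (1 - beta) * G_k(p)
...     return param - eta * m, m          # p'_k = p_k - eta * m'_k
>>> p, m = np.array([1.0, -0.5]), np.zeros(2)   # momentum starts at zero
>>> g = np.array([0.2, -0.1])                   # stochastic gradient estimate
>>> p, m = momentum_step(p, g, m)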
__init__(self: netket._C_netket.optimizer.Momentum, learning_rate: float = 0.001, beta: float = 0.9) → None

Constructs a new Momentum optimizer.

Parameters
  • learning_rate – The learning rate \(\eta\)

  • beta – Momentum exponential decay rate, should be in [0,1].

Examples

Momentum optimizer.

>>> from netket.optimizer import Momentum
>>> op = Momentum(learning_rate=0.01)

Methods

__init__(self, learning_rate, beta)

Constructs a new Momentum optimizer.

init(self, arg0, arg1)

reset(self)

Member function resetting the internal state of the optimizer.

update(*args, **kwargs)

Overloaded function.

init(self: netket._C_netket.optimizer.Optimizer, arg0: int, arg1: bool) → None

reset(self: netket._C_netket.optimizer.Optimizer) → None

Member function resetting the internal state of the optimizer.

update(*args, **kwargs)

Overloaded function.

  1. update(self: netket._C_netket.optimizer.Optimizer, grad: numpy.ndarray[float64[m, 1]], param: numpy.ndarray[float64[m, 1], flags.writeable]) -> None

Update param by applying a gradient-based optimization step using grad.

  2. update(self: netket._C_netket.optimizer.Optimizer, grad: numpy.ndarray[complex128[m, 1]], param: numpy.ndarray[complex128[m, 1], flags.writeable]) -> None

Update param by applying a gradient-based optimization step using grad.
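The overload is selected by the dtype of grad, and param is modified in place. A hedged usage sketch (assuming the optimizer's internal state has already been set up, e.g. by the driver or by init, whose integer and boolean arguments are not documented here):

>>> import numpy as np
>>> from netket.optimizer import Momentum
>>> op = Momentum(learning_rate=0.01, beta=0.9)
>>> grad = np.array([[0.1], [-0.2]])    # float64 column vector -> first overload
>>> param = np.array([[1.0], [0.5]])    # writeable array, updated in place
>>> op.update(grad, param)              # requires initialized internal state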