netket.optimizer.RmsProp

class netket.optimizer.RmsProp

RMSProp optimizer. RMSProp is a well-known update algorithm proposed by Geoff Hinton in his Neural Networks course notes. It corrects AdaGrad's monotonically shrinking learning rate by using an exponentially weighted moving average over past squared gradients instead of a cumulative sum. After initializing the vector \(\mathbf{s}\) to zero, \(s_k\) and the parameters \(p_k\) are updated as

\[\begin{split}s^\prime_k = \beta s_k + (1-\beta) G_k(\mathbf{p})^2 \\ p^\prime_k = p_k - \frac{\eta}{\sqrt{s^\prime_k}+\epsilon} G_k(\mathbf{p})\end{split}\]
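The following NumPy sketch illustrates a single step of this update rule outside of NetKet; the function name rmsprop_step and the example values are illustrative, not part of the API.

>>> import numpy as np
>>> def rmsprop_step(p, grad, s, eta=0.001, beta=0.9, eps=1e-7):
...     # exponentially weighted moving average of past squared gradients
...     s = beta * s + (1.0 - beta) * grad**2
...     # gradient step scaled component-wise by 1 / (sqrt(s) + eps)
...     p = p - eta / (np.sqrt(s) + eps) * grad
...     return p, s
>>> p = np.array([0.5, -0.3])       # parameters
>>> g = np.array([0.1, -0.2])       # gradient at p
>>> s = np.zeros_like(p)            # s is initialized to zero
>>> p, s = rmsprop_step(p, g, s, eta=0.02)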
__init__(self: netket._C_netket.optimizer.RmsProp, learning_rate: float = 0.001, beta: float = 0.9, epscut: float = 1e-07) → None

Constructs a new RmsProp optimizer.

Parameters
  • learning_rate – The learning rate \(\eta\)

  • beta – Exponential decay rate.

  • epscut – Small cutoff value.

Examples

Constructing a simple RmsProp optimizer:

>>> from netket.optimizer import RmsProp
>>> op = RmsProp(learning_rate=0.02)
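The decay rate and cutoff can be passed explicitly as well; the values below are simply the defaults listed above.

>>> op = RmsProp(learning_rate=0.001, beta=0.9, epscut=1e-07)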

Methods

__init__(self, learning_rate, beta, epscut) – Constructs a new RmsProp optimizer.

init(self, arg0, arg1)

reset(self) – Member function resetting the internal state of the optimizer.

update(*args, **kwargs) – Overloaded function.


init(self: netket._C_netket.optimizer.Optimizer, arg0: int, arg1: bool) → None

reset(self: netket._C_netket.optimizer.Optimizer) → None

Member function resetting the internal state of the optimizer.

update(*args, **kwargs)

Overloaded function.

  1. update(self: netket._C_netket.optimizer.Optimizer, grad: numpy.ndarray[float64[m, 1]], param: numpy.ndarray[float64[m, 1], flags.writeable]) -> None

Update param by applying a gradient-based optimization step using grad.

  2. update(self: netket._C_netket.optimizer.Optimizer, grad: numpy.ndarray[complex128[m, 1]], param: numpy.ndarray[complex128[m, 1], flags.writeable]) -> None

Update param by applying a gradient-based optimization step using grad.
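
A minimal usage sketch of the real-valued overload, assuming that param is modified in place (as suggested by the flags.writeable annotation) and that no prior call to init is required for standalone use:

>>> import numpy as np
>>> from netket.optimizer import RmsProp
>>> op = RmsProp(learning_rate=0.02)
>>> param = np.array([0.5, -0.3, 1.0])   # parameters to be updated in place
>>> grad = np.array([0.1, -0.2, 0.05])   # gradient of the objective at param
>>> op.update(grad, param)

The complex-valued overload is called the same way, with grad and param given as complex128 vectors.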