AdaDelta

AdaDelta Optimizer. Like RMSProp, AdaDelta corrects the monotonic decay of learning rates associated with AdaGrad, while additionally eliminating the need to choose a global learning rate $\eta$. The NetKet naming convention of the parameters strictly follows the one introduced in the original paper; here $E[g^2]$ is equivalent to the vector $\mathbf{s}$ from RMSProp. $E[g^2]$ and $E[\Delta x^2]$ are initialized as zero vectors.
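For reference, a sketch of the update in the notation of the original paper (Zeiler, 2012): at iteration $k$, with stochastic gradient $G_k(\mathbf{p})$ of the parameters $\mathbf{p}$, and with $\rho$ and $\epsilon$ corresponding to the rho and epscut arguments below,

$$
\begin{aligned}
E[g^2]_k &= \rho\, E[g^2]_{k-1} + (1-\rho)\, G_k(\mathbf{p})^2, \\
\Delta \mathbf{p}_k &= -\frac{\sqrt{E[\Delta x^2]_{k-1} + \epsilon}}{\sqrt{E[g^2]_k + \epsilon}}\, G_k(\mathbf{p}), \\
E[\Delta x^2]_k &= \rho\, E[\Delta x^2]_{k-1} + (1-\rho)\, \Delta \mathbf{p}_k^2,
\end{aligned}
$$

so that the parameters are updated as $\mathbf{p}_{k+1} = \mathbf{p}_k + \Delta \mathbf{p}_k$.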

Class Constructor

Constructs a new AdaDelta optimizer.

| Argument | Type        | Description                                |
|----------|-------------|--------------------------------------------|
| rho      | float=0.95  | Exponential decay rate $\rho$, in [0,1].   |
| epscut   | float=1e-07 | Small cutoff $\epsilon$.                   |
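To make the role of rho and epscut concrete, here is a minimal NumPy sketch of a single AdaDelta step as written above (illustrative only, not NetKet's implementation; the helper adadelta_step and its signature are hypothetical):

>>> import numpy as np
>>> def adadelta_step(params, grad, Eg2, Edx2, rho=0.95, epscut=1e-7):
...     # running average of squared gradients, E[g^2]
...     Eg2 = rho * Eg2 + (1 - rho) * grad**2
...     # step scaled by the ratio of accumulated update and gradient magnitudes
...     dx = -np.sqrt(Edx2 + epscut) / np.sqrt(Eg2 + epscut) * grad
...     # running average of squared updates, E[dx^2]
...     Edx2 = rho * Edx2 + (1 - rho) * dx**2
...     return params + dx, Eg2, Edx2

Both running averages start as zero vectors, matching the initialization described above.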

Examples

Simple AdaDelta optimizer.

>>> from netket.optimizer import AdaDelta
>>> op = AdaDelta()
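
The defaults can also be overridden through the keyword arguments listed in the table above (the values here are arbitrary):

>>> op = AdaDelta(rho=0.9, epscut=1e-6)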

Class Methods

reset

Member function that resets the internal state of the optimizer (the accumulated running averages).
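
For example, to clear the accumulated averages before an independent optimization run (assuming the method takes no arguments):

>>> from netket.optimizer import AdaDelta
>>> op = AdaDelta()
>>> op.reset()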