#### AdaDelta

AdaDelta Optimizer. Like RMSProp, AdaDelta corrects the monotonic decay of learning rates associated with AdaGrad, while additionally eliminating the need to choose a global learning rate $\eta$. The NetKet naming convention of the parameters strictly follows the one introduced in the original paper; here $E[g^2]$ is equivalent to the vector $\mathbf{s}$ from RMSProp. $E[g^2]$ and $E[\Delta x^2]$ are initialized as zero vectors.
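For reference, the update rules from the original AdaDelta paper, written in the notation above (with $\rho$ the decay rate and $\epsilon$ the small cutoff), are:

$$
\begin{aligned}
E[g^2]_t &= \rho\, E[g^2]_{t-1} + (1-\rho)\, g_t^2,\\
\Delta x_t &= -\frac{\sqrt{E[\Delta x^2]_{t-1} + \epsilon}}{\sqrt{E[g^2]_t + \epsilon}}\; g_t,\\
E[\Delta x^2]_t &= \rho\, E[\Delta x^2]_{t-1} + (1-\rho)\, (\Delta x_t)^2,\\
x_{t+1} &= x_t + \Delta x_t.
\end{aligned}
$$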

## Class Constructor

Constructs a new AdaDelta optimizer.

| Argument | Type | Description |
|---|---|---|
| rho | float = 0.95 | Exponential decay rate, in [0,1]. |
| epscut | float = 1e-07 | Small $\epsilon$ cutoff. |

### Examples

>>> from netket.optimizer import AdaDelta
>>> op = AdaDelta()
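
The defaults can also be overridden at construction; as a minimal sketch, passing the keyword arguments listed in the table above:

>>> op = AdaDelta(rho=0.9, epscut=1e-6)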