# netket.optimizer.Sgd

class netket.optimizer.Sgd

Simple Stochastic Gradient Descent Optimizer. Stochastic gradient descent is one of the most popular optimization methods in machine learning. Given a stochastic estimate $$G(\mathbf{p})$$ of the gradient of the cost function, it performs the update:

$p^\prime_k = p_k - \eta \, G_k(\mathbf{p}),$

where $$\eta$$ is the so-called learning rate. NetKet also implements two extensions to plain SGD: $$L_2$$ regularization, and a decay factor $$\gamma \leq 1$$ for the learning rate, such that at iteration $$n$$ the learning rate is $$\eta \gamma^n$$.
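As a minimal sketch of this update rule (not NetKet's internal implementation; the function name and the convention that the $$L_2$$ term enters the update as the gradient of $$\lambda \lVert \mathbf{p} \rVert^2$$ are assumptions made here for illustration):

import numpy as np

def sgd_step(params, grad, eta, l2_reg=0.0, decay_factor=1.0, n=0):
    # Learning rate decayed to eta * gamma^n at iteration n.
    lr = eta * decay_factor**n
    # Assumed L2 convention: the penalty l2_reg * ||params||^2 contributes
    # 2 * l2_reg * params to the gradient estimate.
    return params - lr * (grad + 2.0 * l2_reg * params)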

__init__(self: netket._C_netket.optimizer.Sgd, learning_rate: float, l2_reg: float = 0, decay_factor: float = 1.0) → None

Constructs a new Sgd optimizer.

Parameters
• learning_rate – The learning rate $$\eta$$.

• l2_reg – The amount of $$L_2$$ regularization.

• decay_factor – The decay factor $$\gamma$$.

Examples

Simple SGD optimizer.

>>> from netket.optimizer import Sgd
>>> op = Sgd(learning_rate=0.05)


Methods

| Method | Description |
| --- | --- |
| `__init__(self, learning_rate, l2_reg, …)` | Constructs a new Sgd optimizer. |
| `init(self, arg0, arg1)` |  |
| `reset(self)` | Member function resetting the internal state of the optimizer. |
| `update(*args, **kwargs)` | Overloaded function. |

init(self: netket._C_netket.optimizer.Optimizer, arg0: int, arg1: bool) → None

reset(self: netket._C_netket.optimizer.Optimizer) → None

Member function resetting the internal state of the optimizer.

update(*args, **kwargs)

Overloaded function.
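
Since the decayed learning rate $$\eta \gamma^n$$ depends on the iteration count, a natural pattern when reusing the same optimizer for a fresh run is to reset it first (whether the iteration counter is part of the state cleared by reset is an assumption; the documentation only states that reset clears the optimizer's internal state):

>>> op.reset()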