# Using JAX as a backend in NetKet - Feature Preview for v3.0¶

In this tutorial we will show how differentiable functions (for example deep networks) written in JAX can be used as variational quantum states in NetKet.

This feature will be available in the upcoming major release (version 3.0). While version 3.0 is still in beta development, users can already try this feature.

## Prerequisites¶

To try out the integration with JAX, you first need to fetch the beta version of NetKet v3. We recommend using a virtual environment (either a Python environment or a conda environment), for example:

```
python3 -m venv nk_env
source nk_env/bin/activate
pip install --pre -U netket
```

## Defining the quantum system¶

NetKet allows for full flexibility in defining quantum systems, for example when tackling a ground-state search problem. While a few pre-defined Hamiltonians are provided, it is relatively straightforward to implement new quantum operators/Hamiltonians.

In the following, we consider the case of a transverse-field Ising model defined on a graph with random edges.
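Concretely, with \(J\) the coupling constant used in the code below, the Hamiltonian we construct is

\[ H = \sum_{i} \sigma^{x}_{i} + J \sum_{\langle i,j \rangle} \sigma^{z}_{i} \sigma^{z}_{j}, \]

where the second sum runs over the randomly drawn edges \(\langle i,j \rangle\) of the graph.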

```
[1]:
```

```
import netket as nk

# Define a random graph
n_nodes = 10
n_edges = 20
from numpy.random import choice
rand_edges = [choice(n_nodes, size=2, replace=False).tolist() for i in range(n_edges)]
graph = nk.graph.Graph(nodes=[i for i in range(n_nodes)], edges=rand_edges)

# Define the local Hilbert space
hi = nk.hilbert.Spin(s=0.5) ** graph.n_nodes

# Define the Hamiltonian as a sum of local operators
from netket.operator import LocalOperator as Op

# Pauli matrices
sx = [[0, 1], [1, 0]]
sz = [[1, 0], [0, -1]]

# Defining the Hamiltonian as a LocalOperator acting on the given Hilbert space
ha = Op(hi)

# Adding a transverse-field term on each node of the graph
for i in range(graph.n_nodes):
    ha += Op(hi, sx, [i])

# Adding nearest-neighbor interactions on the edges of the given graph
from numpy import kron
J = 0.5
for edge in graph.edges():
    ha += J * Op(hi, kron(sz, sz), edge)
```

## Defining a JAX module to be used as a wave function¶

We now want to define a suitable JAX network to be used as a wave-function ansatz. To simplify the discussion, we consider here a simple single-layer fully connected network with complex weights and a \(\tanh\) activation function. These are easy to define in JAX, using for example a model built with STAX. The only requirement is that these networks take as input JAX arrays of shape `(batch_size, n)`, where `batch_size` is an arbitrary batch size and `n` is the number of quantum degrees of freedom (for example, the number of spins in the previous example). Notice that regardless of the dimensionality of the problem, the last dimension is always flattened into a single index.
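The expected input layout can be illustrated with a small NumPy sketch (illustrative only; the sizes and the encoding of spins as \(\pm 1\) are assumptions matching the example above):

```python
import numpy as np

# Illustrative only: a batch of 4 configurations of n = 10 spins,
# encoded as +/-1, in the (batch_size, n) layout the network expects.
rng = np.random.default_rng(0)
batch = rng.choice([-1.0, 1.0], size=(4, 10))
print(batch.shape)  # (4, 10)

# A batch of 2D lattices of shape (4, 5, 2) would be flattened along the
# trailing dimensions before being fed to the model:
lattice_batch = batch.reshape(4, 5, 2)
flat = lattice_batch.reshape(lattice_batch.shape[0], -1)
print(flat.shape)  # (4, 10)
```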

```
[2]:
```

```
import jax
from jax.experimental import stax

# We define a custom layer that performs the sum of its inputs
def SumLayer():
    def init_fun(rng, input_shape):
        output_shape = (-1, 1)
        return output_shape, ()

    def apply_fun(params, inputs, **kwargs):
        return inputs.sum(axis=-1)

    return init_fun, apply_fun

# We construct a fully connected network with tanh activation
model = stax.serial(
    stax.Dense(
        2 * graph.n_nodes,
        W_init=nk.nn.initializers.normal(stddev=0.1, dtype=complex),
        b_init=nk.nn.initializers.normal(stddev=0.1, dtype=complex),
    ),
    stax.Tanh,
    SumLayer(),
)

# Alternatively, we could have used flax, which would have been easier:
# class Model(nk.nn.Module):
#     @nk.nn.compact
#     def __call__(self, x):
#         x = nk.nn.Dense(
#             features=2 * x.shape[-1],
#             dtype=complex,
#             kernel_init=nk.nn.initializers.normal(stddev=0.01),
#             bias_init=nk.nn.initializers.normal(stddev=0.01),
#         )(x)
#         x = jax.numpy.tanh(x)
#         return jax.numpy.sum(x, axis=-1)
# model = Model()

# Alternative #2: we could have used the built-in RBM model:
# model = nk.models.RBM(alpha=2, use_visible_bias=False, dtype=np.complex128)
```
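To make explicit what this ansatz computes, here is a plain-NumPy sketch of the function it evaluates, \(\log\psi(\sigma) = \sum_k \tanh(W\sigma + b)_k\) with complex weights. This is standalone illustrative code, not NetKet's implementation; the weight values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10       # number of spins
alpha = 2    # hidden units per spin, as in the stax model's 2 * n_nodes

# Complex weights and biases (illustrative values, small like the
# stddev=0.1 initialization above).
W = 0.1 * (rng.normal(size=(n, alpha * n)) + 1j * rng.normal(size=(n, alpha * n)))
b = 0.1 * (rng.normal(size=(alpha * n,)) + 1j * rng.normal(size=(alpha * n,)))

def log_psi(sigma):
    """Log-amplitude for a batch of configurations of shape (batch, n)."""
    return np.tanh(sigma @ W + b).sum(axis=-1)

sigma = rng.choice([-1.0, 1.0], size=(3, n))
print(log_psi(sigma).shape)  # one complex log-amplitude per configuration
```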

## Train the neural network to find an approximate ground state¶

In order to perform Variational Monte Carlo, we further need to specify a suitable sampler (to compute expectation values over the variational state) as well as an optimizer. In the following we adopt Stochastic Gradient Descent coupled with quantum natural gradients (this scheme is known in the VMC literature as Stochastic Reconfiguration).
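In its common formulation, Stochastic Reconfiguration updates the parameters \(\theta\) by solving a regularized linear system built from the quantum geometric tensor \(S\) (the `diag_shift` parameter used below plays the role of the regularization \(\lambda\)):

\[ (S + \lambda \mathbb{1}) \, \delta\theta = \nabla_\theta \langle H \rangle, \qquad \theta \leftarrow \theta - \eta \, \delta\theta, \]

with \(S_{kk'} = \langle O_k^* O_{k'} \rangle - \langle O_k^* \rangle \langle O_{k'} \rangle\) and \(O_k = \partial_{\theta_k} \log\psi\), all expectation values being estimated over the Monte Carlo samples.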

```
[3]:
```

```
# Defining a sampler that performs local moves
# NetKet automatically dispatches here to MCMC sampler written using JAX types
sa = nk.sampler.MetropolisLocal(hilbert=hi, n_chains=2)
```
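The "local moves" performed by this sampler can be sketched with a toy Metropolis loop (hypothetical standalone code, not NetKet's sampler; the `log_prob` stand-in below is an arbitrary choice favoring aligned spins):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10

def log_prob(sigma):
    # Hypothetical stand-in for log|psi(sigma)|^2.
    return 0.5 * sigma.sum() ** 2 / n

sigma = rng.choice([-1.0, 1.0], size=n)
for _ in range(1000):
    i = rng.integers(n)              # pick one site: a "local" move
    proposal = sigma.copy()
    proposal[i] *= -1.0              # flip that single spin
    # Metropolis acceptance rule on the probability ratio
    if np.log(rng.random()) < log_prob(proposal) - log_prob(sigma):
        sigma = proposal
```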

```
[4]:
```

```
# Construct the variational state
vs = nk.variational.MCState(sa, model, n_samples=1000)
```

```
[5]:
```

```
# Using Sgd
# Also dispatching to JAX optimizer
op = nk.optimizer.Sgd(learning_rate=0.02)
# Using Stochastic Reconfiguration a.k.a. quantum natural gradient
# Also dispatching to a pure JAX version
sr = nk.optimizer.SR(diag_shift=0.1)
# Create the Variational Monte Carlo instance to learn the ground state
vmc = nk.VMC(
    hamiltonian=ha, optimizer=op, variational_state=vs, sr=sr
)
```

### Running the training loop¶

The latest version of NetKet also allows for finer control of the VMC loop. In the simplest case, one can just iterate through the vmc object and print the current value of the energy. More sophisticated output schemes based on TensorBoard have also been implemented, but are not discussed in this tutorial.
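The `iter(n_steps, step)` pattern can be sketched with a hypothetical mock driver (standalone illustrative code, not NetKet's implementation): it advances in chunks of `step` optimization steps and yields the step count once per chunk.

```python
# Hypothetical mock illustrating the iter(n_steps, step) pattern of
# NetKet drivers: run n_steps steps total, yielding every `step` steps.
class MockDriver:
    def __init__(self):
        self.step_count = 0

    def iter(self, n_steps, step=1):
        for _ in range(0, n_steps, step):
            for i in range(step):
                self.step_count += 1  # stand-in for one optimization step
                if i == 0:
                    yield self.step_count

driver = MockDriver()
print(list(driver.iter(10, 5)))  # yields twice, once per chunk of 5 steps
```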

```
[7]:
```

```
# Running the learning loop and printing the energy every 50 steps
# [notice that the very first iteration is slow because of JIT compilation]
for it in vmc.iter(500, 50):
    print(it, vmc.energy)
```

## Comparing to exact diagonalization¶

Since this is a relatively small quantum system, we can still diagonalize the Hamiltonian using exact diagonalization. For this purpose, NetKet conveniently exposes a `.to_sparse` method that converts the Hamiltonian into a `scipy` sparse matrix. Here we first obtain this sparse matrix, and then diagonalize it with scipy builtins.
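The same scipy call can be tried on a toy sparse Hermitian matrix first (illustrative only; the matrix values are arbitrary). `which='SA'` requests the smallest-algebraic eigenvalue, which for a diagonal matrix is simply its smallest entry:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigsh

# A small sparse Hermitian (here diagonal) matrix with known spectrum.
m = csr_matrix(np.diag([3.0, -2.0, 1.0, 0.5]))

# k=1, which='SA': the single smallest-algebraic eigenvalue.
vals = eigsh(m, k=1, which='SA', return_eigenvectors=False)
print(vals[0])  # -2.0
```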

```
[7]:
```

```
from scipy.sparse.linalg import eigsh

exact_ens = eigsh(ha.to_sparse(), k=1, which='SA', return_eigenvectors=False)
print("Exact energy is : ", exact_ens[0])
print("Relative error is : ", abs((vmc.energy.mean - exact_ens[0]) / exact_ens[0]))
```

```
Exact energy is : -11.302079317764267
Relative error is : 0.0017295387652704216
```