J1-J2 model

While using NetKet as a research tool, it will certainly happen that the specific Hamiltonian you want to simulate is not one of the built-in options that NetKet offers. This tutorial shows how to implement a model Hamiltonian using just the Python input, and takes the frustrated $J_1$-$J_2$ model as an example:

$$ \mathcal{H} = J_1 \sum_i \vec{\sigma}_i \cdot \vec{\sigma}_{i+1} + J_2 \sum_i \vec{\sigma}_i \cdot \vec{\sigma}_{i+2}, $$

where the first sum runs over nearest neighbors, the second sum runs over next-to-nearest neighbors, and $\vec{\sigma}_i$ is the vector of Pauli matrices acting on site $i$.

In Tutorials/J1J2/ this model is studied in the case of a one-dimensional lattice with periodic boundary conditions.

Input file

The Python script j1j2.py can be used to set up the JSON input file for the NetKet executable. In the following we go through this script step by step, explaining the various fields.

Defining the Hilbert space

Since we are dealing with a custom Hamiltonian, we first have to tell NetKet what kind of quantum system we are dealing with. This is done by specifying the local Hilbert space:

pars['Hilbert']={
    'Name'           : 'Spin',
    'S'              : 0.5,
    'TotalSz'        : 0,
    'Nspins'         : L,
}

Here we are specifying that our particles are spins $S = 1/2$ (the S field), and that there are L of them (the Nspins field). We also fix the total value of $S^z$ (the TotalSz field) to zero.
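To get a feeling for what this constraint implies (an illustrative computation, not part of the input file): with the total magnetization fixed to zero, only configurations with as many up spins as down spins are sampled, a much smaller set than the full Hilbert space.

from math import comb

print(comb(20,10))   #184756 allowed configurations for L=20...
print(2**20)         #...versus 1048576 in the unconstrained space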

Arbitrary, finite-dimensional local Hilbert spaces can be specified in NetKet, as explained here.

Defining the Hamiltonian

Next, we have to specify the local operators that enter our Hamiltonian. Each Heisenberg term can be decomposed into a $\sigma^z \sigma^z$ interaction and an exchange term. The $\sigma^z \sigma^z$ part of the Hamiltonian has the form

$$ \sigma^z_i \otimes \sigma^z_j, $$

i.e. it is the tensor product of the $\sigma^z$ operator on the two sites $i$ and $j$. In Python, we can use NumPy to form the tensor product and obtain the $4 \times 4$ matrix corresponding to this interaction term:

import numpy as np

sigmaz=[[1,0],[0,-1]]
mszsz=np.kron(sigmaz,sigmaz)   #tensor product: sigma^z on site i times sigma^z on site j

This matrix, as expected, is diagonal and acts on the two-spin basis states $|\uparrow\uparrow\rangle$, $|\uparrow\downarrow\rangle$, $|\downarrow\uparrow\rangle$, $|\downarrow\downarrow\rangle$ as $\mathrm{diag}(1,-1,-1,1)$.

The exchange term $\sigma^x_i \otimes \sigma^x_j + \sigma^y_i \otimes \sigma^y_j$ can also be formed as a tensor product of local operators. We leave it as an exercise to show that it takes the form of the matrix

$$ \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, $$

which flips two spins only if they are anti-parallel, and is easily written in Python:

exchange=np.asarray([[0,0,0,0],[0,0,2,0],[0,2,0,0],[0,0,0,0]])
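As a quick numerical check of this exercise (standalone code, not part of j1j2.py), we can build the same matrix directly from the Pauli matrices:

import numpy as np

sigmax=np.array([[0,1],[1,0]])
sigmay=np.array([[0,-1j],[1j,0]])
check=np.kron(sigmax,sigmax)+np.kron(sigmay,sigmay)
print(check.real)    #reproduces the exchange matrix above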

Finally, we assemble all the pieces of the Hamiltonian together, further specifying the interaction strengths (here $J_1 = 1$ and $J_2 = 0.4$) and on which pairs of sites our operators act:

#Couplings J1 and J2
J=[1,0.4]

L=20

operators=[]
sites=[]
for i in range(L):

    for d in [0,1]:
        #\sum_i J[d]*sigma^z(i)*sigma^z(i+d+1)
        operators.append((J[d]*mszsz).tolist())
        sites.append([i,(i+d+1)%L])

        #\sum_i J[d]*(sigma^x(i)*sigma^x(i+d+1) + sigma^y(i)*sigma^y(i+d+1))
        operators.append(((-1.)**(d+1)*J[d]*exchange).tolist())
        sites.append([i,(i+d+1)%L])
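
As a quick sanity check (not part of j1j2.py), we can verify the shape of the lists we have just built:

assert len(operators)==4*L   #2 terms (zz and exchange) x 2 distances x L sites
assert len(sites)==4*L
print(sites[:4])             #[[0, 1], [0, 1], [0, 2], [0, 2]]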

The factor $(-1)^{d+1}$ flips the sign of the nearest-neighbor exchange term only; this amounts to the well-known Marshall basis rotation on one sublattice, which leaves the spectrum unchanged. Notice also that operators and sites are both Python lists (which is why we convert the NumPy arrays into lists, using tolist()). Finally, we tell NetKet that these are the operators defining the Hamiltonian, and these are the sites they act on:

pars['Hamiltonian']={
    'Operators'      : operators,
    'ActingOn'       : sites,
}

Further details on how to define custom Hamiltonians can be found here.

Defining the Machine

In this section of the input we specify which wave-function ansatz we wish to use. Here, we take a Restricted Boltzmann Machine RbmSpin with spin-1/2 hidden units (see Ref. 1 for further details). Since we are working with a custom Hamiltonian, translation symmetry cannot be directly used at this time. To fully define this machine we must also specify the number of hidden units, which is done through the density Alpha, where $\alpha = M/N$ is the ratio of hidden units $M$ to visible units (spins) $N$, as done in the example input.

pars['Machine']={
    'Name'           : 'RbmSpin',
    'Alpha'          : 1.0,
}
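To make the role of Alpha concrete, here is an illustrative count of the RBM variational parameters (visible biases, hidden biases, and weights); this snippet is plain arithmetic, not NetKet code:

N=20               #visible units, one per spin
alpha=1.0
M=int(alpha*N)     #hidden units
print(M, N+M+N*M)  #20 440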

Further details about the Restricted Boltzmann Machines and the other machines implemented in NetKet can be found here.

Defining the Sampling scheme

Another crucial ingredient for the learning part is the Markov-chain Monte Carlo scheme used for sampling. Here, we consider a Metropolis sampler implementing Hamiltonian moves with parallel tempering (see here for a description of this specific family of samplers).

pars['Sampler']={
    'Name'           : 'MetropolisHamiltonianPt',
    'Nreplicas'      : 16,
}

The first important reason to choose this sampler in this case is that we want to make sure to preserve all the symmetries of the Hamiltonian during the sampling. What the sampler does in this case is to pick a pair of spins that are first or second neighbors at random and propose an exchange. This is crucial, for example, if we want our specification 'TotalSz' : 0 to be respected. If instead of Hamiltonian moves we chose local Metropolis moves, our total magnetization would fluctuate during the sampling, thus violating the desired constraint.
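As a toy illustration (standalone Python, unrelated to NetKet's internals), note that exchanging the values of any two spins can never change the total magnetization, so every proposed move stays in the 'TotalSz' : 0 sector:

import random

state=[1]*10+[-1]*10                      #a configuration with total S_z = 0
i,j=random.sample(range(len(state)),2)    #pick two sites at random
state[i],state[j]=state[j],state[i]       #propose an exchange move
print(sum(state))                         #still 0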

The second reason to choose this sampler is that parallel tempering can be particularly beneficial for highly constrained/frustrated Hamiltonians.

Defining the Learning scheme

Finally, we must specify which learning algorithm we wish to use. Together with the choice of the machine, this is the most important part of the simulation. The method of choice in NetKet is the Stochastic Reconfiguration (Sr), developed by S. Sorella and coworkers. For an introduction to this method, you can have a look at the book in Ref. 2. The code snippet defining the learning method is:

pars['Learning']={
    'Method'         : 'Sr',
    'Nsamples'       : 1.0e3,
    'NiterOpt'       : 10000,
    'Diagshift'      : 0.1,
    'UseIterative'   : True,
    'OutputFile'     : "test",
}
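Schematically, each Sr iteration updates the variational parameters $\theta$ by solving a regularized linear system (this is a sketch of the method, not a literal transcription of NetKet's internals):

$$ (S + \epsilon\,\mathbb{1})\,\delta\theta = -\eta F, \qquad \theta \to \theta + \delta\theta, $$

where $S$ is the stochastic estimate of the correlation matrix of the log-derivatives of the wave function, $F$ is the estimated energy gradient, $\epsilon$ is the Diagshift regularization, and $\eta$ is the learning rate set by the optimizer below. Nsamples controls how accurately $S$ and $F$ are estimated, while UseIterative solves the linear system with an iterative (matrix-free) solver instead of inverting $S$ explicitly.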

Also, notice that we need to specify an optimizer. In this case we use a simple Stochastic Gradient Descent (Sgd), specifying the following section of the input:

pars['Optimizer']={
    'Name'           : 'Sgd',
    'LearningRate'   : 0.01,
}

More details about the optimizers can be found here, whereas learning algorithms to find the ground state are discussed here.

Running the simulation

Once you have finished preparing the input file in Python, you can just run:

python j1j2.py

This will generate a JSON file called j1j2.json, ready to be fed to the NetKet executable. At this point you can simply run

netket j1j2.json

if you want to run your simulation on a single core, or

mpirun -n NP netket j1j2.json

if you want to run your simulation on NP cores (change NP to the number of cores you want to use).

At this point, the simulation will be running and log files will be generated in real time, until NetKet finishes its tasks.

Output files

Since in the Learning section we have specified 'OutputFile' : "test", two output files will be generated with the "test" prefix: test.log, a JSON file containing the results of the learning procedure as it advances, and test.wf, containing backups of the optimized wave function.

For each iteration of the learning, the output log contains important information that can be visually inspected by just opening the file:

"Energy":{"Mean":-35.627084266234725,"Sigma":0.005236470739979945,"Taucorr":0.016224299969381108}

For example, you can see here that we have the expectation value of the energy (Mean), its statistical error (Sigma), and an estimate of the autocorrelation time (Taucorr). Apart from the Energy, the learning algorithm also records the EnergyVariance, namely $\langle \mathcal{H}^2 \rangle - \langle \mathcal{H} \rangle^2$, which becomes smaller and smaller as the optimization converges to an exact eigenstate of the Hamiltonian.
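If you prefer to process the log programmatically, a minimal sketch along the following lines works, assuming (as the convenience script below does) that test.log is a JSON file whose "Output" list holds one record per iteration:

import json

data=json.load(open("test.log"))
energies=[step["Energy"]["Mean"] for step in data["Output"]]
print(energies[-1])   #latest energy estimate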

If you want, you can also plot these results while the learning is running, just using the convenience script:

python plot_j1j2.py

An example result is shown below, where you can see the energy converging to the exact result during the learning.

[Figure: energy as a function of the learning iteration, converging to the exact result]


References


  1. Carleo, G., & Troyer, M. (2017). Solving the quantum many-body problem with artificial neural networks. Science, 355, 602–606.
  2. Becca, F., & Sorella, S. (2017). Quantum Monte Carlo Approaches for Correlated Systems. Cambridge University Press.