This website contains the QMC tutorial of the 2021 LTTC winter school Tutorials in Theoretical Chemistry.
We propose different exercises to understand quantum Monte Carlo (QMC) methods. In the first section, we start with the computation of the energy of a hydrogen atom using numerical integration. The goal of this section is to familiarize yourself with the concept of local energy. Then, we introduce the variational Monte Carlo (VMC) method, which computes a statistical estimate of the expectation value of the energy associated with a given wave function, and we apply this approach to the hydrogen atom. Finally, we present the diffusion Monte Carlo (DMC) method, which we use here to estimate the exact energy of the hydrogen atom and of the H_2 molecule, starting from an approximate wave function.
Code examples will be given in Python3 and Fortran. You can use whatever language you prefer to write the programs.
We consider the stationary solutions of the Schrödinger equation, so the wave functions considered here are real: for an $N$-electron system, $Ψ(\mathbf{r})$ is a real function of the $3N$ electronic coordinates.
All the quantities are expressed in atomic units (energies, coordinates, etc).
For a given system with Hamiltonian $\hat{H}$ and wave function $Ψ$, we define the local energy as
$$ E_L(\mathbf{r}) = \frac{\hat{H} Ψ(\mathbf{r})}{Ψ(\mathbf{r})}\,, $$
where $\mathbf{r}$ denotes the electronic coordinates.
The electronic energy of a system, $E$, can be rewritten in terms of the local energy $E_L(\mathbf{r})$ as
\begin{eqnarray*}
E & = & \frac{\langle Ψ| \hat{H} | Ψ\rangle}{\langle Ψ |Ψ \rangle}
= \frac{∫ Ψ(\mathbf{r})\, \hat{H} Ψ(\mathbf{r})\, d\mathbf{r}}{∫ |Ψ(\mathbf{r}) |^2 d\mathbf{r}} \\
& = & \frac{∫ |Ψ(\mathbf{r})|^2\, \frac{\hat{H} Ψ(\mathbf{r})}{Ψ(\mathbf{r})}\,d\mathbf{r}}{∫ |Ψ(\mathbf{r}) |^2 d\mathbf{r}}
= \frac{∫ |Ψ(\mathbf{r})|^2\, E_L(\mathbf{r})\,d\mathbf{r}}{∫ |Ψ(\mathbf{r}) |^2 d\mathbf{r}}
\end{eqnarray*}
In a few dimensions, one can easily compute $E$ by evaluating these integrals on a grid but, in large dimensions, Monte Carlo sampling becomes the method of choice.
To this aim, recall that the probabilistic expected value of an arbitrary function $f(x)$ with respect to a probability density function $P(x)$ is given by
$$ \langle f \rangle = ∫ f(x)\, P(x)\, dx\,, $$
where a probability density function $P(x)$ is non-negative and integrates to one.
Similarly, we can view the energy of a system, $E$, as the expected value of the local energy with respect to a probability density $P(\mathbf{r})$:
$$ E = ∫ E_L(\mathbf{r})\, P(\mathbf{r})\, d\mathbf{r}\,, $$
where the probability density is given by the square of the wave function:
$$ P(\mathbf{r}) = \frac{|Ψ(\mathbf{r})|^2}{∫ |Ψ(\mathbf{r})|^2 d\mathbf{r}}\,. $$
If we can sample $N_{\rm MC}$ configurations $\{\mathbf{r}_i\}$ distributed as $P$, we can estimate $E$ as the average of the local energy over these configurations:
$$ E ≈ \frac{1}{N_{\rm MC}} ∑_{i=1}^{N_{\rm MC}} E_L(\mathbf{r}_i)\,. $$
In this section, we consider the hydrogen atom with the following wave function:
$$ Ψ(\mathbf{r}) = \exp(-a\,|\mathbf{r}|) $$
We will first verify that, for a particular value of $a$, $Ψ$ is an eigenfunction of the Hamiltonian $\hat{H}$.
To do that, we will compute the local energy and check whether it is constant.
You will now program all quantities needed to compute the local energy of the H atom for the given wave function.
Write all the functions of this section in a single file: hydrogen.py if you use Python, or hydrogen.f90 if you use Fortran.
Write a function computing the potential at $\mathbf{r}=\left( \begin{array}{c} x \\ y \\ z\end{array} \right)$. For the hydrogen atom, the electron-nucleus potential is $$ V(\mathbf{r}) = -\frac{1}{\sqrt{x^2 + y^2 + z^2}}\,. $$
Python
#!/usr/bin/env python3
import numpy as np
def potential(r):
# TODO
Fortran
double precision function potential(r)
implicit none
double precision, intent(in) :: r(3)
! TODO
end function potential
Python
#!/usr/bin/env python3
import numpy as np
def potential(r):
distance = np.sqrt(np.dot(r,r))
assert (distance > 0)
return -1. / distance
Fortran
double precision function potential(r)
implicit none
double precision, intent(in) :: r(3)
double precision :: distance
distance = dsqrt( r(1)*r(1) + r(2)*r(2) + r(3)*r(3) )
if (distance > 0.d0) then
potential = -1.d0 / distance
else
stop 'potential at r=0.d0 diverges'
end if
end function potential
Write a function computing the wave function $Ψ(\mathbf{r}) = \exp(-a\,|\mathbf{r}|)$ at $\mathbf{r}$.
Python
def psi(a, r):
# TODO
Fortran
double precision function psi(a, r)
implicit none
double precision, intent(in) :: a, r(3)
! TODO
end function psi
Python
def psi(a, r):
return np.exp(-a*np.sqrt(np.dot(r,r)))
Fortran
double precision function psi(a, r)
implicit none
double precision, intent(in) :: a, r(3)
psi = dexp(-a * dsqrt( r(1)*r(1) + r(2)*r(2) + r(3)*r(3) ))
end function psi
The local kinetic energy is defined as $$ T_L(\mathbf{r}) = -\frac{1}{2}\, \frac{Δ Ψ(\mathbf{r})}{Ψ(\mathbf{r})}\,. $$
We differentiate $Ψ$ with respect to $x$:
\[ Ψ(\mathbf{r}) = \exp(-a\,|\mathbf{r}|) \] \[\frac{∂ Ψ}{∂ x} = \frac{∂ Ψ}{∂ |\mathbf{r}|} \frac{∂ |\mathbf{r}|}{∂ x} = - \frac{a\,x}{|\mathbf{r}|} Ψ(\mathbf{r}) \]
and we differentiate a second time:
\[ \frac{∂^2 Ψ}{∂ x^2} = \left( \frac{a^2\,x^2}{|\mathbf{r}|^2} - \frac{a}{|\mathbf{r}|} + \frac{a\,x^2}{|\mathbf{r}|^3} \right) Ψ(\mathbf{r})\,. \]
The Laplacian operator $Δ = \frac{∂^2}{∂ x^2} + \frac{∂^2}{∂ y^2} + \frac{∂^2}{∂ z^2}$ applied to the wave function then gives
\[ Δ Ψ(\mathbf{r}) = \left( a^2 - \frac{2a}{|\mathbf{r}|} \right) Ψ(\mathbf{r})\,. \]
Therefore, the local kinetic energy is $$ T_L (\mathbf{r}) = -\frac{1}{2}\left(a^2 - \frac{2a}{|\mathbf{r}|} \right) $$
Python
def kinetic(a,r):
# TODO
Fortran
double precision function kinetic(a,r)
implicit none
double precision, intent(in) :: a, r(3)
! TODO
end function kinetic
Python
def kinetic(a,r):
distance = np.sqrt(np.dot(r,r))
assert (distance > 0.)
return a * (1./distance - 0.5 * a)
Fortran
double precision function kinetic(a,r)
implicit none
double precision, intent(in) :: a, r(3)
double precision :: distance
distance = dsqrt( r(1)*r(1) + r(2)*r(2) + r(3)*r(3) )
if (distance > 0.d0) then
kinetic = a * (1.d0 / distance - 0.5d0 * a)
else
stop 'kinetic energy diverges at r=0'
end if
end function kinetic
Write a function computing the local energy, $E_L(\mathbf{r}) = T_L(\mathbf{r}) + V(\mathbf{r})$.
Python
def e_loc(a,r):
#TODO
Fortran
double precision function e_loc(a,r)
implicit none
double precision, intent(in) :: a, r(3)
double precision, external :: kinetic
double precision, external :: potential
! TODO
end function e_loc
Python
def e_loc(a,r):
return kinetic(a,r) + potential(r)
Fortran
double precision function e_loc(a,r)
implicit none
double precision, intent(in) :: a, r(3)
double precision, external :: kinetic
double precision, external :: potential
e_loc = kinetic(a,r) + potential(r)
end function e_loc
The local energy is then
\begin{eqnarray*}
E_L(\mathbf{r}) &=& \frac{\hat{H} Ψ(\mathbf{r})}{Ψ(\mathbf{r})} = - \frac{1}{2} \frac{Δ Ψ(\mathbf{r})}{Ψ(\mathbf{r})} -
\frac{1}{|\mathbf{r}|} \\
&=& -\frac{1}{2}\left(a^2 - \frac{2a}{|\mathbf{r}|} \right) -
\frac{1}{|\mathbf{r}|} \\
&=&
-\frac{1}{2} a^2 + \frac{a-1}{|\mathbf{r}|}\,,
\end{eqnarray*}
which is constant, and equal to $-1/2$, only when $a=1$: in that case $Ψ$ is an eigenfunction of $\hat{H}$.
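As a quick numerical check (a minimal sketch, assuming the e_loc function of hydrogen.py written above), you can verify that the local energy is constant only for $a=1$:
Python
#!/usr/bin/env python3
import numpy as np
from hydrogen import e_loc

# A few arbitrary electron positions (atomic units)
points = [np.array([0.1, 0.2, 0.3]),
          np.array([1.0, 0.0, 0.0]),
          np.array([-2.0, 1.5, 0.7])]

for a in [0.8, 1.0, 1.2]:
    values = [e_loc(a, r) for r in points]
    print(f"a = {a}:", [f"{v:.6f}" for v in values])
# For a = 1.0 every value is -0.5; for the other values of a,
# the local energy depends on the position.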
The program you will write in this section should go in a separate file (plot_hydrogen.py or plot_hydrogen.f90, for example). It will use the functions defined previously to plot the local energy along the $x$ axis for different values of $a$.
In Python, you should put at the beginning of the file
#!/usr/bin/env python3
from hydrogen import e_loc
to be able to use the e_loc function of the hydrogen.py file.
In Fortran, you will need to compile all the source files together:
gfortran hydrogen.f90 plot_hydrogen.f90 -o plot_hydrogen
Python
#!/usr/bin/env python3
import numpy as np
import matplotlib.pyplot as plt
from hydrogen import e_loc
x=np.linspace(-5,5)
plt.figure(figsize=(10,5))
# TODO
plt.tight_layout()
plt.legend()
plt.savefig("plot_py.png")
Fortran
program plot
implicit none
double precision, external :: e_loc
double precision :: x(50), dx
integer :: i, j
dx = 10.d0/(size(x)-1)
do i=1,size(x)
x(i) = -5.d0 + (i-1)*dx
end do
! TODO
end program plot
To compile and run:
gfortran hydrogen.f90 plot_hydrogen.f90 -o plot_hydrogen
./plot_hydrogen > data
To plot the data using Gnuplot:
set grid
set xrange [-5:5]
set yrange [-2:1]
plot './data' index 0 using 1:2 with lines title 'a=0.1', \
'./data' index 1 using 1:2 with lines title 'a=0.2', \
'./data' index 2 using 1:2 with lines title 'a=0.5', \
'./data' index 3 using 1:2 with lines title 'a=1.0', \
'./data' index 4 using 1:2 with lines title 'a=1.5', \
'./data' index 5 using 1:2 with lines title 'a=2.0'
Python
#!/usr/bin/env python3
import numpy as np
import matplotlib.pyplot as plt
from hydrogen import e_loc
x=np.linspace(-5,5)
plt.figure(figsize=(10,5))
for a in [0.1, 0.2, 0.5, 1., 1.5, 2.]:
y=np.array([ e_loc(a, np.array([t,0.,0.]) ) for t in x])
plt.plot(x,y,label=f"a={a}")
plt.tight_layout()
plt.legend()
plt.savefig("plot_py.png")
Fortran
program plot
implicit none
double precision, external :: e_loc
double precision :: x(50), energy, dx, r(3), a(6)
integer :: i, j
a = (/ 0.1d0, 0.2d0, 0.5d0, 1.d0, 1.5d0, 2.d0 /)
dx = 10.d0/(size(x)-1)
do i=1,size(x)
x(i) = -5.d0 + (i-1)*dx
end do
r(:) = 0.d0
do j=1,size(a)
print *, '# a=', a(j)
do i=1,size(x)
r(1) = x(i)
energy = e_loc( a(j), r )
print *, x(i), energy
end do
print *, ''
print *, ''
end do
end program plot
If the space is discretized in small volume elements $δ x\, δ y\, δ z$, the expression of the energy becomes a weighted average of the local energy, where the weights are the values of the square of the wave function at the grid points multiplied by the volume element:
\[ E ≈ \frac{∑_i |Ψ(\mathbf{r}_i)|^2\, E_L(\mathbf{r}_i)\, δ x\, δ y\, δ z}{∑_i |Ψ(\mathbf{r}_i)|^2\, δ x\, δ y\, δ z}\,. \]
Compute a numerical estimate of the energy on a grid of $50×50×50$ points in the range $(-5,-5,-5) ≤ (x,y,z) ≤ (5,5,5)$, for several values of $a$.
Python
#!/usr/bin/env python3
import numpy as np
from hydrogen import e_loc, psi
interval = np.linspace(-5,5,num=50)
delta = (interval[1]-interval[0])**3
r = np.array([0.,0.,0.])
for a in [0.1, 0.2, 0.5, 0.9, 1., 1.5, 2.]:
# TODO
print(f"a = {a} \t E = {E}")
Fortran
program energy_hydrogen
implicit none
double precision, external :: e_loc, psi
double precision :: x(50), w, delta, energy, dx, r(3), a(6), norm
integer :: i, k, l, j
a = (/ 0.1d0, 0.2d0, 0.5d0, 1.d0, 1.5d0, 2.d0 /)
dx = 10.d0/(size(x)-1)
do i=1,size(x)
x(i) = -5.d0 + (i-1)*dx
end do
do j=1,size(a)
! TODO
print *, 'a = ', a(j), ' E = ', energy
end do
end program energy_hydrogen
To compile the Fortran and run it:
gfortran hydrogen.f90 energy_hydrogen.f90 -o energy_hydrogen
./energy_hydrogen
Python
#!/usr/bin/env python3
import numpy as np
from hydrogen import e_loc, psi
interval = np.linspace(-5,5,num=50)
delta = (interval[1]-interval[0])**3
r = np.array([0.,0.,0.])
for a in [0.1, 0.2, 0.5, 0.9, 1., 1.5, 2.]:
E = 0.
norm = 0.
for x in interval:
r[0] = x
for y in interval:
r[1] = y
for z in interval:
r[2] = z
w = psi(a,r)
w = w * w * delta
E += w * e_loc(a,r)
norm += w
E = E / norm
print(f"a = {a} \t E = {E}")
a = 0.1 	 E = -0.24518438948809218
a = 0.2 	 E = -0.26966057967803525
a = 0.5 	 E = -0.3856357612517407
a = 0.9 	 E = -0.49435709786716214
a = 1.0 	 E = -0.5
a = 1.5 	 E = -0.39242967082602226
a = 2.0 	 E = -0.08086980667844901
Fortran
program energy_hydrogen
implicit none
double precision, external :: e_loc, psi
double precision :: x(50), w, delta, energy, dx, r(3), a(6), norm
integer :: i, k, l, j
a = (/ 0.1d0, 0.2d0, 0.5d0, 1.d0, 1.5d0, 2.d0 /)
dx = 10.d0/(size(x)-1)
do i=1,size(x)
x(i) = -5.d0 + (i-1)*dx
end do
delta = dx**3
r(:) = 0.d0
do j=1,size(a)
energy = 0.d0
norm = 0.d0
do i=1,size(x)
r(1) = x(i)
do k=1,size(x)
r(2) = x(k)
do l=1,size(x)
r(3) = x(l)
w = psi(a(j),r)
w = w * w * delta
energy = energy + w * e_loc(a(j), r)
norm = norm + w
end do
end do
end do
energy = energy / norm
print *, 'a = ', a(j), ' E = ', energy
end do
end program energy_hydrogen
a =   0.10000000000000001      E =  -0.24518438948809140
a =   0.20000000000000001      E =  -0.26966057967803236
a =   0.50000000000000000      E =  -0.38563576125173815
a =    1.0000000000000000      E =  -0.50000000000000000
a =    1.5000000000000000      E =  -0.39242967082602065
a =    2.0000000000000000      E =   -8.0869806678448772E-002
The variance of the local energy is a functional of $Ψ$ which measures the magnitude of the fluctuations of the local energy around its average:
$$ σ^2(E_L) = \frac{∫ |Ψ(\mathbf{r})|^2\, \left[ E_L(\mathbf{r}) - E \right]^2 \, d\mathbf{r}}{∫ |Ψ(\mathbf{r}) |^2 d\mathbf{r}}\,, $$ which can be simplified as
$$ σ^2(E_L) = \langle E_L^2 \rangle - \langle E_L \rangle^2\,. $$
If the local energy is constant (i.e. $Ψ$ is an exact eigenfunction of $\hat{H}$), the variance is zero, so the variance of the local energy can be used as a measure of the quality of a wave function. Add the computation of the variance to the numerical integration of the previous section, again for several values of $a$.
\begin{eqnarray*}
\langle (E - \bar{E})^2 \rangle & = &
\langle E^2 - 2 E \bar{E} + \bar{E}^2 \rangle \\
&=& \langle E^2 \rangle - 2 \langle E \bar{E} \rangle + \langle \bar{E}^2 \rangle \\
&=& \langle E^2 \rangle - 2 \langle E \rangle \bar{E} + \bar{E}^2 \\
&=& \langle E^2 \rangle - 2 \bar{E}^2 + \bar{E}^2 \\
&=& \langle E^2 \rangle - \bar{E}^2 \\
&=& \langle E^2 \rangle - \langle E \rangle^2
\end{eqnarray*}
Python
#!/usr/bin/env python3
import numpy as np
from hydrogen import e_loc, psi
interval = np.linspace(-5,5,num=50)
delta = (interval[1]-interval[0])**3
r = np.array([0.,0.,0.])
for a in [0.1, 0.2, 0.5, 0.9, 1., 1.5, 2.]:
# TODO
print(f"a = {a} \t E = {E:10.8f} \t \sigma^2 = {s2:10.8f}")
Fortran
program variance_hydrogen
implicit none
double precision :: x(50), w, delta, energy, energy2
double precision :: dx, r(3), a(6), norm, e_tmp, s2
integer :: i, k, l, j
double precision, external :: e_loc, psi
a = (/ 0.1d0, 0.2d0, 0.5d0, 1.d0, 1.5d0, 2.d0 /)
dx = 10.d0/(size(x)-1)
do i=1,size(x)
x(i) = -5.d0 + (i-1)*dx
end do
do j=1,size(a)
! TODO
print *, 'a = ', a(j), ' E = ', energy, ' s2 = ', s2
end do
end program variance_hydrogen
To compile and run:
gfortran hydrogen.f90 variance_hydrogen.f90 -o variance_hydrogen
./variance_hydrogen
Python
#!/usr/bin/env python3
import numpy as np
from hydrogen import e_loc, psi
interval = np.linspace(-5,5,num=50)
delta = (interval[1]-interval[0])**3
r = np.array([0.,0.,0.])
for a in [0.1, 0.2, 0.5, 0.9, 1., 1.5, 2.]:
E = 0.
E2 = 0.
norm = 0.
for x in interval:
r[0] = x
for y in interval:
r[1] = y
for z in interval:
r[2] = z
w = psi(a,r)
w = w * w * delta
e_tmp = e_loc(a,r)
E += w * e_tmp
E2 += w * e_tmp * e_tmp
norm += w
E = E / norm
E2 = E2 / norm
s2 = E2 - E**2
print(f"a = {a} \t E = {E:10.8f} \t \sigma^2 = {s2:10.8f}")
a = 0.1 	 E = -0.24518439 	 \sigma^2 = 0.02696522
a = 0.2 	 E = -0.26966058 	 \sigma^2 = 0.03719707
a = 0.5 	 E = -0.38563576 	 \sigma^2 = 0.05318597
a = 0.9 	 E = -0.49435710 	 \sigma^2 = 0.00577812
a = 1.0 	 E = -0.50000000 	 \sigma^2 = 0.00000000
a = 1.5 	 E = -0.39242967 	 \sigma^2 = 0.31449671
a = 2.0 	 E = -0.08086981 	 \sigma^2 = 1.80688143
Fortran
program variance_hydrogen
implicit none
double precision :: x(50), w, delta, energy, energy2
double precision :: dx, r(3), a(6), norm, e_tmp, s2
integer :: i, k, l, j
double precision, external :: e_loc, psi
a = (/ 0.1d0, 0.2d0, 0.5d0, 1.d0, 1.5d0, 2.d0 /)
dx = 10.d0/(size(x)-1)
do i=1,size(x)
x(i) = -5.d0 + (i-1)*dx
end do
delta = dx**3
r(:) = 0.d0
do j=1,size(a)
energy = 0.d0
energy2 = 0.d0
norm = 0.d0
do i=1,size(x)
r(1) = x(i)
do k=1,size(x)
r(2) = x(k)
do l=1,size(x)
r(3) = x(l)
w = psi(a(j),r)
w = w * w * delta
e_tmp = e_loc(a(j), r)
energy = energy + w * e_tmp
energy2 = energy2 + w * e_tmp * e_tmp
norm = norm + w
end do
end do
end do
energy = energy / norm
energy2 = energy2 / norm
s2 = energy2 - energy*energy
print *, 'a = ', a(j), ' E = ', energy, ' s2 = ', s2
end do
end program variance_hydrogen
a =   0.10000000000000001      E =  -0.24518438948809140      s2 =   2.6965218719722767E-002
a =   0.20000000000000001      E =  -0.26966057967803236      s2 =   3.7197072370201284E-002
a =   0.50000000000000000      E =  -0.38563576125173815      s2 =   5.3185967578480653E-002
a =    1.0000000000000000      E =  -0.50000000000000000      s2 =    0.0000000000000000
a =    1.5000000000000000      E =  -0.39242967082602065      s2 =   0.31449670909172917
a =    2.0000000000000000      E =   -8.0869806678448772E-002  s2 =    1.8068814270846534
Numerical integration with deterministic methods is very efficient in low dimensions. When the number of dimensions becomes large, instead of computing the average energy as a numerical integration on a grid, it is usually more efficient to use Monte Carlo sampling.
Moreover, Monte Carlo sampling will allow us to remove the bias due to the discretization of space, and compute a statistical confidence interval.
To compute the statistical error, you need to perform $M$ independent Monte Carlo calculations, each returning an estimate $E_k$ of the energy.
The estimate of the energy is the average
$$ \bar{E} = \frac{1}{M} ∑_{k=1}^{M} E_k\,. $$
The variance of the average energies can be computed as
$$ σ^2 = \frac{1}{M-1} ∑_{k=1}^{M} \left( E_k - \bar{E} \right)^2\,, $$
and the confidence interval is given by
$$ \bar{E} \pm 2\,δ E\,, \qquad \text{where} \qquad δ E = \frac{σ}{\sqrt{M}}\,. $$
Write a function returning the average and the statistical error of an input array, and put it in a file named qmc_stats.py (Python) or qmc_stats.f90 (Fortran).
Python
#!/usr/bin/env python3
from math import sqrt
def ave_error(arr):
#TODO
return (average, error)
Fortran
subroutine ave_error(x,n,ave,err)
implicit none
integer, intent(in) :: n
double precision, intent(in) :: x(n)
double precision, intent(out) :: ave, err
! TODO
end subroutine ave_error
Python
#!/usr/bin/env python3
from math import sqrt
def ave_error(arr):
M = len(arr)
assert(M>0)
if M == 1:
average = arr[0]
error = 0.
else:
average = sum(arr)/M
variance = 1./(M-1) * sum( [ (x - average)**2 for x in arr ] )
error = sqrt(variance/M)
return (average, error)
Fortran
subroutine ave_error(x,n,ave,err)
implicit none
integer, intent(in) :: n
double precision, intent(in) :: x(n)
double precision, intent(out) :: ave, err
double precision :: variance
if (n < 1) then
stop 'n<1 in ave_error'
else if (n == 1) then
ave = x(1)
err = 0.d0
else
ave = sum(x(:)) / dble(n)
variance = sum((x(:) - ave)**2) / dble(n-1)
err = dsqrt(variance/dble(n))
endif
end subroutine ave_error
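For example, here is a minimal usage sketch of the Python version, assuming it is saved in qmc_stats.py (the file name used by the programs of the next sections); the input numbers are purely illustrative:
Python
#!/usr/bin/env python3
from qmc_stats import ave_error

# Five hypothetical independent estimates (illustrative numbers only)
estimates = [1.0, 1.2, 0.9, 1.1, 0.8]
average, error = ave_error(estimates)
print(f"average = {average} +/- {error}")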
We will now perform our first Monte Carlo calculation to compute the energy of the hydrogen atom.
Consider again the expression of the energy
\begin{eqnarray*} E & = & \frac{∫ E_L(\mathbf{r})|Ψ(\mathbf{r})|^2\,d\mathbf{r}}{∫ |Ψ(\mathbf{r}) |^2 d\mathbf{r}}\,. \end{eqnarray*}
Clearly, the square of the wave function is a good choice of probability density to sample but we will start with something simpler and rewrite the energy as
\begin{eqnarray*} E & = & \frac{∫ E_L(\mathbf{r})\frac{|Ψ(\mathbf{r})|^2}{P(\mathbf{r})}P(\mathbf{r})\, \,d\mathbf{r}}{∫ \frac{|Ψ(\mathbf{r})|^2 }{P(\mathbf{r})}P(\mathbf{r})d\mathbf{r}}\,. \end{eqnarray*}
Here, we will sample a uniform probability in the cube $(-5,-5,-5) ≤ (x,y,z) ≤ (5,5,5)$:
$$ P(\mathbf{r}) = \frac{1}{L^3}\,, \qquad L = 10\,, $$
and zero outside the cube.
One Monte Carlo run will consist of $N_{\rm MC}$ Monte Carlo iterations. At every Monte Carlo iteration:
- Draw a random point $\mathbf{r}_i$ in the box $(-5,-5,-5) ≤ (x,y,z) ≤ (5,5,5)$
- Compute $|Ψ(\mathbf{r}_i)|^2$ and accumulate the result in a variable normalization
- Compute $|Ψ(\mathbf{r}_i)|^2 × E_L(\mathbf{r}_i)$, and accumulate the result in a variable energy
Once all the iterations have been computed, the run returns the average energy $\bar{E}_k = {\tt energy}/{\tt normalization}$.
To compute the statistical error, perform $M$ independent runs ($M=30$ in the programs below) and pass the $M$ estimates to the ave_error function of the previous section.
Python
#!/usr/bin/env python3
from hydrogen import *
from qmc_stats import *
def MonteCarlo(a, nmax):
# TODO
a = 1.2
nmax = 100000
#TODO
print(f"E = {E} +/- {deltaE}")
Fortran
subroutine uniform_montecarlo(a,nmax,energy)
implicit none
double precision, intent(in) :: a
integer*8 , intent(in) :: nmax
double precision, intent(out) :: energy
integer*8 :: istep
double precision :: norm, r(3), w
double precision, external :: e_loc, psi
! TODO
end subroutine uniform_montecarlo
program qmc
implicit none
double precision, parameter :: a = 1.2d0
integer*8 , parameter :: nmax = 100000
integer , parameter :: nruns = 30
integer :: irun
double precision :: X(nruns)
double precision :: ave, err
!TODO
print *, 'E = ', ave, '+/-', err
end program qmc
gfortran hydrogen.f90 qmc_stats.f90 qmc_uniform.f90 -o qmc_uniform
./qmc_uniform
Python
#!/usr/bin/env python3
from hydrogen import *
from qmc_stats import *
def MonteCarlo(a, nmax):
energy = 0.
normalization = 0.
for istep in range(nmax):
r = np.random.uniform(-5., 5., (3))
w = psi(a,r)
w = w*w
energy += w * e_loc(a,r)
normalization += w
return energy / normalization
a = 1.2
nmax = 100000
X = [MonteCarlo(a,nmax) for i in range(30)]
E, deltaE = ave_error(X)
print(f"E = {E} +/- {deltaE}")
E = -0.48363807880008725 +/- 0.002330876047368999
Fortran
subroutine uniform_montecarlo(a,nmax,energy)
implicit none
double precision, intent(in) :: a
integer*8 , intent(in) :: nmax
double precision, intent(out) :: energy
integer*8 :: istep
double precision :: norm, r(3), w
double precision, external :: e_loc, psi
energy = 0.d0
norm = 0.d0
do istep = 1,nmax
call random_number(r)
r(:) = -5.d0 + 10.d0*r(:)
w = psi(a,r)
w = w*w
energy = energy + w * e_loc(a,r)
norm = norm + w
end do
energy = energy / norm
end subroutine uniform_montecarlo
program qmc
implicit none
double precision, parameter :: a = 1.2d0
integer*8 , parameter :: nmax = 100000
integer , parameter :: nruns = 30
integer :: irun
double precision :: X(nruns)
double precision :: ave, err
do irun=1,nruns
call uniform_montecarlo(a, nmax, X(irun))
enddo
call ave_error(X, nruns, ave, err)
print *, 'E = ', ave, '+/-', err
end program qmc
E = -0.48084122147238995 +/- 2.4983775878329355E-003
We will now use the square of the wave function to sample random points distributed with the probability density \[ P(\mathbf{r}) = \frac{|Ψ(\mathbf{r})|^2}{∫ |Ψ(\mathbf{r})|^2 d\mathbf{r}}\,. \]
The expression of the average energy is now simplified as the average of the local energies, since the weights are taken care of by the sampling:
To sample a chosen probability density, an efficient method is the Metropolis-Hastings sampling algorithm. Starting from a random initial position $\mathbf{r}_0$, we generate a random walk $\mathbf{r}_0, \mathbf{r}_1, \mathbf{r}_2, \dots$ according to the following algorithm.
At every step, we propose a new move according to a transition probability $T(\mathbf{r}_n → \mathbf{r}_{n+1})$ of our choice.
For simplicity, we will move the electron in a 3-dimensional box of side $2\,δ L$ centered at the current position of the electron:
\[ \mathbf{r}_{n+1} = \mathbf{r}_n + δ L\, \mathbf{u}\,, \]
where $δ L$ is a fixed constant and $\mathbf{u}$ is a vector of uniform random numbers in $[-1,1]^3$.
After having moved the electron, we add the accept/reject step that guarantees that the sampled distribution of the $\mathbf{r}_n$ is $Ψ^2$: the move is accepted with probability
\[ A(\mathbf{r}_n → \mathbf{r}_{n+1}) = \min\left(1, \frac{P(\mathbf{r}_{n+1})}{P(\mathbf{r}_n)}\right)\,, \]
which, for our choice of transition probability, becomes
\[ A(\mathbf{r}_n → \mathbf{r}_{n+1}) = \min\left(1, \frac{|Ψ(\mathbf{r}_{n+1})|^2}{|Ψ(\mathbf{r}_n)|^2}\right)\,. \]
Also note that we do not need to compute the norm of the wave function!
The algorithm is summarized as follows:
- Evaluate the local energy at $\mathbf{r}_n$ and accumulate it
- Compute a new position $\mathbf{r}' = \mathbf{r}_n + δ L\, \mathbf{u}$
- Evaluate $Ψ(\mathbf{r}')$ at the new position
- Compute the ratio $A = \frac{\left|Ψ(\mathbf{r}')\right|^2}{\left|Ψ(\mathbf{r}_n)\right|^2}$
- Draw a uniform random number $v ∈ [0,1]$
- If $v ≤ A$, accept the move: set $\mathbf{r}_{n+1} = \mathbf{r}'$
- Else, reject the move: set $\mathbf{r}_{n+1} = \mathbf{r}_n$
If the box is infinitely small, the ratio will be very close to one and all the steps will be accepted. However, the moves will be very correlated and you will visit the configurational space very slowly.
On the other hand, if you propose too large moves, the number of accepted steps will decrease because the ratios might become small. If the number of accepted steps is close to zero, then the space is not well sampled either.
The size of the move should be adjusted so that it is as large as possible while keeping the number of accepted steps not too small. To achieve that, we define the acceptance rate as the number of accepted steps over the total number of steps. Adjusting the step size such that the acceptance rate is close to 0.5 is a good compromise for the current problem.
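A simple way to choose the step size is to scan a few values and keep the one whose acceptance rate is closest to 0.5. Below is a minimal sketch; it assumes the MonteCarlo(a, nmax, dt) function of the following exercise (which returns the energy and the acceptance rate) is available, here imported from a hypothetical module qmc_metropolis:
Python
#!/usr/bin/env python3
# Sketch: scan several step sizes and keep the one with acceptance closest to 0.5.
from qmc_metropolis import MonteCarlo   # hypothetical module holding the function below

a, nmax = 1.2, 10000
best_dt, best_gap = None, 1.0
for dt in [0.3, 0.5, 1.0, 1.5, 2.0]:
    _, acceptance = MonteCarlo(a, nmax, dt)
    print(f"dt = {dt}  acceptance rate = {acceptance:.3f}")
    if abs(acceptance - 0.5) < best_gap:
        best_dt, best_gap = dt, abs(acceptance - 0.5)
print(f"chosen dt = {best_dt}")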
Python
#!/usr/bin/env python3
from hydrogen import *
from qmc_stats import *
def MonteCarlo(a,nmax,dt):
# TODO
return energy/nmax, N_accep/nmax
# Run simulation
a = 1.2
nmax = 100000
dt = #TODO
X0 = [ MonteCarlo(a,nmax,dt) for i in range(30)]
# Energy
X = [ x for (x, _) in X0 ]
E, deltaE = ave_error(X)
print(f"E = {E} +/- {deltaE}")
# Acceptance rate
X = [ x for (_, x) in X0 ]
A, deltaA = ave_error(X)
print(f"A = {A} +/- {deltaA}")
Fortran
subroutine metropolis_montecarlo(a,nmax,dt,energy,accep)
implicit none
double precision, intent(in) :: a
integer*8 , intent(in) :: nmax
double precision, intent(in) :: dt
double precision, intent(out) :: energy
double precision, intent(out) :: accep
integer*8 :: istep
integer*8 :: n_accep
double precision :: r_old(3), r_new(3), psi_old, psi_new
double precision :: v, ratio
double precision, external :: e_loc, psi, gaussian
! TODO
end subroutine metropolis_montecarlo
program qmc
implicit none
double precision, parameter :: a = 1.2d0
double precision, parameter :: dt = ! TODO
integer*8 , parameter :: nmax = 100000
integer , parameter :: nruns = 30
integer :: irun
double precision :: X(nruns), Y(nruns)
double precision :: ave, err
do irun=1,nruns
call metropolis_montecarlo(a,nmax,dt,X(irun),Y(irun))
enddo
call ave_error(X,nruns,ave,err)
print *, 'E = ', ave, '+/-', err
call ave_error(Y,nruns,ave,err)
print *, 'A = ', ave, '+/-', err
end program qmc
gfortran hydrogen.f90 qmc_stats.f90 qmc_metropolis.f90 -o qmc_metropolis
./qmc_metropolis
Python
#!/usr/bin/env python3
from hydrogen import *
from qmc_stats import *
def MonteCarlo(a,nmax,dt):
energy = 0.
N_accep = 0
r_old = np.random.uniform(-dt, dt, (3))
psi_old = psi(a,r_old)
for istep in range(nmax):
energy += e_loc(a,r_old)
r_new = r_old + np.random.uniform(-dt,dt,(3))
psi_new = psi(a,r_new)
ratio = (psi_new / psi_old)**2
if np.random.uniform() <= ratio:
N_accep += 1
r_old = r_new
psi_old = psi_new
return energy/nmax, N_accep/nmax
# Run simulation
a = 1.2
nmax = 100000
dt = 1.0
X0 = [ MonteCarlo(a,nmax,dt) for i in range(30)]
# Energy
X = [ x for (x, _) in X0 ]
E, deltaE = ave_error(X)
print(f"E = {E} +/- {deltaE}")
# Acceptance rate
X = [ x for (_, x) in X0 ]
A, deltaA = ave_error(X)
print(f"A = {A} +/- {deltaA}")
E = -0.4802595860693983 +/- 0.0005124420418289207
A = 0.5074913333333334 +/- 0.000350889422714878
Fortran
subroutine metropolis_montecarlo(a,nmax,dt,energy,accep)
implicit none
double precision, intent(in) :: a
integer*8 , intent(in) :: nmax
double precision, intent(in) :: dt
double precision, intent(out) :: energy
double precision, intent(out) :: accep
double precision :: r_old(3), r_new(3), psi_old, psi_new
double precision :: v, ratio
integer*8 :: n_accep
integer*8 :: istep
double precision, external :: e_loc, psi, gaussian
energy = 0.d0
n_accep = 0_8
call random_number(r_old)
r_old(:) = dt * (2.d0*r_old(:) - 1.d0)
psi_old = psi(a,r_old)
do istep = 1,nmax
energy = energy + e_loc(a,r_old)
call random_number(r_new)
r_new(:) = r_old(:) + dt*(2.d0*r_new(:) - 1.d0)
psi_new = psi(a,r_new)
ratio = (psi_new / psi_old)**2
call random_number(v)
if (v <= ratio) then
n_accep = n_accep + 1_8
r_old(:) = r_new(:)
psi_old = psi_new
endif
end do
energy = energy / dble(nmax)
accep = dble(n_accep) / dble(nmax)
end subroutine metropolis_montecarlo
program qmc
implicit none
double precision, parameter :: a = 1.2d0
double precision, parameter :: dt = 1.0d0
integer*8 , parameter :: nmax = 100000
integer , parameter :: nruns = 30
integer :: irun
double precision :: X(nruns), Y(nruns)
double precision :: ave, err
do irun=1,nruns
call metropolis_montecarlo(a,nmax,dt,X(irun),Y(irun))
enddo
call ave_error(X,nruns,ave,err)
print *, 'E = ', ave, '+/-', err
call ave_error(Y,nruns,ave,err)
print *, 'A = ', ave, '+/-', err
end program qmc
E =  -0.47948142754168033      +/-   4.8410177863919307E-004
A =   0.50762633333333318      +/-   3.4601756760043725E-004
One can use more efficient numerical schemes to move the electrons by choosing a smarter expression for the transition probability.
The Metropolis acceptance step has to be adapted accordingly to ensure that the detailed balance condition is satisfied. This means that the acceptance probability becomes
\[ A(\mathbf{r}_n → \mathbf{r}_{n+1}) = \min \left( 1,
\frac{T(\mathbf{r}_{n+1} → \mathbf{r}_n)\, P(\mathbf{r}_{n+1})}
{T(\mathbf{r}_n → \mathbf{r}_{n+1})\, P(\mathbf{r}_n)}
\right)\,,
\]
where $T(\mathbf{r}_n → \mathbf{r}_{n+1})$ is the probability of proposing a transition from $\mathbf{r}_n$ to $\mathbf{r}_{n+1}$.
In the previous example, we were using uniform sampling in a box centered at the current position. Hence, the transition probability was symmetric,
\[ T(\mathbf{r}_n → \mathbf{r}_{n+1}) = T(\mathbf{r}_{n+1} → \mathbf{r}_n) = \text{constant}\,, \]
so the expression of $A$ reduced to the simple ratio of the squared wave functions.
Now, if instead of drawing uniform random numbers we choose to draw Gaussian random numbers with zero mean and variance $δ t$, the transition probability becomes
\[ T(\mathbf{r}_n → \mathbf{r}_{n+1}) = \frac{1}{(2π\,δ t)^{3/2}} \exp \left[ - \frac{\left( \mathbf{r}_{n+1} - \mathbf{r}_n \right)^2}{2\,δ t} \right]\,. \]
Furthermore, to sample the density even better, we can “push” the electrons into the regions of high probability, and “pull” them away from the low-probability regions. This will increase the acceptance rate and improve the sampling.
To do this, we can use the gradient of the probability density
\[ \frac{∇ [ Ψ^2 ]}{Ψ^2} = 2 \frac{∇ Ψ}{Ψ}\,, \]
and add the so-called drift vector, $\frac{∇ Ψ(\mathbf{r})}{Ψ(\mathbf{r})}$, to the Gaussian move, so that the transition probability becomes
\[ T(\mathbf{r}_n → \mathbf{r}_{n+1}) = \frac{1}{(2π\,δ t)^{3/2}} \exp \left[ - \frac{\left( \mathbf{r}_{n+1} - \mathbf{r}_n - δ t\,\frac{∇ Ψ(\mathbf{r}_n)}{Ψ(\mathbf{r}_n)} \right)^2}{2\,δ t} \right]\,. \]
The corresponding move is proposed as
\[ \mathbf{r}_{n+1} = \mathbf{r}_n + δ t\, \frac{∇ Ψ(\mathbf{r}_n)}{Ψ(\mathbf{r}_n)} + χ \,, \]
where $χ$ is a Gaussian random variable with zero mean and variance $δ t$.
The algorithm of the previous exercise is only slightly modified:
- Evaluate the local energy at $\mathbf{r}_n$ and accumulate it
- Compute a new position $\mathbf{r}' = \mathbf{r}_n + δ t\, \frac{∇ Ψ(\mathbf{r})}{Ψ(\mathbf{r})} + χ$
- Evaluate $Ψ(\mathbf{r}')$ and $\frac{∇ Ψ(\mathbf{r}')}{Ψ(\mathbf{r}')}$ at the new position
- Compute the ratio $A = \frac{T(\mathbf{r}' → \mathbf{r}_n)\, P(\mathbf{r}')}{T(\mathbf{r}_n → \mathbf{r}')\, P(\mathbf{r}_n)}$
- Draw a uniform random number $v ∈ [0,1]$
- If $v ≤ A$, accept the move: set $\mathbf{r}_{n+1} = \mathbf{r}'$
- Else, reject the move: set $\mathbf{r}_{n+1} = \mathbf{r}_n$
To obtain Gaussian-distributed random numbers, you can apply the Box-Muller transform to uniform random numbers:
\begin{eqnarray*}
z_1 &=& \sqrt{-2 \ln u_1}\, \cos(2 π u_2) \\
z_2 &=& \sqrt{-2 \ln u_1}\, \sin(2 π u_2)
\end{eqnarray*}
Below is a Fortran implementation returning a Gaussian-distributed n-dimensional vector $\mathbf{z}$.
Fortran
subroutine random_gauss(z,n)
implicit none
integer, intent(in) :: n
double precision, intent(out) :: z(n)
double precision :: u(n+1)
double precision, parameter :: two_pi = 2.d0*dacos(-1.d0)
integer :: i
call random_number(u)
if (iand(n,1) == 0) then
! n is even
do i=1,n,2
z(i) = dsqrt(-2.d0*dlog(u(i)))
z(i+1) = z(i) * dsin( two_pi*u(i+1) )
z(i) = z(i) * dcos( two_pi*u(i+1) )
end do
else
! n is odd
do i=1,n-1,2
z(i) = dsqrt(-2.d0*dlog(u(i)))
z(i+1) = z(i) * dsin( two_pi*u(i+1) )
z(i) = z(i) * dcos( two_pi*u(i+1) )
end do
z(n) = dsqrt(-2.d0*dlog(u(n)))
z(n) = z(n) * dcos( two_pi*u(n+1) )
end if
end subroutine random_gauss
In Python, you can use the np.random.normal function of NumPy.
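For reference, here is a minimal Python sketch of the same Box-Muller transform (np.random.normal is sufficient in practice; this only mirrors the Fortran routine above):
Python
#!/usr/bin/env python3
import numpy as np

def random_gauss(n):
    # Return n Gaussian-distributed numbers using the Box-Muller transform.
    m = (n + 1) // 2
    u1 = 1.0 - np.random.uniform(size=m)   # in (0,1], avoids log(0)
    u2 = np.random.uniform(size=m)
    radius = np.sqrt(-2.0 * np.log(u1))
    z = np.concatenate((radius * np.cos(2.0 * np.pi * u2),
                        radius * np.sin(2.0 * np.pi * u2)))
    return z[:n]

print(random_gauss(3))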
Write a function computing the drift vector $\frac{∇ Ψ(\mathbf{r})}{Ψ(\mathbf{r})} = -a\, \frac{\mathbf{r}}{|\mathbf{r}|}$.
Python
def drift(a,r):
# TODO
Fortran
subroutine drift(a,r,b)
implicit none
double precision, intent(in) :: a, r(3)
double precision, intent(out) :: b(3)
! TODO
end subroutine drift
Python
def drift(a,r):
ar_inv = -a/np.sqrt(np.dot(r,r))
return r * ar_inv
Fortran
subroutine drift(a,r,b)
implicit none
double precision, intent(in) :: a, r(3)
double precision, intent(out) :: b(3)
double precision :: ar_inv
ar_inv = -a / dsqrt(r(1)*r(1) + r(2)*r(2) + r(3)*r(3))
b(:) = r(:) * ar_inv
end subroutine drift
Python
#!/usr/bin/env python3
from hydrogen import *
from qmc_stats import *
def MonteCarlo(a,nmax,dt):
# TODO
# Run simulation
a = 1.2
nmax = 100000
dt = # TODO
X0 = [ MonteCarlo(a,nmax,dt) for i in range(30)]
# Energy
X = [ x for (x, _) in X0 ]
E, deltaE = ave_error(X)
print(f"E = {E} +/- {deltaE}")
# Acceptance rate
X = [ x for (_, x) in X0 ]
A, deltaA = ave_error(X)
print(f"A = {A} +/- {deltaA}")
Fortran
subroutine variational_montecarlo(a,dt,nmax,energy,accep)
implicit none
double precision, intent(in) :: a, dt
integer*8 , intent(in) :: nmax
double precision, intent(out) :: energy, accep
integer*8 :: istep
integer*8 :: n_accep
double precision :: sq_dt, chi(3)
double precision :: psi_old, psi_new
double precision :: r_old(3), r_new(3)
double precision :: d_old(3), d_new(3)
double precision, external :: e_loc, psi
! TODO
end subroutine variational_montecarlo
program qmc
implicit none
double precision, parameter :: a = 1.2d0
double precision, parameter :: dt = ! TODO
integer*8 , parameter :: nmax = 100000
integer , parameter :: nruns = 30
integer :: irun
double precision :: X(nruns), accep(nruns)
double precision :: ave, err
do irun=1,nruns
call variational_montecarlo(a,dt,nmax,X(irun),accep(irun))
enddo
call ave_error(X,nruns,ave,err)
print *, 'E = ', ave, '+/-', err
call ave_error(accep,nruns,ave,err)
print *, 'A = ', ave, '+/-', err
end program qmc
gfortran hydrogen.f90 qmc_stats.f90 vmc_metropolis.f90 -o vmc_metropolis
./vmc_metropolis
Python
#!/usr/bin/env python3
from hydrogen import *
from qmc_stats import *
def MonteCarlo(a,nmax,dt):
sq_dt = np.sqrt(dt)
energy = 0.
N_accep = 0
r_old = np.random.normal(loc=0., scale=1.0, size=(3))
d_old = drift(a,r_old)
d2_old = np.dot(d_old,d_old)
psi_old = psi(a,r_old)
for istep in range(nmax):
chi = np.random.normal(loc=0., scale=1.0, size=(3))
energy += e_loc(a,r_old)
r_new = r_old + dt * d_old + sq_dt * chi
d_new = drift(a,r_new)
d2_new = np.dot(d_new,d_new)
psi_new = psi(a,r_new)
# Metropolis
prod = np.dot((d_new + d_old), (r_new - r_old))
argexpo = 0.5 * (d2_new - d2_old)*dt + prod
q = psi_new / psi_old
q = np.exp(-argexpo) * q*q
if np.random.uniform() <= q:
N_accep += 1
r_old = r_new
d_old = d_new
d2_old = d2_new
psi_old = psi_new
return energy/nmax, N_accep/nmax
# Run simulation
a = 1.2
nmax = 100000
dt = 1.0
X0 = [ MonteCarlo(a,nmax,dt) for i in range(30)]
# Energy
X = [ x for (x, _) in X0 ]
E, deltaE = ave_error(X)
print(f"E = {E} +/- {deltaE}")
# Acceptance rate
X = [ x for (_, x) in X0 ]
A, deltaA = ave_error(X)
print(f"A = {A} +/- {deltaA}")
E = -0.48034171558629885 +/- 0.0005286038561061781
A = 0.6210380000000001 +/- 0.0005457375900937905
Fortran
subroutine variational_montecarlo(a,dt,nmax,energy,accep)
implicit none
double precision, intent(in) :: a, dt
integer*8 , intent(in) :: nmax
double precision, intent(out) :: energy, accep
integer*8 :: istep
integer*8 :: n_accep
double precision :: sq_dt, chi(3), d2_old, prod, u
double precision :: psi_old, psi_new, d2_new, argexpo, q
double precision :: r_old(3), r_new(3)
double precision :: d_old(3), d_new(3)
double precision, external :: e_loc, psi
sq_dt = dsqrt(dt)
! Initialization
energy = 0.d0
n_accep = 0_8
call random_gauss(r_old,3)
call drift(a,r_old,d_old)
d2_old = d_old(1)*d_old(1) + &
d_old(2)*d_old(2) + &
d_old(3)*d_old(3)
psi_old = psi(a,r_old)
do istep = 1,nmax
energy = energy + e_loc(a,r_old)
call random_gauss(chi,3)
r_new(:) = r_old(:) + dt*d_old(:) + chi(:)*sq_dt
call drift(a,r_new,d_new)
d2_new = d_new(1)*d_new(1) + &
d_new(2)*d_new(2) + &
d_new(3)*d_new(3)
psi_new = psi(a,r_new)
! Metropolis
prod = (d_new(1) + d_old(1))*(r_new(1) - r_old(1)) + &
(d_new(2) + d_old(2))*(r_new(2) - r_old(2)) + &
(d_new(3) + d_old(3))*(r_new(3) - r_old(3))
argexpo = 0.5d0 * (d2_new - d2_old)*dt + prod
q = psi_new / psi_old
q = dexp(-argexpo) * q*q
call random_number(u)
if (u <= q) then
n_accep = n_accep + 1_8
r_old(:) = r_new(:)
d_old(:) = d_new(:)
d2_old = d2_new
psi_old = psi_new
end if
end do
energy = energy / dble(nmax)
accep = dble(n_accep) / dble(nmax)
end subroutine variational_montecarlo
program qmc
implicit none
double precision, parameter :: a = 1.2d0
double precision, parameter :: dt = 1.0d0
integer*8 , parameter :: nmax = 100000
integer , parameter :: nruns = 30
integer :: irun
double precision :: X(nruns), accep(nruns)
double precision :: ave, err
do irun=1,nruns
call variational_montecarlo(a,dt,nmax,X(irun),accep(irun))
enddo
call ave_error(X,nruns,ave,err)
print *, 'E = ', ave, '+/-', err
call ave_error(accep,nruns,ave,err)
print *, 'A = ', ave, '+/-', err
end program qmc
E =  -0.47940635575542706      +/-   5.5613594433433764E-004
A =   0.62037333333333333      +/-   4.8970160591451110E-004
As we have seen, Variational Monte Carlo is a powerful method to compute integrals in large dimensions. It is often used in cases where the expression of the wave function is such that the integrals can’t be evaluated (multi-centered Slater-type orbitals, correlation factors, etc).
Diffusion Monte Carlo is different. It goes beyond the computation of the integrals associated with an input wave function, and aims at finding a near-exact numerical solution to the Schrödinger equation.
Consider the time-dependent Schrödinger equation:
\[ i\,\frac{∂ Ψ(\mathbf{r},t)}{∂ t} = (\hat{H} - E_{\rm ref})\, Ψ(\mathbf{r},t)\,, \]
where we introduced a shift in the energy, $E_{\rm ref}$, for reasons which will become apparent below.
We can expand a given starting wave function, $Ψ(\mathbf{r},0)$, in the basis of the eigenstates $Φ_k$ of the time-independent Hamiltonian, with energies $E_k$:
\[ Ψ(\mathbf{r},0) = ∑_k a_k\, Φ_k(\mathbf{r})\,. \]
The solution of the Schrödinger equation at time $t$ is
\[ Ψ(\mathbf{r},t) = ∑_k a_k \exp \left( -i\, (E_k - E_{\rm ref})\, t \right) Φ_k(\mathbf{r})\,. \]
Now, if we replace the time variable $t$ by an imaginary time variable $τ = i\,t$, we obtain
\[ -\frac{∂ ψ(\mathbf{r}, τ)}{∂ τ} = (\hat{H} - E_{\rm ref})\, ψ(\mathbf{r}, τ)\,, \]
where $ψ(\mathbf{r},τ) = Ψ(\mathbf{r},-i\,τ)$ and
\begin{eqnarray*}
ψ(\mathbf{r},τ) &=& ∑_k a_k \exp( -(E_k - E_{\rm ref})\, τ)\, Φ_k(\mathbf{r}) \\
&=& \exp(-(E_0 - E_{\rm ref})\, τ) ∑_k a_k \exp( -(E_k - E_0)\, τ)\, Φ_k(\mathbf{r})\,.
\end{eqnarray*}
For large positive values of $τ$, all the terms with $E_k > E_0$ are exponentially damped, so $ψ(\mathbf{r},τ)$ becomes proportional to the ground state $Φ_0(\mathbf{r})$; choosing $E_{\rm ref}$ close to $E_0$ keeps the overall amplitude of $ψ$ roughly constant during the propagation.
The diffusion equation of particles is given by
\[ \frac{∂ ψ(\mathbf{r},t)}{∂ t} = D\, Δ ψ(\mathbf{r},t) \]
where $D$ is the diffusion coefficient. If we write the imaginary-time Schrödinger equation as
\[ \frac{∂ ψ(\mathbf{r}, τ)}{∂ τ} = \left(\frac{1}{2}Δ - [V(\mathbf{r}) - E_{\rm ref}]\right) ψ(\mathbf{r}, τ)\,, \]
it can be identified as the combination of:
- a diffusion equation (Laplacian)
- an equation whose solution is an exponential (potential)
The diffusion equation can be simulated by a Brownian motion:
\[ \mathbf{r}_{n+1} = \mathbf{r}_n + \sqrt{δ t}\, χ\,, \]
where $χ$ is a Gaussian random variable with zero mean and unit variance, and the potential term can be simulated either by creating or destroying particles over time (a branching process), or by weighting each trajectory with the cumulative product of weights
\[ ∏_i \exp \left( - (V(\mathbf{r}_i) - E_\text{ref})\, δ t \right)\,. \]
We note that the ground-state wave function of a Fermionic system is
antisymmetric and changes sign. Therefore, its interpretation as a probability
distribution is somewhat problematic. In fact, mathematically, since
the Bosonic ground state is lower in energy than the Fermionic one, for
large $τ$ the imaginary-time propagation converges to the Bosonic ground state instead of the desired Fermionic one.
For the systems you will study, this is not an issue:
- Hydrogen atom: You only have one electron!
- Two-electron system ($H_2$ or He): the ground-state wave function is antisymmetric in the spin variables but symmetric in the space ones.
Therefore, in both cases, you are dealing with a “Bosonic” ground state.
In a molecular system, the potential is far from being constant
and, in fact, diverges at the inter-particle coalescence points. Hence,
it results in very large fluctuations of the weights associated with
the potential, making the calculations impossible in practice.
Fortunately, if we multiply the Schrödinger equation by a chosen trial wave function $Ψ_T(\mathbf{r})$ (importance sampling), we obtain
\[ -\frac{∂ ψ(\mathbf{r},τ)}{∂ τ} Ψ_T(\mathbf{r}) = \left[ -\frac{1}{2} Δ ψ(\mathbf{r},τ) + V(\mathbf{r}) ψ(\mathbf{r},τ) \right] Ψ_T(\mathbf{r}) \]
Defining $Π(\mathbf{r},τ) = ψ(\mathbf{r},τ)\, Ψ_T(\mathbf{r})$, this equation can be rearranged (see the derivation below) as
\[ -\frac{∂ Π(\mathbf{r},τ)}{∂ τ} = -\frac{1}{2}\, Δ Π(\mathbf{r},τ) + ∇ \left[ Π(\mathbf{r},τ)\, \frac{∇ Ψ_T(\mathbf{r})}{Ψ_T(\mathbf{r})} \right] + (E_L(\mathbf{r}) - E_{\rm ref})\, Π(\mathbf{r},τ) \]
The new “kinetic energy” can be simulated by the drift-diffusion
scheme presented in the previous section (VMC).
The new “potential” is the local energy, which has smaller fluctuations
when $Ψ_T$ gets closer to the exact ground-state wave function.
This equation generates the N-electron density $Π$, which is the product of the ground-state wave function with the trial wave function; it does not directly give the ground-state energy.
To this aim, we use the mixed estimator of the energy:
\begin{eqnarray*}
E(τ) &=& \frac{\langle ψ(τ) | \hat{H} | Ψ_T \rangle}{\langle ψ(τ) | Ψ_T \rangle}
= \frac{∫ ψ(\mathbf{r},τ)\, \hat{H}\, Ψ_T(\mathbf{r})\, d\mathbf{r}}
{∫ ψ(\mathbf{r},τ)\, Ψ_T(\mathbf{r})\, d\mathbf{r}} \\
&=& \frac{∫ ψ(\mathbf{r},τ)\, Ψ_T(\mathbf{r})\, E_L(\mathbf{r})\, d\mathbf{r}}
{∫ ψ(\mathbf{r},τ)\, Ψ_T(\mathbf{r})\, d\mathbf{r}} \,.
\end{eqnarray*}
For large $τ$, we have
\[ Π(\mathbf{r},τ) = ψ(\mathbf{r},τ)\, Ψ_T(\mathbf{r}) → Φ_0(\mathbf{r})\, Ψ_T(\mathbf{r})\,, \]
and, using that $\hat{H}$ is Hermitian and that $Φ_0$ is an eigenfunction of $\hat{H}$ with eigenvalue $E_0$, we obtain
\[ E(τ) = \frac{\langle ψ_τ | \hat{H} | Ψ_T \rangle} {\langle ψ_τ | Ψ_T \rangle} = \frac{\langle Ψ_T | \hat{H} | ψ_τ \rangle} {\langle Ψ_T | ψ_τ \rangle} → E_0 \frac{\langle Ψ_T | Φ_0 \rangle} {\langle Ψ_T | Φ_0 \rangle} = E_0 \]
Therefore, we can compute the energy within DMC by generating the density $Π$ with random walks, and averaging the local energies computed with the trial wave function.
For completeness, here is the derivation of the importance-sampled equation. Multiplying the imaginary-time Schrödinger equation by $Ψ_T$,
\[ -\frac{∂ ψ(\mathbf{r},τ)}{∂ τ} Ψ_T(\mathbf{r}) = \left[ -\frac{1}{2} Δ ψ(\mathbf{r},τ) + V(\mathbf{r}) ψ(\mathbf{r},τ) \right] Ψ_T(\mathbf{r})\,, \]
and using the identity $Δ\big(ψ\,Ψ_T\big) = Ψ_T\, Δψ + ψ\, ΔΨ_T + 2\,∇ψ · ∇Ψ_T$, we get
\[ -\frac{∂ \big[ ψ(\mathbf{r},τ) Ψ_T(\mathbf{r}) \big]}{∂ τ} = -\frac{1}{2} \Big( Δ \big[ ψ(\mathbf{r},τ) Ψ_T(\mathbf{r}) \big] - ψ(\mathbf{r},τ) Δ Ψ_T(\mathbf{r}) - 2 ∇ ψ(\mathbf{r},τ) ∇ Ψ_T(\mathbf{r}) \Big) + V(\mathbf{r}) ψ(\mathbf{r},τ) Ψ_T(\mathbf{r}) \]
\[ -\frac{∂ \big[ ψ(\mathbf{r},τ) Ψ_T(\mathbf{r}) \big]}{∂ τ} = -\frac{1}{2} Δ \big[ψ(\mathbf{r},τ) Ψ_T(\mathbf{r}) \big] + \frac{1}{2} ψ(\mathbf{r},τ) Δ Ψ_T(\mathbf{r}) + Ψ_T(\mathbf{r})∇ ψ(\mathbf{r},τ) \frac{∇ Ψ_T(\mathbf{r})}{Ψ_T(\mathbf{r})} + V(\mathbf{r}) ψ(\mathbf{r},τ) Ψ_T(\mathbf{r}) \]
\[ -\frac{∂ \big[ ψ(\mathbf{r},τ) Ψ_T(\mathbf{r}) \big]}{∂ τ} = -\frac{1}{2} Δ \big[ψ(\mathbf{r},τ) Ψ_T(\mathbf{r}) \big] + ψ(\mathbf{r},τ) Δ Ψ_T(\mathbf{r}) + Ψ_T(\mathbf{r})∇ ψ(\mathbf{r},τ) \frac{∇ Ψ_T(\mathbf{r})}{Ψ_T(\mathbf{r})} + E_L(\mathbf{r}) ψ(\mathbf{r},τ) Ψ_T(\mathbf{r}) \]
\[ -\frac{∂ \big[ ψ(\mathbf{r},τ) Ψ_T(\mathbf{r}) \big]}{∂ τ} = -\frac{1}{2} Δ \big[ψ(\mathbf{r},τ) Ψ_T(\mathbf{r}) \big] + ∇ \left[ ψ(\mathbf{r},τ) Ψ_T(\mathbf{r}) \frac{∇ Ψ_T(\mathbf{r})}{Ψ_T(\mathbf{r})} \right] + E_L(\mathbf{r}) ψ(\mathbf{r},τ) Ψ_T(\mathbf{r}) \]
Defining $Π(\mathbf{r},τ) = ψ(\mathbf{r},τ)\, Ψ_T(\mathbf{r})$, we recover the importance-sampled equation:
\[ -\frac{∂ Π(\mathbf{r},τ)}{∂ τ} = -\frac{1}{2} Δ Π(\mathbf{r},τ) + ∇ \left[ Π(\mathbf{r},τ) \frac{∇ Ψ_T(\mathbf{r})}{Ψ_T(\mathbf{r})} \right] + E_L(\mathbf{r}) Π(\mathbf{r},τ) \]
Instead of simulating the branching process with a variable number of particles, as in the Diffusion Monte Carlo (DMC) algorithm, we use a variant called pure Diffusion Monte Carlo (PDMC), where the potential term is treated as a cumulative product of weights:
\begin{eqnarray*} W(\mathbf{r}_n, τ) = ∏_{i=1}^{n} \exp \left( -δ t\, (E_L(\mathbf{r}_i) - E_\text{ref}) \right) = ∏_{i=1}^{n} w(\mathbf{r}_i)\,, \end{eqnarray*}
where $δ t$ is the time step of the random walk, $τ = n\,δ t$ is the projection time, and $E_\text{ref}$ is the reference energy.
The PDMC algorithm is less stable than the DMC algorithm: it requires a value of $E_\text{ref}$ close to the exact energy, and a projection time that is not too long, otherwise the weights undergo very large fluctuations.
- Start with $W(\mathbf{r}_0)=1$, $τ_0 = 0$
- Evaluate the local energy at $\mathbf{r}_n$
- Compute the contribution to the weight $w(\mathbf{r}_n) = \exp(-δ t\,(E_L(\mathbf{r}_n) - E_\text{ref}))$
- Update $W(\mathbf{r}_n) = W(\mathbf{r}_{n-1}) × w(\mathbf{r}_n)$
- Accumulate the weighted energy $W(\mathbf{r}_n) × E_L(\mathbf{r}_n)$, and the weight $W(\mathbf{r}_n)$ for the normalization
- Update $τ_n = τ_{n-1} + δ t$
- If $τ_n > τ_\text{max}$ ($τ_\text{max}$ is an input parameter), the long projection time has been reached and we can start a new trajectory from the current position: reset $W(\mathbf{r}_n) = 1$ and $τ_n = 0$
- Compute a new position $\mathbf{r}' = \mathbf{r}_n + δ t\, \frac{∇ Ψ(\mathbf{r})}{Ψ(\mathbf{r})} + χ$
- Evaluate $Ψ(\mathbf{r}')$ and $\frac{∇ Ψ(\mathbf{r}')}{Ψ(\mathbf{r}')}$ at the new position
- Compute the ratio $A = \frac{T(\mathbf{r}' → \mathbf{r}_n)\, P(\mathbf{r}')}{T(\mathbf{r}_n → \mathbf{r}')\, P(\mathbf{r}_n)}$
- Draw a uniform random number $v ∈ [0,1]$
- If $v ≤ A$, accept the move: set $\mathbf{r}_{n+1} = \mathbf{r}'$
- Else, reject the move: set $\mathbf{r}_{n+1} = \mathbf{r}_n$
Some comments are needed:
- You estimate the energy as
\begin{eqnarray*} E = \frac{∑_{k=1}^{N_{\rm MC}} E_L(\mathbf{r}_k)\, W(\mathbf{r}_k, k\,δ t)}{∑_{k=1}^{N_{\rm MC}} W(\mathbf{r}_k, k\,δ t)} \end{eqnarray*}
- The result will be affected by a time-step error (due to the finite size of $δ t$) coming from the discretization of the integral, and one has in principle to extrapolate to the limit $δ t → 0$. This amounts to fitting the energy computed for multiple values of $δ t$ (a minimal fit sketch is given after this list). Here, you will be using a small enough time step, so you should not worry about the extrapolation.
- The accept/reject step (steps 9-12 in the algorithm) is in principle not needed for the correctness of the DMC algorithm. However, its use reduces significantly the time-step error.
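If you do want to extrapolate, the procedure is simply a fit of the energy as a function of the time step. Below is a minimal sketch with placeholder numbers (the values are illustrative inputs, not actual results):
Python
#!/usr/bin/env python3
import numpy as np

# Placeholder (dt, E) pairs standing in for separate PDMC runs -- not real results.
dts      = np.array([0.100, 0.050, 0.025])
energies = np.array([-0.497, -0.499, -0.4995])

# Linear fit E(dt) = c1*dt + c0; the dt -> 0 limit is the constant term c0.
c1, c0 = np.polyfit(dts, energies, 1)
print(f"Extrapolated energy at dt = 0: {c0:.5f}")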
Python
from hydrogen import *
from qmc_stats import *
def MonteCarlo(a, nmax, dt, tau, Eref):
# TODO
# Run simulation
a = 1.2
nmax = 100000
dt = 0.05
tau = 100.
E_ref = -0.5
X0 = [ MonteCarlo(a, nmax, dt, tau, E_ref) for i in range(30)]
# Energy
X = [ x for (x, _) in X0 ]
E, deltaE = ave_error(X)
print(f"E = {E} +/- {deltaE}")
# Acceptance rate
X = [ x for (_, x) in X0 ]
A, deltaA = ave_error(X)
print(f"A = {A} +/- {deltaA}")
Fortran
subroutine pdmc(a, dt, nmax, energy, accep, tau, E_ref)
implicit none
double precision, intent(in) :: a, dt, tau
integer*8 , intent(in) :: nmax
double precision, intent(out) :: energy, accep
double precision, intent(in) :: E_ref
integer*8 :: istep
integer*8 :: n_accep
double precision :: sq_dt, chi(3)
double precision :: psi_old, psi_new
double precision :: r_old(3), r_new(3)
double precision :: d_old(3), d_new(3)
double precision, external :: e_loc, psi
! TODO
end subroutine pdmc
program qmc
implicit none
double precision, parameter :: a = 1.2d0
double precision, parameter :: dt = 0.05d0
double precision, parameter :: E_ref = -0.5d0
double precision, parameter :: tau = 100.d0
integer*8 , parameter :: nmax = 100000
integer , parameter :: nruns = 30
integer :: irun
double precision :: X(nruns), accep(nruns)
double precision :: ave, err
do irun=1,nruns
call pdmc(a, dt, nmax, X(irun), accep(irun), tau, E_ref)
enddo
call ave_error(X,nruns,ave,err)
print *, 'E = ', ave, '+/-', err
call ave_error(accep,nruns,ave,err)
print *, 'A = ', ave, '+/-', err
end program qmc
gfortran hydrogen.f90 qmc_stats.f90 pdmc.f90 -o pdmc
./pdmc
Python
#!/usr/bin/env python3
from hydrogen import *
from qmc_stats import *
def MonteCarlo(a, nmax, dt, tau, Eref):
sq_dt = np.sqrt(dt)
energy = 0.
N_accep = 0
normalization = 0.
w = 1.
tau_current = 0.
r_old = np.random.normal(loc=0., scale=1.0, size=(3))
d_old = drift(a,r_old)
d2_old = np.dot(d_old,d_old)
psi_old = psi(a,r_old)
for istep in range(nmax):
el = e_loc(a,r_old)
w *= np.exp(-dt*(el - Eref))
normalization += w
energy += w * el
tau_current += dt
# Reset when tau is reached
if tau_current > tau:
w = 1.
tau_current = 0.
chi = np.random.normal(loc=0., scale=1.0, size=(3))
r_new = r_old + dt * d_old + sq_dt * chi
d_new = drift(a,r_new)
d2_new = np.dot(d_new,d_new)
psi_new = psi(a,r_new)
# Metropolis
prod = np.dot((d_new + d_old), (r_new - r_old))
argexpo = 0.5 * (d2_new - d2_old)*dt + prod
q = psi_new / psi_old
q = np.exp(-argexpo) * q*q
if np.random.uniform() <= q:
N_accep += 1
r_old = r_new
d_old = d_new
d2_old = d2_new
psi_old = psi_new
return energy/normalization, N_accep/nmax
# Run simulation
a = 1.2
nmax = 100000
dt = 0.05
tau = 100.
E_ref = -0.5
X0 = [ MonteCarlo(a, nmax, dt, tau, E_ref) for i in range(30)]
# Energy
X = [ x for (x, _) in X0 ]
E, deltaE = ave_error(X)
print(f"E = {E} +/- {deltaE}")
# Acceptance rate
X = [ x for (_, x) in X0 ]
A, deltaA = ave_error(X)
print(f"A = {A} +/- {deltaA}")
Fortran
subroutine pdmc(a, dt, nmax, energy, accep, tau, E_ref)
implicit none
double precision, intent(in) :: a, dt, tau
integer*8 , intent(in) :: nmax
double precision, intent(out) :: energy, accep
double precision, intent(in) :: E_ref
integer*8 :: istep
integer*8 :: n_accep
double precision :: sq_dt, chi(3), d2_old, prod, u
double precision :: psi_old, psi_new, d2_new, argexpo, q
double precision :: r_old(3), r_new(3)
double precision :: d_old(3), d_new(3)
double precision :: e, w, norm, tau_current
double precision, external :: e_loc, psi
sq_dt = dsqrt(dt)
! Initialization
energy = 0.d0
n_accep = 0_8
norm = 0.d0
w = 1.d0
tau_current = 0.d0
call random_gauss(r_old,3)
call drift(a,r_old,d_old)
d2_old = d_old(1)*d_old(1) + &
d_old(2)*d_old(2) + &
d_old(3)*d_old(3)
psi_old = psi(a,r_old)
do istep = 1,nmax
e = e_loc(a,r_old)
w = w * dexp(-dt*(e - E_ref))
norm = norm + w
energy = energy + w*e
tau_current = tau_current + dt
! Reset when tau is reached
if (tau_current > tau) then
w = 1.d0
tau_current = 0.d0
endif
call random_gauss(chi,3)
r_new(:) = r_old(:) + dt*d_old(:) + chi(:)*sq_dt
call drift(a,r_new,d_new)
d2_new = d_new(1)*d_new(1) + &
d_new(2)*d_new(2) + &
d_new(3)*d_new(3)
psi_new = psi(a,r_new)
! Metropolis
prod = (d_new(1) + d_old(1))*(r_new(1) - r_old(1)) + &
(d_new(2) + d_old(2))*(r_new(2) - r_old(2)) + &
(d_new(3) + d_old(3))*(r_new(3) - r_old(3))
argexpo = 0.5d0 * (d2_new - d2_old)*dt + prod
q = psi_new / psi_old
q = dexp(-argexpo) * q*q
call random_number(u)
if (u <= q) then
n_accep = n_accep + 1_8
r_old(:) = r_new(:)
d_old(:) = d_new(:)
d2_old = d2_new
psi_old = psi_new
end if
end do
energy = energy / norm
accep = dble(n_accep) / dble(nmax)
end subroutine pdmc
program qmc
implicit none
double precision, parameter :: a = 1.2d0
double precision, parameter :: dt = 0.05d0
double precision, parameter :: E_ref = -0.5d0
double precision, parameter :: tau = 100.d0
integer*8 , parameter :: nmax = 100000
integer , parameter :: nruns = 30
integer :: irun
double precision :: X(nruns), accep(nruns)
double precision :: ave, err
do irun=1,nruns
call pdmc(a, dt, nmax, X(irun), accep(irun), tau, E_ref)
enddo
call ave_error(X,nruns,ave,err)
print *, 'E = ', ave, '+/-', err
call ave_error(accep,nruns,ave,err)
print *, 'A = ', ave, '+/-', err
end program qmc
E =  -0.49963953547336709      +/-   6.8755513992017491E-004
A =   0.98963533333333342      +/-   6.3052128284666221E-005
Modify your PDMC code to treat one of the following systems, and compute its ground-state energy:
- the Helium atom, or
- the H_2 molecule at $R = 1.401$ bohr.
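As a starting point, here is a minimal sketch of the potential energy for H_2 with two electrons (the function name, the placement of the nuclei on the z axis, and the use of np.linalg.norm are illustrative assumptions, not the tutorial's reference solution):
Python
#!/usr/bin/env python3
import numpy as np

R = 1.401                            # internuclear distance (bohr)
nucleus_A = np.array([0., 0., 0.])
nucleus_B = np.array([0., 0., R])

def potential(r1, r2):
    # Total Coulomb potential for two electrons at r1 and r2 (atomic units).
    v_en = 0.
    for r in (r1, r2):
        v_en -= 1. / np.linalg.norm(r - nucleus_A)
        v_en -= 1. / np.linalg.norm(r - nucleus_B)
    v_ee = 1. / np.linalg.norm(r1 - r2)   # electron-electron repulsion
    v_nn = 1. / R                         # nucleus-nucleus repulsion
    return v_en + v_ee + v_nn

# Example evaluation at two arbitrary electron positions
print(potential(np.array([0.1, 0.0, 0.2]), np.array([-0.3, 0.4, 1.0])))

The wave function, kinetic energy, and drift must be generalized in the same spirit, summing the contributions of the two electrons.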
Time | Section |
---|---|
<2021-02-04 Thu 09:00-10:30> | Lecture |
<2021-02-04 Thu 11:00-11:20> | 2.1 |
<2021-02-04 Thu 11:20-11:40> | 2.2 |
<2021-02-04 Thu 11:40-12:15> | 2.3 |
<2021-02-04 Thu 12:15-12:30> | 2.4 |
<2021-02-04 Thu 14:00-14:10> | 3.1 |
<2021-02-04 Thu 14:10-14:30> | 3.2 |
<2021-02-04 Thu 14:30-15:30> | 3.3 |
<2021-02-04 Thu 15:30-16:30> | 3.4 |
<2021-02-04 Thu 16:30-18:30> | 4.5 |
The TREX: Targeting Real Chemical Accuracy at the Exascale project has received funding from the European Union's Horizon 2020 Research and Innovation program under grant agreement no. 952165. The content of this document does not represent the opinion of the European Union, and the European Union is not responsible for any use that might be made of such content.