Installing Simulation Packages

This web page illustrates the installation procedures for several useful open-source simulation packages.

I. Install Quantum Espresso


Quantum Espresso (QE) is an integrated suite of open-source computer codes for ab initio electronic-structure calculations and materials modeling. It is based on density-functional theory, plane waves, and pseudopotentials. Typical jobs that can be done with QE are: 1) Ground-State Calculations; 2) Structural Optimization; 3) Transition States and Minimum Energy Paths; 4) Ab-Initio Molecular Dynamics; 5) Response Properties; 6) Spectroscopic Properties; and 7) Quantum Transport.

($ denotes a command entered at the Ubuntu shell prompt.)

To install the OpenMPI version of Quantum Espresso on Ubuntu 16.04, follow these steps:
1) Download espresso-5.4.0.tar.gz from the QE website and untar the file
 $ tar zxf espresso-5.4.0.tar.gz
2) $ cd espresso-5.4.0
 $ make distclean
3) $ export Lib="/usr/local/lib"
 $ export OBLAS="/usr/local/lib/OpenBLAS"
 $ export omp="/usr/local/openmpi"
4) $ ./configure --prefix=/opt/QE540 --enable-parallel=yes CC=$omp/bin/mpicc MPIF90=$omp/bin/mpif90 --with-scalapack=yes --with-elpa=yes LDFLAGS="-L$Lib -L$OBLAS/lib -L$Lib/elpa/lib -L$omp/lib" LIBS="-lmpi -lscalapack -lopenblas -lelpa"
5) $ make -j $n all
6) $ sudo make install
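
In the make command above, $n stands for the number of parallel build jobs; it is not set by these steps, so define it before step 5 and, after installation, confirm that the executables landed in the chosen prefix (8 jobs is an illustrative value):
 $ export n=8
 $ ls /opt/QE540/bin/pw.x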

To run the code, type:    $ mpirun /opt/QE540/bin/pw.x < mater.in > mater.out
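
For example, a run with an explicit MPI process count and OpenMP thread count might look like this (4 processes and 2 threads per process are illustrative choices, not values taken from the build above):
 $ export OMP_NUM_THREADS=2
 $ mpirun -np 4 /opt/QE540/bin/pw.x < mater.in > mater.out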

Tutorial Example of QE: For an introduction to running Quantum Espresso, here is a PDF file that gives a step-by-step introduction. However, the most efficient way to run QE is to take advantage of shell scripts. Script 1 (PDF) tests the SCF convergence as a function of the plane-wave cutoff energy (Ecut), and Script 2 (PDF) computes the equation of state (EOS) of a silicon crystal. These two files can be modified to fit calculations for different applications.
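
As an illustration of what such a cutoff-energy scan can look like, here is a minimal sketch, assuming a template input si.scf.in whose ecutwfc value is written as the placeholder string ECUT (the file name and placeholder are assumptions for illustration, not part of the linked scripts):
 $ for ec in 20 30 40 50 60; do
     sed "s/ECUT/$ec/" si.scf.in > si_$ec.in        # insert the plane-wave cutoff (Ry)
     mpirun -np 4 /opt/QE540/bin/pw.x < si_$ec.in > si_$ec.out
     grep '!' si_$ec.out                            # the converged total energy line starts with "!"
   done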

Typical output of Quantum Espresso executed on multicore CPU nodes with OpenMPI:
A) Run the run_EosTest_si.sh script (PDF) on the dell-m4800 node
  (4 MPI processes, 2 threads per MPI process, effective 8 cores)-->
  This script took 12 seconds to complete.
  Parallel version (MPI & OpenMP), running on  8 processor cores
  Number of MPI processes:  4
  Threads/MPI process:  2

  PWSCF : 0.45s CPU 0.25s WALL
  This run was terminated on: 10:34:27 16Sep2015

B) Run the run_EosTest_si.sh script on the dell-t7500 node
  (6 MPI processes, 1 thread per MPI process, effective 6 cores)-->
  This script took 17 seconds to complete.
  Parallel version (MPI & OpenMP), running on  6 processor cores
  Number of MPI processes:    6
  Threads/MPI process:    1

  PWSCF : 0.32s CPU 0.34s WALL
  This run was terminated on: 10:35:11 16Sep2015

C) Run the run_EosTest_si.sh script on both nodes
  (5 MPI processes, 2 threads per MPI process, effective 10 cores)-->
  This script took 48 seconds to complete.
  Parallel version (MPI & OpenMP), running on  10 processor cores
  Number of MPI processes:    5
  Threads/MPI process:  2

  PWSCF : 5.14s CPU 5.17s WALL
  This run was terminated on: 10:38:17 16Sep2015
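
To run across both nodes, as in case C, the MPI processes have to be distributed over the machines; with OpenMPI this is usually done through a host file (the host names and slot counts below are assumptions matching the nodes above):
 $ cat hosts.txt
 dell-m4800 slots=2
 dell-t7500 slots=3
 $ mpirun -np 5 --hostfile hosts.txt /opt/QE540/bin/pw.x < mater.in > mater.out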

II. Install CP2K


CP2K is a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological systems. It provides a general framework for different methods, such as density functional theory using a mixed Gaussian and plane waves approach (GPW) and classical pair and many-body potentials.

1) $ git clone https://github.com/cp2k/cp2k
2) $ cd cp2k/makefiles   (the makefiles directory may sit one level deeper in some CP2K versions)
3) $ make -j $n ARCH=Linux-x86-64-gfortran VERSION=psmp
 To remove only object and module files but keep the executable for the Linux-x86-64-gfortran.psmp (ARCH.VERSION) build, use:
 $ make ARCH=Linux-x86-64-gfortran VERSION=psmp clean
  or
 $ make -j $n ARCH=mpi-cuda VERSION=psmp
 $ make ARCH=mpi-cuda VERSION=psmp clean
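
With the source layout assumed by these steps, the compiled binaries are placed under cp2k/exe/<ARCH>; a quick check after the build, run from the makefiles directory (the paths follow from the ARCH names above):
 $ ls ../exe/Linux-x86-64-gfortran/cp2k.psmp
 $ ls ../exe/mpi-cuda/cp2k.psmp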

To run the executable, type
 $ cwd=~/WS_cp2k
 $ export exedir="/opt/cp2k/exe/mpi-cuda"
 $ mpirun -np 4 --bind-to core ${exedir}/cp2k.psmp < ${cwd}/neb1.inp > ${cwd}/neb1.out
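
Because the psmp binary is an MPI+OpenMP build, the number of OpenMP threads per rank can also be pinned explicitly (2 threads is an illustrative choice):
 $ export OMP_NUM_THREADS=2
 $ mpirun -np 4 --bind-to core ${exedir}/cp2k.psmp < ${cwd}/neb1.inp > ${cwd}/neb1.out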

Tutorial Example of CP2K: For an introduction to running CP2K, here is a PDF file that gives a step-by-step introduction to a single energy minimization on a hybrid multicore CPU and GPU parallel computation platform. The second PDF file shows you how to do a basic ab initio analysis of the structure and dynamics of a liquid (32 water molecules) and, after the CP2K run, how to use VMD to visualize the trajectory data and simulate the IR spectrum. These two files can be modified to fit calculations for different applications.
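
By default a CP2K MD run writes its trajectory to <PROJECT_NAME>-pos-1.xyz, which VMD can load directly (the project name H2O-32 below is an assumption for illustration):
 $ vmd H2O-32-pos-1.xyz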

Typical output of cp2k run for 32 H2O molecules on hybrid multicore CPU and nVidia GPU nodes with OpenMPI:
A) Use this script file (PDF) on the dell-m4800 node
 cp2k run took 8 seconds to complete 500 MD steps (2.5 fs/step)
 **** **** ****** ** PROGRAM STARTED AT 2015-09-16 10:26:04.658
 ***** ** *** *** ** PROGRAM STARTED ON dell-m4800
 GLOBAL| Total number of message passing processes 4
 GLOBAL| Number of threads for this process  2
 GLOBAL| This output is from process    0

B) Use this script file (PDF) on the dell-t7500 node
 cp2k run took 14 seconds to complete 500 MD steps (2.5 fs/step)
 **** **** ****** ** PROGRAM STARTED AT 2015-09-16 10:12:11.998
 ***** ** *** *** ** PROGRAM STARTED ON dell-t7500
 GLOBAL| Total number of message passing processes   6
 GLOBAL| Number of threads for this process    1
 GLOBAL| This output is from process    0

C) Use this script file (PDF) on both nodes

III. Install OpenMM MD


OpenMM is a high performance toolkit for molecular simulation. Use it as a library, or as an application. Install OpenMM from source on Ubuntu 16.04 as follows:

1) Download the source code to a folder:
 $ git clone https://github.com/pandegroup/openmm
2) $ cd openmm
3) $ mkdir build
4) $ cd build

5) $ ccmake -i /home/jyhuang/Downloads/openmm
  Execute ccmake and hit "c" to configure and "g" to generate the cmake configuration file. Select the options:
  OpenMM_PME_Plugin  OFF
  OpenMM_BUILD_SHARED_LIB  ON
  OpenMM_BUILD_STATIC_LIB  OFF

6) $ cmake ..

Build and install the package:
7) $ make -j $n
8) $ sudo make install
9) $ sudo make PythonInstall

Test the installed OpenMM package with
 $ make test
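
Depending on the OpenMM version, a small installation test that lists the available compute platforms (Reference, CPU, CUDA, OpenCL) may also be included:
 $ python -m simtk.testInstallation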

Tutorial Example of OpenMM: OpenMM can be easily executed in the IPython Notebook environment. A website has been established to help users generate their ipynb scripts. The resulting MD trajectories can be written to a file in a variety of data formats, so the simulation results (trajectories) can be analyzed with MDTraj, VMD, and Bio3D. Furthermore, Markov state models can be built to retrieve microstates/macrostates and the corresponding transition pathways from MD trajectories. MSMBuilder and PyEMMA are two useful open-source packages that implement Markov model algorithms on the IPython platform.
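
The analysis packages mentioned above can be installed from PyPI into the same Python environment (package names as published on PyPI; versions are not pinned here):
 $ pip install mdtraj pyemma msmbuilder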

IV. Install NAMD


NAMD is a parallel molecular dynamics code designed for high-performance MD simulation of large biomolecular systems. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis.

1) Install NAMD from binaries. A NAMD binary distribution need only be untarred and can be run directly in the resulting directory.
 $ tar xzf NAMD_CVS-2016-02-13_Linux-x86_64-multicore-CUDA.tar.gz
2) $ sudo mv NAMD_CVS-2016-02-13_Linux-x86_64-multicore-CUDA /opt/NAMD

Use VMD to prepare the needed coordinate file (xxx.pdb) and structure file (xxx.psf). Prepare xxx.conf and run NAMD with the command:
 $ /opt/NAMD/namd2 xxx.conf > xxx.log
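
For a multicore-CUDA binary like the one installed above, the number of worker threads is usually given with +p, and +idlepoll is commonly used for GPU runs (8 threads is an illustrative value):
 $ /opt/NAMD/namd2 +p8 +idlepoll xxx.conf > xxx.log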

Tutorial Example of NAMD: For an introduction to running NAMD, here is a PDF file that gives a step-by-step introduction to performing an MD simulation of the AdK protein on a hybrid multicore CPU and GPU parallel computation platform. The step-by-step tutorial example seamlessly integrates the use of VMD and NAMD. It also describes how to use Bio3D to retrieve useful information from trajectories.

V. Install Plumed


Plumed is an open-source library for free-energy calculations in molecular systems; it works together with some of the most popular molecular dynamics engines for steered MD or metadynamics studies.

The following libraries are needed to install Plumed: 1) optimized BLAS and LAPACK libraries (/usr/local/lib/OpenBLAS/lib); 2) an MPI library to run parallel simulations (/usr/local/openmpi/lib); 3) the matheval library (www.gnu.org/software/libmatheval) to use algebraic collective variables (it can be installed on Ubuntu via: $ sudo apt-get install libmatheval-dev).

1) Download the source code:
 $ cd ~/Downloads
 $ git clone https://github.com/plumed/plumed2
 $ cd plumed2
 $ bash release.sh
 choose the most recent 2.2b version

2) Compile PLUMED 2.2 with MPI support and install it so that you can later configure gromacs-5.1.3 with MPI. First, configure for your system:
 $ make distclean
 $ export CC="/usr/local/openmpi/bin/mpicc"
 $ export CXX="/usr/local/openmpi/bin/mpic++"
 $ export FC="/usr/local/openmpi/bin/mpif90"

3) $ ./configure CC="$CC" CXX="$CXX" FC="$FC" LDFLAGS="-L/usr/local/openmpi/lib -L/usr/lib/gcc/x86_64-linux-gnu/5 -L/usr/local/lib/OpenBLAS/lib" CPPFLAGS="-I/usr/local/openmpi/include -I/usr/local/lib/OpenBLAS/include" LIBS="-lmpi_cxx -lstdc++ -lopenblas"

4) If necessary, edit Makefile.conf.in to configure your environment and then
 $ source sourceme.sh
5) Compile plumed
 $ make -j $n
 $ sudo make install

 Plumed2 will be installed under the prefix /usr/local:
 binaries in /usr/local/bin, libraries in /usr/local/lib, and include files in /usr/local/include
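
A quick sanity check that the shell picks up the freshly installed binary and that it reports the expected version:
 $ which plumed
 $ plumed info --version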

VI. Install Gromacs


GROMACS is a versatile package to perform molecular dynamics (i.e., to simulate the Newtonian equations of motion for systems with hundreds to millions of particles). It is primarily designed for biochemical molecules like proteins, lipids, and nucleic acids that have a lot of complicated bonded interactions. Install the Plumed-patched MPI+OpenMP+GPU version of Gromacs as follows:

1) Download and prepare the source code
 $ cd ~/Downloads
 $ wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-5.1.2.tar.gz
 $ tar zxf gromacs-5.1.2.tar.gz
2) $ cd gromacs-5.1.2
3) Prepare the Plumed version of Gromacs 5.1.2
 $ plumed patch -p --shared

 Select 6) gromacs-5.1.2
 --> MD engine: gromacs-5.1.2
 PLUMED location: /usr/local/lib/plumed
 diff file: /usr/local/lib/plumed/patches/gromacs-5.1.2.diff
 sourcing config file: /usr/local/lib/plumed/patches/gromacs-5.1.2.config
 Linking Plumed.h and Plumed.inc (shared mode)
 Patching with on-the-fly diff from stored originals
 patching file ./src/gromacs/CMakeLists.txt
 patching file ./src/gromacs/mdlib/force.cpp
 patching file ./src/programs/mdrun/mdrun.cpp
 patching file ./src/programs/mdrun/repl_ex.cpp
 patching file ./src/programs/mdrun/runner.cpp

 You are patching in shared mode. Be warned that when you will run MD you will use the PLUMED version pointed at by the PLUMED_KERNEL environment variable.

 If compiling Gromacs with nvcc against gcc 5.3, apply the following two workarounds:
4) $ sudo nano /usr/local/cuda-7.5/include/host_config.h
  comment out
  #if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 9)
  //#error -- unsupported GNU version! gcc versions later than 4.9 are not supported!
  #endif

5) $ sudo nano /usr/lib/gcc/x86_64-linux-gnu/5/include/x86intrin.h
  comment out
  //#include
6) Start to install Gromacs in $HOME/Downloads:
 $ cd gromacs-5.1.2
 $ mkdir build
7) $ cd build
8) $ export PLUMED_KERNEL=/usr/local/lib/libplumedKernel.so
9) Prepare the cmake configuration by executing ccmake. Hit "c" to configure and "g" to generate the cmake configuration file (an equivalent non-interactive cmake invocation is sketched after this procedure):
 $ ccmake -i /home/jyhuang/Downloads/gromacs-5.1.2
  Select the options :
  GMX_SIMD SSE2 (or AVX2_256, depending on the machine)
  GMX_MPI ON
  GMX_OPENMP ON
  GMX_GPU ON
  GMX_USE_OPENCL OFF
  GMX_THREAD_MPI OFF
  BUILD_SHARED_LIBS OFF
  GMX_EXTERNAL_BLAS  ON
  BLAS_blas_LIBRARY  /usr/local/lib/OpenBLAS/lib/libopenblas.so
  GMX_EXTERNAL_LAPACK  ON
  LAPACK_lapack_LIBRARY  /usr/local/lib/OpenBLAS/lib/libopenblas.so
  CMAKE_CXX_COMPILER:  /usr/local/openmpi/bin/mpicxx
  CMAKE_C_COMPILER:  /usr/local/openmpi/bin/mpicc
  GMX_BUILD_OWN_FFTW OFF
  FFTWF_INCLUDE_DIR  /usr/local/fftw3/include
  FFTWF_LIBRARY  /usr/local/fftw3/lib/libfftw3f.so

10) Add
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D_FORCE_INLINES")
  directly after "project(Gromacs)" in the file CMakeLists.txt
11) $ cmake ..
12) Install the package:
 $ make -j $n
  --> [100%] Building CXX object src/programs/CMakeFiles/gmx.dir/gmx.cpp.o
  [100%] Building CXX object share/template/CMakeFiles/template.dir/template.cpp.o
  [100%] Linking CXX executable ../../bin/gmx_mpi
  [100%] Linking CXX executable ../../bin/template
  [100%] Built target gmx
  [100%] Built target template

13) Check the result:
 $ make check
  --> 100% tests passed, 0 tests failed out of 20
  Label Time Summary:
  GTest = 0.92 sec (17 tests)
  IntegrationTest = 1.87 sec (2 tests)
  MpiIntegrationTest = 0.73 sec (1 test)
  UnitTest = 0.92 sec (17 tests)
  Total Test time (real) = 3.52 sec
  [100%] Built target run-ctest
  Scanning dependencies of target regressiontests-notice
  [100%] Regression tests not available
  NOTE: Regression tests have not been run. If you want to run them from the build system, get the correct version of the regression tests package and set REGRESSIONTEST_PATH in CMake to point to it, or set REGRESSIONTEST_DOWNLOAD=ON.
  [100%] Built target regressiontests-notice
  Scanning dependencies of target check
  [100%] Built target check

14) $ sudo make install
    $ make clean
15) $ cd /opt/Gromacs_gpu/bin && source ./GMXRC
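
For reference, the interactive ccmake selections from step 9 can also be passed non-interactively to the cmake call in step 11; a sketch using the paths assumed above (adjust the library locations and SIMD level to the machine at hand):
 $ cmake .. -DGMX_MPI=ON -DGMX_OPENMP=ON -DGMX_GPU=ON -DGMX_THREAD_MPI=OFF -DBUILD_SHARED_LIBS=OFF \
     -DGMX_EXTERNAL_BLAS=ON -DBLAS_blas_LIBRARY=/usr/local/lib/OpenBLAS/lib/libopenblas.so \
     -DGMX_EXTERNAL_LAPACK=ON -DLAPACK_lapack_LIBRARY=/usr/local/lib/OpenBLAS/lib/libopenblas.so \
     -DCMAKE_C_COMPILER=/usr/local/openmpi/bin/mpicc -DCMAKE_CXX_COMPILER=/usr/local/openmpi/bin/mpicxx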

Tutorial Examples of Gromacs: For an introduction to running Gromacs, here we have posted a PDF file that gives a step-by-step introduction to conducting a Gromacs MD simulation of a protein on a hybrid multicore CPU and GPU parallel computation platform. This tutorial also describes how to analyze trajectories and make a movie of the MD result.
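
As a starting point, here is a sketch of launching the Plumed-patched gmx_mpi built above (the run name md and the file plumed.dat are placeholders; the -plumed option is only present because of the PLUMED patch):
 $ source /opt/Gromacs_gpu/bin/GMXRC
 $ mpirun -np 4 gmx_mpi mdrun -deffnm md -plumed plumed.dat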