LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
LAMMPS is a popular molecular dynamics program from Sandia National Laboratories: http://lammps.sandia.gov/
This article explains how to use LAMMPS on the MTech HPC cluster and how to compile LAMMPS from source for advanced users.
LAMMPS has been installed for all users. The installed LAMMPS includes all optional packages except KIM, REAX, VORONOI, USER-CUDA, and USER-OMP; the excluded packages are either deprecated or require additional libraries. The LAMMPS examples, potentials, and other supporting files are installed at /opt/lammps/.
There are two versions of LAMMPS installed: an executable without the GPU package and executables with the GPU package. The CPU executable is lmp_openmpi, while the GPU executables are lmp_gpu_single_single (all single precision), lmp_gpu_single_double (mixed single and double precision), and lmp_gpu_double_double (all double precision). Each GPU executable includes all the functionality of the CPU executable, but the GPU executables require the CUDA library to run. The GPU LAMMPS is compiled with CUDA 8.0. Detailed information about GPU calculations can be found on the LAMMPS website.
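If you are unsure whether a GPU executable can find the CUDA runtime on the node you are using, a quick check (a sketch, assuming the lammps and cuda modules are already loaded) is to list its shared-library dependencies:
- ldd $(which lmp_gpu_double_double) | grep -i cuda
If the CUDA libraries resolve to real paths rather than "not found", the executable should be able to run on a GPU node.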
Loading LAMMPS
To check the available installed LAMMPS versions:
- module avail lammps
To load LAMMPS module:
- module load lammps openmpi cuda
The above command loads the default version of LAMMPS as well as OpenMPI and CUDA. LAMMPS is compiled with OpenMPI 1.6.4 and CUDA; you may experience poor performance if a different version of OpenMPI is loaded.
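If a different OpenMPI module is already loaded, you can swap it for the version LAMMPS was built against (the exact module name below is an assumption; run module avail openmpi to see the names on the system):
- module unload openmpi
- module load openmpi/1.6.4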
To check the loaded modules:
- module list
Using LAMMPS
To run LAMMPS:
- mpiexec -n 4 lmp_openmpi < lammps_input.txt
The above command runs the LAMMPS input script lammps_input.txt with 4 MPI processes.
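LAMMPS also accepts the input and log files as command-line switches, which can be convenient in batch scripts (a sketch; the file names here are placeholders):
- mpiexec -n 4 lmp_openmpi -in lammps_input.txt -log lammps_run.log
The -in switch reads the input script directly and -log writes the log output to the named file.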
Sample batch files for LAMMPS
For regular LAMMPS users who do not use GPU computing:
- #!/bin/sh
- #PBS -l nodes=2:ppn=32
- #PBS -N LAMMPS-CPU
- #PBS -l walltime=02:00:00
- cd $PBS_O_WORKDIR
- module purge
- module load lammps openmpi
- mpiexec -n 64 lmp_openmpi < lammps_input.txt
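To submit the script above to the queue (lammps_cpu.pbs is a placeholder for whatever name you save the script under):
- qsub lammps_cpu.pbs
- qstat -u $USER
qsub queues the job and qstat shows its current status.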
For LAMMPS GPU users:
- #!/bin/sh
- #PBS -l nodes=1:ppn=4
- #PBS -l feature=gpunode
- #PBS -N LAMMPS-GPU
- #PBS -l walltime=02:00:00
- cd $PBS_O_WORKDIR
- module purge
- module load lammps openmpi cuda
- mpiexec -n 4 lmp_gpu_double_double < in.gpu.phosphate
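The GPU script is submitted with qsub in the same way. If you want to confirm that the job actually landed on a GPU node, you can add a diagnostic line to the script before the mpiexec command (a sketch; nvidia-smi simply prints the visible GPUs to the job's output file):
- nvidia-smi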
Compiling LAMMPS
The following section highlights some important steps for compiling the LAMMPS source code on MTech HPC. You are advised to read the official LAMMPS installation manual before continuing: http://lammps.sandia.gov/doc/Section_start.html
Compiling LAMMPS from source does not require root privileges. Compilation prerequisites: g++, gfortran, OpenMPI, the JPEG/PNG libraries, and/or CUDA.
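A quick way to verify that the compilers and the MPI wrapper are on your PATH before starting (a sketch; nvcc is only needed if you plan to build the GPU library):
- which g++ gfortran mpicxx nvcc
- g++ --version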
Compile LAMMPS
1. Load gcc, openmpi and cuda
- module load gcc openmpi cuda
2. Make a new directory called ‘package’ in your home directory, and go to that directory:
- mkdir package
- cd package
3. In your package directory, download the LAMMPS source code (stable release)
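For example, with wget (the tarball URL shown here is an assumption; check http://lammps.sandia.gov/ for the current stable download link):
- wget http://lammps.sandia.gov/tars/lammps-stable.tar.gz -O lammps_stable.tar.gz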
4. Extract the LAMMPS package, and go to the created folder
- tar zxvf lammps_stable.tar.gz
- cd lammps-1Feb14
5. Build the bundled libraries in the lib directory (a quick verification check is sketched after sub-step 5.7)
- cd lib
- 5.1 Build atc
- cd atc
- make -f Makefile.mpic++ -j 32
- cd ..
- 5.2 Build awpmd
- cd awpmd
- make -f Makefile.openmpi -j 32
- cd ..
- 5.3 Build colvars
- cd colvars
- make -f Makefile.g++ -j 32
- cd ..
- 5.4 Build linalg
- cd linalg
- make -f Makefile.gfortran -j 32
- cd ..
- 5.5 Build meam
- cd meam
- make -f Makefile.gfortran (do not use parallel compilation with -j)
- cd ..
- 5.6 Build poems
- cd poems
- make -f Makefile.g++ -j 32
- cd ..
- 5.7 Build gpu
- cd gpu
- Edit the file Makefile.linux and set the CUDA architecture flag:
- CUDA_ARCH = -arch=sm_35
- You may also change the precision flag (CUDA_PRECISION = -D_SINGLE_DOUBLE, etc.)
- make -f Makefile.linux
- cd ..
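When all of the sub-steps above have finished, each library directory should contain a static archive. A quick check from the lib directory (a sketch; the library file names vary slightly between LAMMPS versions):
- ls */*.a
If one of the archives is missing, rerun the corresponding make command and check its output for errors.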
6. Go to the LAMMPS src folder, and include all packages
- cd ../../src
- make yes-all
- make no-voronoi (requires an additional package)
- make no-kim (requires an additional package)
- make no-user-omp (error in the code)
- make no-user-cuda (no longer maintained)
- make no-reax (deprecated; use USER-REAXC instead)
- At this point, you may exclude other packages you do not wish to include.
- Check package status
- make package-status
7. Go to the LAMMPS MAKE folder, and edit the file Makefile.openmpi
- cd MAKE
- Modify the LMP_INC line to
- LMP_INC = -DLAMMPS_GZIP -DLAMMPS_JPEG -DLAMMPS_PNG
- Modify the JPEG/PNG library line to
- JPG_LIB = -ljpeg -lpng
- Modify the FFTW lines to
- FFT_INC = -DFFT_FFTW3
- FFT_PATH =
- FFT_LIB = -lfftw3
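Before building, you can confirm that the FFTW3, JPEG, and PNG libraries are actually visible to the linker on the build node (a sketch; ldconfig -p lists the libraries known to the dynamic linker):
- ldconfig -p | grep -E 'libfftw3|libjpeg|libpng'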
8. Build LAMMPS
- make openmpi -j 32
- The resulting binary is lmp_openmpi
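As a minimal smoke test of the new binary (a sketch; running LAMMPS on an empty input should simply print the version banner and exit):
- ./lmp_openmpi < /dev/null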
9. Run some tests
- 9.1 Simple computation with LJ potential (this example does not require any external potential file)
- cd ../../examples/indent
- Edit the in.indent file and uncomment the following 3 lines:
- dump 2 all image 1000 image.*.jpg type type &
- zoom 1.6 adiam 1.5
- dump_modify 2 pad 5
- Run the simulation
- mpiexec -n 4 ../../src/lmp_openmpi < in.indent
- Check the folder for the created jpg snapshots
- 9.2 MEAM test (this tests the Fortran code and the use of an external potential file)
- Go to examples/meam, and run
- mpiexec -n 4 ../../src/lmp_openmpi < in.meam
- 9.3 GPU test
- Go to examples/gpu, and run
- mpiexec -n 4 ../../src/lmp_openmpi < in.gpu.phosphate
- And
- ../../src/lmp_openmpi < in.gpu.rhodo
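Most of the LAMMPS examples directories also ship with reference log files from runs on other machines; comparing the thermodynamic output of your runs against them is a reasonable sanity check (the exact file names depend on the LAMMPS version):
- ls log.*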