1. Introduction

This section provides an overview of what LAMMPS can and can’t do, describes what it means for LAMMPS to be an open-source code, and acknowledges the funding and people who have contributed to LAMMPS over the years.

1.1. What is LAMMPS

LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state. It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.

For examples of LAMMPS simulations, see the Publications page of the LAMMPS WWW Site.

LAMMPS runs efficiently on single-processor desktop or laptop machines, but is designed for parallel computers. It will run on any parallel machine with a C++ compiler and support for the MPI message-passing library. This includes distributed- and shared-memory parallel machines and Beowulf-style clusters.

LAMMPS can model systems with only a few particles up to millions or billions. See Section 8 for information on LAMMPS performance and scalability, or the Benchmarks section of the LAMMPS WWW Site.

LAMMPS is a freely available open-source code, distributed under the terms of the GNU General Public License (GPL), which means you can use or modify the code however you wish, so long as you abide by the terms of that license. See this section for a brief discussion of the open-source philosophy.

LAMMPS is designed to be easy to modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics. See Section 10 for more details.

The current version of LAMMPS is written in C++. Earlier versions were written in F77 and F90. See Section 13 for more information on different versions. All versions can be downloaded from the LAMMPS WWW Site.

LAMMPS was originally developed under a US Department of Energy CRADA (Cooperative Research and Development Agreement) between two DOE labs and three companies. It is distributed by Sandia National Labs. See this section for more information on LAMMPS funding and individuals who have contributed to LAMMPS.

In the most general sense, LAMMPS integrates Newton’s equations of motion for collections of atoms, molecules, or macroscopic particles that interact via short- or long-range forces with a variety of initial and/or boundary conditions. For computational efficiency LAMMPS uses neighbor lists to keep track of nearby particles. The lists are optimized for systems with particles that are repulsive at short distances, so that the local density of particles never becomes too large. On parallel machines, LAMMPS uses spatial-decomposition techniques to partition the simulation domain into small 3d sub-domains, one of which is assigned to each processor. Processors communicate and store “ghost” atom information for atoms that border their sub-domain. LAMMPS is most efficient (in a parallel sense) for systems whose particles fill a 3d rectangular box with roughly uniform density. Papers with technical details of the algorithms used in LAMMPS are listed in this section.
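The cell-binning idea behind neighbor lists can be illustrated in a few lines of plain Python. This is a serial sketch of the technique, not LAMMPS's actual C++ implementation: particles are binned into cells at least a cutoff wide, so each particle only needs to check the 27 surrounding cells, and a minimum-image convention stands in for the ghost-atom communication used in parallel runs.

```python
def neighbor_pairs(coords, box, cutoff):
    """Find all pairs within `cutoff` in a periodic cubic box of side `box`
    by binning particles into cells at least `cutoff` wide, so only the
    27 surrounding cells need to be searched (serial cell-list sketch)."""
    ncell = max(1, int(box / cutoff))
    size = box / ncell
    cells = {}
    for i, (x, y, z) in enumerate(coords):
        key = (int(x / size) % ncell, int(y / size) % ncell, int(z / size) % ncell)
        cells.setdefault(key, []).append(i)

    pairs = set()
    for (cx, cy, cz), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    other = cells.get(((cx + dx) % ncell,
                                       (cy + dy) % ncell,
                                       (cz + dz) % ncell), [])
                    for i in members:
                        for j in other:
                            if i < j:
                                # minimum-image separation in the periodic box
                                d2 = 0.0
                                for a, b in zip(coords[i], coords[j]):
                                    d = a - b
                                    d -= box * round(d / box)
                                    d2 += d * d
                                if d2 < cutoff * cutoff:
                                    pairs.add((i, j))
    return sorted(pairs)
```

Because each cell holds only a handful of particles at liquid-like densities, the cost scales linearly with particle count rather than quadratically, which is the property the LAMMPS neighbor lists exploit.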


1.2. LAMMPS features

This section highlights LAMMPS features, with pointers to specific commands which give more details. If LAMMPS doesn’t have your favorite interatomic potential, boundary condition, or atom type, see Section 10, which describes how you can add it to LAMMPS.

1.2.1. General features

  • runs on a single processor or in parallel
  • distributed-memory message-passing parallelism (MPI)
  • spatial-decomposition of simulation domain for parallelism
  • open-source distribution
  • highly portable C++
  • optional libraries used: MPI and single-processor FFT
  • GPU (CUDA and OpenCL), Intel(R) Xeon Phi(TM) coprocessors, and OpenMP support for many code features
  • easy to extend with new features and functionality
  • runs from an input script
  • syntax for defining and using variables and formulas
  • syntax for looping over runs and breaking out of loops
  • run one or multiple simulations simultaneously (in parallel) from one script
  • build as library, invoke LAMMPS through its library interface or the provided Python wrapper
  • couple with other codes: LAMMPS calls other code, other code calls LAMMPS, umbrella code calls both
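As an illustration of the scripting features above, here is a small hypothetical input-script fragment that loops over three temperatures using an index-style variable (the commands that would set up and run each simulation are elided); when next exhausts the variable, the subsequent jump is skipped and the script continues.

```
# hypothetical loop over three temperatures via an index-style variable
variable t index 300.0 400.0 500.0
label loop
print "Running at temperature ${t}"
# ... commands to set up and run one simulation at ${t} would go here ...
next t
jump SELF loop
```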

1.2.2. Particle and model types

(atom style command)

  • atoms
  • coarse-grained particles (e.g. bead-spring polymers)
  • united-atom polymers or organic molecules
  • all-atom polymers, organic molecules, proteins, DNA
  • metals
  • granular materials
  • coarse-grained mesoscale models
  • finite-size spherical and ellipsoidal particles
  • finite-size line segment (2d) and triangle (3d) particles
  • point dipole particles
  • rigid collections of particles
  • hybrid combinations of these

1.2.3. Force fields

(pair style, bond style, angle style, dihedral style, improper style, kspace style commands)

  • pairwise potentials: Lennard-Jones, Buckingham, Morse, Born-Mayer-Huggins, Yukawa, soft, class 2 (COMPASS), hydrogen bond, tabulated
  • charged pairwise potentials: Coulombic, point-dipole
  • manybody potentials: EAM, Finnis/Sinclair EAM, modified EAM (MEAM), embedded ion method (EIM), EDIP, ADP, Stillinger-Weber, Tersoff, REBO, AIREBO, ReaxFF, COMB, SNAP, Streitz-Mintmire, 3-body polymorphic
  • long-range interactions for charge, point-dipoles, and LJ dispersion: Ewald, Wolf, PPPM (similar to particle-mesh Ewald)
  • polarization models: QEq, core/shell model, Drude dipole model
  • charge equilibration (QEq via dynamic, point, shielded, Slater methods)
  • coarse-grained potentials: DPD, GayBerne, REsquared, colloidal, DLVO
  • mesoscopic potentials: granular, Peridynamics, SPH
  • electron force field (eFF, AWPMD)
  • bond potentials: harmonic, FENE, Morse, nonlinear, class 2, quartic (breakable)
  • angle potentials: harmonic, CHARMM, cosine, cosine/squared, cosine/periodic, class 2 (COMPASS)
  • dihedral potentials: harmonic, CHARMM, multi-harmonic, helix, class 2 (COMPASS), OPLS
  • improper potentials: harmonic, cvff, umbrella, class 2 (COMPASS)
  • polymer potentials: all-atom, united-atom, bead-spring, breakable
  • water potentials: TIP3P, TIP4P, SPC
  • implicit solvent potentials: hydrodynamic lubrication, Debye
  • force-field compatibility with common CHARMM, AMBER, DREIDING, OPLS, GROMACS, COMPASS options
  • access to KIM archive of potentials via pair kim
  • hybrid potentials: multiple pair, bond, angle, dihedral, improper potentials can be used in one simulation
  • overlaid potentials: superposition of multiple pair potentials
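To make the pairwise category concrete, the 12-6 Lennard-Jones form listed first above is E(r) = 4*epsilon*[(sigma/r)^12 - (sigma/r)^6]. A minimal plain-Python sketch of its energy and force follows; LAMMPS evaluates the same expression in C++ (in pair_style lj/cut) with a cutoff and neighbor lists.

```python
def lj_energy_force(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair energy E(r) and scalar force
    F = -dE/dr at separation r (reduced units)."""
    sr6 = (sigma / r) ** 6
    sr12 = sr6 * sr6
    energy = 4.0 * epsilon * (sr12 - sr6)
    force = 24.0 * epsilon * (2.0 * sr12 - sr6) / r
    return energy, force
```

At the potential minimum r = 2^(1/6)*sigma the energy is -epsilon and the force vanishes, a quick sanity check on any implementation.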

1.2.4. Atom creation

(read_data, lattice, create_atoms, delete_atoms, displace_atoms, replicate commands)

  • read in atom coords from files
  • create atoms on one or more lattices (e.g. grain boundaries)
  • delete geometric or logical groups of atoms (e.g. voids)
  • replicate existing atoms multiple times
  • displace atoms

1.2.5. Ensembles, constraints, and boundary conditions

(fix command)

  • 2d or 3d systems
  • orthogonal or non-orthogonal (triclinic symmetry) simulation domains
  • constant NVE, NVT, NPT, NPH, Parrinello/Rahman integrators
  • thermostatting options for groups and geometric regions of atoms
  • pressure control via Nose/Hoover or Berendsen barostatting in 1 to 3 dimensions
  • simulation box deformation (tensile and shear)
  • harmonic (umbrella) constraint forces
  • rigid body constraints
  • SHAKE bond and angle constraints
  • Monte Carlo bond breaking, formation, swapping
  • atom/molecule insertion and deletion
  • walls of various kinds
  • non-equilibrium molecular dynamics (NEMD)
  • variety of additional boundary conditions and constraints

1.2.6. Integrators

(run, run_style, minimize commands)

  • velocity-Verlet integrator
  • Brownian dynamics
  • rigid body integration
  • energy minimization via conjugate gradient or steepest descent relaxation
  • rRESPA hierarchical timestepping
  • rerun command for post-processing of dump files
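The velocity-Verlet update at the heart of the default integrator can be sketched for a single 1d particle as follows. This is a serial illustration in reduced units, not LAMMPS source code: a half-step velocity kick, a full-step position drift, a force recomputation, and a second half-kick.

```python
def velocity_verlet(x, v, force, mass, dt, nsteps):
    """Advance one 1d particle by nsteps of velocity-Verlet:
    half-kick, drift, recompute force, half-kick."""
    f = force(x)
    for _ in range(nsteps):
        v += 0.5 * dt * f / mass   # first half-step velocity update
        x += dt * v                # full-step position update
        f = force(x)               # force at the new position
        v += 0.5 * dt * f / mass   # second half-step velocity update
    return x, v
```

For a harmonic oscillator (force(x) = -x, unit mass) the scheme keeps the energy 0.5*v**2 + 0.5*x**2 bounded near its initial value over many periods, the time-reversible, symplectic behavior that makes it the standard MD integrator.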

1.2.7. Diagnostics

  • see the various flavors of the fix and compute commands

1.2.8. Output

(dump, restart commands)

  • log file of thermodynamic info
  • text dump files of atom coords, velocities, other per-atom quantities
  • binary restart files
  • parallel I/O of dump and restart files
  • per-atom quantities (energy, stress, centro-symmetry parameter, CNA, etc)
  • user-defined system-wide (log file) or per-atom (dump file) calculations
  • spatial and time averaging of per-atom quantities
  • time averaging of system-wide quantities
  • atom snapshots in native, XYZ, XTC, DCD, CFG formats

1.2.10. Pre- and post-processing

  • Various pre- and post-processing serial tools are packaged with LAMMPS; see these doc pages.
  • Our group has also written and released a separate toolkit called Pizza.py which provides tools for doing setup, analysis, plotting, and visualization for LAMMPS simulations. Pizza.py is written in Python and is available for download from the Pizza.py WWW site.

1.2.11. Specialized features

These are LAMMPS capabilities which you may not think of as typical molecular dynamics options:


1.3. LAMMPS non-features

LAMMPS is designed to efficiently compute Newton’s equations of motion for a system of interacting particles. Many of the tools needed to pre- and post-process the data for such simulations are not included in the LAMMPS kernel for several reasons:

  • the desire to keep LAMMPS simple
  • they are not parallel operations
  • other codes already do them
  • limited development resources

Specifically, LAMMPS itself does not:

  • run through a GUI
  • build molecular systems
  • assign force-field coefficients automagically
  • perform sophisticated analyses of your MD simulation
  • visualize your MD simulation
  • plot your output data

A few tools for pre- and post-processing tasks are provided as part of the LAMMPS package; they are described in this section. However, many people use other codes or write their own tools for these tasks.

As noted above, our group has also written and released a separate toolkit called Pizza.py which addresses some of the listed bullets. It provides tools for doing setup, analysis, plotting, and visualization for LAMMPS simulations. Pizza.py is written in Python and is available for download from the Pizza.py WWW site.

LAMMPS requires as input a list of initial atom coordinates and types, molecular topology information, and force-field coefficients assigned to all atoms and bonds. LAMMPS will not build molecular systems and assign force-field parameters for you.

For atomic systems LAMMPS provides a create_atoms command which places atoms on solid-state lattices (fcc, bcc, user-defined, etc). Assigning small numbers of force-field coefficients can be done via the pair coeff, bond coeff, angle coeff, etc commands. For molecular systems or more complicated simulation geometries, users typically use another code as a builder and convert its output to LAMMPS input format, or write their own code to generate atom coordinates and molecular topology information for LAMMPS to read in.
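For the atomic-systems path just described, a minimal hypothetical input fragment that builds an fcc block of one atom type and assigns a single set of Lennard-Jones coefficients might look like the following (region and coefficient values are illustrative only):

```
# hypothetical fcc setup with one atom type, LJ reduced units
units lj
lattice fcc 0.8442
region mybox block 0 10 0 10 0 10
create_box 1 mybox
create_atoms 1 box
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0
```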

For complicated molecular systems (e.g. a protein), a multitude of topology information and hundreds of force-field coefficients must typically be specified. We suggest you use a program such as CHARMM or AMBER, or another molecular builder, to set up such problems and dump its information to a file. You can then reformat the file as LAMMPS input. Some of the tools in this section can assist in this process.

Similarly, LAMMPS creates output files in a simple format. Most users post-process these files with their own analysis tools or re-format them for input into other programs, including visualization packages. If you are convinced you need to compute something on-the-fly as LAMMPS runs, see Section 10 for a discussion of how you can use the dump and compute and fix commands to print out data of your choosing. Keep in mind that complicated computations can slow down the molecular dynamics timestepping, particularly if the computations are not parallel, so it is often better to leave such analysis to post-processing codes.

A very simple (yet fast) visualizer is provided with the LAMMPS package - see the xmovie tool in this section. It creates xyz projection views of atomic coordinates and animates them. We find it very useful for debugging purposes. For high-quality visualization we recommend the following packages:

Other features that LAMMPS does not yet (and may never) support are discussed in Section 13.

Finally, the following are freely available molecular dynamics codes, most of them parallel, which may be well-suited to the problems you want to model. They can also be used in conjunction with LAMMPS to perform complementary modeling tasks.

CHARMM, AMBER, NAMD, NWCHEM, and Tinker are designed primarily for modeling biological molecules. CHARMM and AMBER use atom-decomposition (replicated-data) strategies for parallelism; NAMD and NWCHEM use spatial-decomposition approaches, similar to LAMMPS. Tinker is a serial code. DL_POLY includes potentials for a variety of biological and non-biological materials; both a replicated-data and spatial-decomposition version exist.


1.4. Open source distribution

LAMMPS comes with no warranty of any kind. As each source file states in its header, it is a copyrighted code that is distributed free of charge under the terms of the GNU General Public License (GPL). This is often referred to as open-source distribution - see www.gnu.org or www.opensource.org for more details. The legal text of the GPL is in the LICENSE file that is included in the LAMMPS distribution.

Here is a summary of what the GPL means for LAMMPS users:

(1) Anyone is free to use, modify, or extend LAMMPS in any way they choose, including for commercial purposes.

(2) If you distribute a modified version of LAMMPS, it must remain open-source, meaning you distribute it under the terms of the GPL. You should clearly annotate such a code as a derivative version of LAMMPS.

(3) If you release any code that includes LAMMPS source code, then it must also be open-sourced, meaning you distribute it under the terms of the GPL.

(4) If you give LAMMPS files to someone else, the GPL LICENSE file and source file headers (including the copyright and GPL notices) should remain part of the code.

In the spirit of an open-source code, here are various ways you can contribute to making LAMMPS better. You can send email to the developers on any of these items.

  • Point prospective users to the LAMMPS WWW Site. Mention it in talks or link to it from your WWW site.
  • If you find an error or omission in this manual or on the LAMMPS WWW Site, or have a suggestion for something to clarify or include, send an email to the developers.
  • If you find a bug, Section 12.2 describes how to report it.
  • If you publish a paper using LAMMPS results, send the citation (and any cool pictures or movies if you like) to add to the Publications, Pictures, and Movies pages of the LAMMPS WWW Site, with links and attributions back to you.
  • Create a new Makefile.machine that can be added to the src/MAKE directory.
  • The tools sub-directory of the LAMMPS distribution has various stand-alone codes for pre- and post-processing of LAMMPS data. More details are given in Section 9. If you write a new tool that users will find useful, it can be added to the LAMMPS distribution.
  • LAMMPS is designed to be easy to extend with new code for features like potentials, boundary conditions, diagnostic computations, etc. This section gives details. If you add a feature of general interest, it can be added to the LAMMPS distribution.
  • The Benchmark page of the LAMMPS WWW Site lists LAMMPS performance on various platforms. The files needed to run the benchmarks are part of the LAMMPS distribution. If your machine is sufficiently different from those listed, your timing data can be added to the page.
  • You can send feedback for the User Comments page of the LAMMPS WWW Site. It might be added to the page. No promises.
  • Cash. Small denominations, unmarked bills preferred. Paper sack OK. Leave on desk. VISA also accepted. Chocolate chip cookies encouraged.

1.5. Acknowledgments and citations

LAMMPS development has been funded by the US Department of Energy (DOE), through its CRADA, LDRD, ASCI, and Genomes-to-Life programs and its OASCR and OBER offices.

Specifically, work on the latest version was funded in part by the US Department of Energy’s Genomics:GTL program (www.doegenomestolife.org) under the project, “Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling”.

The following paper describes the basic parallel algorithms used in LAMMPS. If you use LAMMPS results in your published work, please cite this paper and include a pointer to the LAMMPS WWW Site (http://lammps.sandia.gov):

S. Plimpton, Fast Parallel Algorithms for Short-Range Molecular Dynamics, J Comp Phys, 117, 1-19 (1995).

Other papers describing specific algorithms used in LAMMPS are listed under the Citing LAMMPS link of the LAMMPS WWW page.

The Publications link on the LAMMPS WWW page lists papers that have cited LAMMPS. If your paper is not listed there for some reason, feel free to send us the info. If the simulations in your paper produced cool pictures or animations, we’ll be pleased to add them to the Pictures or Movies pages of the LAMMPS WWW site.

The core group of LAMMPS developers is at Sandia National Labs:

  • Steve Plimpton, sjplimp at sandia.gov
  • Aidan Thompson, athomps at sandia.gov
  • Paul Crozier, pscrozi at sandia.gov

The following folks are responsible for significant contributions to the code, or other aspects of the LAMMPS development effort. Many of the packages they have written are somewhat unique to LAMMPS and the code would not be as general-purpose as it is without their expertise and efforts.

  • Axel Kohlmeyer (Temple U), akohlmey at gmail.com, SVN and Git repositories, indefatigable mail list responder, USER-CG-CMM and USER-OMP packages
  • Roy Pollock (LLNL), Ewald and PPPM solvers
  • Mike Brown (ORNL), brownw at ornl.gov, GPU package
  • Greg Wagner (Sandia), gjwagne at sandia.gov, MEAM package for MEAM potential
  • Mike Parks (Sandia), mlparks at sandia.gov, PERI package for Peridynamics
  • Rudra Mukherjee (JPL), Rudranarayan.M.Mukherjee at jpl.nasa.gov, POEMS package for articulated rigid body motion
  • Reese Jones (Sandia) and collaborators, rjones at sandia.gov, USER-ATC package for atom/continuum coupling
  • Ilya Valuev (JIHT), valuev at physik.hu-berlin.de, USER-AWPMD package for wave-packet MD
  • Christian Trott (U Tech Ilmenau), christian.trott at tu-ilmenau.de, USER-CUDA package
  • Andres Jaramillo-Botero (Caltech), ajaramil at wag.caltech.edu, USER-EFF package for electron force field
  • Christoph Kloss (JKU), Christoph.Kloss at jku.at, USER-LIGGGHTS package for granular models and granular/fluid coupling
  • Metin Aktulga (LBL), hmaktulga at lbl.gov, USER-REAXC package for C version of ReaxFF
  • Georg Ganzenmueller (EMI), georg.ganzenmueller at emi.fhg.de, USER-SPH package

As discussed in Section 13, LAMMPS originated as a cooperative project between DOE labs and industrial partners. Folks involved in the design and testing of the original version of LAMMPS were the following:

  • John Carpenter (Mayo Clinic, formerly at Cray Research)
  • Terry Stouch (Lexicon Pharmaceuticals, formerly at Bristol Myers Squibb)
  • Steve Lustig (Dupont)
  • Jim Belak (LLNL)