LAMMPS




LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator.

LAMMPS has potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.

LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. Many of its models have versions that provide accelerated performance on CPUs, GPUs, and Intel Xeon Phis.




New to Kogence?

  1. Why Kogence?: Watch the Video
  2. How Kogence Works?: Watch the Video
  3. Step-by-step instructions on how to execute your first simulation model on Kogence: How To Use Kogence

Access on Kogence

On Kogence, LAMMPS is pre-configured and pre-optimized and is available to be run on a single cloud HPC server of your choice or on an autoscaling cloud HPC cluster. On Kogence, LAMMPS runs in a docker container. LAMMPS is available for free under all Kogence account types.


LAMMPS Versions on Kogence

  • 11 Aug, 2017

Single Node Invocation Options

To use LAMMPS on Kogence, first create a new LAMMPS model or copy an existing one. You can see the list of ready-to-copy-and-execute LAMMPS models on Kogence here: [[:Category:LAMMPS]]. Step-by-step instructions for creating and copying models are here: How To Use Kogence.

Open your model and go to the Cluster tab. Choose your cloud HPC hardware. Then go to the Stack tab and click the + button to add a software. Search for and select LAMMPS to add the LAMMPS software container to the software stack of your HPC cluster. Then select the entrypoint binary you want to invoke from the dropdown menu and specify the invocation options in the empty textbox. For example, with the lammps-daily entrypoint binary, you can specify the following options in the textbox:

  1. Using GUI
    1. LAMMPS does not have any GUI functionality. You can use other visualization software on Kogence to chart and visualize LAMMPS results.
  2. Using LAMMPS as REPL Terminal
    1. LAMMPS does not have any REPL functionality.
  3. Using Solvers in Batch Mode
    1. -in YourLAMMPSScript: Executes YourLAMMPSScript in batch mode.
  4. Host Machine Shell Terminal Access: In the Stack tab, select the LAMMPS container but do not select any entrypoint binary. This makes the solver docker containers available in your workflow to be invoked from a shell terminal, from bash scripts, or from inside other solver docker containers. Then click the + button and add another software, CloudShell. Choose shell-terminal as the entrypoint binary and type the name of the terminal emulator of your choice in the empty textbox. Please see CloudShell for the specifics of that software application. When you run the model, go to the Visualizer tab and you will see your shell terminal emulator. There you can invoke LAMMPS as usual by typing lammps-daily -in YourLAMMPSScript on the terminal.
  5. Bash Shell Script Executions: In the Stack tab, select the LAMMPS container but do not select any entrypoint binary. This makes the solver docker containers available in your workflow to be invoked from a shell terminal, from bash scripts, or from inside other solver docker containers. Then click the + button and add another software, CloudShell. Choose bash-shell as the entrypoint binary and type the name of your shell script in the empty textbox. Your script can call the lammps-daily binary as usual. Please see CloudShell for the specifics of that software application. Make sure you add/upload your bash script under the Files tab of your model before running the model. When you run the model, your script will run automatically. As usual, once all graphical programs terminate, Kogence will make sure that your machine also terminates automatically, and we will stop charging compute credits from your compute plan once the machine terminates.
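For reference, the YourLAMMPSScript passed to the -in option is a standard LAMMPS input deck. A minimal sketch, adapted from the "melt" example distributed with LAMMPS (a 3d Lennard-Jones melt), looks like this:

```
# 3d Lennard-Jones melt (adapted from the standard LAMMPS melt example)
units           lj
atom_style      atomic

lattice         fcc 0.8442
region          box block 0 10 0 10 0 10
create_box      1 box
create_atoms    1 box
mass            1 1.0

velocity        all create 3.0 87287

pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5

neighbor        0.3 bin
neigh_modify    every 20 delay 0 check no

fix             1 all nve
thermo          50
run             250
```

Upload such a script under the Files tab of your model and pass its file name to the -in option.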

Autoscaling Cluster Invocation Options

On Kogence, you can easily send large jobs or a set of parallel jobs to an HPC cluster that automatically scales up and down depending upon the workload submitted. All you have to do is select the "Run on Autoscaling Cluster" checkbox in the Cluster tab of your model and, in the Stack tab of your model, mark the commands that you want to send to the cluster.

  1. Invocation Options on the Master Node of the HPC Cluster: Please note that all of the invocation methods discussed in the previous section (Single Node Invocation Options) are still valid, but all of them will execute on the master node and will not be submitted to the scheduler managing the cluster. To send jobs to the scheduler so they are scheduled on the compute nodes of the cluster, use the methods described below.
  2. Scheduling Batch Mode Jobs Using the Kogence Stack Tab Console: Select the "Run on Autoscaling Cluster" checkbox in the Cluster tab of your model and, in the Stack tab, mark the commands that you want to send to the cluster. As compute nodes only support batch mode executions, the following invocation options can be used.
    1. -in YourLAMMPSScript: Executes YourLAMMPSScript in batch mode.

Combining Pre-Processing, Cluster Computing, Post Processing in Workflow

A typical scientific simulation workflow involves three steps. Step 1: pre-processing using an interactive CAD environment on the master node. Step 2: sending solver jobs to multiple worker nodes in non-interactive, non-graphical batch mode. Step 3: post-processing using a CAD environment on the master node. Furthermore, the pre-processing, post-processing, and solver tools may all be different software packages. The Kogence Stack tab allows you to configure all such complex workflows with ease.

On the Cluster tab, choose the autoscaling cluster option. On the Stack tab, first select your CAD programs and do not mark the run-on-cluster checkbox; these will come up on the master node. Next, add the batch mode solver command, mark the run-on-cluster checkbox, and do not mark the run-with-previous checkbox. Your command will automatically get scheduled to the compute nodes of your cluster as soon as you close your graphical CAD program. The number of compute nodes required will be determined automatically, and more cloud HPC servers will be booted up and added to your cluster if needed. Then select the post-processing program; do not mark the run-on-cluster checkbox or the run-with-previous checkbox. Once your batch mode solver command exits, your post-processing CAD programs will kick in on the master node and the compute nodes will automatically scale down/terminate.

Combining Multiple Software in Workflow

You may want to pre-process your model using one software package, such as Dassault Systèmes' SolidWorks, run the lammps-daily solver in batch mode on a cluster, and then post-process your data in another software package. Kogence allows you to build such workflows on the Stack tab. Please note that on Kogence all software and solvers are deployed in docker containers. Once you select the containers you want, Kogence automatically composes them so you can call one program from another just like you would on your on-prem workstation or desktop. Therefore, before you start calling programs in a shell terminal or in a bash script, make sure you have selected all the software and solvers that your bash script needs; otherwise those solvers will not be available to the bash script. You can skip automatically invoking them through the Kogence Stack tab by not providing any inputs after the solver name/binary name. This way you can make them available to your bash script and then invoke them from your bash script.
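As a sketch of the bash-script route, the following hypothetical script (the input and log file names are examples, not part of Kogence) invokes the solver only when the lammps-daily binary is actually on the PATH, i.e. when the LAMMPS container was selected in the Stack tab with no entrypoint binary:

```shell
#!/usr/bin/env bash
# Hypothetical workflow driver; file names below are examples only.
set -euo pipefail

INPUT="in.melt"     # LAMMPS input deck, uploaded under the Files tab
LOGFILE="log.melt"  # solver log, written into the job folder

# lammps-daily is on the PATH only when the LAMMPS container was
# selected in the Stack tab with no entrypoint binary chosen.
if command -v lammps-daily >/dev/null 2>&1; then
  lammps-daily -in "$INPUT" -log "$LOGFILE"
else
  echo "lammps-daily not on PATH; select the LAMMPS container in the Stack tab" >&2
fi
```

Pre- and post-processing commands for the other containers you selected can be added before and after the solver call in the same script.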

  1. Blocking vs Non-Blocking Executions: By default, all software selected in the Stack tab work in blocking mode, meaning the second software/solver will not start until the first one finishes. If you want them to start simultaneously, mark the run-with-previous checkbox.

Output and Error Logs

On Kogence, stdout is automatically saved under the Files tab of your model in a file called ___titusiOut. Similarly, stderr is automatically saved in another file called ___titusiError. While your model is running, you will see the Visualizer button active in the top right corner of the NavBar, with errors and outputs printed live on the screen. Once the simulation ends and the machine/cluster terminates, the Visualizer becomes inactive. The updated ___titusiOut and ___titusiError files will automatically sync back to the app webserver, and you can view these updated files under the Files tab of your model.

Model Input and Result Files

You can upload and edit your input files under the Files tab of your model. Similarly, model outputs and results are also saved under the Files tab of your model. Please make sure that you upload and edit your model files before launching the simulation. Once the simulation is launched, editing and uploading under the Files tab is locked, and the Kogence web app is connected with a shared NFS on both the master and the worker nodes, so your data is automatically shared live between all your master and worker nodes. Once the simulation has been launched, and if you requested a terminal shell program such as xterm or gnome-terminal under software->settings, you can use the shell terminal to check all your model inputs and outputs in your job folder. After the simulation ends, the lock on the Files tab is removed and you can see the updated files under the Files tab again.

References

  1. http://lammps.sandia.gov/