
amber-mlips


MLIP (Machine Learning Interatomic Potential) wrapper for AMBER QM/MM via sander EXTERN interface.

Four model families are currently supported:

  • UMA (fairchem) — default model: uma-s-1p1
  • ORB (orb-models) — default model: orb-v3-conservative-omol
  • MACE (mace) — default model: MACE-OMOL-0
  • AIMNet2 (aimnetcentral) — default model: aimnet2

All backends provide energy and gradient for AMBER QM/MM molecular dynamics and optimization. An optional point-charge embedding correction with xTB is available via --embedcharge.

Requires Python 3.9 or later and AmberTools (sander). AmberTools is free of charge (GNU GPL); sander / sander.MPI are LGPL 2.1.

Quick Start (Default = UMA)

  1. (Optional) Install AmberTools if not already installed. AmberTools25 or later is recommended.
conda config --add channels conda-forge
conda config --add channels dacase
conda config --set channel_priority strict
conda install ambertools-dac=25

The conda package includes sander and sander.MPI (OpenMPI) and requires Python 3.12.

  2. (Optional) Install xTB. Only needed for --embedcharge.
conda install xtb "libblas=*=*openblas" "liblapack=*=*openblas"

The libblas/liblapack specs prevent the BLAS library from being replaced with the slower netlib. See TECHNICAL_NOTE.md for details.

To build xTB from source (required for CPCM-X solvation via --solvent-model cpcmx):

git clone --depth 1 https://github.com/grimme-lab/xtb.git
cd xtb
cmake -B build -S . \
  -DCMAKE_BUILD_TYPE=Release \
  -DWITH_CPCMX=ON \
  -DBLAS_LIBRARIES=/path/to/libblas.so \
  -DLAPACK_LIBRARIES=/path/to/liblapack.so
make -C build tblite-lib -j8   # build tblite first to avoid a parallel build race
make -C build xtb-exe -j8

The built binary is at build/xtb. Add it to your PATH or use --xtb-cmd /path/to/build/xtb. For CPCM-X, set CPXHOME to the CPCM-X source directory (e.g., build/_deps/cpcmx-src/). Requires GCC >= 10 (gfortran 8 causes internal compiler errors). See also: https://github.com/grimme-lab/xtb, https://github.com/grimme-lab/CPCM-X

  3. Install a PyTorch build that matches your CUDA environment.
pip install torch==2.8.0 --index-url https://download.pytorch.org/whl/cu129
  4. Install the package with the UMA backend. For ORB/MACE/AIMNet2, replace uma accordingly.
pip install "amber-mlips[uma]"
  5. Log in to Hugging Face for UMA model access (not required for ORB/MACE/AIMNet2).
huggingface-cli login

The UMA models are hosted on the Hugging Face Hub, so a one-time login is required (see https://github.com/facebookresearch/fairchem).
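A quick way to confirm which backends are importable in the active environment is a stdlib-only check. This is a convenience sketch, not part of amber-mlips; the module names below are assumptions based on each project's name and may differ from what the extras actually install:

```python
import importlib.util

def backend_available(module: str) -> bool:
    """True if the given Python module can be imported in this environment."""
    return importlib.util.find_spec(module) is not None

# Module names are guesses from the upstream project names, not
# guaranteed by amber-mlips itself -- adjust to your installation.
for mod in ("torch", "fairchem", "orb_models", "mace", "aimnet"):
    status = "found" if backend_available(mod) else "missing"
    print(f"{mod}: {status}")
```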

  6. Prepare an AMBER input file. Only qm_theory and ml_keywords are plugin-specific; everything else is native AMBER &qmmm. For examples, see the inputs in examples/*.in.
 &cntrl
  imin=0, irest=0, ntx=1,
  nstlim=1000, dt=0.001,
  ntb=0, ntt=3, gamma_ln=5.0,
  ntpr=10, ntwx=10, ntwr=100,
  ifqnt=1,
 /
 &qmmm
  qmmask=':2',
  qmcharge=0,
  spin=1,
  qm_theory='uma',
  ml_keywords='--model uma-s-1p1',
  qmcut=12.0,
  qmshake=0,
 /

Other backends:

  qm_theory='orb',     ml_keywords='--model orb-v3-conservative-omol',
  qm_theory='mace',    ml_keywords='--model MACE-OMOL-0',
  qm_theory='aimnet2', ml_keywords='--model aimnet2',
  7. Run amber-mlips with the standard sander-style flags.
amber-mlips -O \
  -i mlmm.in -o mlmm.out \
  -p leap.parm7 -c md.rst7 \
  -r mlmm.rst7 -x mlmm.nc -inf mlmm.info

Point-Charge Embedding Correction (xTB)

--embedcharge adds an xTB-based correction for electrostatic embedding of MM point charges into the QM region.

Install xTB (if not already installed during the Quick Start):

conda install xtb "libblas=*=*openblas" "liblapack=*=*openblas"

Use --embedcharge in ml_keywords:

  ml_keywords='--model uma-s-1p1 --embedcharge',

This computes dE = E_xTB(embed) - E_xTB(no-embed) and adds the correction to MLIP energy and forces.
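The arithmetic of the correction can be sketched in a few lines. This is a minimal illustration with made-up numbers; embed_correction and its argument names are not the package's internal API:

```python
def embed_correction(e_embed, grads_embed, e_noembed, grads_noembed):
    """dE = E_xTB(embed) - E_xTB(no-embed); the same difference is taken
    per atom component for the gradients (illustrative, not the wrapper's
    actual code)."""
    d_e = e_embed - e_noembed
    d_grads = [
        [ge - gn for ge, gn in zip(atom_e, atom_n)]
        for atom_e, atom_n in zip(grads_embed, grads_noembed)
    ]
    return d_e, d_grads

# The correction is then added on top of the MLIP energy and gradient:
d_e, d_grads = embed_correction(-10.2, [[0.10, 0.0, 0.0]],
                                -10.0, [[0.04, 0.0, 0.0]])
e_total = -1234.5 + d_e  # MLIP energy plus the xTB embedding correction
```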

ML-Only MD (Full-System MLIP)

See the ML-Only MD section in OPTIONS.md for full-system MLIP molecular dynamics (qmmask='@*') with implicit solvent (non-periodic only).

MM MPI Parallelism

The ML evaluation path is always single-process. The MM side (sander) can use MPI:

amber-mlips --mm-ranks 16 -O -i mlmm.in -o mlmm.out -p leap.parm7 -c md.rst7 -r mlmm.rst7
  • --mm-ranks 1 (default): runs sander directly.
  • --mm-ranks > 1: uses mpirun/mpiexec + sander.MPI. Requires AmberTools built with MPI support.

Note: AMBER 24 (and earlier) has a bug in qm2_extern_module.F90 that corrupts forces in multi-rank EXTERN runs. Use AmberTools 25 or later for --mm-ranks > 1.
Also place --mm-ranks between amber-mlips and -O (e.g., amber-mlips --mm-ranks 16 -O ...).
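The launcher rule above can be sketched as follows. This is an illustration of the documented behavior, not the wrapper's actual internals; function and variable names are made up:

```python
import shutil

def mm_command(mm_ranks: int, sander_args: list[str]) -> list[str]:
    """One rank runs sander directly; more ranks go through
    mpirun/mpiexec with sander.MPI (sketch of the rule described above)."""
    if mm_ranks <= 1:
        return ["sander"] + sander_args
    # prefer whichever MPI launcher is on PATH
    launcher = shutil.which("mpirun") or shutil.which("mpiexec") or "mpirun"
    return [launcher, "-np", str(mm_ranks), "sander.MPI"] + sander_args

print(mm_command(1, ["-O", "-i", "mlmm.in"]))
print(mm_command(16, ["-O", "-i", "mlmm.in"]))
```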

Installing Model Families

pip install "amber-mlips[uma]"         # UMA (default)
pip install "amber-mlips[orb]"         # ORB
pip install "amber-mlips[mace]"        # MACE
pip install "amber-mlips[aimnet2]"     # AIMNet2
pip install amber-mlips                # core only (no ML backend)

Note: UMA and MACE have a dependency conflict (e3nn). Use separate environments.

Local install:

git clone https://github.com/t-0hmura/amber-mlips.git
cd amber-mlips
pip install -e ".[uma]"

Model download notes:

  • UMA: Hosted on Hugging Face Hub. Run huggingface-cli login once.
  • ORB / MACE / AIMNet2: Downloaded automatically on first use.

Examples

The examples/ directory contains ready-to-run inputs for a protein-ligand system (1IL4; 50,387 atoms, 115 QM atoms).

File                    Backend  Description
uma.in                  UMA      uma-s-1p1
orb.in                  ORB      orb-v3-conservative-omol
mace.in                 MACE     MACE-OMOL-0
aimnet2.in              AIMNet2  aimnet2
uma_embedcharge.in      UMA      uma-s-1p1 + xTB embedcharge
uma_mlonly_implicit.in  UMA      ML-only + xTB implicit solvent (non-periodic, ALPB)

UMA, ORB, and AIMNet2 can share one environment; MACE requires a separate one (see Installing Model Families). Run the example matching your installed backend:

cd examples
amber-mlips --mm-ranks 16 -O -i uma.in -o uma.out -p leap.parm7 -c md.rst7 -r uma.rst7

Performance Reference

Benchmark on a protein-ligand system (1IL4, 50,387 atoms, 115 ML-region atoms):

                  UMA            UMA + embedcharge
Model             uma-s-1p1      uma-s-1p1 --embedcharge
Total atoms       50,387         50,387
ML region atoms   115            115
dt                0.0005 ps      0.0005 ps
Per step          ~135 ms        ~579 ms
Speed             ~321 ps/day    ~75 ps/day

Environment: AMD Ryzen 7950X3D / 4.20 GHz (32 threads) + RTX 5080 (VRAM 16 GB), RAM 128 GB. --mm-ranks 16 used for MM MPI parallelism.
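As a sanity check, the throughput figures follow directly from the time step and per-step wall time:

```python
def ps_per_day(dt_ps: float, step_ms: float) -> float:
    """Simulated picoseconds per wall-clock day for a given time step
    and per-step wall time."""
    ms_per_day = 86_400_000.0  # 24 h * 3600 s/h * 1000 ms/s
    return dt_ps * ms_per_day / step_ms

print(round(ps_per_day(0.0005, 135)))  # 320, matching the table's ~321
print(round(ps_per_day(0.0005, 579)))  # 75, matching the table's ~75
```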

Advanced Options

See OPTIONS.md for all wrapper and backend-specific options. For internal architecture details, see TECHNICAL_NOTE.md.

Troubleshooting

  • amber-mlips command not found — Activate the conda/venv environment where the package is installed.
  • sander not found — Install AmberTools (conda install ambertools-dac=25), or use --sander-bin /path/to/sander.
  • UMA model download fails (401/403) — Run huggingface-cli login. Some models require access approval on Hugging Face.
  • MPI errors with --mm-ranks > 1 — Ensure mpirun/mpiexec is available. Use --mpi-bin to specify explicitly.
  • Works interactively but fails in batch jobs — Use --sander-bin with an absolute path.
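Several of the issues above come down to an executable not being on PATH in the failing environment. A quick stdlib diagnostic:

```python
import shutil

# Print where each executable resolves in the current environment,
# or flag it as missing -- useful for the PATH issues listed above.
for exe in ("amber-mlips", "sander", "sander.MPI", "mpirun", "xtb"):
    path = shutil.which(exe)
    print(f"{exe}: {path or 'NOT FOUND'}")
```

Run this inside the batch job's environment to see exactly what the job can (and cannot) find.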

Citation

If you use this package, please cite:

@software{ohmura2026ambermlips,
  author       = {Ohmura, Takuto},
  title        = {amber-mlips},
  year         = {2026},
  version      = {1.2.1},
  url          = {https://github.com/t-0hmura/amber-mlips},
  license      = {MIT},
  doi          = {10.5281/zenodo.19197776}
}
