MLIP (Machine Learning Interatomic Potential) wrapper for AMBER QM/MM via sander EXTERN interface.
Four model families are currently supported:
- UMA (fairchem) — default model: `uma-s-1p1`
- ORB (orb-models) — default model: `orb-v3-conservative-omol`
- MACE (mace) — default model: `MACE-OMOL-0`
- AIMNet2 (aimnetcentral) — default model: `aimnet2`
All backends provide energy and gradient for AMBER QM/MM molecular dynamics and optimization.
An optional point-charge embedding correction with xTB is available via --embedcharge.
Requires Python 3.9 or later and AmberTools (sander).
AmberTools is free of charge (GNU GPL); sander / sander.MPI are LGPL 2.1.
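As a quick sanity check of these prerequisites, a small snippet (illustrative, not part of the package) can verify the interpreter version and whether `sander` is on the PATH:

```python
# Illustrative prerequisite check; not part of amber-mlips itself.
import shutil
import sys

# The wrapper requires Python 3.9 or later.
assert sys.version_info >= (3, 9), f"Python too old: {sys.version}"
print("Python OK:", sys.version.split()[0])

# sander comes from AmberTools and must be on PATH (or passed via --sander-bin).
sander = shutil.which("sander")
print("sander:", sander if sander else "not found -- install AmberTools")
```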
- (Optional) Install AmberTools if not already installed. AmberTools25 or later is recommended.

```
conda config --add channels conda-forge
conda config --add channels dacase
conda config --set channel_priority strict
conda install ambertools-dac=25
```

The conda package includes sander and sander.MPI (OpenMPI), and requires Python 3.12.
- (Optional) Install xTB. Only needed for `--embedcharge`.

```
conda install xtb "libblas=*=*openblas" "liblapack=*=*openblas"
```

The libblas/liblapack specs prevent the BLAS library from being replaced with the slower netlib implementation. See TECHNICAL_NOTE.md for details.
To build xTB from source (required for CPCM-X solvation via `--solvent-model cpcmx`):

```
git clone --depth 1 https://github.com/grimme-lab/xtb.git
cd xtb
cmake -B build -S . \
    -DCMAKE_BUILD_TYPE=Release \
    -DWITH_CPCMX=ON \
    -DBLAS_LIBRARIES=/path/to/libblas.so \
    -DLAPACK_LIBRARIES=/path/to/liblapack.so
make -C build tblite-lib -j8  # build tblite first to avoid a parallel build race
make -C build xtb-exe -j8
```

The built binary is at `build/xtb`. Add it to your PATH or use `--xtb-cmd /path/to/build/xtb`.
For CPCM-X, set `CPXHOME` to the CPCM-X source directory (e.g., `build/_deps/cpcmx-src/`).
Requires GCC >= 10 (gfortran 8 causes internal compiler errors).
See also: https://github.com/grimme-lab/xtb, https://github.com/grimme-lab/CPCM-X
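A minimal way to confirm the setup described above (binary reachable, `CPXHOME` exported) is sketched below; the paths match the build commands, but the check itself is only an illustration:

```python
# Illustrative check that the xTB source build above is usable (not part of amber-mlips).
import os
import shutil

# Prefer an xtb found on PATH; otherwise fall back to the build-tree location.
xtb_path = shutil.which("xtb") or "build/xtb"
print("xtb binary:", xtb_path)

# CPXHOME must point at the CPCM-X sources for --solvent-model cpcmx.
cpxhome = os.environ.get("CPXHOME")
print("CPXHOME:", cpxhome if cpxhome else "not set (needed only for CPCM-X)")
```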
- Install PyTorch suitable for your CUDA environment.

```
pip install torch==2.8.0 --index-url https://download.pytorch.org/whl/cu129
```

- Install the package with the UMA backend. For ORB/MACE/AIMNet2, replace `uma` accordingly.

```
pip install "amber-mlips[uma]"
```

- Log in to Hugging Face for UMA model access (not required for ORB/MACE/AIMNet2). The UMA model is hosted on the Hugging Face Hub, so you need to log in once (see https://github.com/facebookresearch/fairchem):

```
huggingface-cli login
```
- Prepare an AMBER input file. Only `qm_theory` and `ml_keywords` are plugin-specific; everything else is native AMBER `&qmmm`. For examples, see the inputs in `examples/*.in`.
```
&cntrl
  imin=0, irest=0, ntx=1,
  nstlim=1000, dt=0.001,
  ntb=0, ntt=3, gamma_ln=5.0,
  ntpr=10, ntwx=10, ntwr=100,
  ifqnt=1,
/
&qmmm
  qmmask=':2',
  qmcharge=0,
  spin=1,
  qm_theory='uma',
  ml_keywords='--model uma-s-1p1',
  qmcut=12.0,
  qmshake=0,
/
```
Other backends:

```
qm_theory='orb',     ml_keywords='--model orb-v3-conservative-omol',
qm_theory='mace',    ml_keywords='--model MACE-OMOL-0',
qm_theory='aimnet2', ml_keywords='--model aimnet2',
```
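The `qm_theory` / default-model pairs above can be restated as a simple lookup; the mapping below only summarizes the settings shown and is not an API of the package:

```python
# Default model per backend, restating the ml_keywords examples above
# (a summary for reference, not a package API).
DEFAULT_MODELS = {
    "uma": "uma-s-1p1",
    "orb": "orb-v3-conservative-omol",
    "mace": "MACE-OMOL-0",
    "aimnet2": "aimnet2",
}

def ml_keywords(backend: str) -> str:
    """Build the ml_keywords string for a given qm_theory value."""
    return f"--model {DEFAULT_MODELS[backend]}"

print(ml_keywords("uma"))   # --model uma-s-1p1
```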
- Run with `amber-mlips` and standard `sander`-like flags.

```
amber-mlips -O \
    -i mlmm.in -o mlmm.out \
    -p leap.parm7 -c md.rst7 \
    -r mlmm.rst7 -x mlmm.nc -inf mlmm.info
```

`--embedcharge` adds an xTB-based correction for electrostatic embedding of MM point charges into the QM region.
Install xTB (if not already installed in Quick Start step 1):

```
conda install xtb "libblas=*=*openblas" "liblapack=*=*openblas"
```

Use `--embedcharge` in `ml_keywords`:

```
ml_keywords='--model uma-s-1p1 --embedcharge',
```

This computes dE = E_xTB(embed) - E_xTB(no-embed) and adds the correction to the MLIP energy and forces.
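The arithmetic of that correction can be sketched as follows; the function and all numeric values are purely illustrative, not the package's internals:

```python
# Sketch of the --embedcharge correction arithmetic (illustrative values only).
def embed_correction(e_embed, e_noembed, grad_embed, grad_noembed):
    """dE = E_xTB(embed) - E_xTB(no-embed); the force correction is -d(gradient)."""
    d_energy = e_embed - e_noembed
    d_forces = [[-(ge - gn) for ge, gn in zip(row_e, row_n)]
                for row_e, row_n in zip(grad_embed, grad_noembed)]
    return d_energy, d_forces

# Toy xTB energies (hartree) and per-atom gradients for a 2-atom QM region.
d_e, d_f = embed_correction(
    -154.30, -154.25,
    [[0.01, 0.01, 0.01], [0.01, 0.01, 0.01]],
    [[0.02, 0.02, 0.02], [0.02, 0.02, 0.02]],
)
e_total = -154.21 + d_e   # MLIP energy plus the correction
print(round(d_e, 6))      # -0.05
```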
See the ML-Only MD section in OPTIONS.md for full-system MLIP molecular dynamics (qmmask='@*') with implicit solvent (non-periodic only).
The ML evaluation path is always single-process. The MM side (sander) can use MPI:

```
amber-mlips --mm-ranks 16 -O -i mlmm.in -o mlmm.out -p leap.parm7 -c md.rst7 -r mlmm.rst7
```

- `--mm-ranks 1` (default): runs `sander` directly.
- `--mm-ranks > 1`: uses `mpirun`/`mpiexec` + `sander.MPI`. Requires AmberTools built with MPI support.

Note: AMBER 24 (and earlier) has a bug in `qm2_extern_module.F90` that corrupts forces in multi-rank EXTERN runs. Use AmberTools 25 or later for `--mm-ranks > 1`.
Also, place `--mm-ranks` between `amber-mlips` and `-O` (e.g., `amber-mlips --mm-ranks 16 -O ...`).
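The rank dispatch described above can be sketched as a tiny helper; this is an illustration of the documented behavior, not the wrapper's actual code:

```python
# Illustrative sketch of --mm-ranks dispatch (not the wrapper's real implementation).
def mm_command(mm_ranks: int, mpi_bin: str = "mpirun") -> list:
    if mm_ranks <= 1:
        return ["sander"]  # default: plain sander, no MPI launcher
    # Multi-rank MM side: launch sander.MPI under mpirun/mpiexec.
    return [mpi_bin, "-np", str(mm_ranks), "sander.MPI"]

print(mm_command(1))    # ['sander']
print(mm_command(16))   # ['mpirun', '-np', '16', 'sander.MPI']
```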
```
pip install "amber-mlips[uma]"      # UMA (default)
pip install "amber-mlips[orb]"      # ORB
pip install "amber-mlips[mace]"     # MACE
pip install "amber-mlips[aimnet2]"  # AIMNet2
pip install amber-mlips             # core only (no ML backend)
```

Note: UMA and MACE have a dependency conflict (`e3nn`). Use separate environments.
Local install:

```
git clone https://github.com/t-0hmura/amber-mlips.git
cd amber-mlips
pip install -e ".[uma]"
```

Model download notes:
- UMA: Hosted on the Hugging Face Hub. Run `huggingface-cli login` once.
- ORB / MACE / AIMNet2: Downloaded automatically on first use.
Ready-to-run examples are in the examples/ directory with a protein-ligand system (1IL4, 50,387 atoms, 115 QM atoms).
| File | Backend | Description |
|---|---|---|
| `uma.in` | UMA | uma-s-1p1 |
| `orb.in` | ORB | orb-v3-conservative-omol |
| `mace.in` | MACE | MACE-OMOL-0 |
| `aimnet2.in` | AIMNet2 | aimnet2 |
| `uma_embedcharge.in` | UMA | uma-s-1p1 + xTB embedcharge |
| `uma_mlonly_implicit.in` | UMA | ML-only + xTB implicit solvent (non-periodic, ALPB) |
UMA, ORB, and AIMNet2 can share one environment; MACE requires a separate one (see Installing Model Families). Run the example matching your installed backend:
```
cd examples
amber-mlips --mm-ranks 16 -O -i uma.in -o uma.out -p leap.parm7 -c md.rst7 -r uma.rst7
```

Benchmark on a protein-ligand system (1IL4, 50,387 atoms, 115 ML-region atoms):
| | UMA | UMA + embedcharge |
|---|---|---|
| Model | `uma-s-1p1` | `uma-s-1p1 --embedcharge` |
| Total atoms | 50,387 | 50,387 |
| ML region atoms | 115 | 115 |
| dt | 0.0005 ps | 0.0005 ps |
| Per step | ~135 ms | ~579 ms |
| Speed | ~321 ps/day | ~75 ps/day |
Environment: AMD Ryzen 7950X3D / 4.20 GHz (32 threads) + RTX 5080 (VRAM 16 GB), RAM 128 GB.
--mm-ranks 16 used for MM MPI parallelism.
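The throughput rows follow directly from the per-step wall time and the timestep; a quick check of the arithmetic:

```python
# Derive the table's ps/day figures from per-step wall time and dt.
MS_PER_DAY = 86_400_000  # milliseconds in one day

def ps_per_day(step_ms: float, dt_ps: float = 0.0005) -> float:
    """Simulated picoseconds per wall-clock day at a given per-step cost."""
    steps_per_day = MS_PER_DAY / step_ms
    return steps_per_day * dt_ps

print(round(ps_per_day(135)))   # 320 (table reports ~321)
print(round(ps_per_day(579)))   # 75  (table reports ~75)
```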
- UMA / FAIR-Chem: https://github.com/facebookresearch/fairchem
- ORB / orb-models: https://github.com/orbital-materials/orb-models
- MACE: https://github.com/ACEsuit/mace
- AIMNet2: https://github.com/isayevlab/aimnetcentral
See OPTIONS.md for all wrapper and backend-specific options.
For internal architecture details, see TECHNICAL_NOTE.md.
- `amber-mlips` command not found — Activate the conda/venv environment where the package is installed.
- `sander` not found — Install AmberTools (`conda install ambertools-dac=25`), or use `--sander-bin /path/to/sander`.
- UMA model download fails (401/403) — Run `huggingface-cli login`. Some models require access approval on Hugging Face.
- MPI errors with `--mm-ranks > 1` — Ensure `mpirun`/`mpiexec` is available. Use `--mpi-bin` to specify it explicitly.
- Works interactively but fails in batch jobs — Use `--sander-bin` with an absolute path.
- AMBER24 manual (detailed MD settings): https://ambermd.org/doc12/Amber24.pdf
If you use this package, please cite:
```bibtex
@software{ohmura2026ambermlips,
  author  = {Ohmura, Takuto},
  title   = {amber-mlips},
  year    = {2026},
  version = {1.2.1},
  url     = {https://github.com/t-0hmura/amber-mlips},
  license = {MIT},
  doi     = {10.5281/zenodo.19197776}
}
```