A simple dataflow engine with scalable semantics.
Pydra is a rewrite of the Nipype engine with mapping and joining as first-class operations. It forms the core of the Nipype 2.0 ecosystem.
The goal of pydra is to provide a lightweight Python dataflow engine for DAG construction, manipulation, and distributed execution.
- Python 3.11+ using type annotation and attrs
- Composable dataflows with simple node semantics. A dataflow can be a node of another dataflow.
- Splitter and combiner operations provide concise ways of expressing complex loop semantics
- Cached execution with support for a global cache across dataflows and users
- Distributed execution, presently via ConcurrentFutures, SLURM, and SGE, with support for PSI/J and Dask available via plugins
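The splitter/combiner model above can be illustrated with a small plain-Python sketch (this is a conceptual illustration only, not the pydra API): splitting expands a task over combinations of its list-valued inputs, and combining joins the mapped results back into a single collection.

```python
from itertools import product

def split(inputs: dict) -> list[dict]:
    """Expand list-valued inputs into one input set per combination
    (an outer-product split, analogous in spirit to pydra's splitter)."""
    keys = list(inputs)
    values = [v if isinstance(v, list) else [v] for v in inputs.values()]
    return [dict(zip(keys, combo)) for combo in product(*values)]

def combine(results) -> list:
    """Collect the mapped results back into one list
    (analogous in spirit to pydra's combiner)."""
    return list(results)

def add(a, b):
    return a + b

# Map `add` over every combination of a and b, then join the results.
tasks = split({"a": [1, 2], "b": [10, 20]})
results = combine(add(**t) for t in tasks)
print(results)  # [11, 21, 12, 22]
```

In pydra itself, split and combine are first-class operations on tasks rather than helper functions, so the loop structure never has to be written out by hand.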
- SciPy 2020 Proceedings
- PyCon 2020 Poster
- Explore Pydra interactively (the tutorial can also be run using the Binder service)
Please note that mybinder times out after an hour.
Pydra can be installed from PyPI using pip. Note that you currently need to specify the 1.0 alpha version explicitly due to a quirk of PyPI version sorting; otherwise you will end up with the old 0.25 release:
```shell
pip install --upgrade pip
pip install "pydra>=1.0a"
```
If you want to use the PSI/J or Dask workers, install the relevant plugin packages:
```shell
pip install pydra-workers-psij
pip install pydra-workers-dask
```
Task implementations for various toolkits and workflows are available as task plugins, which can be installed similarly:
```shell
pip install pydra-tasks-mrtrix3
pip install pydra-tasks-fsl
```
Pydra requires Python 3.11+. To install in developer mode:
```shell
git clone [email protected]:nipype/pydra.git
cd pydra
pip install -e ".[dev]"
```
To run pydra's tests locally:
```shell
pytest -vs pydra
```
It is also useful to install pre-commit and set up its git hooks:
```shell
pip install pre-commit
pre-commit install
```