2025-09-16 Under Development. See https://github.com/ll7/robot_sf_ll7/issues.
This project provides a training environment for the simulation of a robot moving in a pedestrian-filled space.
The project interfaces with the Farama Foundation's Gymnasium (formerly OpenAI Gym) to facilitate training with state-of-the-art reinforcement learning libraries such as StableBaselines3. Pedestrians are simulated with the Social Force model via a dependency on a fork of PySocialForce.
The following video shows training results in which a robot with e-scooter kinematics drives on the campus of the University of Augsburg, using real map data from OpenStreetMap.
- About
- Development and Installation
- Prerequisites
- Quick Start
- Examples Catalog
- Development Setup
- Artifact Outputs
- Alternative Installation Methods
- System Dependencies
- Tests
- 5. Run Visual Debugging of Pre-Trained Demo Models
- 6. Run StableBaselines Training (Docker)
- 7. Edit Maps
- 8. Optimize Training Hyperparams (Docker)
- 9. Extension: Pedestrian as Adversarial-Agent
- Documentation
Refer to the development guide for contribution guidelines, code standards, and templates.
This project now uses uv for modern Python dependency management and virtual environment handling.
Install Python 3.10+ (Python 3.12 is recommended) and uv:
```bash
# Install uv (the modern Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh
# or
pip install uv
```

```bash
# Clone the repository with submodules
git clone --recurse-submodules https://github.com/ll7/robot_sf_ll7
cd robot_sf_ll7

# Install all dependencies and create virtual environment automatically
uv sync

# Activate the virtual environment
source .venv/bin/activate

# Install system dependencies (Linux/Ubuntu)
sudo apt-get update && sudo apt-get install -y ffmpeg
```

Consult the curated `examples/README.md` for quickstart,
advanced, benchmark, and plotting workflows. Each entry lists prerequisites,
expected outputs, and CI status.
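Once the sync completes, a short smoke test can verify the install. This is a minimal sketch, assuming the factory API shown later in this README and the standard Gymnasium reset signature:

```python
# Installation smoke test (a sketch; uses the environment factory
# documented in the Documentation section of this README).
from robot_sf.gym_env.environment_factory import make_robot_env

env = make_robot_env()  # default arguments assumed; pass debug=True for the pygame view
obs, info = env.reset()  # standard Gymnasium reset signature assumed
print("Environment created, first observation type:", type(obs))
env.close()
```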
For development work with additional tools:
```bash
# Install with development dependencies
uv sync --extra dev

# Install pre-commit hooks
uv run pre-commit install

# Run tests (unified suite: robot_sf + fast-pysf)
uv run pytest                    # 893 tests total

# Run only robot_sf tests
uv run pytest tests              # 881 tests

# Run only fast-pysf tests
uv run pytest fast-pysf/tests    # 12 tests

# Run linting and formatting
uv run ruff check .
uv run ruff format .
```

- Generated artifacts are routed into the canonical `output/` tree (`output/coverage/`, `output/benchmarks/`, `output/recordings/`, `output/wandb/`, `output/tmp/`).
- After pulling new changes, run `uv run python scripts/tools/migrate_artifacts.py` (or `uv run robot-sf-migrate-artifacts`) to relocate any legacy `results/`, `recordings/`, or `htmlcov/` directories.
- Use `uv run python scripts/tools/check_artifact_root.py` to verify the repository root stays clean; CI runs the same guard.
- Set `ROBOT_SF_ARTIFACT_ROOT=/path/to/custom/output` before invoking scripts if you need to direct artifacts elsewhere; the helpers and guard respect the override (see the sketch after this list).
- Coverage HTML is available via `uv run python scripts/coverage/open_coverage_report.py`, which opens `output/coverage/htmlcov/index.html` cross-platform.
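As a sketch of the override, the artifact root can be redirected programmatically before invoking the helpers; the target path here is hypothetical:

```python
# A minimal sketch: redirect artifacts to a hypothetical custom location,
# then run the root-cleanliness guard, which respects the override.
import os
import subprocess

os.environ["ROBOT_SF_ARTIFACT_ROOT"] = "/data/robot_sf_artifacts"  # hypothetical path

subprocess.run(
    ["uv", "run", "python", "scripts/tools/check_artifact_root.py"],
    check=True,  # raises CalledProcessError if the guard fails
)
```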
If you prefer more control over the installation:
```bash
# Create virtual environment with specific Python version
uv venv --python 3.12

# Activate environment
source .venv/bin/activate

# Install project in editable mode
uv sync

# Install development tools (optional)
uv sync --group=dev
```

For containerized environments:

```bash
docker compose build && docker compose run \
    robotsf-cuda python ./scripts/training_ppo.py
```

Note: See GPU setup documentation for Docker with GPU support.
FFMPEG (required for video recording):
```bash
# Ubuntu/Debian
sudo apt-get install -y ffmpeg

# macOS
brew install ffmpeg

# Windows: download from https://ffmpeg.org/download.html
```

The project uses a unified test suite that runs both robot_sf and fast-pysf tests via a single command:

```bash
# Run all tests (recommended)
uv run pytest                    # 893 tests (881 robot_sf + 12 fast-pysf)

# Run only robot_sf tests
uv run pytest tests              # 881 tests

# Run only fast-pysf tests
uv run pytest fast-pysf/tests    # 12 tests

# Run with parallel execution (faster)
uv run pytest -n auto
```

All tests should pass successfully. The test suite includes:
- robot_sf tests (881): Unit, integration, baselines, benchmarks
- fast-pysf tests (12): Force calculations, map loading, simulator functionality
Note: The fast-pysf tests are now integrated into the main pytest configuration and no longer require running from the `fast-pysf/` directory or using `python -m pytest`.
```bash
# Lint and format
uv run ruff check --fix . && uv run ruff format .

# Run all tests (unified suite)
uv run pytest

# Legacy linter (for comparison)
pylint robot_sf
```

```bash
pytest test_pygame
```

```bash
uv run python examples/advanced/10_offensive_policy.py
uv run python examples/advanced/09_defensive_policy.py

# Classic interactions deterministic PPO visualization (Feature 128)
uv run python examples/_archived/classic_interactions_pygame.py
```

```bash
docker compose build && docker compose run \
    robotsf-cuda python ./scripts/training_ppo.py
```

Note: See this setup to install Docker with GPU support.
Older versions use `docker-compose` instead of `docker compose`.
The preferred way to create maps is the SVG Editor.
```bash
docker-compose build && docker-compose run \
    robotsf-cuda python ./scripts/hparam_opt.py
```

The pedestrian is an adversarial agent that tries to find weak points in the vehicle's policy.
The environment is built according to Gymnasium rules, so multiple RL algorithms can be used to train the pedestrian.
It is important to know that the pedestrian always spawns near the robot.
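Because the environment follows the Gymnasium API, a standard rollout loop applies. A minimal sketch, assuming a pre-trained robot policy checkpoint (the path and the random pedestrian actions are placeholders, not the project's training setup):

```python
# A hedged sketch of a Gymnasium-style rollout in the adversarial
# pedestrian environment; the checkpoint path is hypothetical.
from stable_baselines3 import PPO

from robot_sf.gym_env.environment_factory import make_pedestrian_env

robot_model = PPO.load("./model/ppo_robot")  # hypothetical checkpoint
env = make_pedestrian_env(robot_model=robot_model, debug=True)

obs, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # stand-in for a trained pedestrian policy
    obs, reward, terminated, truncated, info = env.step(action)
env.close()
```

A ready-made script exercises the same factory: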
```bash
uv run python examples/advanced/06_pedestrian_env_factory.py
```

Comprehensive documentation is available in the docs/ directory.
For detailed guides on setup, development, benchmarking, and architecture, visit the Documentation Index.
- Environment Refactoring - NEW: Comprehensive guide to the refactored environment architecture
- Data Analysis - Analysis tools and utilities
- GPU Setup - GPU configuration for training
- Map Editor Usage - Creating and editing simulation maps
- SVG Map Editor - SVG-based map creation
- Simulation View - Visualization and rendering
- UV Migration - Migration to UV package manager
- Artifact Policy Quickstart - Migration workflow, guard usage, and override instructions for the canonical `output/` tree
- Contributing Agents Guide - Repository structure, coding style, test workflow, and contributor conventions (start here if new!)
The project has been refactored to provide a consistent, extensible environment system:
```python
# New factory pattern for environment creation
from robot_sf.gym_env.environment_factory import (
    make_robot_env,
    make_image_robot_env,
    make_pedestrian_env,
)

# Clean, consistent interface
robot_env = make_robot_env(debug=True)
image_env = make_image_robot_env(debug=True)
ped_env = make_pedestrian_env(robot_model=model, debug=True)
```

Key Benefits:
- 50% reduction in code duplication
- Consistent interface across all environment types
- Easy extensibility for new environment types
- Backward compatibility maintained
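As an illustration of the Gymnasium compatibility, here is a hedged sketch of training a robot policy with StableBaselines3 on a factory-created environment; the policy string, timestep budget, and save path are assumptions, not the project's training script:

```python
# A sketch, not the project's training pipeline: PPO on the factory env.
from stable_baselines3 import PPO

from robot_sf.gym_env.environment_factory import make_robot_env

env = make_robot_env()  # headless; pass debug=True for visualization
model = PPO("MultiInputPolicy", env, verbose=1)  # policy choice is an assumption
model.learn(total_timesteps=100_000)
model.save("./model/ppo_robot_demo")  # hypothetical output path
```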
Read the full refactoring documentation →
Tools for recomputing, optimizing, and analyzing Social Navigation Quality Index (SNQI) weights are now available:
- User Guide: `docs/snqi-weight-tools/README.md`
- Design & architecture: `docs/dev/issues/snqi-recomputation/DESIGN.md`
- Headless usage (minimal deps): see Headless mode

