
robot-sf

2025-09-16: Under development. See https://github.com/ll7/robot_sf_ll7/issues for current status.

About

This project provides a training environment for the simulation of a robot moving in a pedestrian-filled space.

The project interfaces with the Farama Foundation's Gymnasium (formerly OpenAI Gym) to enable training with state-of-the-art reinforcement learning algorithms, e.g. via Stable-Baselines3. Pedestrians are simulated with the Social Force model through a dependency on a fork of PySocialForce.
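
As a rough sketch of that interface (assuming the make_robot_env factory described later in this README, and Stable-Baselines3 installed), training a PPO policy could look like this:

# Minimal, illustrative training sketch; not the project's canonical training script.
# "MlpPolicy" is a placeholder: dict observation spaces need "MultiInputPolicy".
from stable_baselines3 import PPO
from robot_sf.gym_env.environment_factory import make_robot_env

env = make_robot_env()
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)  # short run for illustration only
model.save("./model/ppo_robot_sf_demo")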

The following video shows training results in which a robot with e-scooter kinematics drives across the campus of the University of Augsburg, using real map data from OpenStreetMap.

Development and Installation

Refer to the development guide for contribution guidelines, code standards, and templates.

This project now uses uv for modern Python dependency management and virtual environment handling.

Prerequisites

Install Python 3.10+ (Python 3.12 is recommended) and uv:

# Install uv (the modern Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh
# or
pip install uv
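
Either route should leave the uv executable on your PATH; verify with:

uv --version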

Quick Start

# Clone the repository with submodules
git clone --recurse-submodules https://github.com/ll7/robot_sf_ll7
cd robot_sf_ll7

# Install all dependencies and create virtual environment automatically
uv sync

# Activate the virtual environment
source .venv/bin/activate

# Install system dependencies (Linux/Ubuntu)
sudo apt-get update && sudo apt-get install -y ffmpeg
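
To confirm the setup works, a quick smoke test (assuming the package is importable as robot_sf, matching the source layout):

# The import should succeed without errors
uv run python -c "import robot_sf"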

Examples Catalog

Consult the curated examples/README.md for quickstart, advanced, benchmark, and plotting workflows. Each entry lists prerequisites, expected outputs, and CI status.

Development Setup

For development work with additional tools:

# Install with development dependencies
uv sync --extra dev

# Install pre-commit hooks
uv run pre-commit install

# Run tests (unified suite: robot_sf + fast-pysf)
uv run pytest  # → 893 tests total

# Run only robot_sf tests
uv run pytest tests  # → 881 tests

# Run only fast-pysf tests
uv run pytest fast-pysf/tests  # → 12 tests

# Run linting and formatting
uv run ruff check .
uv run ruff format .

Artifact Outputs

  • Generated artifacts are routed into the canonical output/ tree (output/coverage/, output/benchmarks/, output/recordings/, output/wandb/, output/tmp/).
  • After pulling new changes, run uv run python scripts/tools/migrate_artifacts.py (or uv run robot-sf-migrate-artifacts) to relocate any legacy results/, recordings/, or htmlcov/ directories.
  • Use uv run python scripts/tools/check_artifact_root.py to verify the repository root stays clean; CI runs the same guard.
  • Set ROBOT_SF_ARTIFACT_ROOT=/path/to/custom/output before invoking scripts if you need to direct artifacts elsewhere; the helpers and guard respect the override (see the example after this list).
  • Coverage HTML is available via uv run python scripts/coverage/open_coverage_report.py, which opens output/coverage/htmlcov/index.html cross-platform.
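
For example, to route artifacts to a scratch directory (path is illustrative) and run the guard:

# Route artifacts elsewhere for this invocation, then verify the repo root
ROBOT_SF_ARTIFACT_ROOT=/tmp/robot_sf_output uv run python scripts/tools/check_artifact_root.py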

Alternative Installation Methods

Manual dependency installation

If you prefer more control over the installation:

# Create virtual environment with specific Python version
uv venv --python 3.12

# Activate environment
source .venv/bin/activate

# Install project in editable mode
uv sync

# Install development tools (optional)
uv sync --group=dev

Docker Installation (Advanced)

For containerized environments:

docker compose build && docker compose run \
    robotsf-cuda python ./scripts/training_ppo.py

Note: See GPU setup documentation for Docker with GPU support.

System Dependencies

FFMPEG (required for video recording):

# Ubuntu/Debian
sudo apt-get install -y ffmpeg

# macOS
brew install ffmpeg

# Windows
# Download from https://ffmpeg.org/download.html
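
To confirm ffmpeg is available on your PATH afterwards:

ffmpeg -version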

Tests

Unified Test Suite

The project uses a unified test suite that runs both robot_sf and fast-pysf tests via a single command:

# Run all tests (recommended)
uv run pytest  # → 893 tests (881 robot_sf + 12 fast-pysf)

# Run only robot_sf tests
uv run pytest tests  # → 881 tests

# Run only fast-pysf tests
uv run pytest fast-pysf/tests  # → 12 tests

# Run with parallel execution (faster; -n auto is provided by the pytest-xdist plugin)
uv run pytest -n auto

All tests should pass successfully. The test suite includes:

  • robot_sf tests (881): Unit, integration, baselines, benchmarks
  • fast-pysf tests (12): Force calculations, map loading, simulator functionality

Note: The fast-pysf tests are now integrated into the main pytest configuration and no longer require running from the fast-pysf/ directory or using python -m pytest.

Run Linter / Tests

# Lint and format
uv run ruff check --fix . && uv run ruff format .

# Run all tests (unified suite)
uv run pytest

# Legacy linter (for comparison)
pylint robot_sf

GUI Tests

uv run pytest test_pygame

Run Visual Debugging of Pre-Trained Demo Models

uv run python examples/advanced/10_offensive_policy.py
uv run python examples/advanced/09_defensive_policy.py
# Deterministic PPO visualization of classic interactions (Feature 128)
uv run python examples/_archived/classic_interactions_pygame.py


Run StableBaselines Training (Docker)

docker compose build && docker compose run \
    robotsf-cuda python ./scripts/training_ppo.py

Note: See the GPU setup documentation to install Docker with GPU support.

Older Docker versions use docker-compose instead of docker compose.

Edit Maps

The preferred way to create maps is the SVG editor.

Optimize Training Hyperparameters (Docker)

docker-compose build && docker-compose run \
    robotsf-cuda python ./scripts/hparam_opt.py

Extension: Pedestrian as Adversarial Agent

The pedestrian is an adversarial agent that tries to find weak points in the vehicle's policy.

The environment follows the Gymnasium API, so multiple RL algorithms can be used to train the pedestrian.

Note that the pedestrian always spawns near the robot.
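
A minimal interaction sketch, assuming make_pedestrian_env from the factory module described below and a pre-trained robot policy already loaded as robot_model (a placeholder name here):

# Illustrative only: drive the adversarial pedestrian with random actions.
from robot_sf.gym_env.environment_factory import make_pedestrian_env

env = make_pedestrian_env(robot_model=robot_model, debug=True)  # robot_model: pre-trained policy (assumed)
obs, info = env.reset()
for _ in range(100):
    action = env.action_space.sample()  # random adversarial action
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()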


uv run python examples/advanced/06_pedestrian_env_factory.py


📚 Documentation

Comprehensive documentation is available in the docs/ directory.

For detailed guides on setup, development, benchmarking, and architecture, visit the Documentation Index.

Core Documentation

Environment Architecture (New!)

The project has been refactored to provide a consistent, extensible environment system:

# New factory pattern for environment creation
from robot_sf.gym_env.environment_factory import (
    make_robot_env,
    make_image_robot_env, 
    make_pedestrian_env
)

# Clean, consistent interface
robot_env = make_robot_env(debug=True)
image_env = make_image_robot_env(debug=True)
ped_env = make_pedestrian_env(robot_model=model, debug=True)
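
Because these environments follow the standard Gymnasium API, a rollout sketch (random actions, purely illustrative) could be:

# Standard Gymnasium loop over the factory-created robot environment
obs, info = robot_env.reset()
for _ in range(200):
    action = robot_env.action_space.sample()
    obs, reward, terminated, truncated, info = robot_env.step(action)
    if terminated or truncated:
        obs, info = robot_env.reset()
robot_env.close()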

Key Benefits:

  • ✅ 50% reduction in code duplication
  • ✅ Consistent interface across all environment types
  • ✅ Easy extensibility for new environment types
  • ✅ Backward compatibility maintained

📖 Read the full refactoring documentation →

SNQI Weight Tooling (Benchmark Metrics)

Tools for recomputing, optimizing, and analyzing Social Navigation Quality Index (SNQI) weights are available; see the Documentation Index for details.
