Local AI Agent Orchestrator - Ultra-fast, privacy-focused agent workflow execution built in Rust.
RustForge is a high-performance orchestration engine for AI agent workflows. Define multi-agent workflows in YAML, execute them locally with full control over your data, and leverage powerful features like variable interpolation, checkpointing, and real-time event streaming.
Current Status: Phase 5 - UI Layer ✅
Phase 5 adds a modern web UI built with Svelte 5, featuring a visual workflow builder, real-time execution monitoring, and comprehensive workflow management.
- YAML-based Workflow Definitions - Simple, declarative workflow syntax
- Sequential Execution Engine - Reliable step-by-step agent orchestration
- Variable Interpolation - Dynamic context passing between agents with {agent_id.output} syntax
- State Persistence - Embedded redb database for execution history and checkpoints
- Event Bus - Real-time workflow execution events
- Flexible Configuration - Multi-layer config system (defaults → user → project → env vars)
- CLI Interface - Intuitive commands for workflow management
- Ollama Integration - Local LLM support for privacy-focused execution
- OpenAI Fallback - Automatic cloud fallback when local LLM unavailable
- Agent System - BaseAgent with LLM provider integration
- Memory Store - Conversation history persistence with redb
- Real Agent Execution - Workflows now execute with actual LLM calls
- Thread-Safe Registries - Concurrent agent and LLM provider management
- 6 Built-in Tools - FileSystem, WebScraper, PDF Parser, Shell Executor, API Client, Clipboard
- Tool Registry - Thread-safe tool management and execution
- Permission System - Allow/Deny/Prompt policies for tool execution
- Process Isolation - Sandboxed subprocess execution with timeout enforcement
- Audit Logging - Security event tracking for compliance
- Path & Command Validation - Prevent unauthorized access and dangerous operations
- REST API - Full HTTP API with Axum for workflow and execution management
- WebSocket Support - Real-time execution event streaming with bidirectional control
- Parallel Execution - Concurrent agent execution with tokio::spawn
- Merge Strategies - Concat, Vote, and LLM-based result merging
- Timeout & Cancellation - Graceful shutdown with CancellationToken
- Unified Executor - Single executor supporting sequential, parallel, and DAG modes
- Integration Tests - Comprehensive end-to-end workflow testing
- Modern Web UI - Built with Svelte 5 and Technical Blueprint aesthetic
- Visual Workflow Builder - Drag-and-drop canvas for designing workflows
- Real-time Monitoring - Live execution tracking with WebSocket updates
- Execution History - Browse and review past workflow runs
- Settings Management - Configure API endpoints and preferences
- Responsive Design - Clean, intuitive interface with status indicators
- Rust 1.70+ and Cargo
- Git
- Ollama (optional, for local LLM) - Install Ollama
- OpenAI API Key (optional, for cloud fallback)
# Clone the repository
git clone https://github.com/apus3404-oss/RustForge.git
cd RustForge
# Build release binary
cargo build --release
# Binary will be at target/release/rustforge
./target/release/rustforge --version

Option 1: Ollama (Local, Recommended)
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Pull a model
ollama pull llama2

Option 2: OpenAI (Cloud Fallback)
# Set API key
export OPENAI_API_KEY="your-api-key"

# Linux/macOS
export PATH="$PATH:$(pwd)/target/release"
# Or copy to system location
sudo cp target/release/rustforge /usr/local/bin/

# Build and start the API server
cargo build --release
./target/release/rustforge serve --port 3000

The backend will be available at http://localhost:3000
# Navigate to UI directory
cd ui
# Install dependencies
npm install
# Start development server
npm run dev

The UI will be available at http://localhost:5173
Note: The UI is also served by the backend at http://localhost:3000 when you run rustforge serve.
rustforge init

This creates:

- .rustforge/ - Configuration and state database
- workflows/ - Directory for workflow definitions
- workflows/example.yaml - Example workflow to get started
Create workflows/hello.yaml:
name: "Hello Workflow"
mode: sequential
agents:
  - id: greeter
    type: base
    task: "Say hello to the user"
  - id: responder
    type: base
    task: "Respond to: {greeter.output}"

rustforge run workflows/hello.yaml

With inputs:
rustforge run workflows/hello.yaml --inputs '{"user": "Alice"}'

Initialize a new RustForge project.
rustforge init # Initialize in current directory
rustforge init ./my-project # Initialize in specific directory

Execute a workflow.
rustforge run workflows/my-workflow.yaml
rustforge run workflows/my-workflow.yaml --inputs '{"key": "value"}'
rustforge run workflows/my-workflow.yaml --resume # Resume from checkpoint

Options:

- -i, --inputs <JSON> - Provide workflow inputs as JSON string
- -r, --resume - Resume execution from last checkpoint
Validate a workflow definition without executing it.
rustforge validate workflows/my-workflow.yaml

Checks for:
- Valid YAML syntax
- Required fields (name, mode, agents)
- Unique agent IDs
- Valid dependency references
- Circular dependency detection
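As an illustration of the last check, circular dependencies can be detected with a depth-first search over the agent dependency graph. The sketch below is a minimal, hypothetical version of such a check (function and type names are assumptions, not RustForge's actual validator):

```rust
use std::collections::HashMap;

// Hypothetical sketch: DFS with a three-state visit map.
// 0 = unvisited, 1 = on the current DFS path, 2 = fully explored.
fn has_cycle(deps: &HashMap<&str, Vec<&str>>) -> bool {
    fn visit(
        node: &str,
        deps: &HashMap<&str, Vec<&str>>,
        state: &mut HashMap<String, u8>,
    ) -> bool {
        match state.get(node).copied().unwrap_or(0) {
            1 => return true,  // back edge: cycle found
            2 => return false, // already explored, no cycle through here
            _ => {}
        }
        state.insert(node.to_string(), 1);
        for next in deps.get(node).into_iter().flatten() {
            if visit(next, deps, state) {
                return true;
            }
        }
        state.insert(node.to_string(), 2);
        false
    }
    let mut state = HashMap::new();
    deps.keys().any(|&n| visit(n, deps, &mut state))
}

fn main() {
    let mut deps = HashMap::new();
    deps.insert("a", vec!["b"]);
    deps.insert("b", vec!["a"]); // a depends on b, b depends on a
    assert!(has_cycle(&deps));

    let mut ok = HashMap::new();
    ok.insert("a", vec!["b"]);
    ok.insert("b", vec![]);
    assert!(!has_cycle(&ok));
    println!("cycle checks passed");
}
```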
List all available workflows in the workflows/ directory.
rustforge list

Manage configuration.
rustforge config show # Show full config
rustforge config get execution.max_parallel_agents # Get specific value
rustforge config set execution.max_parallel_agents 20 # Set value

Common config keys:

- execution.max_parallel_agents - Max concurrent agents (default: 10)
- execution.default_timeout - Timeout in seconds (default: 300)
- llm.default_provider - Default LLM provider (default: "ollama:llama3")
- logging.level - Log level: debug, info, warn, error (default: "info")
- ui.port - UI server port (default: 3000)
name: "Workflow Name"
mode: sequential          # Execution mode: sequential, parallel, or dag
agents:
  - id: agent1            # Unique identifier
    type: base            # Agent type (currently 'base')
    task: "Task description"
  - id: agent2
    type: base
    task: "Use output from agent1: {agent1.output}"

Reference outputs from previous agents or workflow inputs:
agents:
  - id: analyzer
    type: base
    task: "Analyze: {input.document}"
  - id: summarizer
    type: base
    task: "Summarize: {analyzer.output}"

Variables are resolved at runtime using the execution context.
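As a rough illustration of how {agent_id.output} placeholders can be resolved, the sketch below substitutes values from a simple string map. It is a minimal, hypothetical version: the engine's real interpolator and context types are not shown here and may behave differently (for example, around unknown variables).

```rust
use std::collections::HashMap;

// Hypothetical sketch: scan the task string for `{...}` placeholders
// and substitute values from the execution context. Unknown variables
// are left in place rather than erased.
fn interpolate(task: &str, ctx: &HashMap<String, String>) -> String {
    let mut out = String::new();
    let mut rest = task;
    while let Some(start) = rest.find('{') {
        out.push_str(&rest[..start]);
        match rest[start..].find('}') {
            Some(end) => {
                let key = &rest[start + 1..start + end];
                match ctx.get(key) {
                    Some(val) => out.push_str(val),
                    None => out.push_str(&rest[start..start + end + 1]),
                }
                rest = &rest[start + end + 1..];
            }
            None => {
                // Unmatched '{': keep the remainder verbatim.
                out.push_str(&rest[start..]);
                rest = "";
            }
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    let mut ctx = HashMap::new();
    ctx.insert("analyzer.output".to_string(), "3 anomalies".to_string());
    let task = interpolate("Summarize: {analyzer.output}", &ctx);
    assert_eq!(task, "Summarize: 3 anomalies");
    println!("{task}");
}
```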
Execute multiple agents concurrently:
name: "Parallel Analysis"
mode: parallel            # Agents run concurrently
agents:
  - id: analyzer1
    type: base
    task: "Analyze dataset A"
  - id: analyzer2
    type: base
    task: "Analyze dataset B"
  - id: analyzer3
    type: base
    task: "Analyze dataset C"

Results can be merged using different strategies:
- concat: Concatenate all results with newlines
- vote: Return most common result
- llm_merge: Intelligently synthesize results using LLM
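For example, the vote strategy amounts to a frequency count over the agents' outputs. The sketch below is a minimal, hypothetical version (the function name is an assumption, and tie-breaking here is arbitrary; the engine may resolve ties differently):

```rust
use std::collections::HashMap;

// Hypothetical sketch of the `vote` merge strategy: return the result
// produced by the most agents. Ties are broken arbitrarily.
fn vote(results: &[String]) -> Option<String> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for r in results {
        *counts.entry(r.as_str()).or_insert(0) += 1;
    }
    counts
        .into_iter()
        .max_by_key(|&(_, n)| n)
        .map(|(r, _)| r.to_string())
}

fn main() {
    let results: Vec<String> = vec!["A".into(), "B".into(), "A".into()];
    assert_eq!(vote(&results), Some("A".to_string()));
    assert_eq!(vote(&[]), None);
    println!("vote checks passed");
}
```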
RustForge provides a full REST API for workflow management and execution.
rustforge serve --port 8080

Workflow Management:

- POST /api/workflows - Create workflow
- GET /api/workflows - List workflows
- GET /api/workflows/:id - Get workflow details
- DELETE /api/workflows/:id - Delete workflow
Execution:
- POST /api/workflows/:id/execute - Execute workflow
- GET /api/executions/:id - Get execution status
- GET /api/executions - List executions
- POST /api/executions/:id/pause - Pause execution
- POST /api/executions/:id/resume - Resume execution
- DELETE /api/executions/:id - Cancel execution
WebSocket:
- WS /api/ws/executions/:id - Real-time execution events
# Create workflow
curl -X POST http://localhost:8080/api/workflows \
-H "Content-Type: application/json" \
-d '{
"name": "API Test",
"mode": "parallel",
"agents": [
{"id": "agent1", "type": "base", "task": "Task 1"},
{"id": "agent2", "type": "base", "task": "Task 2"}
]
}'
# Execute workflow
curl -X POST http://localhost:8080/api/workflows/{id}/execute
# Get execution status
curl http://localhost:8080/api/executions/{execution_id}

Connect to WebSocket for real-time execution updates:
const ws = new WebSocket('ws://localhost:8080/api/ws/executions/{id}');
ws.onmessage = (event) => {
const data = JSON.parse(event.data);
console.log('Event:', data);
// Events: TaskStarted, TaskCompleted, TaskFailed
};
// Send control commands
ws.send(JSON.stringify({ type: 'pause' }));
ws.send(JSON.stringify({ type: 'resume' }));
ws.send(JSON.stringify({ type: 'cancel' }));

RustForge includes a modern web interface for visual workflow management and monitoring.
The UI is automatically served by the backend:
rustforge serve --port 3000

Then open http://localhost:3000 in your browser.
Home Dashboard
- Quick access to all major features
- Action cards for creating workflows, viewing executions, and settings
Visual Workflow Builder
- Drag-and-drop agent nodes on canvas
- Visual connection editor for dependencies
- Real-time workflow validation
- Support for Sequential, Parallel, and DAG modes
- Export workflows as YAML
Execution Monitor
- Real-time event timeline with WebSocket updates
- Live status tracking (Running, Completed, Failed, Paused)
- Execution controls (Pause, Resume, Cancel)
- Detailed event logs with agent outputs and errors
- Visual status indicators with color coding
Execution History
- Browse past workflow executions
- Filter by status and workflow
- View detailed execution timelines
- Error messages for failed runs
Settings
- Configure API base URL
- Set WebSocket endpoint
- Manage UI preferences
For UI development and customization:
cd ui
npm install
npm run dev # Development server at http://localhost:5173
npm run build # Production build
npm run preview # Preview production build

See docs/ui-guide.md for detailed UI documentation, features, and troubleshooting.
RustForge uses a layered architecture:
┌─────────────────────────────────────┐
│ CLI Layer (clap) │ User interface
├─────────────────────────────────────┤
│ Config Layer (TOML/YAML) │ Configuration management
├─────────────────────────────────────┤
│ Storage Layer (redb) │ State persistence
├─────────────────────────────────────┤
│ Orchestration Engine (tokio) │ Workflow execution
│ • Parser • Validator │
│ • Executor • Interpolator │
│ • Event Bus • Checkpoints │
└─────────────────────────────────────┘
Key Components:
- Config Layer - Multi-source configuration with priority: env vars → project config → user config → defaults
- Storage Layer - Embedded redb database for execution state, checkpoints, and audit logs
- Engine Layer - Workflow parsing, validation, variable interpolation, and sequential execution
- Event Bus - Real-time event streaming for monitoring and UI integration
For detailed architecture documentation, see docs/specs/design.md.
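To make the Event Bus layer concrete, the sketch below models workflow events flowing from a publisher to a subscriber. It uses std channels purely for illustration (the engine itself is built on tokio), and the event variant names are taken from the WebSocket events listed earlier; the field names are assumptions:

```rust
use std::sync::mpsc;

// Hypothetical event type; variant names mirror the documented
// WebSocket events (TaskStarted, TaskCompleted, TaskFailed).
#[derive(Debug, Clone, PartialEq)]
enum WorkflowEvent {
    TaskStarted { agent_id: String },
    TaskCompleted { agent_id: String, output: String },
    TaskFailed { agent_id: String, error: String },
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // The executor publishes events as agents run...
    tx.send(WorkflowEvent::TaskStarted { agent_id: "greeter".into() }).unwrap();
    tx.send(WorkflowEvent::TaskCompleted {
        agent_id: "greeter".into(),
        output: "Hello!".into(),
    }).unwrap();
    drop(tx); // close the channel so the receiver can finish

    // ...and subscribers (CLI output, WebSocket handlers) consume them.
    let events: Vec<WorkflowEvent> = rx.iter().collect();
    assert_eq!(events.len(), 2);
    assert_eq!(
        events[0],
        WorkflowEvent::TaskStarted { agent_id: "greeter".into() }
    );
    println!("received {} events", events.len());
}
```

A broadcast-style channel (as in tokio) would additionally fan each event out to every subscriber rather than to a single consumer.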
Configuration is loaded from multiple sources with the following priority:

1. Environment Variables (highest priority) - RUSTFORGE_DEFAULT_LLM, RUSTFORGE_MAX_PARALLEL_AGENTS, RUSTFORGE_LOG_LEVEL
2. Project Config - .rustforge/config.toml in the current directory
3. User Config - ~/.rustforge/config.toml
4. Defaults (lowest priority)
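The layering rule amounts to "later sources overwrite earlier ones": apply the lowest-priority source first, then let each higher layer overwrite matching keys. A minimal sketch, with keys and values simplified to flat strings (the real loader works on typed TOML sections):

```rust
use std::collections::HashMap;

// Hypothetical sketch of layered config merging: `layers` is ordered
// from lowest to highest priority, so later layers win on conflicts.
fn merge_layers(layers: &[HashMap<String, String>]) -> HashMap<String, String> {
    let mut merged = HashMap::new();
    for layer in layers {
        for (k, v) in layer {
            merged.insert(k.clone(), v.clone());
        }
    }
    merged
}

fn main() {
    let defaults = HashMap::from([
        ("logging.level".to_string(), "info".to_string()),
        ("ui.port".to_string(), "3000".to_string()),
    ]);
    let env = HashMap::from([
        ("logging.level".to_string(), "debug".to_string()),
    ]);
    // Order: defaults -> user -> project -> env vars (env wins)
    let merged = merge_layers(&[defaults, env]);
    assert_eq!(merged["logging.level"], "debug");
    assert_eq!(merged["ui.port"], "3000");
    println!("merged config: {merged:?}");
}
```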
[llm]
default_provider = "ollama:llama3"
fallback_enabled = true
[llm.providers.ollama]
base_url = "http://localhost:11434"
default_model = "llama3"
timeout_secs = 300
[execution]
max_parallel_agents = 10
default_timeout = 300
enable_checkpoints = true
checkpoint_interval = 60
[permissions]
default_policy = "prompt" # allow, deny, or prompt
audit_log_enabled = true
[logging]
level = "info"
format = "pretty" # json, pretty, or compact

Run the test suite:

cargo test

Run a workflow with debug logging:

RUSTFORGE_LOG_LEVEL=debug cargo run -- run workflows/example.yaml

rustforge/
├── src/
│ ├── cli/ # CLI commands and handlers
│ ├── config/ # Configuration types and loader
│ ├── engine/ # Workflow execution engine
│ ├── storage/ # State persistence layer
│ ├── error.rs # Error types
│ └── main.rs # Entry point
├── tests/
│ ├── integration/ # Integration tests
│ └── fixtures/ # Test workflows
├── workflows/ # User workflow definitions
└── .rustforge/ # Config and state database
- ✅ Phase 1: Core Foundation (Completed)
  - Config management, CLI, storage, workflow engine
- ✅ Phase 2: LLM & Agent Layer (Completed)
  - Real AI agent implementations with BaseAgent
  - LLM provider integrations (Ollama, OpenAI)
  - Memory store for conversation history
  - Thread-safe registries for agents and providers
- ✅ Phase 3: Tool & Security Layer (Completed)
  - 6 built-in tools (FileSystem, WebScraper, PDF, Shell, API, Clipboard)
  - Tool registry and execution framework
  - Permission system with allow/deny/prompt policies
  - Process isolation for secure subprocess execution
  - Audit logging for security events
  - Path and command validation
- ✅ Phase 4: API & Execution Patterns (Completed)
  - REST API for workflow execution
  - Parallel execution mode
  - WebSocket support for real-time events
  - Merge strategies and timeout/cancellation
- ✅ Phase 5: UI Layer (Completed)
  - Modern web UI with Svelte 5
  - Visual workflow builder
  - Real-time execution monitoring
  - Execution history and settings management
- 📋 Phase 6+: Advanced Features (Future)
  - Plugin system for custom tools and agents
  - Additional LLM providers (Anthropic, Google, etc.)
  - Distributed execution across multiple nodes
  - Workflow templates library
  - Advanced debugging and profiling tools
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
MIT License - see LICENSE file for details.
- Repository: https://github.com/apus3404-oss/RustForge
- Documentation: docs/getting-started.md
- UI Guide: docs/ui-guide.md
- Design Specs: docs/specs/design.md