Enterprise-grade UI testing platform that enables teams to validate web application functionality through natural language instructions, automated browser interactions, and intelligent test result reporting.
UITestFlow makes UI testing accessible and powerful by accepting test instructions in natural language or structured formats, executing them with Playwright-powered browser automation, and generating beautiful, actionable test reports with screenshots and recordings.
- Natural Language Tests: Write tests in plain English - no coding required
- Structured Tests: Or use YAML/JSON for precise control
- Beautiful Reports: Screenshots, videos, and detailed execution logs
- CI/CD Ready: Seamless integration with GitHub Actions, GitLab CI, Jenkins
- Zero Lock-in: Open source with self-hosted deployment option
- Multi-Format Test Input: Natural language, YAML definitions, or JSON schemas
- Intelligent Test Execution: LLM-powered interpretation of test instructions via Claude MCP
- Browser Automation: Playwright-based headless/headed browser control (Chromium, Firefox, WebKit)
- Visual Documentation: Screenshots at key points, full session recordings
- Multiple Interfaces: CLI, TUI (Terminal UI), Web UI, REST API
- Profile Management: Multiple environment configurations (dev/staging/prod)
- Authentication Support: Form auth, OAuth, JWT, custom flows
```text
Can you test the dashboard filter functionality?
Login to http://35.85.221.158:8080 with admin/admin
Navigate to any dashboard with filters
Apply a filter and verify the charts update correctly
```
```yaml
version: "1.0"
name: "Login and Dashboard Test"
url: "http://35.85.221.158:8080/login/"
auth:
  type: "form"
  username: "admin"
  password: "admin"
steps:
  - name: "Navigate to login page"
    action: "goto"
    url: "{{ base_url }}/login/"
  - name: "Enter credentials"
    action: "fill"
    selectors:
      - selector: 'input[name="username"]'
        value: "{{ auth.username }}"
      - selector: 'input[name="password"]'
        value: "{{ auth.password }}"
  - name: "Click login button"
    action: "click"
    selector: 'button[type="submit"]'
  - name: "Verify dashboard loads"
    action: "wait_for"
    selector: '.dashboard-content'
    timeout: 10000
evaluators:
  - name: "all_steps_completed"
    type: "execution_successful"
  - name: "no_console_errors"
    type: "console_errors"
    max_errors: 0
  - name: "page_performance"
    type: "performance"
    max_load_time: 5000
```

```shell
# Install from PyPI (coming soon)
pip install uitestflow

# Or install from source
git clone https://github.com/preset-io/ui-test-flow.git
cd ui-test-flow
pip install -e .
```

- Create a profile (`.uitest_profiles.yaml`):
```yaml
default: local
profiles:
  local:
    name: "Local Development"
    base_url: "http://localhost:8080"
    auth:
      type: "form"
      username: "admin"
      password: "admin"
    browser:
      headless: false
```

- Create a test file (`tests/login_test.yaml`):
```yaml
version: "1.0"
name: "Simple Login Test"
steps:
  - name: "Navigate to login"
    action: "goto"
    url: "{{ base_url }}/login/"
  - name: "Fill username"
    action: "fill"
    selector: 'input[name="username"]'
    value: "{{ auth.username }}"
  - name: "Fill password"
    action: "fill"
    selector: 'input[name="password"]'
    value: "{{ auth.password }}"
  - name: "Click submit"
    action: "click"
    selector: 'button[type="submit"]'
evaluators:
  - name: "all_steps_completed"
    type: "execution_successful"
```

- Run the test:

```shell
uitestflow run tests/login_test.yaml --profile local
```

UITestFlow integrates with Anthropic's Model Context Protocol (MCP) to enable intelligent test generation and execution powered by Claude.
- Natural Language Test Generation: Describe your test in plain English, get a complete YAML test
- Adaptive Selectors: Claude suggests alternative selectors when elements change
- Intelligent Error Recovery: Automatic recovery from common test failures
- Real-time Guidance: Step-by-step validation and suggestions during execution
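The "Adaptive Selectors" behavior above amounts to an ordered-fallback lookup: when the original selector no longer matches, try progressively more generic alternatives for the same element. Here is a minimal sketch of that idea — the function name, the `element_exists` stub, and the example selectors are all illustrative, not UITestFlow's actual API (in a real run the predicate would be a Playwright query such as `page.query_selector(sel) is not None`):

```python
from typing import Callable, Optional

def adapt_selector(
    candidates: list[str],
    element_exists: Callable[[str], bool],
) -> Optional[str]:
    """Return the first candidate selector that matches the live page."""
    for sel in candidates:
        if element_exists(sel):
            return sel
    return None

# Illustrative usage: the original selector broke, so fall back to
# alternatives proposed for the same element.
page_selectors = {"button.btn-primary[data-test='login']"}  # what the page has now

chosen = adapt_selector(
    [
        "button#login-submit",                     # original selector (stale)
        "button.btn-primary[data-test='login']",   # data-test fallback
        "button[type='submit']",                   # generic fallback
    ],
    element_exists=lambda sel: sel in page_selectors,
)
print(chosen)  # → button.btn-primary[data-test='login']
```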
1. Set up your Anthropic API key:

```shell
export ANTHROPIC_API_KEY="your-api-key-here"
```

Get your API key from: https://console.anthropic.com/settings/keys
2. Enable MCP features:

```shell
# Copy example configuration
cp .env.mcp.example .env

# Edit .env and set your API key
UITESTFLOW_MCP_ENABLED=true
ANTHROPIC_API_KEY=your-api-key-here
```

3. Generate a test from natural language:

```shell
uitestflow generate "Login to example.com with admin/password123, verify dashboard loads" \
  --output login_test.yaml
```

This creates a complete test file with all necessary steps, selectors, and evaluators.

4. Run with Claude guidance:

```shell
uitestflow run login_test.yaml --with-claude
```

Claude provides real-time guidance, adapts selectors, and recovers from errors automatically.
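Conceptually, guided execution is a bounded retry loop: run a step, and on failure ask the guidance layer for a revised step (a new selector, an added wait) before trying again. This sketch only illustrates that control flow under stated assumptions — the function names and step shape are invented, and the retry bound mirrors the `UITESTFLOW_MCP_GUIDANCE_MAX_RETRIES` setting documented below:

```python
from typing import Callable

def run_step_with_guidance(
    step: dict,
    execute: Callable[[dict], bool],
    advise: Callable[[dict, int], dict],
    max_retries: int = 3,  # mirrors UITESTFLOW_MCP_GUIDANCE_MAX_RETRIES
) -> bool:
    """Execute one test step, asking the advisor for a revised step on failure."""
    attempt_step = step
    for attempt in range(max_retries + 1):
        if execute(attempt_step):
            return True
        if attempt < max_retries:
            # In a real run this would be a Claude call suggesting a fix.
            attempt_step = advise(attempt_step, attempt)
    return False

# Stubbed demo: the first selector fails, the "advisor" swaps in one that works.
working = "button[type='submit']"
ok = run_step_with_guidance(
    {"action": "click", "selector": "button#login"},
    execute=lambda s: s["selector"] == working,
    advise=lambda s, _n: {**s, "selector": working},
)
print(ok)  # → True
```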
Example 1: Simple Login Test Generation

```shell
uitestflow generate "Navigate to /login, fill username and password, click submit, verify dashboard" \
  --base-url "http://example.com" \
  --output tests/login.yaml
```

Example 2: Complex Form with Validation

```shell
uitestflow generate "Fill registration form with firstName, lastName, email, password, accept terms, submit, verify success message appears" \
  --model claude-3-5-sonnet-20241022 \
  --output tests/registration.yaml
```

Example 3: E2E Shopping Flow

```shell
uitestflow generate "Search for 'laptop', add first result to cart, proceed to checkout, fill shipping info, complete order" \
  --model claude-3-5-sonnet-20241022 \
  --output tests/checkout.yaml
```

Example 4: Guided Execution with Error Recovery

```shell
# Run test with Claude guidance enabled
uitestflow run tests/complex_flow.yaml --with-claude

# Result: Claude adapts selectors and recovers from errors automatically
# - Original selector fails → Claude suggests alternatives
# - Unexpected redirect → Claude adapts the flow
# - Element not ready → Claude adds appropriate wait
```

Control MCP behavior in your `.env` file:
```shell
# Feature toggles
UITESTFLOW_MCP_ENABLED=true
UITESTFLOW_MCP_GENERATION_ENABLED=true
UITESTFLOW_MCP_GUIDED_EXECUTION_ENABLED=true

# Claude model selection
# Options: claude-3-5-haiku-20241022 (fast), claude-3-5-sonnet-20241022 (balanced)
UITESTFLOW_MCP_CLAUDE_MODEL=claude-3-5-haiku-20241022
UITESTFLOW_MCP_CLAUDE_TEMPERATURE=0.3

# Guidance settings
UITESTFLOW_MCP_GUIDANCE_DETAIL_LEVEL=standard  # minimal, standard, verbose
UITESTFLOW_MCP_GUIDANCE_MAX_RETRIES=3

# Features
UITESTFLOW_MCP_SELECTOR_ADAPTATION_ENABLED=true
UITESTFLOW_MCP_ERROR_RECOVERY_ENABLED=true
```

- MCP User Guide - Complete guide to MCP features
- MCP API Reference - REST API and WebSocket documentation
- MCP Examples - Comprehensive code examples
MCP uses the Anthropic API which has usage-based pricing:
- Test Generation: ~$0.001-0.01 per test (depending on complexity)
- Guided Execution: ~$0.005-0.02 per test run (depending on steps)
- Haiku Model: Most cost-effective for simple to moderate tests
- Sonnet Model: Better for complex workflows, slightly higher cost
Tips to minimize costs:
- Use Haiku (default) for most tests
- Enable response caching: `UITESTFLOW_MCP_CACHE_RESPONSES=true`
- Use minimal guidance level for simple tests
- Disable guidance for tests that don't need it
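The caching tip above works because identical prompts at low temperature tend to yield reusable answers, so a repeated generation request can skip the API call entirely. A minimal sketch of prompt-keyed caching — the class, the key scheme, and the `generate` callable are assumptions for illustration, not UITestFlow's internals:

```python
import hashlib
import json

class ResponseCache:
    """Prompt-keyed cache: identical (model, prompt) pairs reuse a stored reply."""

    def __init__(self) -> None:
        self._store: dict[str, str] = {}
        self.misses = 0  # how many times we actually paid for a generation

    def _key(self, model: str, prompt: str) -> str:
        return hashlib.sha256(json.dumps([model, prompt]).encode()).hexdigest()

    def get_or_generate(self, model: str, prompt: str, generate) -> str:
        key = self._key(model, prompt)
        if key not in self._store:
            self.misses += 1
            self._store[key] = generate(prompt)  # the (expensive) API call
        return self._store[key]

cache = ResponseCache()
gen = lambda p: f"yaml-for:{p}"  # stands in for a real Claude call
cache.get_or_generate("claude-3-5-haiku-20241022", "login test", gen)
cache.get_or_generate("claude-3-5-haiku-20241022", "login test", gen)  # cache hit
print(cache.misses)  # → 1
```

Note that caching is only safe when you are happy to reuse an old answer; for prompts whose right answer changes with the page under test, a fresh call is still needed.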
UITestFlow provides three interfaces to accommodate different workflows:
Beautiful browser-based interface for visual test management and real-time monitoring.
```shell
# Start the web server (development mode with auto-reload)
uitestflow serve --reload

# Or using make
make serve

# Production mode with multiple workers
make serve-prod
```

Access the Web UI at:
- Dashboard: http://localhost:8080
- API Docs: http://localhost:8080/docs
Features:
- Visual test creation and editing
- Real-time test execution monitoring
- Screenshot gallery and video playback
- Shareable test reports
- Team collaboration
- Analytics and trends
- AI Test Generator - Generate tests from natural language descriptions
- Live Execution View - Real-time step progress with screenshots
- Claude Guidance Panel - View AI suggestions and adaptations
MCP-Powered Features (with Anthropic API key):
- AI Test Generator (`/generate` route):
  - Describe your test in plain English
  - Select Claude model (Haiku for speed, Sonnet for complex tests)
  - Adjust temperature for creativity vs. consistency
  - View generated test in JSON/YAML/Preview
  - Save and run tests directly from the UI
- Real-Time Execution View (`/tests/:id/execute` route):
  - Live progress tracking with step-by-step status
  - Screenshot gallery updated in real-time via WebSocket
  - Console logs and error messages
  - Execution metrics (duration, progress %)
  - Connection status indicator
- Claude Guidance Panel:
  - Step-by-step AI guidance and suggestions
  - Selector adaptation history (original → adapted)
  - Error recovery actions taken
  - Expandable/collapsible per step
  - Copy suggestions to clipboard

Quick UI Tour:

- Dashboard (`/`) - Test overview and recent runs
- AI Generate (`/generate`) - Generate tests with Claude
- Tests (`/library`) - Browse and manage test library
- Run Test (`/runner`) - Execute tests manually
- Results (`/results`) - View test execution history
See UI Guide for detailed documentation.
Interactive terminal dashboard for local development (Coming Soon).
```shell
# Launch the TUI
uitestflow tui

# Or using make
make tui
```

Planned Features:
- Interactive test browser
- Real-time execution monitoring
- Vim-style navigation
- Screenshot preview
- Profile management
Perfect for automation, CI/CD pipelines, and scripting.
```shell
# Run a specific test file
uitestflow run tests/dashboard_filters.yaml --profile staging

# Run all tests matching a pattern
uitestflow run tests/ --pattern "login_*" --profile production

# Run from URL (e.g., GitHub PR)
uitestflow run-from-url "https://github.com/apache/superset/pull/35152" \
  --base-url "http://35.85.221.158:8080" \
  --auth-user admin \
  --auth-pass admin
```

Example output:

```text
╭─── Test Execution ───╮
│ Profile: staging     │
│ Browser: chromium    │
│ Headless: true       │
╰──────────────────────╯

Running: Login and Navigate to Dashboard

✓ Navigate to login page (1.2s)
  📸 Screenshot: login_page.png
✓ Enter credentials (0.3s)
✓ Click login button (0.5s)
  📸 Screenshot: after_login.png
✓ Verify dashboard loads (2.1s)
  📸 Screenshot: dashboard.png

╭─── Evaluators ────────╮
│ ✓ all_steps_completed │
│ ✓ no_console_errors   │
│ ✓ page_load_time < 5s │
╰───────────────────────╯

Result: PASSED ✓
Duration: 4.1s
Screenshots: 3
Report: https://uitestflow.preset.io/runs/12345
```
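The CI examples later in this document pass `--format junit --output results.xml` so that standard result publishers can render the run. As a sketch of what producing such a file could involve (the step-result dict shape is an assumption; the element and attribute names follow the de-facto JUnit XML convention):

```python
import xml.etree.ElementTree as ET

def to_junit(suite_name: str, results: list[dict]) -> str:
    """Render step results as a JUnit-style XML document."""
    failures = sum(1 for r in results if not r["passed"])
    suite = ET.Element(
        "testsuite",
        name=suite_name,
        tests=str(len(results)),
        failures=str(failures),
    )
    for r in results:
        case = ET.SubElement(suite, "testcase", name=r["name"], time=str(r["duration"]))
        if not r["passed"]:
            ET.SubElement(case, "failure", message=r.get("error", "step failed"))
    return ET.tostring(suite, encoding="unicode")

xml = to_junit(
    "Login and Navigate to Dashboard",
    [
        {"name": "Navigate to login page", "passed": True, "duration": 1.2},
        {"name": "Verify dashboard loads", "passed": True, "duration": 2.1},
    ],
)
print(xml)
```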
```shell
# List available profiles
uitestflow profiles list

# Test a profile's connectivity
uitestflow profiles test staging

# Initialize configuration
uitestflow config init

# Validate configuration
uitestflow config validate

# Show version information
uitestflow version
```

UITestFlow requires PostgreSQL for the web server. You have two options:
Option 1: Use Local PostgreSQL (Recommended for Development)
```shell
# Set up the database on your local PostgreSQL installation
make setup-local-db

# Start the server
uitestflow serve --reload
```

See LOCAL_POSTGRES_SETUP.md for detailed instructions.
Option 2: Use Docker
```shell
# Start just the database services
make docker-up-db

# Or start all services (database, Redis, API server, workers)
docker-compose up -d

# View logs
docker-compose logs -f app

# Stop services
docker-compose down
```

Note: The CLI tool works standalone without a database. The web UI requires PostgreSQL.
```yaml
default: local
profiles:
  local:
    name: "Local Development"
    base_url: "http://localhost:8080"
    auth:
      type: "form"
      username: "${LOCAL_USERNAME}"
      password: "${LOCAL_PASSWORD}"
    browser:
      headless: false
      slowmo: 500  # Slow down for visibility

  staging:
    name: "Staging Environment"
    base_url: "https://staging.preset.io"
    auth:
      type: "oauth"
      provider: "okta"
      client_id: "${OKTA_CLIENT_ID}"
      client_secret: "${OKTA_CLIENT_SECRET}"
    browser:
      headless: true

  production:
    name: "Production Environment"
    base_url: "https://app.preset.io"
    auth:
      type: "oauth"
      provider: "okta"
      client_id: "${OKTA_PROD_CLIENT_ID}"
      client_secret: "${OKTA_PROD_CLIENT_SECRET}"
    browser:
      headless: true
    notifications:
      on_failure:
        - slack: "#alerts"
        - email: "oncall@preset.io"
```

```yaml
browser:
  type: "chromium"  # chromium, firefox, webkit
  headless: true
  viewport:
    width: 1920
    height: 1080
  device: "Desktop"  # or "iPhone 12", "Pixel 5"
  locale: "en-US"
  timezone: "America/Los_Angeles"
```

```yaml
name: UI Tests

on: [pull_request]

jobs:
  ui-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install UITestFlow
        run: pip install uitestflow

      - name: Run UI Tests
        env:
          UITEST_API_KEY: ${{ secrets.UITEST_API_KEY }}
        run: |
          uitestflow run tests/ --profile staging --format junit --output results.xml

      - name: Publish Test Results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: always()
        with:
          files: results.xml
```

```yaml
ui_tests:
  stage: test
  image: python:3.11
  script:
    - pip install uitestflow
    - uitestflow run tests/ --profile staging --format junit --output results.xml
  artifacts:
    reports:
      junit: results.xml
```

UITestFlow was built with Apache Superset and Preset workflows in mind. Here's how Preset engineers use it:
A PR claims to fix dashboard filter functionality. Instead of manual testing:
- PR Description includes test instructions:

  ```text
  Testing instructions:
  1. Login to http://35.85.221.158:8080/ (admin/admin)
  2. Navigate to any dashboard with filters
  3. Apply a filter from the dropdown
  4. Verify charts update correctly
  5. Verify URL parameters reflect filter state
  ```

- CI automatically runs the test:

  ```shell
  uitestflow run-from-pr ${{ github.event.pull_request.number }} \
    --profile pr-testing \
    --base-url http://35.85.221.158:8080
  ```

- Results posted to PR:

  ```text
  🤖 UI Test Results

  ✅ Test Passed (4.2s)

  Steps:
  ✓ Login successful (1.2s)
  ✓ Dashboard loaded (0.8s)
  ✓ Filter applied (1.1s)
  ✓ Charts updated correctly (1.1s)

  📊 View full report: https://uitestflow.preset.io/runs/abc-123
  📸 Screenshots: 6 captured
  🎥 Video: Available
  ```
See examples/basic/dashboard_filters.yaml for a complete example.
- Getting Started Guide - Installation and first test walkthrough
- UI Guide - Complete guide to Web UI, TUI, and CLI
- Writing Tests - Comprehensive test authoring guide
- YAML Reference - Complete YAML schema documentation
- Evaluators - Built-in evaluators and custom evaluators
- CI/CD Integration - Integration guides for all major CI platforms
- API Documentation - REST API reference
- CLI Reference - Complete CLI command documentation
- React UI Setup - React development setup and component docs
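The built-in evaluators referenced in this document (`execution_successful`, `console_errors`, `performance`) can be pictured as small predicates applied to a finished run. This is a hedged sketch of that dispatch — the run-result shape and the `evaluate` function are assumptions for illustration, not UITestFlow's actual evaluator interface:

```python
def evaluate(run: dict, evaluators: list[dict]) -> dict[str, bool]:
    """Apply declarative evaluators (as in the YAML examples) to a finished run."""
    checks = {
        # Every step must have passed.
        "execution_successful": lambda r, e: all(s["passed"] for s in r["steps"]),
        # Console error count must not exceed max_errors.
        "console_errors": lambda r, e: len(r["console_errors"]) <= e.get("max_errors", 0),
        # Page load time must stay under max_load_time (milliseconds).
        "performance": lambda r, e: r["load_time_ms"] <= e["max_load_time"],
    }
    return {e["name"]: checks[e["type"]](run, e) for e in evaluators}

run = {
    "steps": [{"passed": True}, {"passed": True}],
    "console_errors": [],
    "load_time_ms": 3200,
}
verdicts = evaluate(run, [
    {"name": "all_steps_completed", "type": "execution_successful"},
    {"name": "no_console_errors", "type": "console_errors", "max_errors": 0},
    {"name": "page_performance", "type": "performance", "max_load_time": 5000},
])
print(verdicts)  # → all three evaluators pass for this run
```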
Explore complete test examples in the examples/ directory:
- Basic Login Test - Simple form authentication
- Dashboard Filters - Superset dashboard testing
- E2E User Flows - Multi-step user journeys
- Visual Regression - Screenshot comparison
- Performance Testing - Load time validation
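Tests like these rely on the `{{ base_url }}` / `{{ auth.username }}` placeholders shown in the YAML examples, resolved from the active profile at run time. A minimal sketch of how such substitution could work — the dotted-path lookup rule and the `render` helper are assumptions, not UITestFlow's actual templating engine:

```python
import re

def render(template: str, context: dict) -> str:
    """Expand {{ dotted.path }} placeholders from a nested context dict."""
    def lookup(match: re.Match) -> str:
        value = context
        for part in match.group(1).split("."):
            value = value[part]  # walk one level per dotted segment
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

# Context as a profile might provide it.
ctx = {"base_url": "http://localhost:8080", "auth": {"username": "admin"}}
print(render("{{ base_url }}/login/", ctx))  # → http://localhost:8080/login/
print(render("{{ auth.username }}", ctx))    # → admin
```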
Ready-to-run example tests demonstrating MCP features (examples/tests/):
- login_flow.yaml - Login flow with validation
- form_submission.yaml - Multi-field form testing
- navigation_flow.yaml - Multi-page navigation
- search_workflow.yaml - Search and result validation
- e2e_shopping.yaml - Complete e2e shopping cart flow
Run all example tests:
```shell
# Run all MCP example tests with report generation
./scripts/run_example_tests.sh --open-report

# Or run individual tests
uitestflow run examples/tests/login_flow.yaml
uitestflow run examples/tests/e2e_shopping.yaml --with-claude
```

The script will:
- Execute all 5 example tests sequentially
- Generate screenshots and logs for each test
- Create an HTML report with results summary
- Automatically open the report in your browser (with `--open-report`)
We welcome contributions! Please see our Contributing Guide for details on:
- Development setup
- Code style guidelines (Black, Ruff, type hints)
- Testing requirements (pytest, coverage)
- PR process and commit conventions
- Code review guidelines
```shell
# Clone the repository
git clone https://github.com/preset-io/ui-test-flow.git
cd ui-test-flow

# Set up development environment (creates venv, installs dependencies, pre-commit hooks)
make setup

# Or manually:
python -m venv .venv
source .venv/bin/activate  # or .venv\Scripts\activate on Windows
pip install -e ".[dev]"
pre-commit install

# Run tests
pytest
# or
make test

# Run linters and formatters
make lint
make format

# Start services with Docker
make docker-up

# View all available commands
make help
```

UITestFlow is built with a modular architecture:
- Core Engine: Test parsing, execution, and result generation
- Browser Automation: Playwright integration with MCP support
- LLM Integration: Claude via Model Context Protocol for natural language interpretation
- CLI/TUI: Typer + Rich + Textual for beautiful terminal interfaces
- Web UI: React + TypeScript + Tailwind CSS
- API Server: FastAPI + PostgreSQL + Celery
- Phase 1 (Current): CLI tool with basic YAML test execution
- Phase 2: LLM-powered natural language tests
- Phase 3: Web UI and REST API
- Phase 4: Advanced CI/CD integration and GitHub App
- Phase 5: Enterprise features (RBAC, audit logging, SSO)
- Phase 6: Visual regression, accessibility testing, mobile support
UITestFlow is licensed under the Apache License 2.0.
The core testing engine, CLI, and basic features are fully open source. Enterprise features (RBAC, SAML/SSO, audit logging) are available in the commercial version.
- Community Support: GitHub Discussions
- Bug Reports: GitHub Issues
- Commercial Support: Contact support@preset.io
UITestFlow is built on the shoulders of giants:
- Playwright - Modern browser automation
- Anthropic Claude - LLM-powered test interpretation
- Model Context Protocol - Standardized LLM integration
- MCP Playwright - MCP server for Playwright
Built with love by the team at Preset for the Apache Superset community and beyond.