
auditmysite

Accessibility audits for real rendered pages, built for CI and modern frontend stacks


Overview

auditmysite is a Rust CLI that audits accessibility on fully rendered pages in Chrome. Instead of scanning only raw HTML, it uses the Chrome DevTools Protocol (CDP) and the browser's native Accessibility Tree, so it can evaluate the dynamic DOM, computed styles, and JavaScript-heavy applications more realistically.

It is designed for teams that want a fast local check, stable JSON for automation, and a single binary that can be dropped into CI.

Why use it

  • Real browser signals instead of static guesses
  • Works for single pages, sitemaps, URL lists, and same-domain crawl discovery
  • Output formats: terminal table, JSON, PDF, AI-optimized task list, or compact summary JSON for dashboards
  • JSON output is schema-backed and tested for release stability
  • Ships as a Rust binary instead of a Node-based toolchain

Why this approach

Most accessibility CLIs either depend on static parsing or require a heavier runtime stack around browser automation. auditmysite is opinionated in a different direction:

  • Chrome-native accessibility data first
  • CLI-first workflow for local use and CI
  • Small operational surface: install a binary, point it at a URL, get a report
  • Optional modules for performance, SEO, security, and mobile without changing tools

Quick Example

auditmysite https://example.com

By default, a single URL audit runs the full analysis set, prints a compact terminal summary, and writes report artifacts into the current working directory:

  • ./example-com-YYYY-MM-DD-single-report.pdf
  • ./example-com-YYYY-MM-DD-single-report.json
  • ./example-com-history.json

For CI or machine-readable output:

auditmysite https://example.com -f json -o report.json --quiet

Install

curl installer (macOS/Linux)

curl -fsSL https://raw.githubusercontent.com/casoon/auditmysite/main/install.sh | bash

The installer downloads the latest GitHub Release asset for your platform and verifies it against the published .sha256 checksum before installing it.

Upgrading: run the same command again. The installer detects where your current binary lives and replaces it in place, with no PATH conflicts and no leftover old version.

Note: If you previously installed via cargo install auditmysite, remove that binary first so the script installs to the right location:

rm ~/.cargo/bin/auditmysite
curl -fsSL https://raw.githubusercontent.com/casoon/auditmysite/main/install.sh | bash

Verify the installation:

auditmysite --version
auditmysite --help
auditmysite https://example.com

That default command writes the report artifacts (PDF, JSON, and history file) into the current directory, as shown in the Quick Example above.

cargo install (crates.io)

cargo install auditmysite

Requires Rust 1.75+. Builds and installs the binary from source.

Prebuilt binaries

Download from Releases.

  • macOS/Linux: .tar.gz
  • Windows: .zip

Build from source

git clone https://github.com/casoon/auditmysite.git
cd auditmysite
cargo build --release
./target/release/auditmysite --version

Requirements

  • Rust 1.75+ for local builds
  • Chrome/Chromium or a managed browser install
  • macOS, Linux, or Windows for released binaries

If no compatible browser is installed:

auditmysite browser detect
auditmysite browser install

Quick Start

The fastest way to validate your setup:

auditmysite https://example.com

That creates the default report set in the current directory. For machine-readable output only:

auditmysite https://example.com -f json -o report.json

Single page

# default: full audit + terminal summary + PDF/JSON/history in current directory
auditmysite https://example.com

# JSON
auditmysite https://example.com -f json -o report.json

# PDF with explicit path
auditmysite https://example.com -f pdf -o report.pdf

# stricter WCAG level
auditmysite https://example.com -l AAA

Batch audits

# explicit sitemap
auditmysite --sitemap https://example.com/sitemap.xml

# crawl from a base URL and discover same-domain pages automatically
auditmysite https://example.com --crawl --crawl-depth 2

# base URL: probe robots.txt / common sitemap locations first
auditmysite https://example.com

# prefer sitemap automatically if one is found
auditmysite https://example.com --prefer-sitemap

# suppress sitemap suggestion and stay on the single page
auditmysite https://example.com --no-sitemap-suggest

# URL file
auditmysite --url-file urls.txt

# per-page reports: scan a list/sitemap but write one PDF per URL instead of an aggregated batch report
auditmysite --url-file urls.txt --per-page-reports --output reports/per-page/
auditmysite --sitemap https://example.com/sitemap.xml --per-page-reports --output reports/per-page/

Browser selection

auditmysite --browser-path /path/to/chrome https://example.com

CLI

auditmysite [OPTIONS] [URL] [COMMAND]

Primary commands:

  • auditmysite <url>: run a full single-page audit and write PDF/JSON/history into the current directory
  • auditmysite --sitemap <url>: audit sitemap URLs
  • auditmysite --url-file <file>: audit URLs from file
  • auditmysite <url> --crawl: discover same-domain pages from a seed URL and audit them as a batch
  • auditmysite browser detect: show available browsers
  • auditmysite browser install: install managed Chrome for Testing
  • auditmysite doctor: run local diagnostics

Useful flags:

  • --prefer-sitemap: if a sitemap is detected for a base URL, switch directly into batch mode
  • --no-sitemap-suggest: suppress sitemap probing/suggestion and keep the run on the single URL
  • --crawl-depth <n>: limit same-domain crawl discovery depth when using --crawl
  • --per-page-reports: scan a URL list or sitemap but write one individual report per URL instead of an aggregated batch report; -o is treated as a target directory
  • --lang <de|en>: set the language for PDF reports (default: de)
  • --stack: enable tech stack detection and stack-specific security probes (included automatically with --full)

For the full current interface, use:

auditmysite --help
auditmysite browser --help

Output Contract

JSON output is treated as an automation contract.

The repository validates these contracts in automated tests.

Feature Scope

WCAG rules (Level A and AA)

Core rules:

  • Non-text content (1.1.1)
  • Keyboard access (2.1.1)
  • Bypass blocks (2.4.1)
  • Language of page (3.1.1)
  • Name, role, value / form labeling (4.1.2)
  • Contrast minimum (1.4.3) and non-text contrast (1.4.11)
  • Headings and labels (2.4.6)
  • Labels or instructions (3.3.2)
  • Focus order (2.4.3) and focus visible (2.4.7)
  • Label in name (2.5.3)

ARIA and semantics:

  • ARIA role validation — invalid roles, required owned elements, required context
  • ARIA attribute checks — allowed attributes per role, required attributes, prohibited attributes
  • Accessible name checks — icon-only controls, empty aria-labelledby/describedby, name/description conflicts, naming by role type (command, input, meter, progressbar, toggle, dialog, treeitem)
  • ARIA relationship checks — aria-controls, aria-owns, aria-activedescendant, duplicate IDs
  • Landmark structure — main, navigation, banner, contentinfo (presence, uniqueness, top-level nesting, no-duplicate for banner/contentinfo/main, required parent for landmarks)
  • Content in landmarks — region rule ensuring body content lives inside landmark regions
  • Table rules — caption/name, header cells, presentational tables, cell placement
  • Form rules — fieldset/legend for grouped controls, required field indication, error description, label-title-only detection
  • List structure — listitem context, empty lists, definition list integrity
  • Dialog rules — accessible name, aria-modal, alert region labeling
  • Widget rules — tab/tabpanel pairing, selected state, combobox options, slider value, tree context, summary element naming
  • Media rules — application and image-role elements without accessible names
  • SVG rules — SVG image accessible names
  • Server-side image maps — detection and flagging
  • Meta viewport — large maximum-scale restrictions

77 rules with stable rule_id, tags (e.g. wcag2a, wcag412, cat.aria), and an impact field (critical / serious / moderate / minor).

Some criteria (keyboard trap behavior, timed content, captions) cannot be reliably verified by automated means. These are flagged as not_testable in the JSON output and listed in the report's audit scope section as requiring manual review.
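A manual-review checklist can be extracted from that flag mechanically. The sketch below is illustrative only: the exact placement of the not_testable marker in each JSON record is an assumption, but the status value itself comes from the description above.

```python
def split_findings(findings):
    """Partition findings into automated results and items flagged
    not_testable, which require human review (hypothetical layout:
    each finding is a dict with a "status" field)."""
    automated = [f for f in findings if f.get("status") != "not_testable"]
    manual = [f for f in findings if f.get("status") == "not_testable"]
    return automated, manual
```

The manual list can then be appended to the report's audit scope section or tracked as a separate review ticket.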

AAA is not fully implemented yet.

Additional modules

Modules are classified as measured (based on real browser data) or heuristic (structural-signal estimates, marked with ~ in reports).

Measured:

  • Performance: Core Web Vitals (FCP, LCP, TBT, CLS) and technical complexity (DOM size, render blocking, resource loading)
  • SEO: meta tags, headings, structured data, content profile, tracking/external services signals
  • Security: HTTPS, header checks, and CDN/WAF protection detection
  • Mobile: viewport, touch-target, readability checks, UX heuristics (cookie-banner, modal/overlay, CTA detection)

Heuristic (indicator scores that express a tendency rather than a measurement):

  • UX: 5-dimension analysis (CTA clarity, visual hierarchy, content clarity, trust signals, cognitive load) with saturation curve scoring
  • Journey: user-flow analysis (entry clarity, orientation, navigation, interaction, conversion) with page-intent-aware weighting
  • AI Visibility: structural readiness for LLM indexing and citation (readability, citability, structured data, AI policy, chunk quality)
  • Source Quality: code hygiene signals (inline styles, deprecated elements, semantic structure, asset hygiene)
  • Dark Mode: detects dark mode support via prefers-color-scheme media queries and CSS custom properties
  • Tech Stack: detects CMS and frameworks (WordPress, Drupal, Joomla, Next.js, Astro, React, Vue, etc.) via in-page signals and runs stack-specific security probes (admin panel exposure, user enumeration, version disclosure)

Risk assessment

Risk level is computed independently of the score. A page scoring 81 can still carry "Critical" risk if it has Level A violations relevant under BFSG/EAA. The four risk levels (Low, Medium, High, Critical) are derived from critical/high violations, legal flags, and blocking issues (4.1.2/2.1.1).
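As a mental model, the gating can be sketched as below. The thresholds and exact inputs are assumptions for illustration; only the four levels and the principle that severity gates override the score come from the description above.

```python
def risk_level(score, critical_violations, legal_flags, blocking_issues):
    """Illustrative sketch only: severity gates fire before the score
    is consulted, so a page scoring 81 can still be rated Critical."""
    if critical_violations > 0 or blocking_issues > 0:
        return "Critical"
    if legal_flags > 0:
        return "High"
    if score < 70:  # assumed threshold, for illustration only
        return "Medium"
    return "Low"
```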

Rule configuration

Rules can be selectively disabled or filtered via auditmysite.toml:

[rules]
disabled = ["heading-order", "landmark-one-main"]
# enabled_only = ["image-alt", "label"]  # run only these rules
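The intended semantics of the two keys can be modeled as follows. This is a sketch of the documented behavior, not the tool's implementation; the assumption that enabled_only takes precedence over disabled follows from the comment in the example above.

```python
def select_rules(all_rules, disabled=None, enabled_only=None):
    """Model of the [rules] section: enabled_only, when set, restricts
    the run to exactly those rules; otherwise every rule except the
    disabled ones runs."""
    if enabled_only is not None:
        allowed = set(enabled_only)
        return [r for r in all_rules if r in allowed]
    blocked = set(disabled or [])
    return [r for r in all_rules if r not in blocked]
```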

AI / LLM output format

Export findings as a task-oriented JSON list for direct LLM processing:

auditmysite https://example.com -f ai -o findings.json

Each entry is a task object with task_id, rule_id, impact, wcag, tags, title, issue, fix, selector, node_id, and help_url, sorted by impact severity. The list is suitable for direct use as context in AI-assisted code remediation.
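Because the format is plain JSON, a downstream script needs nothing beyond the standard library. The sketch below re-sorts tasks on the critical/serious/moderate/minor scale and groups task IDs per rule; the sample records in the test are invented, only the field names (task_id, rule_id, impact) come from the format description above.

```python
import json

# Severity order matches the impact scale used by the rule engine.
IMPACT_ORDER = {"critical": 0, "serious": 1, "moderate": 2, "minor": 3}

def triage(findings_json):
    """Sort tasks by impact severity, then group task IDs by rule_id."""
    tasks = json.loads(findings_json)
    tasks.sort(key=lambda t: IMPACT_ORDER.get(t["impact"], len(IMPACT_ORDER)))
    by_rule = {}
    for t in tasks:
        by_rule.setdefault(t["rule_id"], []).append(t["task_id"])
    return tasks, by_rule
```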

Baseline and CI diff

Save a baseline snapshot and compare future runs against it:

# Save baseline
auditmysite https://example.com -f json -o baseline.json

# Future CI runs can diff against the baseline programmatically via the Rust API

The Baseline type in the audit module supports from_violations, diff, load, and save.
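For pipelines that consume the JSON reports directly instead of the Rust API, the diff idea is easy to reproduce in a script. The sketch below keys each violation on (rule_id, selector); that identity key is an assumption for illustration, not the tool's actual diff logic.

```python
def diff_violations(baseline, current):
    """Return (new, fixed) relative to a baseline run. Each violation
    is assumed to be a dict carrying rule_id and selector fields."""
    key = lambda v: (v["rule_id"], v["selector"])
    base_keys = {key(v) for v in baseline}
    cur_keys = {key(v) for v in current}
    new = [v for v in current if key(v) not in base_keys]
    fixed = [v for v in baseline if key(v) not in cur_keys]
    return new, fixed
```

A CI job can then fail only on new violations, letting known baseline issues through while still catching regressions.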

Report Modes

Single-page reports and sitemap/batch reports are intentionally different.

Single-page report is structured in two layers:

  • Top (decision layer): hero block with score + risk level, top 3 problems, next 3 steps, overall assessment (UX/accessibility, technology/security, SEO), trend
  • Bottom (implementation layer): task block ("Was jetzt tun?" / "What to do now?", with role, effort, impact, priority), module overview, key findings, technical implementation details, detailed metrics

Sitemap/batch report is aggregated and domain-wide: averages, ranking, recurring issues, URL matrix, near-duplicate content, broken links, crawl diagnostics.

Batch reports are not a stack of single-page reports.

Compared to typical setups

  • Better fit for JavaScript-heavy sites than static HTML-only checks
  • Easier to distribute than a multi-package browser toolchain
  • More automation-friendly than ad hoc console output because the JSON contract is explicit and tested
  • Broader reporting surface than a pure accessibility-only checker when you also want performance, SEO, security, and mobile signals
  • Violations carry stable rule_id, tags, and impact, which makes them easier to integrate with existing tooling or dashboards

Typical Workflows

Examples grouped by audience and goal.

Customer-facing report (PDF)

Single-URL audit with full module coverage and a custom logo on the cover.

# default: writes a PDF + JSON sidecar to the current directory
auditmysite https://example.com --full

# explicit branding and output path
auditmysite https://example.com --full --logo ./assets/customer-logo.svg --output reports/customer.pdf

# pick a report depth: executive (management), standard (default), technical (developers)
auditmysite https://example.com --full --report-level executive --output reports/exec.pdf

# PDF language (default: de)
auditmysite https://example.com --full --lang en --output reports/report-en.pdf

CI / automation (JSON)

Quiet, machine-readable output for pipelines.

# exit code follows score thresholds; JSON report for downstream tooling
auditmysite https://example.com -f json -o report.json --quiet

# batch CI run on a sitemap
auditmysite --sitemap https://example.com/sitemap.xml -f json -o sitemap-report.json --quiet

AI fix list

Compact, agent-friendly output that focuses on actionable fixes.

auditmysite https://example.com -f ai -o fixes.json

Dashboard / ranking feed

Compact summary JSON with score, grade, medal, issue counts, and the top 10 findings; it matches the lastAudit schema used by dashboard tools.

auditmysite https://example.com -f summary -o summary.json

Sitemap / batch

Domain-wide audits with cross-page aggregation.

# explicit sitemap
auditmysite --sitemap https://example.com/sitemap.xml --full

# crawl from a base URL
auditmysite https://example.com --crawl --crawl-depth 2 --max-pages 50 --full

# URL list from file
auditmysite --url-file urls.txt --full

# one PDF per URL instead of an aggregated batch report
auditmysite --sitemap https://example.com/sitemap.xml --per-page-reports --output reports/per-page/

Local development

# audit a local dev server with a system Chrome
auditmysite http://localhost:3000 --browser-path /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome

# quick CLI summary without writing files
auditmysite https://example.com --format table

Base URL with sitemap suggestion

# interactive: ask first if a sitemap is found
auditmysite https://example.com

# non-interactive: switch directly to sitemap mode
auditmysite https://example.com --prefer-sitemap

# stay on the single URL even when a sitemap exists
auditmysite https://example.com --no-sitemap-suggest

Architecture

CLI -> Browser Manager -> Chrome/CDP -> Accessibility Tree -> WCAG Engine -> Output

Key layers:

  • browser/: browser detection, resolution, install, lifecycle, pooling
  • audit/: pipeline, normalization, scoring, batch processing
  • wcag/: rule engine and violations
  • output/: CLI, JSON, PDF, AI, summary format
  • seo/, security/, performance/, mobile/, ux/, journey/: optional analysis modules
  • tech_stack/, source_quality/, ai_visibility/, dark_mode/: heuristic indicator modules

Development

Setup

git clone https://github.com/casoon/auditmysite.git
cd auditmysite
cargo test
cargo build --release
./target/release/auditmysite https://example.com

Pre-commit checks

This repository uses Git hooks with a fast local pre-commit gate and a full pre-push gate.

pre-commit runs:

  • nosecrets on staged changes
  • cargo fmt -- --check
  • cargo clippy --lib --bins --all-features -- -D warnings

pre-push runs:

  • scripts/check-version-match.sh for pushed v* tags
  • cargo clippy --all-targets --all-features -- -D warnings
  • cargo test

Enable the repo hook path:

git config core.hooksPath .githooks

Install nosecrets as a standalone binary first:

npm install -g @casoon/nosecrets
# or
cargo install nosecrets-cli

Skip the Rust checks only when you intentionally need to bypass them:

SKIP_RUST_CHECKS=1 git commit -m "..."

The hook expects nosecrets to be available in PATH.

Release checks

Run the local release gate with:

./scripts/release-check.sh

It validates:

  • cargo test
  • ignored browser integration tests
  • builds with and without PDF
  • current --help output
  • JSON contract tests
  • installer/release artifact consistency
  • stale docs references

Troubleshooting

  • Browser not found: run auditmysite browser detect or install a managed browser with auditmysite browser install
  • Running in Docker or as root: use --no-sandbox
  • Need raw output for scripts: prefer -f json -o report.json
  • Unsure about the full CLI surface: run auditmysite --help

Contributing

Library / Development

For library development or local work from the repository:

cargo build
cargo test

If you want the current local repository state as an installed binary while developing:

cargo install --path . --force

Contributions are welcome. At minimum before opening a PR:

cargo test
./scripts/release-check.sh

License

AGPL-3.0-or-later. See LICENSE.
