Most projects show code health as a row of independent badges: CI, coverage, security, complexity, debt. Each one reports its own pass/fail. None of them say which signal matters more or how to weigh them against each other.
confvis aggregates them into a single weighted score. Declare the factors you care about, give each a weight and threshold, and get one composite. The scoring rule lives in your repo as config, reviewable like any other code.
Outputs include gauge badges, flat badges, sparkline history, HTML dashboards, and GitHub PR checks. Use `--fail-under` to gate CI on a minimum score, or `--fail-on-regression` to catch drift against a stored baseline.
Instead of interpreting all of these independently, you get one weighted assessment.
```yaml
- uses: boinger/confvis@v1
  with:
    config: confidence.json
    output: badge.svg
```

See GitHub Action Documentation for all options.
```bash
go install github.com/boinger/confvis/cmd/confvis@latest
```

Or build from source:

```bash
git clone https://github.com/boinger/confvis.git
cd confvis
go build -o confvis ./cmd/confvis
```

Verify your installation:

```bash
confvis --version
```

confvis pulls metrics from tools you already use:
```bash
# Fetch coverage from Codecov
export CODECOV_TOKEN=your_token
confvis fetch codecov -p owner/repo -o coverage.json

# Fetch code quality from SonarQube (self-hosted or SaaS)
export SONARQUBE_URL=https://sonar.example.com
export SONARQUBE_TOKEN=squ_xxx
confvis fetch sonarqube -p myproject -o quality.json

# Aggregate with weights and generate badge + dashboard
confvis aggregate -c coverage.json:60 -c quality.json:40 -o ./output
```

Other integrations: GitHub Actions, Snyk, Trivy. See Sources.
Each fetched report contains:
```json
{
  "title": "Code Coverage",
  "score": 87,
  "threshold": 80,
  "factors": [
    {"name": "Line Coverage", "score": 89, "weight": 70},
    {"name": "Branch Coverage", "score": 82, "weight": 30}
  ]
}
```

- `score`: The metric value (0-100), auto-calculated from weighted factors
- `threshold`: Minimum acceptable score; badge shows pass/fail status
- `factors`: Breakdown of contributing metrics with weights
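The auto-calculated score is a weighted average of the factors. A quick sketch of how the example report above works out (illustrative only, not confvis source code; rounding to the nearest integer is an assumption, since `score` is an int):

```python
# Factors from the example report above
factors = [
    {"name": "Line Coverage", "score": 89, "weight": 70},
    {"name": "Branch Coverage", "score": 82, "weight": 30},
]
threshold = 80

# Weighted average: (89*70 + 82*30) / (70 + 30) = 86.9, rounded to 87
score = round(sum(f["score"] * f["weight"] for f in factors)
              / sum(f["weight"] for f in factors))
status = "pass" if score >= threshold else "fail"
print(score, status)  # 87 pass
```

Note that the weights need not sum to 100; dividing by the weight total normalizes them either way.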
The aggregate command (shown above) combines multiple reports into a weighted overall score. See Schema Reference for the full specification.
Custom metrics? Create your own JSON/YAML for metrics confvis doesn't fetch directly. Or write a new module (and send me the PR, please)!
Create a .confvis.yaml to set defaults and avoid repetitive flags:
```yaml
gauge:
  style: github
  fail_under: 80
  badge_type: gauge
sources:
  sonarqube:
    url: https://sonar.example.com
  snyk:
    org: my-org-id
```

Config is loaded from `.confvis.yaml` in the current directory or `~/.config/confvis/`. Precedence: config < environment < flags.
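The precedence rule (config < environment < flags) amounts to a layered merge where later layers win. A minimal sketch of that idea (illustrative, not confvis internals; the option names are taken from the example config above):

```python
def resolve(config: dict, env: dict, flags: dict) -> dict:
    # Later layers override earlier ones: config < environment < flags.
    # None means "not set in this layer" and is skipped.
    merged = {}
    for layer in (config, env, flags):
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged

settings = resolve(
    {"fail_under": 80, "style": "github"},  # .confvis.yaml
    {"fail_under": 85},                     # environment variables
    {"style": "minimal"},                   # CLI flags
)
print(settings)  # {'fail_under': 85, 'style': 'minimal'}
```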
See CLI Reference for full documentation.
Use `confvis gate` for CI-only pass/fail gating (no badge generation needed), or `confvis gauge` when you also want badge output. Both commands are available through the GitHub Action via `command: gate` or `command: gauge`:
```bash
# CI gate: fail the build if score drops below 75
confvis gate -c confidence.json --fail-under 75

# Save baseline on main branch (stored in git ref, no files needed)
confvis baseline save -c confidence.json

# CI gate: fail on regression from stored baseline
confvis gate -c confidence.json --fail-on-regression --compare-baseline

# Or use gauge when you also need a badge
confvis gauge -c confidence.json --compare-baseline --fail-on-regression -o badge.svg

# Quiet mode for clean CI logs (exit code only)
confvis gate -c confidence.json --fail-under 75 -q
```

In GitHub Actions, `gate` automatically writes `gate_result=pass|fail` and `gate_score=<N>` to `$GITHUB_OUTPUT`, making results available to downstream steps without `continue-on-error`. The GitHub Action also exposes these as action outputs (`gate_result`, `gate_score`) plus generic `score`/`passed` mappings.
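A downstream step can then react to those outputs. A sketch of the wiring, assuming the gate step is given `id: gate` (the step layout and echo message are illustrative):

```yaml
steps:
  - id: gate
    run: confvis gate -c confidence.json --fail-under 75
  # always() lets this step run even when the gate step failed the job
  - if: always() && steps.gate.outputs.gate_result == 'fail'
    run: echo "Score ${{ steps.gate.outputs.gate_score }} is below the 75 threshold"
```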
Supports stdin/stdout for pipeline workflows:
```bash
# Pipe from another tool
metrics-tool export | confvis gauge -c - -o badge.svg

# Write directly to stdout
confvis gauge -c confidence.json -o - > badge.svg
```

confvis can fetch metrics directly from external systems:
```bash
# Fetch from SonarQube (code quality)
export SONARQUBE_URL=https://sonar.example.com
export SONARQUBE_TOKEN=squ_xxx
confvis fetch sonarqube -p myproject -o confidence.json

# Fetch from Codecov (coverage)
export CODECOV_TOKEN=xxx
confvis fetch codecov -p myorg/myrepo -o confidence.json

# Fetch from GitHub Actions (CI/CD)
export GITHUB_TOKEN=xxx
confvis fetch github-actions -p myorg/myrepo -o confidence.json

# Fetch from Snyk (security)
export SNYK_TOKEN=xxx
confvis fetch snyk --org my-org-id -p my-project-id -o confidence.json

# Fetch from Trivy (local security scan)
confvis fetch trivy -p . -o security.json

# Pipe directly to badge generation
confvis fetch sonarqube -p myproject -o - | confvis gauge -c - -o badge.svg
```

See Sources Documentation for details on available sources and their configuration.
Fetch metrics from an external source.
```bash
confvis fetch <source> -p <project> -o <output> [source-specific-flags]
```

Supported sources: `codecov`, `dependabot`, `github-actions`, `grype`, `semgrep`, `snyk`, `sonarqube`, `trivy`
Generate both an SVG badge and HTML dashboard.
```bash
confvis generate -c confidence.json -o ./output [--dark]
```

Creates:

- `output/badge.svg` - SVG gauge badge
- `output/dashboard/index.html` - Interactive HTML dashboard
Generate a gauge badge in various formats.
```bash
confvis gauge -c confidence.json -o badge.svg [--format svg|json|text|markdown|github-comment] [--badge-type gauge|flat] [--style github|minimal|corporate|high-contrast] [--dark]
```

Output formats (default: `text` for stdout, `svg` for files):

- `svg`: SVG gauge badge image
- `json`: Score metadata as JSON
- `text`: Just the score number (for scripting)
- `markdown`: Markdown table for PR comments
- `github-comment`: GitHub-flavored markdown with emoji and collapsible sections
Badge types:
- `gauge` (default): Semi-circular gauge
- `flat`: Shields.io-compatible rectangular badge (supports `--icon` for SVG path data)
- `sparkline`: Trend line showing score history (use `--history-auto` to persist automatically)
Example sparkline (this repo's score trend):
Color styles: `github` (default), `minimal`, `corporate`, `high-contrast`
CI gate: check thresholds and exit non-zero on failure. No badge generation; pass/fail only.
```bash
# Fail if score below threshold
confvis gate -c confidence.json --fail-under 85

# Fail on regression from baseline
confvis gate -c confidence.json --fail-on-regression --compare-baseline

# Combined: threshold + regression + per-factor
confvis gate -c confidence.json --fail-under 75 \
  --fail-on-regression --compare-baseline \
  --factor-threshold "Coverage:80"
```

At least one threshold flag is required (`--fail-under`, `--fail-on-regression`, or `--factor-threshold`). Use `-q` for exit-code-only output, or `-v` for a per-factor breakdown.
Aggregate multiple reports into a single dashboard with weighted scores.
```bash
# Aggregate multiple reports
confvis aggregate -c api/confidence.json -c web/confidence.json -o ./output

# With custom weights
confvis aggregate -c api/confidence.json:60 -c web/confidence.json:40 -o ./output

# Using glob patterns (monorepo)
confvis aggregate -c "services/*/confidence.json" -o ./output
```

Creates:

- `output/badge.svg` - Aggregate SVG gauge badge
- `output/dashboard/index.html` - Multi-report dashboard with all components
- `output/<report-title>.svg` - Individual badges for each report
Use `--fragment` to generate an embeddable HTML fragment (no DOCTYPE wrapper) for Confluence or other systems.
See examples/dashboard for a working example with embedding instructions.
Manage baselines for regression detection in CI/CD.
```bash
# Save current score as baseline (stored in git ref by default)
confvis baseline save -c confidence.json

# Show current baseline
confvis baseline show

# Save to file instead of git ref
confvis baseline save -c confidence.json --file baseline.json
```

Use `--compare-baseline` with `confvis gauge` to automatically fetch and compare against the stored baseline.
Create check runs on CI platforms directly from confidence reports.
```bash
# Auto-detect from GitHub Actions environment
confvis check github -c confidence.json

# Explicit options
confvis check github -c confidence.json \
  --owner myorg --repo myrepo --sha abc123 \
  --token $GITHUB_TOKEN

# Custom check name
confvis check github -c confidence.json --name "Code Quality"
```

In GitHub Actions, most options are auto-detected from environment variables. Requires the `checks: write` permission.
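Wired into a workflow, that might look like the following sketch (the job layout and step names are illustrative; the `checks: write` requirement comes from the note above):

```yaml
permissions:
  checks: write   # required for creating check runs

steps:
  - uses: actions/checkout@v4
  - name: Publish confidence check
    run: confvis check github -c confidence.json --name "Code Quality"
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```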
| Field | Type | Required | Description |
|---|---|---|---|
| `title` | string | Yes | Report title |
| `score` | int | No* | Overall score (0-100), auto-calculated if omitted |
| `threshold` | int | Yes | Minimum passing score |
| `description` | string | No | Report description |
| `thresholds` | object | No | Custom color thresholds (`greenAbove`, `yellowAbove`) |
| `factors` | array | No | Breakdown of contributing factors |
*Score is auto-calculated as a weighted average when omitted and factors are present.
Each factor:
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Factor name |
| `score` | int | Yes | Factor score (0-100) |
| `weight` | int | Yes | Weight in overall calculation |
| `description` | string | No | Factor description |
| `url` | string | No | Link to detailed report (clickable in dashboard) |
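If you hand-write reports for custom metrics, a small validator can catch schema mistakes before confvis sees them. A minimal sketch against the factor fields in the table above (illustrative; confvis performs its own validation):

```python
# Required factor fields and their expected types, per the table above
REQUIRED = {"name": str, "score": int, "weight": int}

def validate_factor(factor: dict) -> list[str]:
    """Return a list of problems; an empty list means the factor looks valid."""
    errors = []
    for key, typ in REQUIRED.items():
        if key not in factor:
            errors.append(f"missing required field: {key}")
        elif not isinstance(factor[key], typ):
            errors.append(f"{key} must be {typ.__name__}")
    if isinstance(factor.get("score"), int) and not 0 <= factor["score"] <= 100:
        errors.append("score must be in the range 0-100")
    return errors

print(validate_factor({"name": "Line Coverage", "score": 89, "weight": 70}))  # []
print(validate_factor({"name": "Oops", "score": 120}))
```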
- GitHub Action
- Installation Guide
- CLI Reference
- Schema Reference
- Integration Guide
- External Sources
- Architecture
See the examples/ directory for:
- GitHub Actions workflow
- Makefile integration
- Multi-source score aggregation
MIT - see LICENSE