Releases: six2dez/burp-ai-agent
v0.5.0
Full Changelog: v0.4...v0.5.0
v0.4
Full Changelog: v0.3.0...v0.4
[0.4.0] - 2026-03-06
- Copilot CLI Backend:
  - New GitHub Copilot CLI backend with non-interactive prompt mode (`-p`), quiet output (`--quiet`), and a file-based prompt fallback for payloads exceeding 32k chars.
  - Configurable command in the AI Backend settings tab; registered via ServiceLoader for drop-in availability.
- AI Request Logger:
  - Real-time activity logger (`AiRequestLogger`) capturing all AI interactions: prompts, responses, MCP tool calls, retries, errors, and scanner dispatches.
  - Trace ID correlation across chat (`chat-turn-{UUID}`), scanner (`scanner-job-{UUID}`), and agent (`agent-turn-{UUID}`) flows for end-to-end observability.
  - Structured `AiActivityEntry` with timestamp, activity type, source, backend, duration, character counts, token usage, and arbitrary metadata.
  - Integration in `AgentSupervisor` (prompt/response/error), `PassiveAiScanner` (send/timeout/error/completion), `McpToolHandlers` (per-tool call with policy decisions and arg/result hashes), and `ChatPanel` (tool chain steps).
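The trace-ID scheme above can be sketched as follows. This is a minimal, hypothetical illustration of prefixed-UUID correlation IDs; the `TraceIds` class and `Flow` enum are illustrative names, not the extension's real API.

```java
// Hypothetical sketch: each flow prefixes a random UUID so all log entries
// produced during one chat turn, scanner job, or agent turn share an ID.
import java.util.UUID;

class TraceIds {
    enum Flow {
        CHAT("chat-turn"), SCANNER("scanner-job"), AGENT("agent-turn");
        final String prefix;
        Flow(String prefix) { this.prefix = prefix; }
    }

    // One trace ID is minted per interaction and threaded through every
    // prompt, response, retry, and tool call it triggers.
    static String traceId(Flow flow) {
        return flow.prefix + "-" + UUID.randomUUID();
    }
}
```

Filtering the logger by one such ID then reconstructs a full interaction end to end.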
- AI Logger UI Tab:
  - New "AI Logger" tab in the bottom settings panel with live filterable table, detail inspector pane, and JSON export.
  - Preset filters (Errors only, Slow >=3s, Tool failures), type/source dropdowns, and trace ID search for quick diagnosis.
- Rolling JSONL Persistence:
  - Optional file-based persistence for the AI Request Logger with configurable rotation via JVM system properties (`burp.ai.logger.rolling.enabled`, `.dir`, `.maxBytes`, `.maxFiles`).
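Reading those system properties might look like the sketch below. The property names come from the release notes; the defaults are illustrative assumptions, not the extension's documented values.

```java
// Hypothetical sketch of resolving the rotation settings from JVM system
// properties (e.g. launched with -Dburp.ai.logger.rolling.enabled=true).
class RollingLogConfig {
    final boolean enabled;
    final String dir;
    final long maxBytes;
    final int maxFiles;

    RollingLogConfig() {
        String p = "burp.ai.logger.rolling.";
        enabled = Boolean.parseBoolean(System.getProperty(p + "enabled", "false"));
        dir = System.getProperty(p + "dir", System.getProperty("java.io.tmpdir"));
        maxBytes = Long.parseLong(System.getProperty(p + "maxBytes", "10485760")); // assumed 10 MiB default
        maxFiles = Integer.parseInt(System.getProperty(p + "maxFiles", "5"));      // assumed default
    }
}
```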
- Auto Tool Chaining:
  - Chat automatically chains up to 8 sequential MCP tool calls per interaction when the AI response contains a tool call JSON payload.
  - All chained calls share the same trace ID for end-to-end correlation in the AI Logger.
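The bounded chaining loop can be sketched as below: keep executing tools while the reply contains a tool call, stopping after 8 hops. The `askAi`/`parseCall`/`runTool` interfaces are illustrative stand-ins, not the extension's real types.

```java
// Hypothetical sketch of bounded tool chaining: the loop invariant is that
// at most MAX_CHAIN tool calls run per chat interaction.
import java.util.Optional;
import java.util.function.Function;

class ToolChain {
    static final int MAX_CHAIN = 8; // cap stated in the release notes

    // askAi: prompt -> AI reply; parseCall: reply -> tool call if present;
    // runTool: tool call -> tool result, fed back to the AI as the next prompt.
    static int run(Function<String, String> askAi,
                   Function<String, Optional<String>> parseCall,
                   Function<String, String> runTool,
                   String prompt) {
        int hops = 0;
        String reply = askAi.apply(prompt);
        while (hops < MAX_CHAIN) {
            Optional<String> call = parseCall.apply(reply);
            if (call.isEmpty()) break;          // plain answer: chain ends
            hops++;
            reply = askAi.apply(runTool.apply(call.get()));
        }
        return hops;                             // number of tool calls made
    }
}
```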
- ToolCallParser:
  - Robust JSON tool call extraction from AI responses supporting fenced code blocks (`json`/`tool`), bare JSON objects, and nested OpenAI-style `tool_calls`/`function_call` formats.
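Of the formats listed, the fenced-block case can be sketched with a single regex. This only covers that one case; the real `ToolCallParser` also handles bare objects and OpenAI-style payloads, and the regex here is an illustrative simplification.

```java
// Hypothetical sketch: pull a JSON object out of a ```json or ```tool fence.
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class FencedJson {
    // DOTALL so the object may span multiple lines inside the fence.
    private static final Pattern FENCE =
        Pattern.compile("```(?:json|tool)\\s*(\\{.*?\\})\\s*```", Pattern.DOTALL);

    static Optional<String> extract(String aiResponse) {
        Matcher m = FENCE.matcher(aiResponse);
        return m.find() ? Optional.of(m.group(1)) : Optional.empty();
    }
}
```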
- System Prompt Support:
  - `AgentConnection.send()` now accepts a `systemPrompt` parameter; HTTP backends (Ollama, LM Studio, OpenAI-compatible) receive agent profile instructions via the system role instead of inlining them in user prompts.
- Per-Session Token Tracking:
  - Chat sessions track cumulative input/output token counts with visual token bars showing session-level and global usage in the sidebar.
- Context Collection Size Cap:
  - `ContextCollector` caps the total serialized size of context items to prevent oversized payloads from exceeding prompt limits.
- Backend Retry Diagnostics:
  - `BackendDiagnostics.RetryEvent` model with structured metadata (attempt number, delay, reason) logged to the AI Request Logger as `RETRY` activities.
v0.3.0
[0.3.0] - 2026-02-24
Added
- Security Test Coverage (MCP):
  - Added unit tests for bearer token authorization and constant-time comparison in `KtorMcpServerManager`.
  - Added unit tests for loopback TLS connection hardening behavior in `McpSupervisor`.
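Constant-time token comparison, the behavior those tests cover, can be sketched with the JDK directly: `MessageDigest.isEqual` compares byte arrays in time independent of where they first differ, avoiding the timing side channel a `String.equals` check would leak. The class name is illustrative.

```java
// Hypothetical sketch of constant-time bearer-token comparison.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

class TokenCheck {
    static boolean tokenMatches(String presented, String expected) {
        // MessageDigest.isEqual is documented as time-constant since Java 6u17.
        return MessageDigest.isEqual(
            presented.getBytes(StandardCharsets.UTF_8),
            expected.getBytes(StandardCharsets.UTF_8));
    }
}
```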
- Backend Registry Test Coverage:
  - Added tests for availability cache behavior and cache reset on reload/shutdown.
- Scanner/Issue Utilities Test Coverage:
  - Added tests for shared issue canonicalization, equivalent-issue detection, and HTML detail formatting.
  - Added passive scanner confidence-threshold test to ensure AI findings below 85% confidence are skipped.
- Redaction Lifecycle Test Coverage:
  - Added tests for per-salt and global host mapping cleanup.
- Shared Issue Utilities:
  - New `IssueUtils` helper for canonical issue naming, equivalent issue detection, and safe issue detail HTML formatting.
- Redaction Cleanup API:
  - Added `Redaction.clearMappings(salt: String? = null)` to support deterministic cleanup of anonymization mappings.
- Token Optimization Controls (Passive + Context):
  - Added persistent passive scanner controls for endpoint dedup TTL, response-fingerprint dedup TTL, prompt-cache TTL, and cache sizes.
  - Added persistent passive scanner controls for request/response body prompt caps, maximum header count, and maximum parameter count.
  - Added persistent manual-context controls for request/response body truncation and compact JSON serialization.
- Passive Scanner Prompt Result Cache:
  - Added prompt-hash result caching with TTL-aware reuse and cache-hit audit events to avoid repeated backend calls for identical payloads.
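A TTL-aware prompt-result cache of this kind can be sketched as below. Class and method names are illustrative, sizes and TTLs are assumptions, and a real implementation would key on a cryptographic hash rather than `hashCode`.

```java
// Hypothetical sketch: cache backend results keyed by a hash of the prompt,
// reuse them until the TTL expires, and evict the least-recently-used entry
// once the cache exceeds its maximum size.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

class PromptCache {
    private record Cached(String result, long expiresAt) {}
    private final Map<Integer, Cached> cache;
    private final long ttlMillis;

    PromptCache(int maxSize, long ttlMillis) {
        this.ttlMillis = ttlMillis;
        // access-order LinkedHashMap gives LRU eviction via removeEldestEntry.
        this.cache = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Cached> eldest) {
                return size() > maxSize;
            }
        };
    }

    String getOrCompute(String prompt, Function<String, String> backend) {
        int key = prompt.hashCode(); // illustrative; use SHA-256 in practice
        Cached hit = cache.get(key);
        long now = System.currentTimeMillis();
        if (hit != null && hit.expiresAt() > now) return hit.result(); // cache hit
        String result = backend.apply(prompt);                          // backend call
        cache.put(key, new Cached(result, now + ttlMillis));
        return result;
    }
}
```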
- Token Usage Telemetry:
  - Added shared `TokenTracker` flow accounting (input/output chars + token estimate) for chat and passive scanning paths.
- Active Scanner Queue Panel:
  - Added a dedicated queue viewer dialog with live refresh, per-item cancellation, and full queue clearing controls.
  - Added queue snapshot APIs and selective cancellation support for queued active scan targets.
- Backend Health Contract and Diagnostics UX:
  - Added `HealthCheckResult` contract (`Healthy`, `Degraded`, `Unavailable`, `Unknown`) at the backend level.
  - Added backend-level health check integration in registry/supervisor flows.
  - Added "Test connection" actions in backend settings panels.
- HTTP Backend Runtime Telemetry:
  - Added usage-aware connection support so HTTP backends can report real token usage when providers expose `usage` fields.
- Testing Expansion (Integration + Concurrency + Resilience):
  - Added MCP server integration tests (`McpServerIntegrationTest`) covering health and auth/shutdown endpoints.
  - Added MCP limiter concurrency stress tests (`McpRequestLimiterConcurrencyTest`).
  - Added active scanner queue backpressure tests (`ScannerQueueBackpressureTest`).
  - Added supervisor auto-restart policy tests (`AgentSupervisorRestartPolicyTest`).
  - Added backend health contract tests (`BackendHealthCheckTest`) and settings migration tests (`AgentSettingsMigrationTest`).
- CI Workflows for Reliability:
  - Added `nightlyRegressionTest` Gradle task for heavy suites (integration/concurrency/resilience).
  - Added `.github/workflows/nightly-regression.yml` with scheduled/manual execution and artifact publishing.
- Settings Schema Migration and Operator Docs:
  - Added schema version marker `settings.schema.version` with an additive/idempotent migration flow.
  - Added operator runbooks: `docs/mcp-hardening.md`, `docs/ui-safety-guide.md`, `docs/backend-troubleshooting.md`.
Changed
- Duplicate Issue Logic Consolidation:
  - Replaced duplicated issue matching/canonicalization code in Passive Scanner, Active Scanner, MCP tools, and UI actions with `IssueUtils`.
- Shutdown Reliability and Consistency:
  - Refactored `App.shutdown()` to use a unified safe shutdown step wrapper with consistent error handling.
  - Added redaction mapping cleanup to the app shutdown flow.
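A "safe shutdown step" wrapper of the kind described can be sketched as follows: each step runs in its own try/catch so one failing step cannot abort the rest of the shutdown sequence. Names and the error-collection strategy are illustrative.

```java
// Hypothetical sketch: wrap each shutdown step so failures are recorded
// instead of propagating and halting the remaining steps.
import java.util.ArrayList;
import java.util.List;

class SafeShutdown {
    static final List<String> errors = new ArrayList<>();

    static void step(String name, Runnable action) {
        try {
            action.run();
        } catch (Exception e) {
            errors.add(name + ": " + e); // log and continue with the next step
        }
    }
}
```

Usage: `step("close-http-pools", pool::close); step("clear-redaction-mappings", redaction::clear);` — the second step runs even if the first throws.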
- Text Sanitization Performance:
  - Cached regex patterns in `IssueText` to avoid recompilation on each call.
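The change amounts to the standard pattern below: compile the `Pattern` once into a static field instead of using `String.replaceAll`, which recompiles the expression on every invocation. The class name and regex are illustrative, not the extension's actual sanitizer.

```java
// Hypothetical sketch of regex caching: TAGS is compiled once per class load.
import java.util.regex.Pattern;

class IssueTextSketch {
    private static final Pattern TAGS = Pattern.compile("<[^>]+>");

    static String stripTags(String html) {
        // reuses the precompiled pattern; no per-call Pattern.compile cost
        return TAGS.matcher(html).replaceAll("");
    }
}
```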
- Passive Scanner Request Filtering and Deduplication:
  - Added pre-AI traffic pruning for low-value responses (204/304, static assets, tiny bodies without interesting headers).
  - Added endpoint-path and response-fingerprint dedup windows to avoid repeated analysis of equivalent traffic.
- Passive Scanner Prompt Compaction:
  - Replaced full-header forwarding with security-focused header filtering (allowlist + noise denylist + custom `x-*` handling).
  - Reduced parameter verbosity and removed cache-busting parameters from AI metadata.
  - Added content-aware body compaction (JSON array sampling + HTML head/form/inline-script extraction).
  - Updated the passive scanner base prompt to a compact, evidence-first schema while preserving strict JSON output constraints.
- Context Collection Payload Size Control:
  - `ContextCollector` now supports body truncation controls and compact JSON output to reduce manual action token usage.
  - Context menu actions now pass context size/compact settings from `AgentSettings` instead of relying on implicit defaults.
- HTTP Backend Conversation Trimming:
  - Conversation history trimming now enforces both message count and total character budget to prevent prompt blow-up in long sessions.
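The dual-budget trimming can be sketched as follows: drop the oldest messages until both a message-count cap and a total-character cap are satisfied. The names and the keep-at-least-one-message guard are illustrative assumptions.

```java
// Hypothetical sketch: enforce both caps by evicting from the head (oldest).
import java.util.Deque;

class HistoryTrimmer {
    static void trim(Deque<String> history, int maxMessages, int maxChars) {
        int total = history.stream().mapToInt(String::length).sum();
        while (history.size() > maxMessages
                || (total > maxChars && history.size() > 1)) {
            total -= history.removeFirst().length(); // drop oldest first
        }
    }
}
```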
- BountyPrompt Context Limits:
  - Reduced default tag/chunk limits and added category-specific bounds to lower prompt size while keeping actionable context.
- Passive Scanner Settings UX:
  - Expanded AI Passive Scanner tab with advanced token/performance controls and live runtime application of optimization settings.
- Backend Health Status Presentation:
  - Main tab backend badge now supports richer status transitions (`AI: OK`, `AI: Degraded`, `AI: Offline`) with explanatory tooltips.
- Supervisor Health Flow:
  - Backend health resolution now routes through backend registry health contracts with compatibility fallback to availability checks.
- HTTP Backend Client Lifecycle:
  - HTTP backends now reuse shared `OkHttpClient` instances keyed by backend URL/timeout and close pools centrally on shutdown.
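The keyed-reuse pattern can be sketched with the JDK's `HttpClient` instead of OkHttp so the example stays dependency-free: clients are cached per (base URL, timeout) pair so their connection pools are shared across requests. Names and the key format are illustrative.

```java
// Hypothetical sketch: one client instance (and connection pool) per
// backend URL + timeout combination, instead of one per request.
import java.net.http.HttpClient;
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ClientPool {
    private static final Map<String, HttpClient> CLIENTS = new ConcurrentHashMap<>();

    static HttpClient forBackend(String baseUrl, Duration timeout) {
        String key = baseUrl + "|" + timeout.toMillis();
        return CLIENTS.computeIfAbsent(key, k ->
            HttpClient.newBuilder().connectTimeout(timeout).build());
    }
}
```

OkHttp's equivalent would share the dispatcher and connection pool via `OkHttpClient.newBuilder()` on a base instance.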
- Token Estimation Accuracy:
  - Token estimates now use backend-specific calibration factors and mix real usage values with estimated remainder when available.
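The calibration idea can be sketched as below: a chars-per-token factor per backend, with provider-reported usage preferred when available. The factor values and method names are illustrative assumptions.

```java
// Hypothetical sketch: estimate tokens from character count with a
// backend-specific factor, falling back to the estimate only when the
// provider did not report real usage.
class TokenEstimator {
    static int estimate(String text, double charsPerToken) {
        return (int) Math.ceil(text.length() / charsPerToken);
    }

    // Prefer the provider-reported count; otherwise use the calibrated estimate.
    static int resolve(Integer reportedTokens, String text, double charsPerToken) {
        return reportedTokens != null ? reportedTokens : estimate(text, charsPerToken);
    }
}
```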
- CI Gate Strategy:
  - PR pipeline now uses a fast verification gate (`test -PexcludeHeavyTests=true`) while preserving heavy suites for nightly runs.
- Architecture and README References:
  - Updated architecture and README docs to include schema migration behavior and operator playbook links.
- Ollama Context Limit:
  - Updated the default Ollama Max Context Window to 256000.
Fixed
- Backend Registry Cache Lifecycle:
  - Fixed `availabilityCache` lifecycle by clearing it on `reload()` and `shutdown()`.
  - Fixed initialization-order safety so the cache is always available during startup/reload.
- Repeated Passive AI Cost on Equivalent Traffic:
  - Fixed repeated backend invocations for semantically identical passive traffic by combining endpoint/fingerprint dedup with prompt-result caching.
- Unbounded Manual Context Growth:
  - Fixed manual context actions sending oversized request/response payloads and pretty-printed JSON by introducing truncation + compact encoding.
- Long-Session Prompt Inflation (HTTP Backends):
  - Fixed runaway history growth by adding total-character trimming in conversation history management.
- HTTP Backend Client Churn:
  - Fixed repeated per-request HTTP client construction that prevented efficient connection reuse.
- Legacy Settings Drift:
  - Fixed legacy preference normalization for MCP allowed origins and old Gemini default command values during migration.
Full Changelog: v0.2.1...v0.3.0
v0.2.1
Full Changelog: v0.2.0...v0.2.1
v0.2.0
[0.2.0] - 2026-02-09
Added
- Chat UI Overhaul: ChatGPT-style message bubbles with timestamps, hover-copy, and improved streaming layout.
- Session Persistence: Chat sessions (titles, messages, usage stats) are auto-saved and restored across Burp restarts.
- Chat Export: Export any session as Markdown via context menu or shortcut.
- Keyboard Shortcuts: New session, delete session, clear chat, export chat, and toggle settings panel.
- Cancel In-Flight Requests: Cancel current AI response directly from the chat UI.
- Usage Stats Sidebar: Total messages and per-backend usage displayed in the sessions sidebar.
- Backend Availability Filtering: Backend selector only shows backends that are available on this machine.
- Cross-Platform CLI Resolution: Robust PATH discovery (login shell capture + fallbacks) and executable resolution.
- Markdown Rendering Enhancements: Headings, blockquotes, horizontal rules, links, inline code, and improved code block styling.
Changed
- Settings Panel UX: Collapsible settings panel with a compact toggle bar and improved focus styling.
- Chat History Handling: Controlled CLI history size to avoid oversized prompts while preserving context.
- MCP Tool Errors: Cleaner, action-oriented validation errors for missing tool arguments.
Fixed
- CLI Discovery Reliability: Better detection of CLI tools when Burp is launched from a GUI environment.
- Chat Session Backend Tracking: Sessions now track the last backend used rather than only the creation backend.
- UI State Safety: Prevent stuck “sending” states when session panels are missing.
- Chat Input Shortcuts: Shift+Enter now reliably inserts a new line while Enter sends.
- Chat Persistence Scope: Chat history now persists per Burp project (with one-time migration from global storage).
- Issue Detail Formatting: AI Active and Passive issues now render line breaks and indented sections reliably.
Full Changelog: v0.1.4...v0.2.0
v0.1.4
Full Changelog: v0.1.3...v0.1.4
https://github.com/six2dez/burp-ai-agent/blob/main/CHANGELOG.md
v0.1.3
Full Changelog: v0.1.2...v0.1.3
v0.1.2
Full Changelog: v0.1.1...v0.1.2
v0.1.1
v0.1.0
Full Changelog: https://github.com/six2dez/burp-ai-agent/commits/v0.1.0
