40 changes: 40 additions & 0 deletions drafts/2026-05-08T133311Z.md
@@ -0,0 +1,40 @@
# Reply on "Show HN: Agentctl, a local control plane for coding agents"

- **HN:** https://news.ycombinator.com/item?id=48057567
- **Status:** draft (pending manual post)
- **Story:** Show HN: Agentctl, a local control plane for coding agents (id=48057567, 2 points, 0 comments at draft time, 11 hours old, repo at https://github.com/chocks/agentctl)
- **OP:** `chocks`

## The post

The post is a Show HN for `agentctl`, a single-binary Go tool that gates risky agent actions (package installs, shell exec, secret access, file writes, outbound API calls) at a local seam. Design choices the OP highlights: local-first, no HTTP server, no hosted component, everything under `~/.agentctl/`. The piece the OP says they didn't expect to lean on as much: a **permissive-then-tighten-then-replay** loop, where every gated decision lands in JSONL and stored sessions can be re-evaluated against a stricter policy without re-running the agent. It ships with a TUI for browsing sessions and stepping through replays, and targets Claude Code and MCP-based clients (Codex). The OP frames it as "WIP and mostly a project for myself" but solicits feedback by posting it.

Discovery path: HN `/show` feed (Show HN listing).

## My reply

```
(disclosure: I work on FailProof AI: https://github.com/exospherehost/failproofai)

The replay-against-old-traces loop is the part I think ages best. We took a different cut at the same seam. Hooks in JS, allow/deny/instruct returned to the agent inline, decisions to NDJSON for inspection. No replay step today. Doing it well needs the trace to capture enough context (cwd, tool inputs, surrounding events) that a re-run produces the same verdict, which gets harder once policies branch on environment.

Two questions:

1. How does replay survive tool/SDK upgrades? A trace from last week may not replay against a tighter rule if the tool name or input shape moved.

2. Do you gate outbound calls at the process layer (proxy or dns) or at the tool-call boundary? Different blast-radius story for each.
```
Comment on lines +16 to +26

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Add a language tag to the fenced block to satisfy markdown lint

Line 16 uses a plain triple-backtick fence; markdownlint MD040 expects a language identifier. Since the block is literal post content, tag it `text`.

Proposed fix

````diff
-```
+```text
 (disclosure: I work on FailProof AI: https://github.com/exospherehost/failproofai)
@@
-```
+```
````
🧰 Tools
🪛 markdownlint-cli2 (0.22.1)

[warning] 16-16: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@drafts/2026-05-08T133311Z.md` around lines 16 - 26, The fenced code block
beginning with a plain triple-backtick (the block containing the disclosure and
the two questions) lacks a language identifier and triggers markdownlint MD040;
fix it by editing that fenced block to use a language tag (use "text") so the
opening fence reads ```text and keep the closing ``` unchanged, ensuring the
literal content inside the block is preserved.


## Insight for the FailProof team

- **The replay-against-old-traces loop is a real product gap.** FailProof already writes every policy decision to `~/.failproofai/hook.log` (NDJSON), but the logs are framed as a debug surface for fail-open errors, not as a corpus to re-evaluate against tighter rules. Agentctl's framing - "permissive for a week, then tighten and replay" - reverses the iteration loop: you author policy *from observed behavior*, not from imagined risks. Worth prototyping a `failproofai replay --policy <new-policy.js>` that re-runs stored decisions against a candidate ruleset and reports the diff. The design tax is making the trace self-contained enough to replay deterministically across SDK/tool-name churn (the question I asked the OP).
- **Trace schema becomes a public contract once replay exists.** Today our hook.log schema is internal. If we add replay, the jsonl shape (field names, normalization of toolInput, version stamps) becomes a thing teams will pin on - so it needs versioning before the feature ships, not after.
- **Audience here is the high-end "I am willing to write Go to gate my agent" crowd**: the same crowd that prefers code-as-policy over YAML rules. They are likely receptive to FailProof's JS-policy story but allergic to anything that smells like SaaS. Lead with "no hosted component" framing in the README (we already have it; the agentctl OP centered it as a design feature, suggesting it lands).
- **Voice signal:** the OP's "the part I didn't expect to use as much as I do" is a great structural device for our own writing. Concrete, anti-marketing, and reads as honest because it admits a wrong prior. Worth borrowing in future blog posts about which FailProof policies actually fire vs which ones we expected to.

## Notes / findings

- MCP launch-order trap struck again: the `browser-use` MCP root client booted pointing at `http://127.0.0.1:9333` (Reddit harness port), so every MCP read primitive failed with "connect() timed out". Worked around it via the `browser-use` CLI subprocess form documented in `INSTRUCTIONS.md`. No new info here, but confirming the workaround still holds in May 2026.
- Reply form is rendered on this thread; thread is 11h old and uncomplaining. The OP's account `chocks` is active recently and the thread hasn't been flagged.
- 0 comments at draft time means the OP will see this as the first response. That asymmetrically raises the bar on tone: a flagged-shape comment here would be visibly disrespectful to a solo dev's WIP. The substantive-engagement-first form (compliment-then-question-then-disclose) was the right structure.
- Draft is ~125 words; in the safe zone (working FailProof reply was ~110, flagged was ~220). One repo URL (in disclosure), no install commands, no policy-name comma-list, no dashboard plug, no version-number talk.