Proof-of-concept demonstrating WebMCP — a proposed browser API that lets web pages expose tools to AI models through `navigator.modelContext`.
Three interactive demos show an AI model navigating real data, calling tools, and updating the live UI — running entirely in the browser with no backend.
| Demo | Data source | Description |
|---|---|---|
| Hospital Risk Explorer | Local JSON | 15 California hospitals — filter, compare, flag, and analyze financial and risk metrics |
| World Countries | REST Countries API | Every country — filter by region, compare metrics, explore profiles |
| Earthquake Monitor | USGS live feed | Last 30 days of global seismic activity — magnitude, depth, significance, tsunami warnings |
The WebMCP spec defines `navigator.modelContext` — a proposed browser API for registering tools that AI models can discover and call. A minimal polyfill (~10 lines) stands in until browsers implement it natively.
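A sketch of such a polyfill, assuming a `registerTool`/`listTools` surface and an unregister handle (the actual spec shape may differ):

```javascript
// Minimal polyfill sketch for navigator.modelContext. The method names here
// (registerTool, listTools, the returned unregister handle) are illustrative
// assumptions, not necessarily the exact surface the WebMCP spec settles on.
const nav = globalThis.navigator ?? (globalThis.navigator = {});

if (!nav.modelContext) {
  const tools = new Map();
  nav.modelContext = {
    // A tool carries a name, typed input schema, annotations, and a handler.
    registerTool(tool) {
      tools.set(tool.name, tool);
      // Handle letting the page remove the tool when app state changes.
      return { unregister: () => tools.delete(tool.name) };
    },
    listTools: () => [...tools.values()],
  };
}
```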
Each demo implements a runtime on top of that:
1. **Agentic tool loop** — SSE stream → parse `tool_use` blocks → execute against local handlers → inject `tool_result` → continue. A full agent loop running entirely in the browser.
2. **Reactive tool surface** — The available tool set is a function of app state, not a static declaration. Tools materialize and disappear as state changes. Schema enums rewrite themselves from live UI data — the model can't hallucinate options that aren't currently on screen.
3. **Annotation-driven trust policy** — `readOnlyHint`, `destructiveHint`, and `idempotentHint` aren't cosmetic metadata — they're a runtime execution policy. Read-only tools run immediately. Destructive tools pause and require human confirmation before proceeding.
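The agent loop in step 1 can be sketched as below; `callModel` stands in for the SSE request/parse step (this sketch just awaits the parsed result), and the message shapes follow Anthropic's tool-use content-block format:

```javascript
// Sketch of the browser-side agent loop: call the model, execute any requested
// tools against local handlers, inject the results, and continue until the
// model replies with plain text.
async function runAgentLoop(callModel, handlers, messages) {
  for (;;) {
    const response = await callModel(messages);
    messages.push({ role: "assistant", content: response.content });
    // Collect any tool_use blocks the model emitted this turn.
    const calls = response.content.filter((b) => b.type === "tool_use");
    if (calls.length === 0) return response; // plain text: we're done
    const results = [];
    for (const call of calls) {
      const output = await handlers[call.name](call.input); // local execution
      results.push({ type: "tool_result", tool_use_id: call.id, content: output });
    }
    messages.push({ role: "user", content: results }); // inject results, loop again
  }
}
```

Because the loop only exits when a turn contains no tool calls, a single prompt can chain an arbitrary number of tool invocations.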
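The trust policy in step 3 reduces to a small dispatch over the annotations; the function and callback names here are illustrative:

```javascript
// Sketch of an annotation-driven execution policy. confirm is an async user
// prompt (e.g. a confirmation modal) that resolves to true or false.
async function executeWithPolicy(tool, input, confirm) {
  const hints = tool.annotations ?? {};
  if (hints.readOnlyHint) {
    return tool.execute(input); // read-only: run immediately, no prompt
  }
  if (hints.destructiveHint) {
    const approved = await confirm(`Run destructive tool "${tool.name}"?`);
    if (!approved) return { cancelled: true }; // human declined: never execute
  }
  return tool.execute(input); // non-destructive writes run without confirmation
}
```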
- Tool registration — Pages declare tools with typed schemas; models discover and call them via `navigator.modelContext`
- Annotations — `readOnlyHint`, `idempotentHint`, and `destructiveHint` communicate tool semantics and drive UI (confirmation dialogs, badge display)
- Dynamic tool registration — Tools appear/disappear based on app state; e.g. flagging a record materializes `review_flags` and `clear_all_flags`, removing the flag removes them
- Reactive schemas — Input enums rewrite from live UI data; filtering the table changes what values the model can reference
- Page context — Live context bar shows the model's view of current page state; updates in real-time as the user navigates
- Read & write tools — Filter, compare, summarize (read-only); flag/unflag records with undo (write)
- Non-rendering tools — Some tools return data for model reasoning without updating the visible UI
- Multi-step orchestration — A single prompt chains 4+ tool calls in sequence
- Bidirectional context — UI interactions inject context into the conversation; tool calls update the UI
- Streaming — SSE for real-time response rendering with abort support
- Artifact generation — Tools can produce downloadable outputs (e.g. CSV export)
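A reactive schema boils down to rebuilding the tool definition from whatever the UI currently shows; `buildCompareTool` and its field names are hypothetical:

```javascript
// Sketch of a reactive schema: the enum is rebuilt from the rows currently
// visible in the table, so the model can only reference on-screen values.
function buildCompareTool(visibleRows) {
  return {
    name: "compare_records",
    description: "Compare records currently visible in the table",
    inputSchema: {
      type: "object",
      properties: {
        names: {
          type: "array",
          // Enum derived from live UI state, not a static declaration.
          items: { type: "string", enum: visibleRows.map((r) => r.name) },
        },
      },
      required: ["names"],
    },
  };
}
```

Re-registering this tool whenever the filter changes is what keeps the model's option space in sync with the screen.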
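Artifact generation can be as small as building a CSV string from the current rows; `toCsv` is a hypothetical helper (in the browser the string would then become a Blob and object URL for download):

```javascript
// Hypothetical CSV-export helper for the artifact-generation feature.
function toCsv(rows) {
  const headers = Object.keys(rows[0]);
  const escape = (v) => {
    const s = String(v);
    // Quote fields containing commas, quotes, or newlines (RFC 4180 style).
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  return [
    headers.join(","),
    ...rows.map((r) => headers.map((h) => escape(r[h])).join(",")),
  ].join("\n");
}
```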
Run `npx serve .`, then open http://localhost:3000, pick a demo, and select a provider in the settings panel.
All three demos support the same set of AI providers:
| Provider | Models | Auth |
|---|---|---|
| Anthropic | Claude Haiku 4.5, Claude Sonnet 4.6 | API key |
| GitHub Models | GPT-4.1, GPT-4.1 mini, GPT-5, GPT-5 mini | GitHub OAuth (free) |
| Local proxy | Claude via OAuth | `node local-proxy.js` |
The local proxy (`local-proxy.js`) forwards requests to Claude using `CLAUDE_CODE_OAUTH_TOKEN` — useful for development without an API key.
Single-page apps — no build step, no external dependencies.
index.html # Landing page linking to all three demos
hospital-risk-explorer/
index.html # Hospital demo — 9–11 dynamic tools, all views
hospitals.json # 15 California hospitals (local JSON dataset)
countries/
index.html # Countries demo — fetches restcountries.com at load
earthquakes/
index.html # Earthquake demo — fetches USGS live feed at load
local-proxy.js # Local Claude proxy (port 7337, uses CLAUDE_CODE_OAUTH_TOKEN)
chat.css # Shared chat panel styles
docs/
VISION.md # What this is building toward
ROADMAP.md # Phased extraction plan