Scope target-extensible IR project: multi-schema Postgres, cross-namespace FKs, Supabase foundation (TML-2459) #460

wmadden wants to merge 215 commits into
Conversation
Adds projects/target-extensible-ir/ capturing the shaping work for flipping Contract IR and Schema IR from flat data shapes to a three-layer polymorphic class hierarchy (framework interfaces -> family abstract bases -> target concrete classes), modelled on the existing OpFactoryCall pattern (ADR 195) and Mongo Schema IR. The spec covers: the JSON-canonical / class-in-memory round-trip (ADR 192/196), domain-operation layering (verifier and hydrator follow the same recipe), two structural exemplars (enums first as a refactor of an existing hacked solution, namespace second as a new first-class concept with the __unspecified__ sentinel), Mongo migration to the 3-layer split, and durable documentation deliverables (AGENTS.md rule split, typescript-patterns.md sibling section, Architecture Overview principles update, ADR drafts). The plan is a stub awaiting a principal-engineer pass to size the blast radius and decide infrastructure sequencing, Mongo timing, and documentation timing. Refs: TML-2459
…e plan

Resolve PE-pass open questions inline and lift cross-namespace FK plus namespace authoring DSL into scope.

Spec:
- ContractSerializer SPI (deserializeContract + serializeContract); validateContract removed.
- HAS-A target facade with composed SPI implementers (target.contractSerializer, target.schemaVerifier, ...).
- Per-SPI family abstract bases preserved as the inheritance ramp for extension/target authors.
- New FR16a/FR16b plus AC4a covering namespace authoring DSL (PSL + TS builder) and cross-namespace FKs.
- AC4 restored to the FL-02 cross-namespace FK demonstration.
- Sequencing constraints documented: TML-2457, TML-2463, TML-2408 must precede; TML-2464 must follow.
- Open Questions section deleted; resolutions live in their proper homes.

Plan:
- Seven milestones M1 -> M2 -> M3 -> M4 -> M5a -> M5b -> M6 (foundation -> Mongo first consumer -> Postgres SPI shells -> enum exemplar -> namespace + DSL -> cross-namespace FK -> docs).
- M5b is the FL-02 fix that unblocks the Supabase project.
- Cross-project dependencies and sequencing recorded against the Linear ticket IDs.
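The three-layer split (framework interface, family abstract base, target concrete class) and the HAS-A target facade described above can be sketched in TypeScript. All names below are illustrative stand-ins for the real project classes, assuming only the ContractSerializer SPI shape named in the spec (serializeContract + deserializeContract):

```typescript
// Layer 1: framework-level SPI interface.
interface ContractSerializer {
  serializeContract(contract: object): string;
  deserializeContract(json: string): object;
}

// Layer 2: family abstract base — the inheritance ramp for target authors.
// Shared family behaviour lives here; target-specific parts stay abstract.
abstract class SqlContractSerializerBase implements ContractSerializer {
  serializeContract(contract: object): string {
    return JSON.stringify(contract, null, 2); // JSON-canonical form
  }
  abstract deserializeContract(json: string): object;
}

// Layer 3: target concrete class.
class PostgresContractSerializer extends SqlContractSerializerBase {
  deserializeContract(json: string): object {
    return JSON.parse(json); // class-in-memory form
  }
}

// HAS-A facade: the target composes SPI implementers rather than
// inheriting them (target.contractSerializer, target.schemaVerifier, ...).
const target = { contractSerializer: new PostgresContractSerializer() };

const roundTripped = target.contractSerializer.deserializeContract(
  target.contractSerializer.serializeContract({ models: [] }),
);
```

The sketch shows the round-trip property the spec cares about: serialize then deserialize recovers an equivalent in-memory value.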
📝 Walkthrough

Large multi-area change adding index-type/type-options support and validation, removing legacy OperationTypes in favor of queryOperationTypes, introducing aggregate multi-space migration planning/apply and per-space outputs, extending runtime middleware with a param-ref mutator and raw-SQL AST, unifying marker reads, and adding a CipherStash extension plus several examples and tests. Many docs, skills, and emitter/test updates accompany the behavior changes.

Changes
- Index-type registry, validation, and authoring
- OperationTypes removal and TypeMaps / emitter adjustments
- CLI: multi-space aggregate planning/apply and UX
- Runtime, middleware, param-ref mutator, raw-SQL, and marker read
- CipherStash extension and integration example
- Examples additions & utilities
- Docs, agents, rules, and misc
Sequence Diagram(s)

sequenceDiagram
autonumber
actor User as User
participant CLI as CLI (migration-apply)
participant Loader as Aggregate Loader
participant Planner as Graph-walk Planner
participant Runner as Multi-space Runner
participant DB as Database
rect rgba(200, 230, 255, 0.5)
User->>CLI: prisma-next migration apply
CLI->>Loader: buildContractSpaceAggregate(contract.json, extensionPacks, appPackages)
Loader-->>CLI: Aggregate { members, graphs }
CLI->>DB: readAllMarkers()
DB-->>CLI: markersBySpace (optional)
CLI->>Planner: plan per space (graphWalkStrategy, markers, ref?)
Planner-->>CLI: applyOrder + perSpace plans
alt No pending ops
CLI-->>User: Success (perSpace breakdown, 0 ops)
else Pending ops
CLI->>Runner: executeAcrossSpaces(applyOrder, perSpaceOptions)
Runner->>DB: apply space N operations (transactional)
DB-->>Runner: applied results per space
Runner-->>CLI: perSpaceResults
CLI-->>User: Success (applied totals, perSpace, pathDecision?)
end
end
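The multi-space apply loop the diagram shows can be sketched as follows. This is a hypothetical sketch, not the real runner: `executeAcrossSpaces` and the plan/result shapes are simplified stand-ins, and transaction handling is elided:

```typescript
// One plan per contract space, applied in dependency (graph-walk) order.
interface SpacePlan { space: string; ops: string[] }
interface SpaceResult { space: string; applied: number }

async function executeAcrossSpaces(
  applyOrder: SpacePlan[],
  applyOps: (space: string, ops: string[]) => Promise<number>,
): Promise<SpaceResult[]> {
  const results: SpaceResult[] = [];
  for (const plan of applyOrder) {
    // Each space's operations run as one transactional apply (elided here),
    // and the per-space result feeds the CLI's perSpace breakdown.
    const applied = await applyOps(plan.space, plan.ops);
    results.push({ space: plan.space, applied });
  }
  return results;
}
```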
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~180 minutes
@prisma-next/mongo-runtime
@prisma-next/family-mongo
@prisma-next/sql-runtime
@prisma-next/family-sql
@prisma-next/extension-arktype-json
@prisma-next/middleware-telemetry
@prisma-next/mongo
@prisma-next/extension-paradedb
@prisma-next/extension-pgvector
@prisma-next/postgres
@prisma-next/sql-orm-client
@prisma-next/sqlite
@prisma-next/target-mongo
@prisma-next/adapter-mongo
@prisma-next/driver-mongo
@prisma-next/contract
@prisma-next/utils
@prisma-next/config
@prisma-next/errors
@prisma-next/framework-components
@prisma-next/operations
@prisma-next/ts-render
@prisma-next/contract-authoring
@prisma-next/ids
@prisma-next/psl-parser
@prisma-next/psl-printer
@prisma-next/cli
@prisma-next/emitter
@prisma-next/migration-tools
prisma-next
@prisma-next/vite-plugin-contract-emit
@prisma-next/mongo-codec
@prisma-next/mongo-contract
@prisma-next/mongo-value
@prisma-next/mongo-contract-psl
@prisma-next/mongo-contract-ts
@prisma-next/mongo-emitter
@prisma-next/mongo-schema-ir
@prisma-next/mongo-query-ast
@prisma-next/mongo-orm
@prisma-next/mongo-query-builder
@prisma-next/mongo-lowering
@prisma-next/mongo-wire
@prisma-next/sql-contract
@prisma-next/sql-errors
@prisma-next/sql-operations
@prisma-next/sql-schema-ir
@prisma-next/sql-contract-psl
@prisma-next/sql-contract-ts
@prisma-next/sql-contract-emitter
@prisma-next/sql-lane-query-builder
@prisma-next/sql-relational-core
@prisma-next/sql-builder
@prisma-next/target-postgres
@prisma-next/target-sqlite
@prisma-next/adapter-postgres
@prisma-next/adapter-sqlite
@prisma-next/driver-postgres
@prisma-next/driver-sqlite
commit:
Captures the Mongo-family-specific authoring story that was deliberately out of scope for Subsystem 7 (which owns runner-side concepts):
- Authoring entry points (migration plan/new --target mongo).
- Target-owned scaffolding pipeline (planner IR, two renderers, factory modules) with cross-links to ADRs 194/195/196/197/198.
- $jsonSchema validator generation, including the FL-11 / TML-2448 gap where Float fields are silently dropped.
- Polymorphic-variant behaviour today and the FL-09/FL-10 / TML-2447 gaps in the planner.
- Author-editable seams: how migration.ts and ops.json relate, what the runner guarantees about the round-trip, and where Subsystem 7 owns the rest.

Cross-links Subsystem 7 throughout instead of restating runner-side concepts. No implementation changes.

Refs: TML-2452
Signed-off-by: Will Madden <madden@prisma.io>
…em 7 Add an "Authoring intermediate state" section to Subsystem 7 covering the on-disk shapes a migration package can take (placeholder-bearing vs filled), the runner visibility rule (manifest presence, not ops.json), how the state arises in planner-led and custom flows, what self-emitting migration.ts does to materialize ops.json, and the diagnostics a user sees while in the state. Refs: TML-2455 Signed-off-by: Will Madden <madden@prisma.io>
Adds docs/architecture docs/subsystems/11. CLI.md describing the CLI as a subsystem: distribution model (bin-only prisma-next vs scoped @prisma-next/cli, per ADR 211), command surface, init flow architecture (module decomposition under src/commands/init/), programmatic API subpaths, and layering (framework/tooling/migration plane). The doc is architectural and cross-links the CLI Style Guide and ADR 211 for normative UX/distribution content rather than restating it. Adds back-links from the Style Guide intro and ADR 211 references, and lists the new doc plus the existing MongoDB Family doc in the Architecture Overview subsystems index (the Mongo entry was previously missing). Refs: TML-2451 Signed-off-by: Will Madden <madden@prisma.io>
Mirrors `packages/3-extensions/test-contract-space/` (the contract-space
fixture) and `packages/3-extensions/pgvector/` (the production reference
shape) so the new `@prisma-next/extension-cipherstash` package boots
with the standard tsdown / tsconfig / vitest / biome layout. Public
package (not `private`); single `./control` export entry; deps mirror
the closest workspace neighbour (test-contract-space) so the contract
space the next slice adds can compose `@prisma-next/family-sql/control`,
`@prisma-next/migration-tools/{hash,spaces}`, and the framework
contract types without further dependency churn.
R1 (TML-2397, project: extension-contract-spaces) is the skeleton-in
for cipherstash on the per-space planner / runner / verifier landed in
M1 + M2; the codec runtime and codec lifecycle hook are intentionally
deferred to M3 R2 (T3.4).
Refs: TML-2397
Land cipherstash as the first real consumer of the framework
contract-space mechanism shipped in M1 + M2 of the
extension-contract-spaces project. The descriptor exposes
`contractSpace: { contractJson, migrations, headRef }` so the per-space
planner / runner / verifier manage cipherstash the same way they
manage an application contract — no new framework code needed.
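The descriptor surface described above might look roughly like the sketch below. Only the three `contractSpace` field names (`contractJson`, `migrations`, `headRef`) come from the commit message; every other shape and value is an illustrative assumption:

```typescript
// Hypothetical shape of an extension descriptor that participates in
// the contract-space mechanism.
interface ContractSpaceDescriptor {
  contractSpace: {
    contractJson: object;                      // emitted contract IR
    migrations: { dirName: string }[];         // baseline migration packages
    headRef: { hash: string; invariants: string[] };
  };
}

const cipherstash: ContractSpaceDescriptor = {
  contractSpace: {
    contractJson: { storage: {} },
    migrations: [{ dirName: "20260601T0000_install_eql_bundle" }],
    headRef: {
      hash: "deadbeef", // placeholder; derived via computeStorageHash in reality
      invariants: ["cipherstash:install-eql-bundle-v1"],
    },
  },
};
```

The point of the shape is that the per-space planner / runner / verifier consume only this descriptor, so the extension space is managed like an application contract.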
Contract IR (T3.1, sub-spec § 2 — TC-13 partial PASS).
Declares `eql_v2_configuration` with `(id text PK, state
eql_v2_configuration_state, data jsonb)` — the only typed object the
current SQL `StorageBase` IR can model directly. The
composite/enum/domain types the spec ultimately wants in IR
(`eql_v2_encrypted`, `eql_v2_configuration_state`, ORE composites,
`eql_v2.bloom_filter`/`hmac_256`/`blake3` domains) are recorded on
`contract.meta.cipherstashFutureIR` as a documentary placeholder so
they remain reviewable in the emitted `contract.json` but do not
perturb the planner, which has no first-class enum / composite /
domain vocabulary yet. See R1 return report for the deferral rationale
— surfacing IR-vocabulary expansion before the cipherstash skeleton
would inflate scope significantly and is best done after R1 lands the
framework wiring.
Baseline migration (T3.2, sub-spec § 3 — TC-12 structural PASS, AC7
gated on real bundle SQL).
A single migration package `20260601T0000_install_eql_bundle/` carrying:
- `cipherstash:install-eql-bundle-v1` — body = the vendored EQL
bundle SQL string, pulled from `./eql-bundle` so swapping in the
real bundle byte-for-byte is a single-file change at one seam
(the seam itself is now contract-frozen). The R1 string is a
clearly-marked placeholder; the real bundle is owned by the
cipherstash team and not yet checked into this monorepo.
- `cipherstash:create-eql_v2_configuration_state-v1`,
`cipherstash:create-eql_v2_configuration-v1`,
`cipherstash:create-eql_v2_encrypted-v1`, three
`cipherstash:create-eql_v2_<domain>-v1` ops, two
`cipherstash:create-eql_v2_<ore-composite>-v1` ops — the structural
create-* ops the M3 sub-spec § 3 op list requires. Authored as
full SQL-family runtime ops (`{ target, precheck, execute,
postcheck }`) and stored in `MigrationOps` (the framework-level
base shape) per the same pattern shipped in M2 R5
(`db-apply-per-space.cli.test.ts` SQLite fixture).
Each op carries a stable `cipherstash:*` invariantId; the ids are
centralised in `core/constants.ts` so the head-ref derivation, the
migration metadata, and any downstream consumer reference the same
literal strings (project spec FR11 — invariantIds are immutable once
published).
Head ref + descriptor wiring (T3.3, sub-spec § 4 — partial AC1 / AC8
prep).
`contractSpace.headRef.hash` is derived from the same
`computeStorageHash(...)` the assembler runs at family-create time, so
`assertDescriptorSelfConsistency` is satisfied unconditionally (locked
in the new descriptor test). `contractSpace.headRef.invariants` is the
sorted-deduped union of the baseline ops `invariantId`s. The
descriptor intentionally omits `databaseDependencies` — cipherstash is
greenfield on contract spaces and never used the legacy mechanism
(spec FR13).
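The self-consistency property described above (the descriptor's pinned hash must equal a hash recomputed from the same contract IR) can be sketched as follows. `computeStorageHash` here is a stand-in JSON digest, not the real assembler function:

```typescript
import { createHash } from "node:crypto";

// Stand-in for the real assembler hash: digest the storage shape only.
function computeStorageHash(storage: object): string {
  return createHash("sha256").update(JSON.stringify(storage)).digest("hex");
}

// If the descriptor derives headRef.hash from the same function at
// build time, this check is satisfied unconditionally.
function assertDescriptorSelfConsistency(descriptor: {
  storage: object;
  headRef: { hash: string };
}): void {
  if (computeStorageHash(descriptor.storage) !== descriptor.headRef.hash) {
    throw new Error("descriptor headRef.hash does not match contract IR");
  }
}
```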
Verification.
- 8 descriptor tests in `test/descriptor.test.ts` cover descriptor
identity, contract-space wiring, deferred-IR record, baseline op
set + namespace, head ref alignment, and `assertDescriptorSelfConsistency`
success.
- `pnpm typecheck` PASS.
- `pnpm lint:deps` PASS (no layering violations, no
target-* leakage in 1-framework).
- `pnpm --filter @prisma-next/extension-cipherstash build` PASS
(tsdown emits a 16 kB ESM control entry).
- `pnpm test:packages` clean except for an unrelated
`@prisma-next/mongo-runtime` flake on the first run that passed on
rerun (mongo-memory-server resource contention; pre-existing per
`wip/unattended-decisions.md`).
Refs: TML-2397
…3 R1)

Reviewer second-pass on M3 R1 ratified the round and accepted all three implementer-flagged deviations:
- Composite/enum/domain IR vocabulary deferred (option b) — endorsed as safe for AC-1 / AM11 because the verifier is symmetrically blind (SqlStorage, SqlSchemaIR introspector, and verifySqlSchema strict-mode extras-detection all enumerate Object.keys(schemaTables) only).
- EQL bundle SQL is a clearly-marked placeholder at src/core/eql-bundle.ts. Reviewer second pass surfaced that the real bundle (eql-2.2.1, 5751 lines) is already vendored on sibling branch tml-2373-project-1-part-2 at c38c83b — option (e) port planned for M3 R2 per decision 14.
- TS-authored vs JSON-on-disk layout — convention consistent with test-contract-space and pgvector. Sub-spec amendment candidate (orchestrator item 20).

Plan-doc now reflects the M3 R1 round outcome. Notes on each task record the deviation + planned R2 follow-up.
Replace the R1 placeholder string with the byte-for-byte vendored EQL bundle (5,751 lines, version eql-2.2.1) cherry-picked verbatim from sibling branch tml-2373-project-1-part-2 at c38c83b. eql-bundle.ts now re-exports the generated constant under EQL_BUNDLE_SQL so consumers (the installEqlBundle migration op) keep the same import. Descriptor self-consistency still holds: headRef.hash digests the contract IR, the bundle lives inside the migration op's execute[] (only migrationHash changes). Unblocks T3.6 e2e against the real bundle in M3 R3.
…e hook (T3.4)

Implement onFieldEvent per cipherstash sub-spec § 4: emit add_search_config on field-added (searchable: true), remove_search_config on field-dropped (searchable: true), and a single rotate-search-config op carrying remove + add SQL when both sides are searchable but other typeParams differ. Searchable != true paths return [] — column-type changes are handled by structural ops the user emits.

Each emitted op carries a deterministic cipherstash-codec:<table>.<field>:<action>@v1 invariantId. The descriptor exposes the hook under types.codecTypes.controlPlaneHooks (the path the SQL family extracts via extractCodecControlHooks), so planFieldEventOperations inlines the ops into the application-space migration alongside structural DDL. First real consumer of the M2 R1 codec-hook plumbing. Satisfies TC-14 / AC3.4.
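The onFieldEvent decision table can be sketched as below. This is a simplified stand-in, not the real hook: the rotate-search-config case (both sides searchable, typeParams differ) is omitted, and the event shape, action names, and SQL bodies are illustrative assumptions; only the invariantId pattern follows the commit message:

```typescript
type FieldEvent =
  | { kind: "field-added"; table: string; field: string; searchable: boolean }
  | { kind: "field-dropped"; table: string; field: string; searchable: boolean };

interface EmittedOp { invariantId: string; execute: string[] }

function onFieldEvent(ev: FieldEvent): EmittedOp[] {
  // Non-searchable paths return []: column-type changes are handled by
  // the structural ops the user emits, not by the codec hook.
  if (!ev.searchable) return [];
  const id = (action: string) =>
    `cipherstash-codec:${ev.table}.${ev.field}:${action}@v1`;
  if (ev.kind === "field-added") {
    return [{
      invariantId: id("add"),
      execute: [`SELECT add_search_config('${ev.table}', '${ev.field}')`], // illustrative SQL
    }];
  }
  return [{
    invariantId: id("remove"),
    execute: [`SELECT remove_search_config('${ev.table}', '${ev.field}')`], // illustrative SQL
  }];
}
```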
…e layers (T3.5)

Strengthen the existing absence check (R1) into a regression block that pins both syntactic and semantic shape:
- the descriptor object literally omits the property,
- the SQL family duck-type isDatabaseDependencyProvider returns false,
- collectInitDependencies returns [] when given just the descriptor.

Cipherstash never used the legacy databaseDependencies.init mechanism; this regression test prevents a future descriptor change from re-introducing the field by accident. Project spec FR13 — the contract-space mechanism replaces databaseDependencies for schema-contributing extensions. Satisfies TC-1 (clean cipherstash descriptor surface).
…descriptor

AM11: with the installed package directory removed (a stand-in for rm -rf node_modules/@prisma-next/extension-cipherstash), the per-space verifier helpers (listPinnedSpaceDirectories, readPinnedHeadRef, verifyContractSpaces) succeed against pinned migrations/cipherstash/ files alone — no descriptor module resolution.

AM12: re-running the materialisation pass (existence-check then writeExtensionMigrationPackage) against an already-emitted migrations/cipherstash/<dirName>/ leaves migration.json, ops.json, and contract.json byte-untouched. Holds even after the descriptor module is removed mid-test, mirroring the AM11 condition.

Mirrors packages/1-framework/3-tooling/migration/test/deletable-node-modules.test.ts (which exercises the same property for an abstract synthetic space) and pins it against the real cipherstash descriptor, so a future refactor that accidentally introduced a descriptor import inside the verify-time code path would regress here. Locks AM11 + AM12 to full PASS for cipherstash.
Reviewer ratified M3 R2 as clean (verdict ANOTHER ROUND NEEDED only because T3.6 / T3.7 / T3.8 / T3.9 remain). All four R2 items landed clean and all three implementer-flagged deviations accepted. Plan-doc now reflects:
- T3.4 (codec lifecycle hook) — landed at 28dec26; first real consumer of M2 R1 onFieldEvent plumbing; M2 R1 surface validated; 16 unit tests + 3 driving-tests through planFieldEventOperations.
- T3.5 (databaseDependencies removal) — no-op for cipherstash (never had databaseDependencies); landed three-layer absence regression at e07b52a pinning syntactic property absence + isDatabaseDependencyProvider duck-type + collectInitDependencies collector.

T3.6 / T3.7 / T3.8 / T3.9 remain. Per orchestrator decision 15, R3 will focus on T3.6 (PGlite e2e — heaviest item) and R4 will cover the mechanical close-out (T3.7 + T3.8 + T3.9).
…conflict (option c)
The EQL bundle (executed by installEqlBundleOp) creates every typed object
CipherStash needs (eql_v2 schema, eql_v2_configuration table, the configuration_state
enum, eql_v2_encrypted composite, the eql_v2.bloom_filter/hmac_256/blake3 domains,
and the ore_* composites). The structural cipherstash:create-*-v1 ops would
conflict with the bundle's CREATEs if both fired against the same database —
Postgres rejects duplicate CREATE TYPE/TABLE.
Resolution: the structural ops keep their stable cipherstash:* invariantIds
(so the marker's applied_invariants matches cipherstashHeadRef.invariants and
the verifier's invariantsMismatch gate passes) but their execute[] body becomes
a no-op SELECT 1. The bundle owns the typed-object DDL; the structural ops own
the invariantId ledger entry. Clean separation.
Considered alternatives:
- (a) Drop structural ops: would break headRef.invariants vs marker
invariantsMismatch — verifier integrity drops.
- (b) Wrap CREATEs with idempotency guards: Postgres has no CREATE TYPE/
DOMAIN IF NOT EXISTS — needs DO blocks with EXCEPTION handling.
Uglier, harder to reason about, doesn't help unify the marker semantics.
- (c) Selected: structural ops are bookkeeping only, SELECT 1 body.
Future extension: once the IR vocabulary models enums/composite types/domains
as first-class storage objects (cipherstashFutureIR block in contract.ts), the
structural ops will gain real verification work and precheck SQL probing
pg_type / information_schema. Today they are pure ledger entries.
Unblocks T3.6 e2e — the bundle + structural ops can now run together
end-to-end without conflict.
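Option (c) can be illustrated with a sketch of one structural op: it keeps its ledger-visible invariantId while the EQL bundle owns the real DDL. The op shape here is a simplified assumption (the real ops also carry target/precheck/postcheck fields):

```typescript
// Sketch of a bookkeeping-only structural op under option (c).
interface MigrationOp {
  invariantId: string;
  operationClass: "additive";
  execute: string[];
}

const createEncryptedType: MigrationOp = {
  invariantId: "cipherstash:create-eql_v2_encrypted-v1",
  operationClass: "additive",
  // The bundle already CREATEd the composite type; the structural op
  // exists only so the marker's applied_invariants ledger stays complete.
  execute: ["SELECT 1"],
};
```

The design trade is visible in the sketch: the invariantId entry satisfies the verifier's invariantsMismatch gate, while the no-op body avoids duplicate CREATE TYPE/TABLE failures.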
…veProvidedInvariants
The framework-mechanism sub-spec (§ 7.5 test extension example, line 372)
shows an additive-class baseline op carrying an invariantId, with a matching
headRef.invariants entry. The previous implementation only included data-class
ops in deriveProvidedInvariants, which left no path for an extension to track
additive opaque DDL (e.g. cipherstash's vendored EQL bundle) at the marker
level — the marker would be missing the invariants the headRef declared, and
the verifier's invariantsMismatch gate would reject the apply.
Resolution: deriveProvidedInvariants now reads invariantIds from any op,
regardless of operationClass. The two axes are intentionally orthogonal:
- operationClass classifies *policy gating* (additive / widening /
destructive / data — which kinds a db init / db update policy permits).
- invariantId classifies *marker bookkeeping* (which named bundles of work
a future regeneration knows to skip).
Cipherstash's installEqlBundleOp + structural create-* ops are the canonical
additive-with-invariantId case (cipherstash-migration.spec.md § 3): they are
additive-class (init policy admits them) but cannot be re-derived from contract
IR (so they need a stable invariantId for the marker to track).
Test changes: the existing "skips non-data ops" case is replaced with one
that asserts additive-with-invariantId ops contribute alongside data ops.
The previous test's only documented constraint, that a stray invariantId on an
additive op was ignored, is deliberately inverted: an invariantId on an
additive op is now considered a feature, not a bug.
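The widened derivation described above can be sketched as a small pure function: invariantIds are collected from every op regardless of operationClass, then sorted and de-duplicated for the headRef/marker comparison. Shapes are illustrative stand-ins:

```typescript
type OpClass = "additive" | "widening" | "destructive" | "data";

interface Op {
  operationClass: OpClass;
  invariantId?: string; // orthogonal to operationClass: marker bookkeeping only
}

// Collect invariantIds from ANY op (the M3 R3 widening); ops without an
// invariantId simply contribute nothing to the marker ledger.
function deriveProvidedInvariants(ops: Op[]): string[] {
  const ids = ops.flatMap((op) => (op.invariantId ? [op.invariantId] : []));
  return [...new Set(ids)].sort();
}
```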
…ring codec
The Postgres planner refuses to render any column carrying typeParams unless
an expandNativeType (or planTypeOperations) hook is registered for the
column's codec id. Without this hook, an `Encrypted<String, { searchable: true }>`
column on the application's contract aborts planning at
packages/3-targets/3-targets/postgres/src/core/migrations/planner-ddl-builders.ts
with "no expandNativeType hook is registered for codecId cipherstash/string@1".
The DDL type for an Encrypted<string> column is always eql_v2_encrypted
regardless of any searchable typeParams: the search-config wiring is delivered
by the codec hook's add_search_config op (separate row in eql_v2_configuration),
not by the column type itself. The hook is therefore an identity — return
nativeType unchanged. arktype-json wires the same identity hook for the same
reason (jsonb is dimension-free; typeParams describe runtime validation only).
Unblocks the T3.6 e2e Scenario A test where the application contract carries
a real Encrypted<String, { searchable: true }> column wired to the cipherstash
codec.
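The identity hook described above is tiny when sketched; the signature below is an assumption (the real expandNativeType hook surface is not shown in this message), but the behaviour matches the rationale: the DDL type is always eql_v2_encrypted, so the hook returns nativeType unchanged:

```typescript
interface NativeType { name: string }

// Identity expandNativeType hook: searchable typeParams never change the
// column's DDL type; search-config wiring is delivered by the codec
// lifecycle hook's add_search_config op, not by the column type.
function expandNativeType(
  nativeType: NativeType,
  _typeParams: Record<string, unknown>,
): NativeType {
  return nativeType;
}

const encrypted: NativeType = { name: "eql_v2_encrypted" };
```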
Three-test e2e covering cipherstash sub-spec § 6 / T3.6:
1. AC-7 byte-equivalence on disk — pinned migrations/cipherstash/<dir>/ops.json carries the vendored EQL_BUNDLE_SQL byte-for-byte (locks the half of AC-7 R2 left gated behind "the test reads the on-disk file").
2. Multi-space planning against the *real* cipherstash descriptor + bundle (mode=plan): asserts the plan includes the cipherstash baseline ops, the app-space CREATE TABLE User op, and the codec-hook-emitted add-search-config op for User.email — locks TC-14 (codec hook validated by its first real consumer) and the planning half of multi-space ordering.
3. Multi-space apply against a synthetic-bundle variant (mode=apply): asserts the runner's executeAcrossSpaces single-tx path lands marker rows for both spaces, the codec-hook-emitted SQL actually runs, and a literal eql_v2_encrypted insert + select round-trips through the composite type.

Why a synthetic bundle for the apply leg? The vendored EQL bundle includes CREATE EXTENSION IF NOT EXISTS pgcrypto, which PGlite does not ship (verified by enumerating @electric-sql/pglite/dist/contrib — uuid_ossp is present, pgcrypto is not). Trying to apply the real bundle aborts PGlite at the WASM level (RuntimeError: unreachable). The synthetic stub creates the minimum surface the codec hook touches (eql_v2 schema, eql_v2_encrypted composite, eql_v2_configuration table, add_search_config / remove_search_config functions) so the framework + hook wiring is exercised end-to-end against a real database. The real-bundle apply against a pgcrypto-equipped Postgres is deferred to M3 R4 / TML-2373 e2e infra. Scenario B (drop searchable column) and Scenario C (extension version bump) are also deferred to R4 per the orchestrator's R3-only scope.

New devDeps on cipherstash: @prisma-next/cli (executePerSpaceDbApply), @prisma-next/adapter-postgres, @prisma-next/driver-postgres, @prisma-next/target-postgres, @prisma-next/sql-schema-ir.
T3.6 Scenario A landed end-to-end against PGlite in M3 R3 with three substantive deviations, all accepted by reviewer:
1. Structural-ops/EQL-bundle conflict resolved via option (c): bundle owns DDL authority, structural ops carry invariantIds with no-op SELECT 1 bodies. Cleanest of the three considered options; the future-extension path (real precheck SQL on IR vocabulary expansion) is documented in migrations.ts.
2. Framework widening to deriveProvidedInvariants: cipherstash is the first real consumer with invariantIds on additive (DDL) ops. Sub-spec § 7.5 line 372 already prescribes this shape (synthetic test extension); M1/M2 close didn't catch it because the synthetic test extension hard-codes providedInvariants. Reviewer verified scope-correctness: only cipherstash sets both fields; no existing extension regressed.
3. PGlite/pgcrypto blocker: the real bundle declares CREATE EXTENSION pgcrypto; PGlite's WASM contrib doesn't ship pgcrypto. The apply leg uses a synthetic-bundle variant scoped to one op's SQL body in one test; tests 1 (byte-equivalence) and 2 (planning) use the real bundle. Live-DB real-bundle apply deferred to TML-2373 e2e infra per orchestrator item 23.

AC promotions:
- AC-7 PASS in full (byte-equivalence on disk against the real 5,751-line bundle).
- AC3.6 / TC-14 / AC-12 PARTIALLY PASS at framework + synthetic level.

R4 (M3 close-out) per orchestrator decision 16: T3.7 + T3.8 + T3.9 + item 21 (extract materialiseExtensionMigrationPackageIfMissing) + item 22 (constants drift cleanup, time-sensitive before publication). T3.10 explicitly de-scoped from this repo to TML-2373 e2e infra.
…alues with vendored bundle (M3 R4 item 22)
Pre-publication FR11 cleanup. The R1 stub list of ORE composites
(`[ore_block_u64_8_256, ore_cclw_u64_8]`) drifted from what the
vendored `eql-install.generated.ts` bundle actually CREATEs:
- missed two composites entirely (ore_block_u64_8_256_term, ore_cllw_var_8)
- typo on ore_cllw_u64_8 (was ore_cclw_u64_8)
Drift consequences before publication: (a) the structural
`cipherstash:create-eql_v2_<name>-v1` invariantId set was incomplete
(the verifier cannot assert these typed objects are owned by cipherstash),
and (b) the typo would have minted a stable invariantId for a name
that is never actually CREATEd by the bundle — once published, FR11
makes that immutable. R4 lands the fix while we still can.
Same pre-publication cleanup for `meta.cipherstashFutureIR.enums`:
the bundle declares the configuration_state enum with four values
(active, inactive, encrypting, pending); R1 listed only two.
Updated to match the bundle verbatim. The `meta.*` field is
documentary (computeStorageHash skips meta), so this widens the
documented IR shape without affecting CIPHERSTASH_STORAGE_HASH.
Net effect:
- cipherstashHeadRef.invariants gains two ids (term, var_8) and
swaps cclw->cllw — auto-derived from EQL_V2_ORE_COMPOSITE_TYPES
via CIPHERSTASH_INVARIANTS.createOreComposite, so migrations.ts
needs no edits.
- The structural ops list in cipherstashBaselineMigration.ops grows
from 8 to 10 (no-op SELECT 1 bodies per option (c)).
- CIPHERSTASH_STORAGE_HASH unchanged.
- All 37 existing cipherstash tests still pass; descriptor self-
consistency + AM11/AM12 regression remain green.
…Missing primitive (M3 R4 item 21)
Lift the existence-check + write-if-missing pattern out of two
copies (CLI 'runContractSpaceExtensionMigrationsPass' and the
cipherstash AM11/AM12 regression test) into a single primitive on
'@prisma-next/migration-tools/io':
materialiseExtensionMigrationPackageIfMissing(
targetDir: string,
pkg: MigrationPackageContents,
): Promise<{ readonly written: boolean }>
Returns '{ written: true }' on first run (writes manifest, ops,
contract.json) and '{ written: false }' when the package directory
already exists (skips silently — the AC-7 / AM12 by-existence
idempotency semantic from framework-mechanism.spec.md § 3).
Both call sites now consume the primitive:
- packages/1-framework/3-tooling/cli/.../contract-space-extension-
migrations-pass.ts: drops the inline pathExists helper + the
'pkgPath = join(spaceDir, dirName)' boilerplate. The pass remains
structurally identical.
- packages/3-extensions/cipherstash/test/am11-am12-no-descriptor.
test.ts: the rematerialiseSkippingExisting helper now wraps the
primitive directly — same behavioural property (re-running leaves
on-disk content byte-untouched) but exercises the *exact* code
path the CLI uses, instead of mirroring it.
Drains R2 deviation #3 (pathExists duplicate). Test count: +4 unit
tests on the primitive in migration-tools (431 -> 435 in that
package). Cipherstash test count unchanged (37 -> 37).
… AC-14)
Pure-fixture / no-DB regression for the cipherstash-migration sub-spec
§ 7 / project AC9: an extension version bump must produce a clean
file-system diff at 'migrate' time — pinned files refresh, the new
migration directory appears, and previously-emitted directories stay
byte-identical (AC-7 / AM12 by-existence skip).
The test builds two synthetic cipherstash-shaped descriptor versions
inline:
- v1: the eql_v2_configuration table with [id, state, data] and a
single baseline migration carrying cipherstash:install-eql-bundle-v1.
- v2: same table with an added 'audit_column' (mirrors sub-spec
Scenario C step 2), advancing the storage hash + appending a
second migration directory '20260615T0000_add_audit_column' that
carries cipherstash:add-audit-column-v1.
Calls the same primitives the CLI passes use directly:
- emitPinnedSpaceArtefacts (pinned contract.json/.d.ts/refs/head.json)
- materialiseExtensionMigrationPackageIfMissing (per-package idempotent write)
so the test exercises the framework code path without taking a CLI
dependency. Mirrors the am11-am12-no-descriptor test's "call the
primitive directly" pattern.
Asserts (all from sub-spec § 7):
- refs/head.json: hash + invariants advance to v2.
- contract.json: storage shape includes audit_column at v2.
- the new migration dir is created exactly once; the v1 baseline
dir is reported as skipped.
- the v1 baseline dir's migration.json/ops.json/contract.json are
byte-identical before vs after the bump.
- the new migration carries the audit-column op + invariantId, and
its manifest's from/to/providedInvariants point at the right hashes.
- a second bump-pass against v2 is a complete no-op (AM12 over the
full set, not just the baseline).
Brings the cipherstash test count to 44 (37 -> 44, +7 in this file).
Locks AC-14 / project AC9 / TC-25.
…ts (M3 R4 T3.9)
Three amendments to framework-mechanism.spec.md drained from the
M3 R3 + R4 amendment backlog:
- § 3 Extension migration-package materialisation: document the
M3 R4 lift of the existence-check + write-if-missing pattern out
of the CLI / cipherstash layer into a single primitive on
@prisma-next/migration-tools/io
(materialiseExtensionMigrationPackageIfMissing). The AC-7 / AM12
by-existence skip semantic now lives in one place; both call
sites consume it.
- § 4 Runner ordering helper: new amendment block that articulates
the orthogonality of operationClass (planner-policy gate) and
invariantId (cross-space marker bookkeeping). The M3 R3 widening
of deriveProvidedInvariants(ops) to collect invariantIds from
every op (regardless of operationClass) was the framework
aligning with this sub-spec's § 7.5 description, surfaced by
cipherstash's additive-class baseline ops being the first real
consumer to rely on the marker bookkeeping path.
- § 5 Codec lifecycle hook: new amendment block documenting the
SQLite RawSqlCall + rawSql({...}) factory landed in M2 R2 as
parallel infrastructure to the Postgres rawSql helper — codec
hooks emit raw-SQL ops at plan time on either target via the
same helper-shape contract.
…nts (M3 R4 T3.9)
Four amendments to cipherstash-migration.spec.md draining the M3
R1+R3+R4 deviation backlog into the canonical sub-spec:
- § 1 TS-authored layout (M3 R1 deviation): document that the
package authors contract + migrations directly in TypeScript
under src/core/ rather than committing pre-rendered .json files.
Both authoring styles produce the same ExtensionMigrationPackage
shape; the TS approach co-locates constants with the values that
consume them, which keeps the FR11 invariantId immutability
surface visible in code review, and treats the vendored EQL
bundle in eql-install.generated.ts as the byte-for-byte source
of truth.
- § 3 structural-ops × bundle-CREATEs conflict resolution
(M3 R3, option (c)): document that the EQL bundle owns the
typed-object DDL; structural cipherstash:create-*-v1 ops keep
their stable invariantIds (so verifier's invariantsMismatch gate
passes) but their execute[] is no-op SELECT 1. Future
IR-vocabulary expansion will give them real verification work.
- § 3 constants drift cleanup (M3 R4 item 22, FR11 pre-publication):
document the EQL_V2_ORE_COMPOSITE_TYPES sync (added
ore_block_u64_8_256_term + ore_cllw_var_8; fixed cclw -> cllw
typo) and the cipherstashFutureIR.enums sync (4 enum values:
active/inactive/encrypting/pending). Both land pre-publication,
so FR11 immutability is not violated.
- § 6 Scenario D vacuously satisfied (M3 R4 T3.8 audit finding):
no cipherstash-specific strictVerification: false flag exists in
packages/ or examples/ today. All extant matches are orthogonal
(M2 R5 intentional per-space mechanism / mongo synthetic-contract
fixtures / pre-existing SQL adapter test fixtures from PR #386
+ #404). The "first attempt" workaround that NFR1 wanted reverted
lived only in the abandoned projects/cipherstash-integration/
design and never landed in workspace packages, so NFR1 / TC-23 /
AC-13 hold vacuously.
…3.7/T3.8/T3.9 + items 21/22)
Plan-level status update closing M3 R4. Five tasks land cleanly:
- T3.7 Bump-cipherstash diff test (Scenario C, pure-fixture):
7-test suite at scenario-c-bump.test.ts, locks AC-14 / project AC9.
- T3.8 strictVerification: false workaround removal: vacuously
satisfied — no cipherstash-specific workaround exists in
packages/ or examples/. Sub-spec § 6 amended.
- T3.9 close-out doc-pass: framework-mechanism + cipherstash sub-spec
amendments landed in 8cab856 + 9beee65. NFR1 verified
vacuously via T3.8.
- Item 22 constants drift (time-sensitive, FR11 pre-publication):
EQL_V2_ORE composites + state enum values synced with vendored
bundle; new invariantIds auto-derived.
- Item 21 materialise primitive extraction: lifted into
@prisma-next/migration-tools/io; both call sites consume the
primitive.
T3.10 (real-bundle apply against pgcrypto-equipped Postgres) remains
de-scoped to TML-2373 e2e infra per orchestrator item 23. T3.6 stays
[~] partial (Scenario A landed M3 R3 against synthetic bundle).
M3 SATISFIED-candidate: every task is either [x] or explicitly
de-scoped with a documented hand-off.
- Rename PinnedSpace identifiers to ContractSpace in cipherstash tests
  (emitContractSpaceArtefacts, listContractSpaceDirectories,
  readContractSpaceHeadRef, ContractSpaceHeadRecord).
- Drop now-unused MigrationPackageContents alias; use MigrationPackage
  from @prisma-next/framework-components/control (same shape).
- Replace removed writeExtensionMigrationPackage with
  materialiseMigrationPackage in tests and the new
  materialiseExtensionMigrationPackageIfMissing primitive.
The M2 fold relocated the test-contract-space fixture to test/integration/test/contract-space-fixture/ and dropped the workspace package, but left two stale dependency entries that prevent pnpm install from resolving the workspace.
…witches
Removes the "switch silently for the subthread" wording, which
contradicted the visibility rule in the same section ("each persona load
is visible in the workflow body"). Subthread persona overrides must now
be declared in-line and the return to the driving lens must be explicit
too, aligning the operating rule with the rest of the agent-personas
library.
Addresses CodeRabbit review #479 thread A01.
The three Example-openings blockquotes were adjacent with blank lines
between them, which triggers markdownlint MD028 (no blanks inside
blockquote). They are alternatives, not one continuous example, so
merging them (CodeRabbit's suggestion) would conflate three distinct
framings. Instead, separate each blockquote with a short prose connector
("For an engineering design…", "Or, for a product framing:", "Or,
mid-implementation…") so the blockquotes are no longer adjacent and the
examples stay clearly distinct.
Addresses CodeRabbit review #479 thread A02.
…TML-2463)
TML-2463 asks for the SQLite target to run on the aggregate planner /
multi-space runner pipeline. M2 R5 / M2.5 / M6 already delivered that by lifting the
shared SQL family onto the aggregate path, so the four ACs in the ticket
hold today. The one regression-coverage gap was that every existing
SQLite db init / db update aggregate test declared at least one extension
pack, leaving AC4 ("aggregate path collapses cleanly to one app member")
satisfied behaviourally but not pinned by a test.
Adds an explicit n=1 (no extensions, app-only) regression test that
exercises executeDbInit + executeDbUpdate against a fresh node:sqlite
database, asserts a single app marker row, asserts the user table is
created, and asserts the aggregate path short-circuits to a no-op on the
follow-up update.
Spec: projects/extension-contract-spaces/specs/sqlite-aggregate-verification.spec.md
records the audit trail, including why hasMultiSpaceRunner removal is
gated on TML-2408 (Mongo's degenerate executeAcrossSpaces shim still
satisfies the guard structurally).
Signed-off-by: Will Madden <madden@prisma.io>
…-only wording

Drop the inline references to TML-2463 / AC4 (transient project
artefacts) from both the it(...) title and the inline comment of the
single-app-member regression test. The why is now expressed in terms of
the behaviour itself: every other test in this file declares an
extension pack, so this one exercises the empty-extensionPacks path
through the aggregate loader / planner / runner.

Aligns with .cursor/rules/doc-maintenance.mdc's source-code-comments
clause (no milestone-task IDs or milestone-named acceptance criteria in
source-code comments).

Signed-off-by: Will Madden <madden@prisma.io>
Extend the always-applied git-staging rule so future agents are forced
to pass -s on every git commit. The trailer is easily missed because the
standard format.signoff=true git config option applies only to git
format-patch, not git commit, and there is no commit.signoff knob.

Adds an Always Sign Off Commits section with examples, common pitfalls
(format.signoff vs commit, --amend, rebase), and a verify-before-push
check. Updates the numbered Best Practices list to include the rule.

Signed-off-by: Will Madden <madden@prisma.io>
Move the always-applied git commit best-practices rule to the canonical
.agents/rules/ location so it stays the single source of truth for any
agent runtime that loads .agents/rules (rather than maintaining a
Cursor-specific copy under .cursor/rules).

Removes the .cursor/rules/git-staging.mdc entry — no symlink, no stub,
no listing in .cursor/rules/README.md — so there is no back reference
from .cursor into the relocated file. Future readers find the rule at
.agents/rules/git-staging.mdc only.

File contents (including the just-added Signed-off-by section) are
unchanged; this commit only relocates the file and strips the README
mention.

Signed-off-by: Will Madden <madden@prisma.io>
## Intent
`Collection.update()` and `Collection.delete()` returned a single row,
but their SQL was unrestricted: `UPDATE/DELETE … WHERE <filters>
RETURNING *` mutated every row matching the user's `where()` clause and
then the JS layer kept only `rows[0]`. The single-row return value
silently masked multi-row writes. This branch makes the runtime behavior
match the contract the return type already implies:
`update()`/`delete()` affect exactly one row.
`updateAll()`/`deleteAll()` remain the way to mutate every match.
## Change map
- `packages/3-extensions/sql-orm-client/src/collection.ts` — `update()`
and `delete()` now resolve a single matching primary key first and
re-issue the mutation scoped to that PK. A small private helper
(`#findFirstMatchingPkWhere`) does the lookup using the existing
`first()` infrastructure. `deleteAll()`'s body is extracted into
`#executeDeleteReturning` so `delete()` can call it on a cloned
(PK-narrowed) instance without tripping the `this:` constraint.
- `packages/3-extensions/sql-orm-client/test/integration/update.test.ts`,
  `delete.test.ts` — new integration tests assert the single-row
  guarantee: seed three rows where two match `where()`, confirm exactly
  one is mutated and the other untouched.
## The story
The bug is straightforward once you see it: `update()` at
[collection.ts](packages/3-extensions/sql-orm-client/src/collection.ts)
delegated to `updateAll()` and returned `rows[0] ?? null`. `updateAll()`
compiled `UPDATE … WHERE filters RETURNING *` with no LIMIT — Postgres
updated every row matching `filters` and returned them all. The caller
saw one row come back and assumed one row was changed. `delete()` had
the same shape via `deleteAll().toArray()`.
The fix mirrors what the existing nested-mutation path
(`updateFirstGraph` in `mutation-executor.ts`) already does: SELECT the
first matching PK, then issue the mutation scoped to that PK.
Concretely, `#findFirstMatchingPkWhere` reuses `first()` on a clone with
`selectedFields: [pkColumn], includes: []` to keep the lookup cheap,
then calls `buildPrimaryKeyFilterFromRow` + `shorthandToWhereExpr` to
turn the PK value into a `WHERE` expression. `update()` and `delete()`
clone themselves with that PK-only filter and delegate to the existing
multi-row path — `updateAll()` / a private `#executeDeleteReturning()`.
The `*All` methods are untouched and continue to mutate every match.
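The two-step recipe reads well as a toy in-memory model. `findFirstMatchingPk` and `updateOne` below are illustrative stand-ins for the private helpers described above, not the actual `Collection` code:

```typescript
type Row = { id: number; name: string; active: boolean };

// Step 1: resolve the first matching primary key (the real helper reuses
// first() on a clone that selects only the PK column).
function findFirstMatchingPk(rows: Row[], matches: (r: Row) => boolean): number | null {
  const first = rows.find(matches);
  return first === undefined ? null : first.id;
}

// Step 2: re-issue the mutation scoped to that PK, delegating to the
// existing multi-row path (here: a plain in-place write).
function updateOne(rows: Row[], matches: (r: Row) => boolean, patch: Partial<Row>): Row | null {
  const pk = findFirstMatchingPk(rows, matches);
  if (pk === null) return null; // no match: nothing mutated
  const scoped = rows.filter((r) => r.id === pk); // "WHERE pk = $1"
  for (const row of scoped) Object.assign(row, patch);
  return scoped[0] ?? null;
}
```

Even if several rows match, only the first match's PK is mutated; the `*All` variants keep the unscoped path.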
The cleanest fix would be to add a `LIMIT` clause to
`UpdateAst`/`DeleteAst` and let `update()` express itself as
`take(1).updateAll()`. That's not a small change: `UpdateAst` and
`DeleteAst` have no `limit` slot today, Postgres doesn't accept `UPDATE
… LIMIT` at all (only SQLite does, behind a build flag), and a portable
LIMIT requires rewriting `WHERE` to `pk IN (SELECT pk … LIMIT n)`. That
deserves its own branch — see follow-ups.
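The portable rewrite that follow-up describes could look roughly like this string-level sketch (`boundedUpdateSql` is a hypothetical helper; the real change would live in the AST renderers, and identifiers are assumed pre-quoted):

```typescript
// Postgres rejects `UPDATE ... LIMIT`, so a bounded mutation is expressed
// by narrowing WHERE to a PK subquery with the LIMIT inside it.
function boundedUpdateSql(
  table: string,
  pk: string,
  setSql: string,
  whereSql: string,
  limit: number,
): string {
  return (
    `UPDATE ${table} SET ${setSql} ` +
    `WHERE ${pk} IN (SELECT ${pk} FROM ${table} WHERE ${whereSql} LIMIT ${limit}) ` +
    `RETURNING *`
  );
}
```

With `limit = 1` this is the `take(1).updateAll()` collapse; larger limits give the bounded `updateAll(n)` variants mentioned above.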
## Behavior changes & evidence
- `update()` and `delete()` now mutate at most one row regardless of how
many rows `where()` matches.
- `updateAll()`, `deleteAll()`, `updateCount()`, `deleteCount()` are
unchanged in behavior — they remain the multi-row variants.
- Mongo ORM is unaffected (its `update()`/`delete()` already use
`findOneAndUpdate`/`findOneAndDelete`, which are atomic and single-row
at the storage layer).
New integration tests covering the contract:
- [update.test.ts](packages/3-extensions/sql-orm-client/test/integration/update.test.ts)
  — `update() affects only one row even when where() matches several`
- [delete.test.ts](packages/3-extensions/sql-orm-client/test/integration/delete.test.ts)
  — `delete() affects only one row even when where() matches several`
Full `sql-orm-client` suite: 480/480 passing. `pnpm typecheck` and `pnpm
lint:deps` clean.
## Compatibility / migration / risk
- **Behavior change for callers relying on the bug.** Anyone calling
`users.where({ name: 'X' }).update({ … })` expecting all matching rows
to update will now only get one. The fix is to switch those call sites
to `updateAll(…)` / `deleteAll(…)`. The return-type signature already
implied single-row, so any such usage was incorrect against the
documented surface.
- **Non-atomic.** The fix issues two statements (SELECT PK, then
UPDATE/DELETE WHERE pk = …). Concurrent writes between the two could in
principle hit a stale PK. This matches what the existing nested-mutation
path (`updateFirstGraph`) already does, so it's consistent with the rest
of the codebase. Mongo's `update()`/`delete()` remain atomic via
`findOneAndUpdate`/`findOneAndDelete`.
- **Determinism.** Which row wins when several match is determined by
the SELECT's order. `#findFirstMatchingPkWhere` preserves the
collection's `orderBy`, so callers who care can chain `orderBy()` to
make the choice deterministic.
## Follow-ups / open questions
- Add `withLimit` to `UpdateAst`/`DeleteAst` and a portable
`UPDATE/DELETE … WHERE pk IN (SELECT pk … LIMIT n)` rewrite for
Postgres. That would let `update()`/`delete()` collapse into
`take(1).updateAll()` and would also enable
`updateAll(n)`/`deleteAll(n)`-style bounded mutations.
- Consider an atomic single-statement variant for SQL
`update()`/`delete()` once the AST supports it, to match Mongo's
atomicity.
## Non-goals / intentionally out of scope
- No changes to `updateAll`/`deleteAll`/`updateCount`/`deleteCount`.
- No changes to the AST, SQL renderers, or capability gating.
- No changes to the Mongo ORM.
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Added a public helper to determine a table's row-identity columns
(primary key or first unique).
* **Tests**
* Added integration tests ensuring update() and delete() modify only a
single matched row.
* Added unit tests for the new row-identity resolver covering PKs,
unique constraints, composites, and missing tables.
* **Refactor**
* Internal improvements to single-row update/delete behavior and
delete-returning execution for more reliable mutations.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Signed-off-by: Serhii Tatarintsev <sergey@tatarintsev.me>
closes [TML-2390](https://linear.app/prisma-company/issue/TML-2390/index-type-registry)

## Intent

Make `@@index(type:, options:)` a real first-class authoring surface across PSL and TS, replacing the inert `using`/`config` placeholders that round-tripped silently. Authors can now register their own index types via an extension pack — the registered shape narrows the TS DSL at the call site, validates the contract IR at runtime, drives `CREATE INDEX … USING <method> WITH (…)` DDL, and survives a round-trip through Postgres introspection. ParadeDB's bespoke `bm25Index()` helper is replaced with a registry entry; the helper and its column-builder namespace (`bm25.text`, `.numeric`, …) are deleted.

## Change map

- **Registry primitive** ([packages/2-sql/1-core/contract/src/index-types.ts](packages/2-sql/1-core/contract/src/index-types.ts)) — `defineIndexTypes().add(literal, { options })` returns an `IndexTypeRegistration<TMap>` carrying both runtime `entries` and the type-level `IndexTypes` phantom map; `createIndexTypeRegistry()` builds the per-contract runtime registry (instance-based, not module-global; two contracts in the same workspace see different registries).
- **IR rename** ([packages/1-framework/2-authoring/contract/src/descriptors.ts](packages/1-framework/2-authoring/contract/src/descriptors.ts); [packages/2-sql/1-core/contract/src/types.ts](packages/2-sql/1-core/contract/src/types.ts); [packages/2-sql/1-core/schema-ir/src/types.ts](packages/2-sql/1-core/schema-ir/src/types.ts)) — `using/config` → `type/options` in lockstep across IR, schema-IR, validators, lowering, emitter, and the ParadeDB helper.
- **Authoring-time validator** ([packages/2-sql/2-authoring/contract-ts/src/build-contract.ts](packages/2-sql/2-authoring/contract-ts/src/build-contract.ts), [packages/2-sql/1-core/contract/src/validate.ts](packages/2-sql/1-core/contract/src/validate.ts)) — index-type validation runs at the shared `ContractIR → Contract` lowering seam (`buildSqlContractFromDefinition`), where both TS (`defineContract`) and PSL (`interpretPslDocumentToSqlContract`) authoring converge. The lowering builds a per-contract `IndexTypeRegistry` from the definition's attached packs and runs `validateIndexTypes` against the in-memory contract; bad types or option shapes throw at authoring time, not at runtime. Runtime `validateContract` keeps structural / referential / semantic validation but no longer needs a registry.
- **Postgres DDL** ([packages/3-targets/3-targets/postgres/src/core/migrations/operations/indexes.ts](packages/3-targets/3-targets/postgres/src/core/migrations/operations/indexes.ts)) — `createIndex` emits `USING <method> WITH (key = literal, ...)`; framework-owned scalar renderer with `escapeLiteral` quoting.
- **Postgres planner** ([packages/3-targets/3-targets/postgres/src/core/migrations/issue-planner.ts](packages/3-targets/3-targets/postgres/src/core/migrations/issue-planner.ts)) — both the `missing_table` and `index_mismatch` branches now look up the contract index by columns and thread `name` / `type` / `options` through to `CreateIndexCall`. Without this, adding a typed index to an existing table emitted a default-btree.
- **Migration diff** ([packages/2-sql/9-family/src/core/schema-verify/verify-helpers.ts](packages/2-sql/9-family/src/core/schema-verify/verify-helpers.ts)) — index identity is now `(columns, type, options)`; mismatch on any one emits `index_mismatch` + `extra_index` (= DROP + CREATE).
- **Pack-registration threading** ([packages/2-sql/2-authoring/contract-ts/src/contract-types.ts](packages/2-sql/2-authoring/contract-ts/src/contract-types.ts); [packages/2-sql/2-authoring/contract-ts/src/composed-authoring-helpers.ts](packages/2-sql/2-authoring/contract-ts/src/composed-authoring-helpers.ts)) — `ExtractIndexTypesFromPack<P>` reads `P['indexTypes']` typed as `IndexTypeRegistration<infer M>` (no separate `__indexTypes` phantom); the merged map flows through `ComposedAuthoringHelpers.model: PackAwareModel<...>` into `ContractModelBuilder` (sixth `IndexTypes` generic) and `SqlContext<Fields, IndexTypes>` to a discriminated-union `IndexInput<Name, IndexTypes>` so `constraints.index({ type, options })` narrows at the call site.
- **PSL** ([packages/2-sql/2-authoring/contract-psl/src/psl-attribute-parsing.ts](packages/2-sql/2-authoring/contract-psl/src/psl-attribute-parsing.ts); [packages/2-sql/2-authoring/contract-psl/src/interpreter.ts](packages/2-sql/2-authoring/contract-psl/src/interpreter.ts)) — `@@index([cols], type: "...", options: { k: "v" })` parses; V1 admits string-literal leaves only.
- **ParadeDB on the registry** ([packages/3-extensions/paradedb/src/types/index-types.ts](packages/3-extensions/paradedb/src/types/index-types.ts); [packages/3-extensions/paradedb/src/core/descriptor-meta.ts](packages/3-extensions/paradedb/src/core/descriptor-meta.ts)) — `defineIndexTypes().add('bm25', { options: type({ '+': 'reject', key_field: 'string' }) })` replaces ~190 lines of bespoke helpers; the pack metadata stores the registration directly in `indexTypes:` (no separate phantom field).
- **Postgres introspection** ([packages/3-targets/6-adapters/postgres/src/core/control-adapter.ts](packages/3-targets/6-adapters/postgres/src/core/control-adapter.ts)) — joins `pg_am`, reads `pg_class.reloptions`; drops `btree` to `undefined` (Postgres default) so a default-method contract index matches without spurious DROP+CREATE.
- **ADR** ([docs/architecture docs/adrs/ADR 210 - Index-type registry.md](docs/architecture%20docs/adrs/ADR%20210%20-%20Index-type%20registry.md)).

## The story

The contract IR already had inert `using`/`config` fields. The shape of the registry came first: rename to `type`/`options` (matching Prisma's `@@index(type:)`), put the registered shape in *one* place, and make every consumer derive from that one place.

**The registry is per-contract, not module-global.** A pack stores a single registration value (the output of `defineIndexTypes()`) on its metadata; contract assembly creates a fresh registry and registers each pack's entries into it. Two contracts in the same workspace with different pack lists see different valid type sets. The first attempt at this used a module-level `Map`; the maintainer flagged that as wrong (`WHY IS REGISTRY A GLOBAL?`) and the design moved to the instance-based shape it has now.

**One field, both halves.** The factory builder produces a value with two surfaces: a runtime `entries` array and a TypeScript-only `IndexTypes` phantom map. The pack stores that value verbatim in its `indexTypes` field; the builder type extends a read-only `IndexTypeRegistration<TMap>` interface so the pack can't be misused as a mutable registry from outside. Both halves stay in lockstep automatically — adding `.add('hnsw', …)` to the builder updates runtime registration and call-site narrowing in one step.

**Type-level threading lands at the call site, not at `defineContract`.** The registered shape flows through three legs: (1) `indexTypes` registration on the pack, (2) `MergeAllPackIndexTypes` over family + target + extension packs in `ComposedAuthoringHelpers`, (3) a sixth `IndexTypes` generic on `ContractModelBuilder` that threads through `SqlContext<Fields, IndexTypes>` to a discriminated-union `IndexInput<Name, IndexTypes>`.
The result: inside `defineContract({...}, ({ model }) => …)`, `constraints.index(cols.body, { type: 'made-up', options: {} })` is a TS error on that line — not buried fifty levels deep in a `defineContract` type. The bare `model()` import (no packs available) defaults `IndexTypes` to `Record<never, never>`, so only the no-options default-index form typechecks; there is no `WildcardIndexTypes` escape hatch.

**Migration diff falls out of identity.** Schema-verify already grouped indexes by columns; once `type` and `options` are part of identity, a contract `gin` against a schema `btree` simply doesn't match — `verifyIndexes` emits `index_mismatch` for the new contract index and (in strict mode) `extra_index` for the stale schema index. The Postgres planner then looks up the contract index by columns from `ctx.toContract` and emits a fully-typed `CreateIndexCall` (DROP + CREATE). No `ALTER INDEX` path; Postgres has no clean ALTER for index method or `WITH`-key changes anyway.

**Introspection had to come along.** Without it, every plan against a live DB would force DROP+CREATE on any contract index whose `type` is set. The Postgres adapter now joins `pg_am`, reads `pg_class.reloptions`, and the family verifier's `indexExtrasMatch` does string-coerced option comparison so contract `fillfactor: 70` matches an introspected `fillfactor: '70'` (Postgres returns reloption values as raw text regardless of the underlying scalar). `btree` introspects to `undefined` — it's the Postgres default — so a default-method contract index matches a default-method live index without ceremony.

**ParadeDB shrinks dramatically.** The old `bm25Index()` helper plus the entire `bm25.{text, numeric, json, expression, …}` namespace and the `Bm25*FieldConfig` / `TokenizerId` types are deleted (~470 lines).
What's left is one `defineIndexTypes().add('bm25', { options: type({ '+': 'reject', key_field: 'string' }) })` declaration plus a plain const `paradedbPackMeta` that stores it under `indexTypes:`. Per-field tokenizer / column configuration is deferred to a future expression-index surface.

## Behavior changes & evidence

Authoring: `constraints.index(cols.body, { type: 'bm25', options: { key_field: 'id' } })` now typechecks against the merged pack registry; `type: 'made-up'`, missing required `options` keys, and unknown options keys are TS errors at the offending line. PSL's `@@index([body], type: "bm25", options: { key_field: "id" })` lowers to the same IR shape. Contract validation rejects unregistered types and bad options at runtime via arktype (strict-key rejection is registrant-opt-in via `'+': 'reject'`). Postgres DDL emission produces `CREATE INDEX … USING <method> WITH (…)` on both new tables and existing tables (typed indexes added later). Schema verification treats type/options as part of identity. Postgres introspection populates the new fields; the verifier's loose option comparison handles the contract/`reloptions` typing gap.

Evidence:

- Registry primitive: 11 cases in [packages/2-sql/1-core/contract/test/index-types.test.ts](packages/2-sql/1-core/contract/test/index-types.test.ts).
- Validator: 9 cases in [packages/2-sql/1-core/contract/test/validate.test.ts](packages/2-sql/1-core/contract/test/validate.test.ts) (registered + valid, unregistered type, bad options, strict-mode extras, options-without-type, no-registry-defaults-empty, etc.).
- Pack-registration machinery: 5 type-level cases in [packages/2-sql/2-authoring/contract-ts/test/contract-builder.index-types.test.ts](packages/2-sql/2-authoring/contract-ts/test/contract-builder.index-types.test.ts).
- Postgres DDL: 8 cases in [packages/3-targets/3-targets/postgres/test/migrations/index-ddl.test.ts](packages/3-targets/3-targets/postgres/test/migrations/index-ddl.test.ts) (plain, USING-only, USING+WITH, mixed scalars, single-quote escape, null/NaN rejection).
- Postgres planner `index_mismatch` threading: 3 cases in [packages/3-targets/3-targets/postgres/test/migrations/issue-planner.test.ts](packages/3-targets/3-targets/postgres/test/migrations/issue-planner.test.ts) (extras threaded; named index honored; default-name fallback).
- Migration diff: 3 cases in [packages/2-sql/9-family/test/schema-verify.semantic-satisfaction.test.ts](packages/2-sql/9-family/test/schema-verify.semantic-satisfaction.test.ts) (type differs, options differ, exact match).
- PSL parse: 7 cases in [packages/2-sql/2-authoring/contract-psl/test/interpreter.test.ts](packages/2-sql/2-authoring/contract-psl/test/interpreter.test.ts) (documented example, multi-key options, boolean/number leaf rejection, options-without-type, malformed object, no-extras path).
- Call-site narrowing e2e with the real ParadeDB pack: [test/integration/test/authoring/paradedb-bm25-narrowing.test.ts](test/integration/test/authoring/paradedb-bm25-narrowing.test.ts).
- PSL → IR → registry round-trip: [test/integration/test/authoring/psl-index-type-options.integration.test.ts](test/integration/test/authoring/psl-index-type-options.integration.test.ts) (5 cases incl. unregistered-type and empty-options rejection).
- Introspection round-trip against PGlite: [packages/3-targets/6-adapters/postgres/test/migrations/index-introspection.integration.test.ts](packages/3-targets/6-adapters/postgres/test/migrations/index-introspection.integration.test.ts) (default btree, gin, options on default, options on non-default).

## Compatibility / migration / risk

- **No backward-compat shim** for `using`/`config`. The fields were inert (no PSL or TS surface populated them), and the only in-repo writer was `bm25Index()`, which is removed in the same PR. Out-of-repo consumers using the old field names will need to rename in lockstep.
- **No `ALTER INDEX`** for type/options changes. Any change to columns, type, or options is `DROP INDEX` + `CREATE INDEX`. This is a property of how Postgres handles index method and `WITH`-key changes, not a regression introduced here. Lock and rebuild cost on large tables is non-trivial.
- **PSL boolean/number leaves are rejected in V1.** `options: { fastupdate: false }` doesn't parse; only `options: { key: "value" }` (string leaves) works from PSL. TS authoring is unrestricted. The PSL grammar lift ships with the built-in-entries follow-up that actually needs non-string options.
- **Future SQL adapters** would need their own rendering path. The IR vocabulary is dialect-neutral, but the only renderer in this repo is Postgres-shaped (`USING`/`WITH`).
- **`constraints.index(...)` is array-only.** The single-field shortcut (`constraints.index(cols.x, { type, options })`) is removed; callers always pass the field list as an array (`constraints.index([cols.x], { type, options })`). Collapsing to one signature was what let TypeScript pin per-property errors at the offending key (e.g. an unknown `options` key, a wrong `type` literal, missing required option fields) instead of reporting a generic "no overload matches this call" at the call expression. In-repo callers were migrated in lockstep.

## Follow-ups / open questions

- Built-in registry entries for `btree`, `hash`, `gin`, `gist`, `brin`, `spgist` (tracked separately).
- PSL boolean/number leaves in `options` — needed once built-in entries land (`fastupdate: false`, `fillfactor: 70`).
- Per-column index options (e.g. `gist`'s per-column operator classes) — out of scope; v1 carries `options` as a single object on the index, not per-column.
- ParadeDB tokenizer / column configuration — deferred to expression-index support.
- `pdb.*` query-builder operator surface (`@@@`, etc.) for ParadeDB BM25 search — separate workstream.

## Non-goals / intentionally out of scope

- Built-in entries seeded by the framework (extension-pack-provided is the v1 path).
- Capability gating per index type (capabilities describe runtime environment; the registry is the design-time vocabulary).
- `ALTER INDEX` rendering paths.
- Rendering paths for any future SQL adapter beyond Postgres reading `type`/`options`.

<!-- This is an auto-generated comment: release notes by coderabbit.ai -->
## Summary by CodeRabbit
* **New Features**
  * Index-type registry with per-type option validation; indexes may declare dialect-neutral `type` and `options` and are validated during contract assembly and lowering.
  * Postgres emission and introspection now surface index `type` and `options`; migrations treat changes to columns, type, or options as index changes.
* **Refactor**
  * Index DSL/API now accepts columns as an array: constraints.index([ ... ]).
* **Documentation**
  * ParadeDB README updated to show new index-type usage.
* **Tests**
  * Extensive tests added for index-type registration, validation, introspection, DDL emission, and planner behavior.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Serhii Tatarintsev <tatarintsev@prisma.io>
Co-authored-by: Will Madden <madden@prisma.io>
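The runtime half of the registry shape can be sketched like this. Everything below is an assumption modelled on the description above, not the shipped code: the type-level phantom-map threading is omitted, and a toy `requiredKeys` check stands in for the real arktype option validators.

```typescript
type IndexTypeEntry = {
  name: string;
  validate: (options: Record<string, string>) => boolean;
};

interface IndexTypeBuilder {
  entries: IndexTypeEntry[];
  add(name: string, cfg: { requiredKeys: string[] }): IndexTypeBuilder;
}

// Builder: each .add() registers one index type with a toy options
// validator (the real registry carries arktype shapes and a phantom map).
function defineIndexTypes(): IndexTypeBuilder {
  const entries: IndexTypeEntry[] = [];
  const builder: IndexTypeBuilder = {
    entries,
    add(name, cfg) {
      entries.push({
        name,
        validate: (options) => cfg.requiredKeys.every((k) => k in options),
      });
      return builder;
    },
  };
  return builder;
}

// Per-contract registry, never module-global: built fresh from the packs
// attached to one contract, so two contracts with different pack lists
// see different valid type sets.
function createIndexTypeRegistry(registrations: { entries: IndexTypeEntry[] }[]) {
  const byName = new Map<string, IndexTypeEntry>();
  for (const reg of registrations) {
    for (const entry of reg.entries) byName.set(entry.name, entry);
  }
  return {
    validateIndex: (type: string, options: Record<string, string>) =>
      byName.get(type)?.validate(options) ?? false,
  };
}
```

A pack would store the builder's output under `indexTypes:`, and contract assembly would pass each attached pack's registration into `createIndexTypeRegistry` before running validation.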
## Intent

Two small cleanups to `@prisma-next/operations` and the surrounding emission machinery:

1. Delete the `operationTypes` pipeline. A `types.operationTypes` field on component descriptors flowed all the way through emission into a generated `OperationTypes` alias in `contract.d.ts` and a matching slot on `TypeMaps`, but neither was ever read at runtime. The only type-level consumer (`ResolveOperationTypes`) was exported and unimported. `queryOperationTypes` — the real type-level dispatch surface — is unaffected.
2. Stop encoding the operation name inside the descriptor object. `OperationRegistry.register(name, descriptor)` now takes the key explicitly, descriptor factories return a keyed `Record<string, Descriptor>`, and the per-operation `method` property goes away.

## Change map

- **Core registry**:
  - [packages/1-framework/1-core/operations/src/index.ts](packages/1-framework/1-core/operations/src/index.ts) — `OperationDescriptor` loses `method`; new `OperationDescriptors` record alias; `register(name, descriptor)` takes the key as a parameter.
  - [packages/2-sql/1-core/operations/src/index.ts](packages/2-sql/1-core/operations/src/index.ts) — re-exports the new `OperationDescriptors` alias as `SqlOperationDescriptors`.
- **`operationTypes` plumbing removed**:
  - [packages/1-framework/1-core/framework-components/src/control/control-stack.ts](packages/1-framework/1-core/framework-components/src/control/control-stack.ts) — drops `operationTypeImports` field and `extractOperationTypeImports` helper.
  - [packages/1-framework/1-core/framework-components/src/control/emission-types.ts](packages/1-framework/1-core/framework-components/src/control/emission-types.ts) — drops `operationTypeImports` from `ValidationContext`.
  - [packages/1-framework/3-tooling/emitter/src/emit.ts](packages/1-framework/3-tooling/emitter/src/emit.ts), [generate-contract-dts.ts](packages/1-framework/3-tooling/emitter/src/generate-contract-dts.ts), [emit-types.ts](packages/1-framework/3-tooling/emitter/src/emit-types.ts) — drop the `operationTypeImports` argument and stop emitting `export type OperationTypes = ...` in `contract.d.ts`.
  - [packages/2-sql/1-core/contract/src/types.ts](packages/2-sql/1-core/contract/src/types.ts) — drops `operationTypes` slot from `TypeMaps` and removes `OperationTypesOf` / `ResolveOperationTypes`.
  - [packages/2-mongo-family/1-foundation/mongo-contract/src/contract-types.ts](packages/2-mongo-family/1-foundation/mongo-contract/src/contract-types.ts) — drops `operationTypes` slot from `MongoTypeMaps`.
  - [packages/2-mongo-family/3-tooling/emitter/src/index.ts](packages/2-mongo-family/3-tooling/emitter/src/index.ts) — `MongoTypeMaps<...>` expression no longer references `OperationTypes`.
- **Adapters / extensions migrated to keyed records**:
  - [packages/3-targets/6-adapters/postgres/src/core/descriptor-meta.ts](packages/3-targets/6-adapters/postgres/src/core/descriptor-meta.ts) — `postgresQueryOperations<CT>()` returns `QueryOperationTypes<CT>`; per-op type annotations on `impl` go away (the return-type annotation drives inference).
  - [packages/3-extensions/pgvector/src/core/descriptor-meta.ts](packages/3-extensions/pgvector/src/core/descriptor-meta.ts) — same migration; pgvector also drops the `types.operationTypes` import spec from its component descriptor.
  - [packages/3-extensions/pgvector/src/types/operation-types.ts](packages/3-extensions/pgvector/src/types/operation-types.ts), [exports/operation-types.ts](packages/3-extensions/pgvector/src/exports/operation-types.ts) — delete the now-unused `OperationTypes` alias.
- [packages/3-mongo-target/2-mongo-adapter/src/core/operations.ts](packages/3-mongo-target/2-mongo-adapter/src/core/operations.ts) — `mongoVectorOperationDescriptors` becomes a keyed `OperationDescriptors` record. - **Runtime consumer**: - [packages/2-sql/5-runtime/src/sql-context.ts](packages/2-sql/5-runtime/src/sql-context.ts) — `SqlStaticContributions.queryOperations` now returns `SqlOperationDescriptors` (a record); the registration loop iterates `Object.entries`. - **Tests**: - [packages/1-framework/1-core/operations/test/operations-registry.test.ts](packages/1-framework/1-core/operations/test/operations-registry.test.ts), [packages/2-sql/1-core/operations/test/operations-registry.test.ts](packages/2-sql/1-core/operations/test/operations-registry.test.ts) — updated to the `register(name, descriptor)` signature; the "strips method from stored entry" test is deleted (no method to strip anymore). - [packages/1-framework/1-core/framework-components/test/control-stack.test.ts](packages/1-framework/1-core/framework-components/test/control-stack.test.ts) — drops the `extractOperationTypeImports` test block. - [packages/2-sql/1-core/contract/test/contract-typemaps-shape.test.ts](packages/2-sql/1-core/contract/test/contract-typemaps-shape.test.ts) — drops `operationTypes` shape assertions. - Snapshot/fixture updates across emitter, mongo emitter hook, pgvector, integration tests, and demo (`OperationTypes` line removed from `contract.d.ts`; `typeMaps.operationTypes` block removed from `contract.json`). ## The story 1. The `operationTypes` slot on `TypeMaps` and the matching `OperationTypes` alias in generated `contract.d.ts` were leftover scaffolding. The only thing that referenced them was `ResolveOperationTypes`, which was exported but never imported. Deleting the slot and the helper alias removes a surface that pretended to be load-bearing. 2. 
With nothing reading the slot, the upstream `extractOperationTypeImports` step on the control stack and the `operationTypeImports` argument threaded through the emitter become dead too. They get pulled out together so the contract pipeline has one fewer parallel "imports" channel that emits nothing meaningful. 3. The descriptor side of the registry was carrying a `method: string` field that did double duty as both the registry key and a value attached to each entry. The factory then `Object.freeze`'d an array of descriptors that the runtime walked just to read `descriptor.method` back. Switching to `register(name, descriptor)` lets factories return a `Readonly<Record<string, Descriptor>>` directly — the record's keys are the operation names, the descriptor stops carrying its own name, and the `as unknown as T` cast that existed only to strip `method` from the stored entry goes away. 4. Adapter and extension factories now annotate their return type as `QueryOperationTypes<CT>` (the type-level dispatch surface that already existed). That makes the type-level signature the single source of truth: per-operation `impl` no longer needs hand-written argument and return-type annotations, and any drift between the type signature and the implementation becomes a compile error at the factory boundary. ## Behavior changes & evidence - **`OperationRegistry.register` signature changes**: `register(descriptor)` → `register(name, descriptor)`. Descriptors no longer carry `method`. - **Why**: The name was duplicated (record key vs `descriptor.method`), and stripping `method` before storage required an `as unknown as T` cast. - **Implementation**: - [packages/1-framework/1-core/operations/src/index.ts](packages/1-framework/1-core/operations/src/index.ts) — registry definition. - [packages/2-sql/5-runtime/src/sql-context.ts](packages/2-sql/5-runtime/src/sql-context.ts) — registration loop in `createExecutionContext`. 
- **Tests**: - [packages/1-framework/1-core/operations/test/operations-registry.test.ts](packages/1-framework/1-core/operations/test/operations-registry.test.ts) - [packages/2-sql/1-core/operations/test/operations-registry.test.ts](packages/2-sql/1-core/operations/test/operations-registry.test.ts) - **Operation factories return keyed records**: `postgresQueryOperations`, `pgvectorQueryOperations`, and `mongoVectorOperationDescriptors` now expose `Record<name, descriptor>` shapes (typed as `QueryOperationTypes<CT>` for the SQL side, `OperationDescriptors` for mongo). - **Why**: The factory's return type now matches the type-level signature exactly; `impl` no longer needs per-arg annotations because the surrounding signature constrains them. - **Implementation**: - [packages/3-targets/6-adapters/postgres/src/core/descriptor-meta.ts](packages/3-targets/6-adapters/postgres/src/core/descriptor-meta.ts) - [packages/3-extensions/pgvector/src/core/descriptor-meta.ts](packages/3-extensions/pgvector/src/core/descriptor-meta.ts) - [packages/3-mongo-target/2-mongo-adapter/src/core/operations.ts](packages/3-mongo-target/2-mongo-adapter/src/core/operations.ts) - **Tests**: covered transitively by the registry tests above plus the existing pgvector/postgres fixture-based tests; no new behavior introduced. - **Generated `contract.d.ts` no longer declares `OperationTypes`; `contract.json` no longer carries `typeMaps.operationTypes`**: the alias and the slot are gone. `TypeMaps` (SQL) and `MongoTypeMaps` shrink by one type parameter; `MongoTypeMaps<...>` is now `<CodecTypes, FieldOutputTypes, FieldInputTypes>`. - **Why**: Nothing read either side of this surface — the alias existed but was never imported by user code paths or by any helper that fed back into runtime behavior. 
- **Implementation**: - [packages/1-framework/3-tooling/emitter/src/generate-contract-dts.ts](packages/1-framework/3-tooling/emitter/src/generate-contract-dts.ts) - [packages/2-sql/1-core/contract/src/types.ts](packages/2-sql/1-core/contract/src/types.ts) - [packages/2-mongo-family/1-foundation/mongo-contract/src/contract-types.ts](packages/2-mongo-family/1-foundation/mongo-contract/src/contract-types.ts) - [packages/2-mongo-family/3-tooling/emitter/src/index.ts](packages/2-mongo-family/3-tooling/emitter/src/index.ts) - **Tests**: - [packages/2-sql/1-core/contract/test/contract-typemaps-shape.test.ts](packages/2-sql/1-core/contract/test/contract-typemaps-shape.test.ts) — typemap shape assertions updated. - [packages/2-sql/3-tooling/emitter/test/emitter-hook.generation.basic.test.ts](packages/2-sql/3-tooling/emitter/test/emitter-hook.generation.basic.test.ts), [advanced](packages/2-sql/3-tooling/emitter/test/emitter-hook.generation.advanced.test.ts) — generated-output assertions updated. - All committed `contract.d.ts` / `contract.json` fixtures regenerated to match the new shape. ## Compatibility / migration / risk - Generated `contract.d.ts` and `contract.json` change shape. Anyone hand-importing the generated `OperationTypes` alias or reading `typeMaps.operationTypes` from a contract JSON would break — but no caller in this repo does, and neither was a documented public surface. - `TypeMaps` (SQL) and `MongoTypeMaps` lose a type parameter (the `TOperationTypes` slot). Direct callers that explicitly named that parameter will need to drop it. There are no such callers in the repo. - `OperationRegistry.register` and `SqlStaticContributions.queryOperations` change shape. Both are internal extension-point APIs; all in-repo extensions and adapters are migrated in the same change. ## Non-goals / intentionally out of scope - `queryOperationTypes` (the live type-level dispatch surface) is intentionally untouched. - No new operations or behavior; this is purely a cleanup. 
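The name-first registration shape described above can be sketched as follows. `OperationDescriptor` / `OperationDescriptors` mirror the names used in this PR, but the fields, the registry internals, and the example factory are illustrative stand-ins, not the real `@prisma-next/operations` API.

```typescript
// Minimal sketch of name-first registration with keyed descriptor records.
// The descriptor no longer carries its own name (the `method` field is gone);
// the record's keys ARE the operation names.
interface OperationDescriptor {
  readonly impl: (...args: unknown[]) => unknown;
}

type OperationDescriptors = Readonly<Record<string, OperationDescriptor>>;

class OperationRegistry {
  private readonly entries = new Map<string, OperationDescriptor>();

  // The key is explicit — no `as unknown as T` cast needed to strip a
  // `method` field before storage.
  register(name: string, descriptor: OperationDescriptor): void {
    this.entries.set(name, descriptor);
  }

  get(name: string): OperationDescriptor | undefined {
    return this.entries.get(name);
  }
}

// A factory returns a frozen keyed record instead of a frozen array.
function exampleOperations(): OperationDescriptors {
  return Object.freeze({
    findFirst: { impl: () => 'findFirst result' },
    aggregate: { impl: () => 'aggregate result' },
  });
}

// Registration loop in the style of the `Object.entries` iteration
// described for sql-context.ts.
const registry = new OperationRegistry();
for (const [name, descriptor] of Object.entries(exampleOperations())) {
  registry.register(name, descriptor);
}
```

The design point is that the operation name lives in exactly one place (the record key), so the registry never has to reconcile a key with a duplicated `descriptor.method`.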
---------

Signed-off-by: Serhii Tatarintsev <tatarintsev@prisma.io>
closes [TML-2483](https://linear.app/prisma-company/issue/TML-2483/pn-runtime-requiremarker-false-errors-when-the-marker-table-is-absent)

## Intent

`postgresServerless` ships with `verify: { mode: 'onFirstUse', requireMarker: false }` as the default, advertised as "no marker required." In practice it only tolerated the marker *row* being absent — if the `prisma_contract.marker` *table* itself didn't exist (e.g. PN attached to a database that was never `db init`'d, the common shape for an existing Prisma-ORM Postgres being incrementally migrated to PN), the marker `SELECT` errored out before `requireMarker` was consulted and surfaced as a raw `SqlQueryError: relation "prisma_contract.marker" does not exist`.

This change widens `requireMarker: false`'s tolerance to the missing-table case and reshapes the adapter SPI so the runtime receives a tagged result that names *which* absence — `no-table`, `absent`, or `present` — without ever catching a driver error string.

## Key snippet — the runtime collapses to a single switch

```ts
// AFTER (verifyMarker)
const readResult = await this.familyAdapter.markerReader.readMarker(this.driver);
if (readResult.kind !== 'present') {
  if (this.verify.requireMarker) {
    throw runtimeError('CONTRACT.MARKER_MISSING', 'Contract marker not found in database');
  }
  this.verified = true;
  return;
}
const marker = readResult.record;
// …storage-hash + profile-hash checks unchanged…
```

The probe + read + parse mechanics now live in each adapter; the runtime no longer assembles SQL or interprets rowcounts.

## How to compare approaches

The branch deliberately preserves two attempts at the fix so reviewers can evaluate the design pivot:

- `git show 265c9a3` — **Approach A** in isolation. Adds `markerExistsStatement()` next to the existing `readMarkerStatement`/`parseMarkerRow` on `AdapterProfile`; the runtime probes first, then reads.
- `git show 3b54fc7` — **A → B delta**. Replaces those three statement-returning methods with one `readMarker(queryable): Promise<MarkerReadResult>`; the runtime collapses to a single switch.
- `git diff 5b2fc03 3b54fc7` — **B in isolation**, applied directly on the failing-tests baseline.

The HEAD shipping state is B. A is preserved in history for the record but is not the artefact under review.

## The story

1. **Pin the regression first.** A failing integration test (Postgres via `postgresServerless`) and a failing e2e test (SQLite, raw runtime) reproduce the bug — both surface `SqlQueryError` from the verify path when the marker table is absent, including under `requireMarker: false`. These tests stay green across both Approach A and Approach B so they act as a permanent regression gate.
2. **Reshape the adapter SPI.** `AdapterProfile`'s three statement-returning marker methods (`markerExistsStatement` / `readMarkerStatement` / `parseMarkerRow`) collapse into one `readMarker(queryable): Promise<MarkerReadResult>`. The adapter owns the full probe-then-read-then-parse flow and returns a tagged union so callers can distinguish the absence modes without inspecting rowcounts or driver errors.
3. **Implement the probe per target.** Postgres probes `information_schema.tables` for `prisma_contract.marker`; SQLite probes `sqlite_master` for `_prisma_marker`. A `'no-table'` probe result short-circuits before the read SQL is issued, so the missing-table case never reaches the driver's "relation does not exist" path.
4. **Simplify `verifyMarker`.** The runtime switches on `result.kind`: anything other than `'present'` is treated as "no marker present" and dispatched into the existing `requireMarker` policy. Storage- and profile-hash checks run unchanged against the parsed record when `'present'`.
5. **Mongo is unaffected.** `findOne` on a non-existent collection returns `null` natively; there's no SQL-style "table missing" error path. The marker SPI being SQL-only is preserved.

## Change map

### Implementation

- [packages/2-sql/4-lanes/relational-core/src/ast/adapter-types.ts (L11–L29)](packages/2-sql/4-lanes/relational-core/src/ast/adapter-types.ts) — new `MarkerReadResult` tagged union; `AdapterProfile.readMarker` replaces the three statement methods.
- [packages/2-sql/5-runtime/src/runtime-spi.ts (L4–L9)](packages/2-sql/5-runtime/src/runtime-spi.ts) — `MarkerReader` mirrors the new shape.
- [packages/2-sql/5-runtime/src/sql-runtime.ts (L401–L447)](packages/2-sql/5-runtime/src/sql-runtime.ts) — `verifyMarker` collapses to a single switch on `result.kind`.
- [packages/3-targets/6-adapters/postgres/src/core/adapter.ts (L41–L75)](packages/3-targets/6-adapters/postgres/src/core/adapter.ts) — Postgres `readMarker`: probes `information_schema.tables`, then reads `prisma_contract.marker`.
- [packages/3-targets/6-adapters/sqlite/src/core/adapter.ts (L52–L548)](packages/3-targets/6-adapters/sqlite/src/core/adapter.ts) — SQLite `readMarker`: probes `sqlite_master`, then reads `_prisma_marker`.

### Tests

- [test/integration/test/runtime.verify-marker.missing-table.integration.test.ts (L13–L98)](test/integration/test/runtime.verify-marker.missing-table.integration.test.ts) — Postgres against a real DB without `db init`: default verify, `mode: 'always'`, and `requireMarker: true`. Asserts a missing marker table is tolerated under `requireMarker: false` and surfaces a structured `CONTRACT.MARKER_MISSING` (not raw `SqlQueryError`) under `requireMarker: true`.
- [test/e2e/framework/test/sqlite/runtime.verify-marker.missing-table.test.ts (L88–L132)](test/e2e/framework/test/sqlite/runtime.verify-marker.missing-table.test.ts) — SQLite counterpart: same three scenarios against a `DatabaseSync` provisioned with user tables but no `_prisma_marker`.
- [packages/2-sql/5-runtime/test/marker-verification.test.ts (L190–L308)](packages/2-sql/5-runtime/test/marker-verification.test.ts) — unit-level coverage of each `MarkerReadResult` kind: `no-table` + `requireMarker: true` → MISSING, `no-table` + false → skip, `absent` mirrored, `present` with hash match → pass, `present` with profile-hash mismatch → MISMATCH. Verifies `readMarker` is called once and cached when `mode: 'startup'`.
- [packages/3-targets/6-adapters/postgres/test/adapter.test.ts (L290–L353)](packages/3-targets/6-adapters/postgres/test/adapter.test.ts) — adapter contract tests: probe goes to `information_schema.tables`; absent-row path returns `{ kind: 'absent' }`; present-row path parses into a `ContractMarkerRecord`.

## Behavior changes & evidence

- **`requireMarker: false` tolerates a missing marker table.** Before: a missing `prisma_contract.marker` / `_prisma_marker` surfaced as `SqlQueryError` from the verify path before `requireMarker` was consulted. After: the adapter's probe short-circuits before any read SQL is issued, the runtime sees `{ kind: 'no-table' }`, and `requireMarker: false` skips verification cleanly.
  - **Why**: the documented contract of `requireMarker: false` reads as "I am OK operating without a marker." That should cover both forms of absence — the table existing with no row, *and* the table not existing at all. Splitting those was an asymmetric leak of SQL-level mechanics through what's meant to be a policy knob.
  - **Implementation**:
    - [packages/2-sql/5-runtime/src/sql-runtime.ts (L401–L411)](packages/2-sql/5-runtime/src/sql-runtime.ts)
    - [packages/3-targets/6-adapters/postgres/src/core/adapter.ts (L56–L62)](packages/3-targets/6-adapters/postgres/src/core/adapter.ts)
    - [packages/3-targets/6-adapters/sqlite/src/core/adapter.ts (L523–L529)](packages/3-targets/6-adapters/sqlite/src/core/adapter.ts)
  - **Tests**:
    - [test/integration/test/runtime.verify-marker.missing-table.integration.test.ts (L51–L80)](test/integration/test/runtime.verify-marker.missing-table.integration.test.ts)
    - [test/e2e/framework/test/sqlite/runtime.verify-marker.missing-table.test.ts (L100–L120)](test/e2e/framework/test/sqlite/runtime.verify-marker.missing-table.test.ts)
- **`requireMarker: true` surfaces `CONTRACT.MARKER_MISSING` (structured) when the marker table is absent.** Before: callers received a raw `SqlQueryError: relation "prisma_contract.marker" does not exist` (SQLSTATE 42P01) wrapped through the runtime's error normalisation. After: they get the same `CONTRACT.MARKER_MISSING` envelope already used when the marker row is absent.
  - **Why**: structured error codes are the catch surface for observability and operational tooling. Serverless callers (Cloudflare Workers + Hyperdrive in particular) routinely have the wrapping `ErrorExternal` strip the underlying `cause` chain on the way to logs, so a raw `SqlQueryError` with SQLSTATE 42P01 was invisible without a debug build — operationally indistinguishable from the runtime hanging or returning an empty response.
  - **Implementation**: [packages/2-sql/5-runtime/src/sql-runtime.ts (L404–L407)](packages/2-sql/5-runtime/src/sql-runtime.ts)
  - **Tests**:
    - [test/integration/test/runtime.verify-marker.missing-table.integration.test.ts (L84–L97)](test/integration/test/runtime.verify-marker.missing-table.integration.test.ts)
    - [test/e2e/framework/test/sqlite/runtime.verify-marker.missing-table.test.ts (L122–L130)](test/e2e/framework/test/sqlite/runtime.verify-marker.missing-table.test.ts)
    - [packages/2-sql/5-runtime/test/marker-verification.test.ts (L204–L213)](packages/2-sql/5-runtime/test/marker-verification.test.ts)
- **Adapter SPI: marker reading owns its full flow.** Before: `AdapterProfile` exposed three statement-returning methods (`markerExistsStatement`, `readMarkerStatement`, `parseMarkerRow`) and the runtime composed them with explicit `driver.query` calls and rowcount checks. After: one method, `readMarker(queryable): Promise<MarkerReadResult>`, returns a tagged result. The runtime never assembles marker SQL or interprets driver rowcounts.
  - **Why**: the control adapter already used the same `readMarker(driver, space): Promise<...>` shape with the same probe-then-read pattern; the runtime SPI was the only place still leaking statement primitives. Consolidating on one shape removes the redundancy, lets adapters return richer diagnostic information than a rowcount can carry, and (incidentally) makes test stubs ~3× shorter because they no longer have to coordinate `driver.query` mocks with fake marker rows.
  - **Implementation**:
    - [packages/2-sql/4-lanes/relational-core/src/ast/adapter-types.ts (L15–L29)](packages/2-sql/4-lanes/relational-core/src/ast/adapter-types.ts)
    - [packages/2-sql/5-runtime/src/runtime-spi.ts (L4–L9)](packages/2-sql/5-runtime/src/runtime-spi.ts)
  - **Tests** (SPI contract):
    - [packages/2-sql/5-runtime/test/sql-family-adapter.test.ts (L22–L60)](packages/2-sql/5-runtime/test/sql-family-adapter.test.ts)
    - [packages/3-targets/6-adapters/postgres/test/adapter.test.ts (L290–L353)](packages/3-targets/6-adapters/postgres/test/adapter.test.ts)
- **Test scaffolding sheds driver-mock coupling for marker reads.** Refactor, no behaviour change. The nine test stubs that synthesise an `AdapterProfile` previously returned SQL strings that a mocked `driver.query` had to interpret; under the new shape they return a `MarkerReadResult` directly. `marker-verification.test.ts` lost its `MarkerRow` plumbing entirely; `marker-vs-intercept-ordering.test.ts` lost its `createStaleMarkerDriver` helper.
  - **Implementation**: all `packages/2-sql/5-runtime/test/*.test.ts` stubs and [packages/2-sql/4-lanes/relational-core/test/utils.ts (L10–L26)](packages/2-sql/4-lanes/relational-core/test/utils.ts).

## Compatibility / migration / risk

- **SPI break for out-of-tree SQL adapters.** Any adapter implementing `AdapterProfile.readMarkerStatement` / `parseMarkerRow` / `markerExistsStatement` must implement `readMarker(queryable): Promise<MarkerReadResult>` instead. In-tree adapters (Postgres, SQLite) are updated; no third-party SQL adapters are known.
- **Mongo unaffected.** The MongoDB target has its own marker ledger (`packages/3-mongo-target/1-mongo-target/src/core/marker-ledger.ts`) that uses `findOne` on a collection — schemaless storage naturally returns `null` for the missing-collection case, so no change is needed there.
- **No runtime perf regression.** The probe adds at most one round-trip per `verifyMarker` invocation, which itself runs at most once per runtime under the default `mode: 'onFirstUse'`. `mode: 'always'` users pay the extra probe per query, but that mode is already an opt-in correctness/perf trade.
- **`parseContractMarkerRow` remains exported.** Still used by control adapters, the SQLite migration runner, and the `@prisma-next/sql-family` verify path; only its presence as an `AdapterProfile` SPI method goes away.

## Follow-ups / open questions

- **Docs**: `packages/3-targets/6-adapters/sqlite/README.md` and `docs/architecture docs/patterns/adapter-spi.md` still reference the old `readMarkerStatement()` SPI surface. Not load-bearing for runtime behaviour but should be updated before this lands as the canonical adapter pattern.
- **Telemetry**: the runtime currently flattens `no-table` and `absent` into the same `verified = true` path. The richer `MarkerReadResult` makes it trivial to emit a structured telemetry event distinguishing "fresh DB" from "marker row gone" — left for a follow-up to avoid expanding scope.
- **Consolidation with control adapter**: the control adapter's `readMarker(driver, space)` and the runtime adapter's `readMarker(queryable)` are now almost identical in shape. A future cleanup could collapse them.

## Non-goals / intentionally out of scope

- Changing the `verify.mode` enum (`'onFirstUse' | 'startup' | 'always'`). There's no "off" today; whether to add one is a separate design decision.
- Auto-creating the marker table from the runtime when missing. This crosses control-plane / runtime boundaries; the existing escape hatch is `prisma-next db init`.
- Suppressing genuine hash-mismatch errors. `CONTRACT.MARKER_MISMATCH` still fires whenever a `present` marker disagrees with the contract's `storageHash` or `profileHash`.
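The probe-then-read-then-parse flow the adapters now own can be sketched as follows. The `MarkerReadResult` union and the three `kind` values follow this PR's description; the `Queryable` shape, the SQL strings, and the stub database are simplified stand-ins, not the real adapter code.

```typescript
// Sketch of the tagged-union marker SPI: the adapter owns probe -> read ->
// parse and never lets a "relation does not exist" driver error escape.
type ContractMarkerRecord = { storageHash: string; profileHash: string };

type MarkerReadResult =
  | { kind: 'no-table' } // the marker table itself is missing
  | { kind: 'absent' }   // table exists, no marker row
  | { kind: 'present'; record: ContractMarkerRecord };

interface Queryable {
  query(sql: string): Promise<Array<Record<string, unknown>>>;
}

async function readMarker(db: Queryable): Promise<MarkerReadResult> {
  // Probe for the table first (Postgres-style probe shown here).
  const probe = await db.query(
    `select 1 from information_schema.tables
     where table_schema = 'prisma_contract' and table_name = 'marker'`,
  );
  // 'no-table' short-circuits before any read SQL is issued.
  if (probe.length === 0) return { kind: 'no-table' };

  const rows = await db.query(`select * from prisma_contract.marker`);
  if (rows.length === 0) return { kind: 'absent' };
  return { kind: 'present', record: rows[0] as ContractMarkerRecord };
}

// Stub simulating a database that was never `db init`'d: the probe finds no
// table, so the read (which would throw) is never reached.
const emptyDb: Queryable = {
  async query(sql) {
    if (sql.includes('information_schema')) return [];
    throw new Error('relation "prisma_contract.marker" does not exist');
  },
};
```

The caller then switches on `kind` alone, which is exactly what lets `requireMarker: false` treat `no-table` and `absent` uniformly without string-matching driver errors.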
---------

Signed-off-by: Sævar Berg <berg@prisma.io>
## Intent
Wire up the paradedb extension's query plane on top of the bm25
index-type registration that landed on `psl-index-plus`. Authoring +
emit + DDL already work via the upstream registry; this PR adds the
runtime side: a complete set of paradedb query operations, the proximity
chain builder, and an end-to-end CLI demo that exercises everything
against a live paradedb container.
## Change map
The extension itself (`packages/3-extensions/paradedb`) gains:
- **11 query operations** through two shared helpers in
`src/core/descriptor-meta.ts`:
- `matchOp` covers the five match-mode operators (`paradeDbMatch`/`@@@`,
`paradeDbMatchAny`/`|||`, `paradeDbMatchAll`/`&&&`,
`paradeDbTerm`/`===`, `paradeDbPhrase`/`###`).
- `typmodCastOp` covers the four typmod casts (`paradeDbFuzzy`,
`paradeDbBoost`, `paradeDbConst`, `paradeDbSlop`). Each wraps its
integer argument in `LiteralExpr.of(n)` (Postgres rejects
parameterized typmods) and carries a per-op range validator.
- **`paradeDbScore`** for `pdb.score(<key>)`.
- **`ParadeDbProximityChain`** in `src/core/proximity-chain.ts` — an
immutable builder with `.within(distance, term, { ordered? })` for
multi-step chains mixing `##` and `##>`. Distances render as
`LiteralExpr` (chained operators only accept literal slop). The chain
implements `Expression<text>` so it composes through `paradeDbMatch` via
the `@@@` overload.
- **Runtime descriptor** at `src/exports/runtime.ts` and **`pg_search`
install** via `SqlControlExtensionDescriptor.databaseDependencies`.
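The immutable-builder idea behind `ParadeDbProximityChain` can be sketched as below. The `.within(distance, term, { ordered? })` shape and the `##` / `##>` operator split follow the description above; the class internals and the exact rendered syntax are illustrative, not the real extension code or real ParadeDB SQL.

```typescript
// Sketch of an immutable proximity-chain builder: every `within` call returns
// a NEW chain, so partially built chains can be shared and branched safely.
interface ProximityStep {
  distance: number;
  term: string;
  ordered: boolean;
}

class ProximityChain {
  private constructor(
    private readonly head: string,
    private readonly steps: readonly ProximityStep[],
  ) {}

  static of(term: string): ProximityChain {
    return new ProximityChain(term, []);
  }

  within(distance: number, term: string, opts: { ordered?: boolean } = {}): ProximityChain {
    // Chained operators only accept literal slop, so validate eagerly.
    if (!Number.isInteger(distance) || distance < 0) {
      throw new RangeError('distance must be a non-negative integer literal');
    }
    return new ProximityChain(this.head, [
      ...this.steps,
      { distance, term, ordered: opts.ordered ?? false },
    ]);
  }

  // `##` marks an unordered step, `##>` an ordered one; the rendered shape
  // here is illustrative only.
  render(): string {
    return this.steps.reduce(
      (acc, s) => `${acc} ${s.ordered ? '##>' : '##'}(${s.distance}) '${s.term}'`,
      `'${this.head}'`,
    );
  }
}

// Multi-step chain mixing unordered and ordered proximity.
const chain = ProximityChain.of('running')
  .within(2, 'shoes')
  .within(1, 'sale', { ordered: true });
```

Immutability is what lets the chain implement an expression interface cleanly: a chain handed to `paradeDbMatch` can never be mutated after the fact by later builder calls.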
`examples/paradedb-demo` is a new CLI example. `prisma/contract.ts`
authors a single `Item` model with a bm25 index using the upstream
`constraints.index([...], { type: 'bm25', options: { key_field: 'id' }
})` shape; `pnpm db:init` produces the actual `CREATE INDEX … USING bm25
…` DDL via the registry path. A `docker-compose.yaml` plus
`init/01-create-demo-db.sql` create a dedicated `demo` database to
sidestep the paradedb image's preloaded PostGIS tables.
CLI commands: `match`, `top`, `fuzzy`, `proximity`, `proximity-chain`,
`chain-demo` (hardcoded multi-step + mixed direction), `mode-tour`
(curated comparison of the five match modes against contrastive seed
data), and `cast-demo` (boost/const/slop).
Also includes a small adapter fix in
`packages/3-targets/6-adapters/postgres/src/core/control-adapter.ts` —
index introspection now orders columns by `array_position(indkey,
attnum)` rather than `attnum`, so multi-column indexes whose declared
column order differs from the table's column declaration order satisfy
schema verification.
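The ordering fix amounts to sorting introspected columns by their position in the index's `indkey` array rather than by table attribute number. A minimal sketch of that semantics, with a simplified row shape (the real adapter reads this from `pg_index` via SQL's `array_position`):

```typescript
// Columns of a multi-column index must come back in index-definition order
// (their position in `indkey`), not in table-declaration order (`attnum`).
interface IndexColumnRow {
  attnum: number; // position of the column in the table declaration
  name: string;
}

// TS equivalent of ORDER BY array_position(indkey, attnum).
function orderByIndkey(rows: IndexColumnRow[], indkey: number[]): string[] {
  return [...rows]
    .sort((a, b) => indkey.indexOf(a.attnum) - indkey.indexOf(b.attnum))
    .map((r) => r.name);
}

// Table declared as (id, email) but index declared as (email, id):
const rows: IndexColumnRow[] = [
  { attnum: 1, name: 'id' },
  { attnum: 2, name: 'email' },
];
```

Sorting by `attnum` alone would always yield table order, which is why indexes whose declared column order differed from the table's failed verification.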
## Behavior evidence
- `pnpm start -- mode-tour` returns distinct, contrastive results across
the five match modes against the seed data, demonstrating each
operator's character (any vs all vs term vs phrase).
- `pnpm start -- cast-demo` shows boost/const/slop modifying the BM25
score as expected (`pdb.boost(5)` raises score ~2 → ~10; `pdb.const(1)`
flattens to 1; `pdb.slop(1)` matches non-adjacent phrase tokens).
- 17 paradedb unit tests cover all 11 operations + the proximity chain.
## Follow-ups / open questions
- `paradeDbFuzzy/Boost/Const/Slop` are typed as returning `pg/text@1`;
actual SQL types are paradedb-specific (`pdb.fuzzy`, `pdb.boost`, etc.).
The TS fudge works because match operators are overloaded to accept
these types alongside `text`, but a more honest typing would introduce a
paradedb codec.
- `text[]` query input (term-set with `ARRAY[...]`, pretokenized phrase)
is deferred pending scalar-list support.
- Tokenizer-override casts (`'q'::pdb.whitespace`), highlight
(`pdb.snippet*`), aggregations, and search-time joins are not yet
exposed.
## Non-goals
- The bm25 index-type registry, adapter introspection of
`type`/`options`, and PSL `@@index` parsing all live in
`psl-index-plus`, not here.
- The legacy `bm25Index({...})` and `bm25.text/numeric/...` authoring
helpers the original branch carried were intentionally dropped in favor
of the generic `constraints.index({ type, options })` registry path.
---------
Signed-off-by: Serhii Tatarintsev <tatarintsev@prisma.io>
Tentative intentional design notes for the Supabase integration work, captured before committing to a project shape. Eight focused docs covering:

- end-to-end user story with the canonical code sample
- modeled / tolerated / externally-managed / drift posture as a generic IR property, with verifier and planner dispatch
- cross-contract-space FK references (refIn) — TS surface, IR carrier shape, resolution against the loaded aggregate, extension publish pipeline (PSL surface flagged as open design question)
- PostgresRlsPolicy as a target-only IR kind, inline c.rlsPolicy DSL, plain-string predicates for v0.1, pg_policies-based verification
- @prisma-next/extension-supabase package shape, createSupabaseRuntime with role-binding / SET LOCAL story, pool footgun mitigations
- developer experience: scaffold, getting-started outline, migration story from @supabase/supabase-js
- explicit deferred / non-goals list (realtime, storage, introspection-based emit, PSL refIn, etc.)

Framed throughout as notes, not specs — open questions are explicit and the project shape (one project vs foundation+supabase split) is deliberately undecided. Builds on TML-2459 (target-extensible IR); foundation already covered there is not redone here.
- Promote function-level posture from open question to must-have: auth.uid(), auth.jwt(), auth.role() must be declarable as externally-managed in the Supabase contract so the verifier can resolve them in RLS predicates. Added FunctionDeclaration IR sketch and pg_proc verification note to posture.md.
- Promote example app (examples/supabase/) to must-have deliverable: the runnable end-to-end proof is the primary onboarding artifact.
- Add triggers/functions as stretch goal: the canonical "create profile on signup" pattern benefits from first-class IR but is not required for v0.1 (raw SQL migrations are the fallback).
- Collapse runtime to single supabase() facade (no baseDb): one factory, one returned object; asUser/asAnon/asServiceRole required before querying. Corrected serviceRoleKey to jwtSecret.
- Clarify RPC is not a gap (parity with status quo); CREATE EXTENSION already handled by existing pgvector extension.
…ct refs
After discussing PSL and TS authoring with the architect and PE, this records the settled positions across both projects so implementers see one consistent design.
target-extensible-ir:
- Raise the PSL grammar into "At a glance" with a concrete example
- PSL: top-level `namespace { ... }` blocks; reopenable; namespaces do not recursively nest; backward-compatible default to `__unspecified__`
- Cross-namespace FKs: dot-qualified type references in `@relation`; no new attribute
- TS: zero new syntax for cross-namespace FKs (the model handle carries the namespace coordinate; existing `constraints.foreignKey` / `rel.belongsTo` call sites Just Work)
- Correct FR16: Postgres contracts default to `__unspecified__` (search_path resolves to `public` at the database level), not an explicit `public` namespace
- Tighten terminology: drop "nested namespace" framing, prefer "top-level namespace block"; lists recursive nesting as an explicit non-goal
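The dot-qualified reference rule with the `__unspecified__` backward-compatibility default can be sketched as a small resolver. This is illustrative only — the function name and return shape are hypothetical, and the real parser lives in the PSL/TS authoring layers — but it captures the settled semantics: one optional dot-qualifier, no recursive nesting, sentinel default.

```typescript
// Sketch: resolve a type reference in `@relation` position to a
// (namespace, name) pair, kept as separate fields (never fused into one key).
const UNSPECIFIED = '__unspecified__';

interface ModelRef {
  namespace: string;
  name: string;
}

function parseModelRef(ref: string): ModelRef {
  const dot = ref.indexOf('.');
  if (dot === -1) {
    // Backward-compatible default: an unqualified reference lands in the
    // `__unspecified__` sentinel namespace.
    return { namespace: UNSPECIFIED, name: ref };
  }
  // Namespaces do not recursively nest, so at most one qualifier is expected.
  return { namespace: ref.slice(0, dot), name: ref.slice(dot + 1) };
}
```

Because existing contracts emit only unqualified references, every pre-namespace reference parses to the sentinel and nothing changes for them.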
supabase-integration:
- PSL: colon-prefixed dot-qualified cross-contract refs (`supabase:auth.User` in the type position); no `extension ... from "..."` directive in v0.1
- TS: unified surface — drop `refIn`. Cross-contract is inferred from the model handle’s brand; same `constraints.foreignKey` / `rel.belongsTo` call shape as local refs
- Implicit resolution via `extensionPacks`; commit publicly that any future `use ... as` form is purely additive aliasing
- Contract-space dependency graph: directional, acyclic, inferred from `extensionPacks`; reverse references are load-time errors
- Namespace ownership: namespaces are open for extension, primitives owned by their declaring contract; cross-contract name collisions are fail-fast load errors
- `__unspecified__` × cross-contract: emit unqualified REFERENCES, let `search_path` resolve at migration time (per-tenant FKs anchor by OID at constraint creation)
- Companion docs (overview, extension-package, developer-experience, README) updated to drop `refIn` and use the real `defineContract` DSL shape
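The DDL consequence of the `__unspecified__` × cross-contract decision above can be sketched as follows: a table in the sentinel namespace is emitted unqualified so `search_path` resolves it at migration time, while a real namespace becomes a schema qualifier. The helper names and the identifier quoting are illustrative, not the real emitter.

```typescript
// Sketch: render the REFERENCES target for an FK, honouring the
// `__unspecified__` sentinel by emitting an unqualified table name.
const UNSPECIFIED = '__unspecified__';

function qualifiedTable(namespace: string, table: string): string {
  // Sentinel namespace -> unqualified; let search_path resolve it.
  return namespace === UNSPECIFIED ? `"${table}"` : `"${namespace}"."${table}"`;
}

function referencesClause(namespace: string, table: string, column: string): string {
  return `REFERENCES ${qualifiedTable(namespace, table)} ("${column}")`;
}
```

Note that this is only about the emitted text: once the constraint is created, Postgres anchors the FK to the resolved table's OID, which is what makes the per-tenant (search_path-dependent) case safe after creation.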
…tion

Adds projects/supabase-integration/example/ — a design-time TypeScript sketch of the runnable example that will eventually live at examples/supabase/. The code does not typecheck today; it is written against the design as it stands in the surrounding notes.

Purpose:

- Forcing function. Writing the example against the design surfaces concrete design questions that topic-by-topic conversations would not have surfaced. Twenty are captured in design-holes.md, 10 of them marked blocking before this project can move to a spec.
- Spec anchor. AC1 for the eventual project spec becomes "this sketch typechecks against the live framework after milestones X / Y / Z ship."
- Onboarding artifact. At project close-out the directory migrates to examples/supabase/ and becomes the canonical onboarding sample.

Files:

- src/prisma/contract.ts — namespaces, cross-contract FK to auth.User, within-namespace FK Profile→Post, RLS policies on both tables, references auth.uid() as an externally-managed function.
- src/prisma/db.ts — supabase() facade with jwtSecret + middleware composition.
- src/handlers.ts — request handlers exercising asUser(jwt), asAnon(), asServiceRole(), plus multi-statement flows that force the transaction scoping story.
- migrations/supabase/contract.json + contract.d.ts — pinned mirror of the Supabase extension contract with externally-managed functions.
- prisma-next.config.ts — extensionPacks wiring.
- design-holes.md — every concrete decision the design did not yet cover, grouped by file and severity-tagged. The blocking holes are the canonical list of what the project spec must settle before execution begins.

The example will move into the project plan as the close-out deliverable and the design-holes list becomes input for the next round of design work.
Two repo-level conventions that surfaced during Supabase-integration shaping:
- Drive skills (drive-discussion, drive-create-spec, drive-create-plan) now require codebase research before opening a discussion or drafting an artifact, instead of deferring to the user as the source of truth on current implementation state.
- New rule .agents/rules/explicit-opt-in-over-diagnostics.mdc codifies the policy of preferring explicit user opt-in over noisy diagnostics on intentional paths (e.g. cross-contract FK onDelete: cascade).
Captures the design pass driven by the example-app forcing function:
- decisions.md: canonical decision log (A1-A7 TS authoring, B1-B6 PSL authoring, C1-C8 cross-cutting). Records the closes on holes #1, #2, #3, #4, #5, #15, #16, #17 and the elevation of roles/posture/extension structure to first-class concepts.
- design-holes.md: marks resolved holes against decisions, updates the blocking-holes summary table.
- rls.md: rewritten to the .rls([...]) array form (TS) and top-level policy <name> { key = value } blocks (PSL), with ref() helper semantics documented.
- overview.md: refreshed at-a-glance snippet, capability bullets, stretch goals to align with the array-of-descriptors RLS surface and the out-of-scope-for-v0.1 framing of functions-as-IR.
- example/src/prisma/contract.ts: rewritten to exercise the settled surface (composite uniques in .attributes(), .rls([...]) descriptor array, ref() helper for cross-model predicates, no capability flag).
- extension-package.md, posture.md, README.md: cross-reference updates pointing at decisions.md and reflecting the role IR + subpath entrypoint decisions.
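The `.rls([...])` descriptor-array surface and the `ref()` helper described above read roughly like this toy sketch. The method and helper names follow the decision log; the descriptor fields, the builder internals, and the string-based predicate rendering are all stand-ins for illustration.

```typescript
// Illustrative only: a toy model builder exposing the .rls([...]) array
// form from the decision log. Descriptor fields are stand-ins, not the
// settled IR shape.
interface RlsPolicy {
  name: string;
  for: "select" | "insert" | "update" | "delete";
  using: string; // predicate fragment; real rendering happens elsewhere
}

// ref() marks a cross-model column reference inside a predicate so the
// emitter can resolve it against the contract rather than raw SQL.
function ref(path: string): string {
  return `ref(${path})`;
}

class ModelBuilder {
  policies: RlsPolicy[] = [];

  // The array form replaces chained per-policy calls: all policies for a
  // model arrive together, so ordering and completeness are explicit.
  rls(policies: RlsPolicy[]): this {
    this.policies = policies;
    return this;
  }
}

const post = new ModelBuilder().rls([
  { name: "owner_select", for: "select", using: `${ref("Profile.userId")} = auth.uid()` },
]);
console.log(post.policies[0].name); // "owner_select"
```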
The (namespace.id, name) FK reference shape introduced by this project is designed to extend additively to (spaceId, namespace.id, name) when cross-contract refs land in a follow-up. Calls this out in FR16b and in the Non-goals bullet so implementers do not fuse namespace.id + name into a single composite key or bake single-contract-space assumptions into the FK reference walk.
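The additive extension described above can be sketched in types. The field names (`namespace.id`, `name`, `spaceId`) follow the description; the interface names and the walk function are hypothetical.

```typescript
// Keep namespace.id and name as separate fields — never fuse them into a
// single composite key — so spaceId can be added additively later.
interface FkReferenceV1 {
  namespace: { id: string };
  name: string;
}

// The anticipated cross-contract extension is a superset, not a reshape.
interface FkReferenceV2 extends FkReferenceV1 {
  spaceId: string;
}

// A reference walk written against V1 keeps working on V2 values because
// the extension only adds a field.
function describeRef(ref: FkReferenceV1): string {
  return `${ref.namespace.id}.${ref.name}`;
}

const v2: FkReferenceV2 = { spaceId: "tenant-a", namespace: { id: "auth" }, name: "User" };
console.log(describeRef(v2)); // "auth.User"
```

This is the structural version of the Non-goals bullet: single-contract-space assumptions stay out of the walk, so adding `spaceId` later changes no existing call sites.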
Actionable comments posted: 9
Note
Due to the large number of review comments, only comments of Critical and Major severity were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
packages/2-sql/3-tooling/emitter/test/emitter-hook.generation.basic.test.ts (1)
954-957: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Fix `generateContractDts` argument placement in the execution-hash test.**

Line 954 is still using the old extra-array position, so hashes are not being passed in the same position as the rest of this file's updated calls. This can make the test validate the wrong invocation path.
Suggested patch
```diff
- const types = generateContractDts(ir, sqlEmission, [], [], {
-   ...testHashes,
-   executionHash: 'test-exec-hash',
- });
+ const types = generateContractDts(ir, sqlEmission, [], {
+   ...testHashes,
+   executionHash: 'test-exec-hash',
+ });
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/2-sql/3-tooling/emitter/test/emitter-hook.generation.basic.test.ts` around lines 954 - 957, The test call to generateContractDts is passing hashes in the old extra-array position; change the invocation so the hashes object is the sixth argument (not the fifth) to match other calls—i.e., call generateContractDts(ir, sqlEmission, [], [], {}, { ...testHashes, executionHash: 'test-exec-hash' }) so the fifth argument is an empty options object and the sixth carries the hashes (update the call in the execution-hash test referencing generateContractDts).

packages/2-sql/5-runtime/test/sql-family-adapter.test.ts (1)
26-48: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Delegation test is currently tautological.**

Lines 38-48 only verify a constant return value; they do not prove `markerReader.readMarker` actually forwards to the profile method with the provided queryable. As per coding guidelines: "Avoid tautological tests that only restate fixture input - tests must verify behavior, not mirror object shape passed by the test itself."

Suggested test hardening
```diff
-import { describe, expect, it } from 'vitest';
+import { describe, expect, it, vi } from 'vitest';
@@
-  it('delegates readMarker to adapter profile', async () => {
-    const adapter = new SqlFamilyAdapter(testContract, testProfile);
+  it('delegates readMarker to adapter profile', async () => {
+    const readMarker = vi.fn(async () => ({ kind: 'absent' as const }));
+    const adapter = new SqlFamilyAdapter(testContract, { ...testProfile, readMarker });
@@
     const result = await adapter.markerReader.readMarker(fakeQueryable);
+    expect(readMarker).toHaveBeenCalledWith(fakeQueryable);
     expect(result).toEqual({ kind: 'absent' });
   });
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/2-sql/5-runtime/test/sql-family-adapter.test.ts` around lines 26 - 48, The test is tautological because it only asserts the fixture return; modify it so markerReader.readMarker actually delegates to the profile's implementation: replace the current testProfile or stub its readMarker with a spy/fixture that records the queryable argument and returns a distinct value (e.g., { kind: 'some' }) and then assert that calling adapter.markerReader.readMarker(fakeQueryable) returns that distinct value and that the profile.readMarker spy was invoked with the exact fakeQueryable; reference the SqlFamilyAdapter constructor, adapter.markerReader.readMarker, and the testProfile/readMarker stub to locate where to change the test.

packages/2-sql/5-runtime/src/sql-runtime.ts (1)
404-411: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Set startup verification state in the optional-missing marker branch.**

On Line 409, `verified` is set but `startupVerified` remains `false`. In `mode: 'startup'`, that re-runs `verifyMarker()` on every execute instead of one startup check.

Suggested fix
```diff
 if (readResult.kind !== 'present') {
   if (this.verify.requireMarker) {
     throw runtimeError('CONTRACT.MARKER_MISSING', 'Contract marker not found in database');
   }
   this.verified = true;
+  this.startupVerified = true;
   return;
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/2-sql/5-runtime/src/sql-runtime.ts` around lines 404 - 411, The optional-missing marker branch sets this.verified but forgets to set this.startupVerified, causing verifyMarker() to re-run in mode: 'startup'; update the non-present branch inside the readResult.kind !== 'present' path (where verify.requireMarker is false) to also set this.startupVerified = true (in the same place you set this.verified) so startup mode treats the check as completed and won't re-run on every execute; refer to readResult, verify.requireMarker, this.verified, this.startupVerified, and verifyMarker()/execute to locate and adjust the logic.

packages/1-framework/3-tooling/cli/src/utils/formatters/migrations.ts (1)
11-12: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Replace transient spec/milestone references in inline comments.**

These comments reference transient artifacts (`spec OQ-4`, `M6 sub-spec`, `AC4/AC5`). Please rephrase them to stable behavioral intent.

As per coding guidelines, source-code comments must not reference transient project artifacts such as sub-spec sections or milestone acceptance criteria.
Also applies to: 91-92, 168-169, 404-405, 419-420
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/1-framework/3-tooling/cli/src/utils/formatters/migrations.ts` around lines 11 - 12, Several inline comments in migrations.ts reference transient artifacts (e.g., "spec OQ-4", "M6 sub-spec", "AC4/AC5"); update those comments to state the stable behavioral intent instead (for example: describe the exact rendering behavior, expected output format, or compatibility rationale). Locate the commented lines mentioned (the block referencing legacy string[] renderer, and the other occurrences around the indexes you saw) and replace the transient labels with clear, version-agnostic descriptions of what the code must do or why the special-case exists (e.g., "render byte-for-byte for legacy string[] renderer", "preserve verbatim rendering for mongodb-shell", "ensure backwards-compatible output formatting"), keeping the comment content focused on behavior rather than milestones or acceptance criteria.
🟡 Minor comments (17)
docs/architecture docs/patterns/spi-at-lowest-consuming-layer.md-27-44 (1)
27-44: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Add a language identifier to the fenced diagram block.**

This code fence is missing a language tag and triggers MD040 in markdownlint.

Suggested patch

````diff
-```
+```text
 ┌────────────────────────────────────────────────────┐
 │ tooling / orchestration layer (the caller)         │
 │ imports the SPI; calls it; knows nothing about     │
@@
 └────────────────────────────────────────────────────┘
````

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `docs/architecture docs/patterns/spi-at-lowest-consuming-layer.md` around lines 27 - 44, The fenced diagram block is missing a language tag causing markdownlint MD040; update the triple-backtick that opens the ASCII diagram to include a language identifier (e.g., `text`); locate the ASCII diagram block shown in the diff and add the language tag to the opening fence only.

docs/architecture docs/patterns/json-canonical-class-in-memory.md-28-42 (1)

28-42: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Add a language identifier to the fenced structure diagram.**

This fence currently triggers MD040 (fenced code blocks should specify a language).

Suggested patch

````diff
-```
+```text
 authoring          apply / consume
     │                    ▲
     ▼                    │
@@
 polymorphic        reviewable, replayable
 dispatch           across processes
````

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `docs/architecture docs/patterns/json-canonical-class-in-memory.md` around lines 28 - 42, The fenced diagram block uses a plain fence without a language tag (triggering MD040); update the opening fence to include a language identifier such as `text` for the ASCII-art diagram so the linter recognizes the language and MD040 is satisfied; locate the triple-backtick fence surrounding the ASCII diagram and add the language token to the opening fence (leave the closing fence unchanged).

docs/architecture docs/patterns/json-canonical-class-in-memory.md-49-53 (1)

49-53: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Reference links are accidentally code-formatted, so they won't render as links.**

The `Path` cells wrap the entire markdown link in backticks (`` `[...](...)` ``), which turns them into code literals.

Suggested patch

```diff
-| Migration `ops.json` (Mongo) | `[packages/3-mongo-target/1-mongo-target/src/core/op-factory-call.ts](../../../packages/3-mongo-target/1-mongo-target/src/core/op-factory-call.ts)` | `OpFactoryCall` classes serialise via `JSON.stringify` to `ops.json`; the runner rehydrates and walks the same class hierarchy at apply time. |
-| Migration `ops.json` (Postgres) | `[packages/3-targets/3-targets/postgres/src/core/migrations/op-factory-call.ts](../../../packages/3-targets/3-targets/postgres/src/core/migrations/op-factory-call.ts)` | Same shape on the SQL side; demonstrates the pattern is target-agnostic. |
-| Mongo wire commands | `[packages/2-mongo-family/6-transport/mongo-wire/src/wire-commands.ts](../../../packages/2-mongo-family/6-transport/mongo-wire/src/wire-commands.ts)` | Wire commands round-trip natively because MongoDB commands *are* JSON; the canonical example of the pattern's "JSON is the contract" property. |
+| Migration `ops.json` (Mongo) | [`packages/3-mongo-target/1-mongo-target/src/core/op-factory-call.ts`](../../../packages/3-mongo-target/1-mongo-target/src/core/op-factory-call.ts) | `OpFactoryCall` classes serialise via `JSON.stringify` to `ops.json`; the runner rehydrates and walks the same class hierarchy at apply time. |
+| Migration `ops.json` (Postgres) | [`packages/3-targets/3-targets/postgres/src/core/migrations/op-factory-call.ts`](../../../packages/3-targets/3-targets/postgres/src/core/migrations/op-factory-call.ts) | Same shape on the SQL side; demonstrates the pattern is target-agnostic. |
+| Mongo wire commands | [`packages/2-mongo-family/6-transport/mongo-wire/src/wire-commands.ts`](../../../packages/2-mongo-family/6-transport/mongo-wire/src/wire-commands.ts) | Wire commands round-trip natively because MongoDB commands *are* JSON; the canonical example of the pattern's "JSON is the contract" property. |
```

As per coding guidelines, "Keep docs current (READMEs, rules, links) (see `.cursor/rules/doc-maintenance.mdc`)."

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `docs/architecture docs/patterns/json-canonical-class-in-memory.md` around lines 49 - 53, The table has its Path cells formatted as code literals (backticked) so links won't render; edit the rows such as "Migration `ops.json` (Mongo)", "Migration `ops.json` (Postgres)" and "Mongo wire commands" and remove the surrounding backticks around the markdown links (e.g. change `[...](...)` wrapped with backticks to plain `[...](...)`) so the links render as clickable links while keeping the visible text and target unchanged.

docs/architecture docs/patterns/capability-gating.md-27-44 (1)

27-44: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Add a language identifier on the fenced example block.**

At Line 27, add a language tag (for example `ts`) to clear MD040.

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `docs/architecture docs/patterns/capability-gating.md` around lines 27 - 44, The fenced code block containing the example of capability objects and the consumption-site check (showing symbols like contract.capabilities, adapter.profile.capabilities, lowerWithLateral, lowerWithCorrelatedSubquery) needs a language tag to satisfy MD040; update the opening triple-backtick to include a language identifier such as `ts` so the block is marked as TypeScript.

docs/architecture docs/patterns/frozen-class-ast.md-28-53 (1)

28-53: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Specify the code-fence language to satisfy markdown linting.**

At Line 28, add a language token (`ts`) so MD040 passes.

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `docs/architecture docs/patterns/frozen-class-ast.md` around lines 28 - 53, The code fence at the top of the snippet is missing a language token which causes MD040; update the opening triple-backtick to include the TypeScript token (`ts`) so the fenced block containing the FooAstNode, FooLiteral, FooVisitor, accept and rewrite examples is treated as TypeScript for linting.

docs/architecture docs/patterns/package-layering.md-35-49 (1)

35-49: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Add a language identifier to the fenced block.**

At Line 35, the code fence is missing a language tag, which will keep `markdownlint` warning MD040 active.

Suggested patch

````diff
-```
+```text
 packages/
 ├── 1-framework/     # Framework domain (target-agnostic)
 │   ├── 1-core/      # Lowest layer; foundational types
@@
 ├── 3-mongo-target/  # Mongo target
 └── 3-extensions/    # Extension packs (pgvector, sql-orm-client, …)
````

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `docs/architecture docs/patterns/package-layering.md` around lines 35 - 49, The fenced code block that starts with the "packages/" tree is missing a language tag which triggers markdownlint MD040; update the opening triple-backtick for that block to include a language identifier (e.g., change it to `text`) so the snippet is treated as plain text; locate the block that begins with "packages/" and modify only the opening fence to add the language identifier.

docs/architecture docs/patterns/three-layer-polymorphic-ir.md-29-52 (1)

29-52: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Add a language tag to the fenced diagram block.**

Line 29 uses a plain fenced block; `markdownlint` (`MD040`) expects a language identifier, and this can break lint gates.

Suggested fix

````diff
-```
+```text
 ┌─────────────────────────────────────────────────────────────┐
 │ framework layer (target-agnostic)                           │
 ...
 └─────────────────────────────────────────────────────────────┘
````

As per coding guidelines, keep docs current and passing repository documentation standards.

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `docs/architecture docs/patterns/three-layer-polymorphic-ir.md` around lines 29 - 52, The fenced ASCII diagram block starting with the "framework layer (target-agnostic)" text is missing a language identifier which triggers markdownlint MD040; fix it by adding a language tag (e.g., `text`) to the opening triple-backtick fence for that diagram block.

examples/paradedb-demo/docker-compose.yaml-3-3 (1)
3-3: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Pin ParadeDB image to an immutable tag (or digest).**

Using `:latest` makes local setup non-reproducible and can break example behavior unexpectedly when upstream updates.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/paradedb-demo/docker-compose.yaml` at line 3, Replace the mutable image tag "paradedb/paradedb:latest" with an immutable reference to make the example reproducible; update the Docker Compose service image (the line containing image: paradedb/paradedb:latest) to a specific version tag like paradedb/paradedb:vX.Y.Z or a content-addressable digest form paradedb/paradedb@sha256:<digest>, and document or pin the chosen tag/digest in the example README so future runs use the exact same image.

packages/1-framework/3-tooling/migration/test/invariants.test.ts-62-66 (1)
62-66: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Remove transient sub-spec references from this inline comment.**

Line 66 references `sub-spec § 3`, which violates the repo's source-comment policy for transient artifacts. Reword this to describe behavior directly without spec-section pointers.

As per coding guidelines: "Source-code comments must not reference transient project artifacts ... and sub-spec section references (e.g., `§ 3.2`)."

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/1-framework/3-tooling/migration/test/invariants.test.ts` around lines 62 - 66, Update the inline comment in invariants.test.ts (the block that mentions Cipherstash's `installEqlBundle` and structural `create-*` ops) to remove the transient reference "sub-spec § 3 / cipherstash-migration.spec.md" and instead describe the behavior directly: state that those operations are the canonical examples of additive operations that include invariantIds for marker bookkeeping and replay tracking, and clarify the distinction between operationClass (policy gating) and invariantId (replay tracking).

packages/1-framework/3-tooling/cli/test/commands/migration-show.test.ts-335-338 (1)
335-338: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Remove transient milestone/task identifiers from the inline comment.**

Please avoid `T6.6 / AC8` references in source comments and keep only behavior-focused wording.

Proposed edit

```diff
-  // M6 (T6.6 / AC8): operationClass tags no longer inlined in the
+  // Operation-class tags are no longer inlined in the
   // human-readable line. Destructive ops still render a "(destructive)"
   // marker (replaces the old "[destructive]" tag); additive/widening/
   // mutative/data render bare.
```

As per coding guidelines: "Source-code comments must not reference transient project artifacts including ... milestone-task IDs ... [and] milestone-named acceptance criteria."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/1-framework/3-tooling/cli/test/commands/migration-show.test.ts` around lines 335 - 338, Remove the transient milestone/task identifiers "T6.6 / AC8" from the inline comment that begins with "M6 (T6.6 / AC8): operationClass tags no longer inlined..." in migration-show.test.ts and rewrite it to be behavior-focused (e.g., "Operation-class tags no longer inlined in the human-readable line. Destructive ops still render a '(destructive)' marker; additive/widening/mutative/data render bare."). Ensure only the transient IDs are removed and the comment still documents the expected behavior.

examples/paradedb-demo/prisma-next.config.ts-19-20 (1)
19-20: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Remove lint suppression and handle missing `DATABASE_URL` explicitly.**

This bypasses the lint policy and hides a clearer config error path. Prefer resolving the env var once and throwing a direct error if absent, then pass a non-null value to `connection`.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/paradedb-demo/prisma-next.config.ts` around lines 19 - 20, Remove the biome lint suppression and stop using the non-null assertion on process.env['DATABASE_URL']!; instead, read process.env['DATABASE_URL'] into a local constant (e.g., const databaseUrl = process.env['DATABASE_URL']), verify it is defined and throw a clear error if missing, then pass that validated non-null value into the connection field (replace the use of process.env['DATABASE_URL']! in the connection setting). Ensure the thrown error message clearly states that DATABASE_URL is required.

packages/1-framework/3-tooling/migration/test/materialise-extension-migration-package-if-missing.test.ts-42-42 (1)
42-42: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Remove transient acceptance-criteria IDs from the comment.**

Please reword this line to describe the invariant directly without `AC-7 / AM12` identifiers.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/1-framework/3-tooling/migration/test/materialise-extension-migration-package-if-missing.test.ts` at line 42, Update the test title string in the it(...) call so it no longer references transient IDs; replace "leaves on-disk content byte-identical when the dir already exists (AC-7 / AM12)" with a direct invariant description such as "leaves on-disk content byte-identical when the directory already exists" in the test defined by the it(...) invocation to remove the acceptance-criteria identifiers.

packages/1-framework/3-tooling/cli/test/output.migration-commands.test.ts-22-24 (1)
22-24: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Avoid transient milestone/spec references in test comments.**

Please rewrite these comments to describe behavior directly (without `M6`, `AC*`, or `sub-spec` references), per repo doc-maintenance rules.

As per coding guidelines: "Source-code comments must not reference transient project artifacts including milestone-task IDs, milestone acceptance criteria, and sub-spec references."
Also applies to: 57-57, 64-64
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/1-framework/3-tooling/cli/test/output.migration-commands.test.ts` around lines 22 - 24, The comment above the assertion expect(stripped).toContain('Next: prisma-next migration status') (and similar comments at the other noted locations) includes transient milestone/sub-spec references (e.g., "M6 sub-spec § Output shape § footer"); replace those comments with a concise description of the observable behavior being tested — for example: state that a no-op apply still shows the canonical next-step hint so the user knows where to verify state — and remove any mentions of milestone IDs, acceptance criteria tags, or sub-spec references; update the comment text near the assert in output.migration-commands.test.ts accordingly.

packages/3-extensions/cipherstash/DEVELOPING.md-57-58 (1)
57-58: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Use consistent `Arktype` capitalization in docs.**

Line 58 uses `arktype`; the rest of the repo/docs use `Arktype`. Aligning wording improves grep/search consistency.

✏️ Suggested doc fix

```diff
-- `RuntimeParameterizedCodecDescriptor<{ equality, freeTextSearch }>`
-  with arktype `paramsSchema` validated at the contract boundary.
+- `RuntimeParameterizedCodecDescriptor<{ equality, freeTextSearch }>`
+  with Arktype `paramsSchema` validated at the contract boundary.
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/3-extensions/cipherstash/DEVELOPING.md` around lines 57 - 58, Change the lowercase "arktype" to the project-standard "Arktype" in the docs where `RuntimeParameterizedCodecDescriptor<{ equality, freeTextSearch }>` and `paramsSchema` are described so the wording is consistent; update the phrase on the line that currently reads "arktype `paramsSchema`" to "Arktype `paramsSchema`" to match other docs and repo grep conventions.

examples/multi-extension-monorepo/test/multi-space.e2e.integration.test.ts-1-47 (1)
1-47: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Replace milestone/tracking IDs in comments with durable behavior wording.**

The block comment currently includes transient identifiers (`T4.4`, `TC-8`, `AC5`, `NFR6`). Please keep only stable behavior descriptions.

As per coding guidelines, "Source-code comments must not reference transient project artifacts including milestone-task IDs and milestone acceptance criteria codes."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/multi-extension-monorepo/test/multi-space.e2e.integration.test.ts` around lines 1 - 47, The top block comment in the test (the multi-extension monorepo E2E description and the "Layers of coverage" numbered list) contains transient milestone/tracking IDs (e.g., T4.4, TC-8, AC5, NFR6); remove those IDs and rewrite any reference to them as stable behavioral descriptions only, preserving the existing numbered coverage points and examples (the "Pinned per-space artefacts on disk", "Multi-space planning", "Multi-space apply", and "Order-independent" paragraphs) so the comment documents durable behavior rather than project task identifiers.

packages/1-framework/3-tooling/cli/test/output.db-update.test.ts-82-83 (1)
82-83: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Remove transient milestone/acceptance references from inline comments.**

These comments encode temporary tracking labels (`M6`, `T6.6`, `AC*`, `spec/OQ`) instead of durable behavior intent. Please rewrite them as plain behavior-focused notes.

As per coding guidelines, "Source-code comments must not reference transient project artifacts including milestone-task IDs and milestone acceptance criteria codes."
Also applies to: 198-202, 300-303, 385-390
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/1-framework/3-tooling/cli/test/output.db-update.test.ts` around lines 82 - 83, Replace transient project-tracking labels in inline comments (e.g. the comment starting "// M6 (T6.6 / AC8): inline operationClass tags removed; destructive ops keep a "(destructive)" marker on the same line.") with plain, behavior-focused notes that state the intent (for example: "Inline operationClass tags are removed; destructive operations retain a '(destructive)' marker on the same line."). Apply the same change to the other commented blocks referenced (lines ~198-202, ~300-303, ~385-390), removing milestone/task/AC/spec codes and keeping only durable descriptions of behavior or requirements.

packages/1-framework/3-tooling/cli/src/control-api/operations/migration-apply.ts-73-74 (1)
73-74: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Remove transient sub-spec references from source comments.**

Comments now include `Sub-spec § ...` references; these are transient artifacts and should be rewritten as stable behavior descriptions.

As per coding guidelines, source-code comments must not reference transient artifacts such as sub-spec section references.
Also applies to: 113-114
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/1-framework/3-tooling/cli/src/control-api/operations/migration-apply.ts` around lines 73 - 74, Remove the transient "Sub-spec § ..." reference in the comment containing "Sub-spec § `--ref <hash>` semantics under multi-space." and rewrite it as a stable, descriptive sentence that explains the actual behavior of the --ref option (e.g., that --ref accepts a migration hash to target a specific migration across multiple spaces and how conflicts are resolved), and make the same change for the other comment at the block referenced by lines 113-114; update the comment text near the migration-apply module (the comment containing "`--ref <hash>` semantics under multi-space") so it documents concrete behavior rather than a sub-spec pointer.
🧹 Nitpick comments (13)
.agents/rules/explicit-opt-in-over-diagnostics.mdc (1)
3-47: ⚡ Quick win

**Trim or de-scope this as an always-applied rule.**

Given `alwaysApply: true`, this is too long for an always-on rulecard. Keep a short always-on directive and move examples/rationale to a non-always rule (or docs), or set `alwaysApply: false` with scoped `globs` if this is domain-specific.

Proposed frontmatter-level adjustment
--- description: Prefer requiring explicit opt-in over emitting diagnostics on intentional user paths -alwaysApply: true +alwaysApply: false +globs: + - "**/*.{ts,tsx,js,md,mdc}" ---Based on learnings: "Always-apply rules must be short, globally relevant, and stable; rules applying only to specific areas (SQL, CLI, tests) should set
alwaysApply: falseand useglobsin frontmatter".🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In @.agents/rules/explicit-opt-in-over-diagnostics.mdc around lines 3 - 47, The rule is too long and domain-specific to remain always-on; update the frontmatter by either (A) trimming this card to a short, globally-relevant directive and moving examples/rationale into a non-always rule or external docs, or (B) change alwaysApply: true to alwaysApply: false and add targeted globs so it only runs for relevant files (e.g., SQL/CLI/test paths); locate the top-level frontmatter key alwaysApply and the rule body titled "Explicit opt-in over noisy diagnostics" to make the change and move or reduce the examples accordingly.

.agents/skills/drive-create-spec/SKILL.md (1)
104-104: 💤 Low value — Consider consistent spelling: use "implementer" instead of "implementor".
Lines 104 and 112 use "implementor". The rest of the codebase consistently uses "implementer" (found in projects/oss-setup/spec.md and projects/extension-contract-spaces/spec.md). Standardizing on "implementer" aligns with the established codebase convention and modern technical writing usage.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In @.agents/skills/drive-create-spec/SKILL.md at line 104, Replace the nonstandard spelling "implementor" with the codebase-preferred "implementer" wherever it appears in the SKILL.md requirement wording (the requirement sentences that currently read "implementor"). Update both instances reported (the two requirement lines) and run a quick grep/search for any other occurrences to ensure consistency with projects/oss-setup/spec.md and projects/extension-contract-spaces/spec.md; no behavioral changes required, just textual replacement.

packages/2-sql/2-authoring/contract-psl/src/psl-attribute-parsing.ts (1)
209-243: 💤 Low value — Escape handling covers common cases.

The escape detection logic (line 219: `body[index - 1] !== '\\'`) handles simple escaped quotes but wouldn't correctly handle edge cases like escaped backslashes followed by quotes (`"value\\"`). For V1 PSL string options this is acceptable, as the diagnostic message indicates this is a limited surface. The actual quote parsing/unescaping is delegated to `parseQuotedStringLiteral`, which likely handles these cases properly.

💡 Future consideration for more robust escape handling

For completeness, counting consecutive backslashes would handle all escape sequences:

```diff
 function splitObjectLiteralEntries(body: string): readonly string[] {
   // ... existing setup ...
   for (let index = 0; index < body.length; index += 1) {
     const ch = body[index] ?? '';
     if (quote) {
-      if (ch === quote && body[index - 1] !== '\\') {
+      // Count consecutive backslashes before quote
+      let backslashCount = 0;
+      for (let j = index - 1; j >= 0 && body[j] === '\\'; j--) backslashCount++;
+      if (ch === quote && backslashCount % 2 === 0) {
         quote = null;
       }
```

Not needed for V1, but worth considering if PSL object literals expand beyond simple string options.
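The parity rule above can be seen in isolation. The sketch below uses a hypothetical standalone helper name, not the function under review:

```typescript
// Hypothetical helper illustrating the backslash-parity rule: a quote at
// `index` terminates the string only when it is preceded by an even number
// of consecutive backslashes (zero, or an escaped backslash pair, etc.).
function isUnescapedQuote(body: string, index: number): boolean {
  let backslashCount = 0;
  for (let j = index - 1; j >= 0 && body[j] === '\\'; j -= 1) {
    backslashCount += 1;
  }
  return backslashCount % 2 === 0;
}
```

With this, `"value\"` keeps the string open (one backslash, odd) while `"value\\"` closes it (two backslashes, even) — the case the naive single-character look-behind gets wrong.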
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/2-sql/2-authoring/contract-psl/src/psl-attribute-parsing.ts` around lines 209 - 243, splitObjectLiteralEntries' quote-termination logic incorrectly treats a quote as unescaped if the immediately preceding char is not a single backslash; update the check inside the if (quote) block so you count consecutive backslashes immediately before body[index] and only treat the quote as closing when that count is even (i.e., not escaped). Modify the logic in splitObjectLiteralEntries (the quote handling around body[index - 1] !== '\\') to walk backward from index-1, count backslashes, and use count % 2 === 0 to decide whether to clear quote, leaving the rest of the function intact.

packages/2-sql/4-lanes/relational-core/src/plan.ts (1)
24-40: ⚡ Quick win — Remove manual line wrapping from JSDoc.

The JSDoc comment spans multiple paragraphs with manual ~80-column line breaks. Per coding guidelines, JSDoc prose should avoid manual wrapping.

📝 Proposed fix

```diff
 /**
- * Wraps an `AnyQueryAst` (typically a `RawSqlExpr` constructed package-internally
- * by an extension's migration factory) in a fully-populated `SqlQueryPlan`
- * whose `meta` is sourced from the supplied contract.
+ * Wraps an `AnyQueryAst` (typically a `RawSqlExpr` constructed package-internally by an extension's migration factory) in a fully-populated `SqlQueryPlan` whose `meta` is sourced from the supplied contract.
  *
- * Centralising the envelope here means consumers (cipherstash migration
- * factories today; future raw-sql callers) cannot drift on `storageHash` /
- * `target` / `targetFamily`, which would otherwise surface as a subtle
- * `assertContractMatches` failure inside `dataTransform`. `params` defaults
- * to `[]` because parameters embedded in the AST as `ParamRef`s are resolved
- * at lowering time (`encodeParams` walks `plan.ast.collectParamRefs()`),
- * not at plan-construction time.
+ * Centralising the envelope here means consumers (cipherstash migration factories today; future raw-sql callers) cannot drift on `storageHash` / `target` / `targetFamily`, which would otherwise surface as a subtle `assertContractMatches` failure inside `dataTransform`. `params` defaults to `[]` because parameters embedded in the AST as `ParamRef`s are resolved at lowering time (`encodeParams` walks `plan.ast.collectParamRefs()`), not at plan-construction time.
  *
- * The default `laneId` of `'raw'` reflects raw-SQL plans' standard lane tag;
- * callers (e.g. a future `sql-raw-factory`) may override to differentiate
- * the plan's provenance.
+ * The default `laneId` of `'raw'` reflects raw-SQL plans' standard lane tag; callers (e.g. a future `sql-raw-factory`) may override to differentiate the plan's provenance.
  */
```
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/2-sql/4-lanes/relational-core/src/plan.ts` around lines 24 - 40, The JSDoc above the SqlQueryPlan wrapper contains manual ~80-column line breaks and multi-paragraph wrapped text; reflow the comment into normal, unwrapped prose (single or multiple paragraphs as appropriate) without manual line breaks, preserving all original content and references to AnyQueryAst, RawSqlExpr, SqlQueryPlan, params, storageHash/target/targetFamily, encodeParams, and the default laneId 'raw' so the documentation reads naturally and adheres to the project's JSDoc guidelines.

packages/2-sql/4-lanes/relational-core/src/ast/adapter-types.ts (1)
12-14: ⚡ Quick win — Remove manual line wrapping from JSDoc.

The JSDoc comment uses manual ~80-column line wraps. Per coding guidelines, JSDoc prose should avoid manual line breaks and allow the formatter/viewer to handle wrapping.

📝 Proposed fix

```diff
-/**
- * Outcome of an adapter's marker read. `no-table` means the marker storage itself is absent (e.g. attaching to a database that was never `db init`'d); `absent` means the storage exists but holds no row for the requested space; `present` carries the parsed record. Callers distinguish these so runtime telemetry and `requireMarker` policy can react to each case appropriately.
- */
+/** Outcome of an adapter's marker read. `no-table` means the marker storage itself is absent (e.g. attaching to a database that was never `db init`'d); `absent` means the storage exists but holds no row for the requested space; `present` carries the parsed record. Callers distinguish these so runtime telemetry and `requireMarker` policy can react to each case appropriately. */
```
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/2-sql/4-lanes/relational-core/src/ast/adapter-types.ts` around lines 12 - 14, The JSDoc above the adapter marker read outcome is manually wrapped to ~80 columns; remove the manual line breaks so the comment becomes a single unbroken paragraph (no hard line wraps) describing the three statuses (`no-table`, `absent`, `present`) and their semantics; update the comment block located above the adapter marker read type/definition in adapter-types.ts (the JSDoc that begins "Outcome of an adapter's marker read") so formatting is natural prose with no manual line wrapping.

packages/2-sql/5-runtime/src/codecs/alias-resolver.ts (1)
24-26: ⚡ Quick win — Remove manual line wrapping from inline comment.

The inline comment uses manual line breaks. Per coding guidelines, comments should avoid manual ~80-column wrapping.

📝 Proposed fix

```diff
 } else if (ast.kind === 'raw-sql') {
-  // Raw-SQL ASTs do not bind a single primary table — alias resolution
-  // is driven by inline `IdentifierRef`s the caller embedded directly.
+  // Raw-SQL ASTs do not bind a single primary table — alias resolution is driven by inline `IdentifierRef`s the caller embedded directly.
 } else {
```
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/2-sql/5-runtime/src/codecs/alias-resolver.ts` around lines 24 - 26, The inline comment inside the else if (ast.kind === 'raw-sql') branch contains manual line breaks; replace it with a single unwrapped comment line (or let the editor handle wrapping) so it reads continuously (e.g., "// Raw-SQL ASTs do not bind a single primary table — alias resolution is driven by inline `IdentifierRef`s the caller embedded directly.") while keeping the same wording and meaning.

packages/2-sql/2-authoring/contract-ts/src/contract-dsl.ts (1)
699-706: ⚡ Quick win — Use `ifDefined()` for new conditional spreads in `index()`.

Lines 702–706 use inline spread conditionals for `name`, `type`, and `options`; this should follow the project's `ifDefined()` pattern for consistency.

♻️ Suggested refactor

```diff
   return {
     kind: 'index',
     fields: normalizeFieldRefInput(fields),
-    ...(options?.name !== undefined ? { name: options.name } : {}),
-    ...(options?.type !== undefined ? { type: options.type } : {}),
-    ...(options?.options !== undefined
-      ? { options: options.options as Record<string, unknown> }
-      : {}),
+    ...ifDefined('name', options?.name),
+    ...ifDefined('type', options?.type),
+    ...ifDefined('options', options?.options as Record<string, unknown>),
   };
```

As per coding guidelines, "`{packages,examples,test}/**/*.{ts,tsx}`: Use `ifDefined()` from `@prisma-next/utils/defined` for conditional object spreads instead of inline conditional spread patterns."

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/2-sql/2-authoring/contract-ts/src/contract-dsl.ts` around lines 699 - 706, The index() return object uses inline conditional spreads for name, type, and options; replace those with calls to ifDefined(...) from '@prisma-next/utils/defined' to match project pattern. Import ifDefined at the top if missing, then change the object to include ifDefined('name', options?.name), ifDefined('type', options?.type), and ifDefined('options', options?.options as Record<string, unknown>) while keeping the existing kind and fields (normalizeFieldRefInput(fields)) intact. Ensure types remain the same and remove the inline ...(condition ? { key: value } : {}) spreads.

packages/1-framework/3-tooling/cli/test/utils/extension-pack-inputs.test.ts (1)
25-31: 💤 Low value — Prefer double-cast pattern for test stub objects.

The empty-cast stubs use direct casts (`{} as MigrationMetadata`), but the test-mocking guideline recommends double casts (`as unknown as X`) for clarity and safety at mock boundaries.

♻️ Refactor to use double-cast pattern

```diff
-const STUB_METADATA = {} as MigrationMetadata;
-const STUB_OPS = {} as MigrationOps;
+const STUB_METADATA = {} as unknown as MigrationMetadata;
+const STUB_OPS = {} as unknown as MigrationOps;
 const migrationPkg: DescriptorMigrationPackage = {
```

As per coding guidelines, use double casts (`as unknown as X`) for mocks and dynamic proxies in tests instead of direct casts.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/1-framework/3-tooling/cli/test/utils/extension-pack-inputs.test.ts` around lines 25 - 31, The test uses direct casts for stub objects (STUB_METADATA, STUB_OPS) which should be replaced with the double-cast pattern; update the declarations so the empty object is cast as unknown first and then to the target type (e.g., STUB_METADATA and STUB_OPS should be created as {} as unknown as MigrationMetadata and {} as unknown as MigrationOps) and ensure any uses in migrationPkg remain unchanged.

packages/1-framework/3-tooling/cli/src/control-api/operations/apply-aggregate.ts (1)
206-211: ⚡ Quick win — Drop transient spec references in implementation comments.

Line 210 references `M6 sub-spec § Required changes 1`; please replace it with a behavior-focused comment only. As per coding guidelines, source-code comments must not reference transient project artifacts such as sub-spec section pointers.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/1-framework/3-tooling/cli/src/control-api/operations/apply-aggregate.ts` around lines 206 - 211, Update the implementation comment above the function that projects planner per-space resolutions into AggregatePerSpaceExecutionEntry[] (the comment that references "M6 sub-spec § Required changes 1") to remove the transient sub-spec pointer and instead describe the intended behavior: explain that for apply-mode (includeMarkers === true) the function should include marker-adjusted entries per space, and for non-apply modes it should omit markers and only emit executed resolution data; mention the relevant symbols AggregatePerSpaceExecutionEntry and includeMarkers to make the behavioral intent clear.

packages/1-framework/3-tooling/cli/src/control-api/operations/db-apply-aggregate.ts (1)
130-134: ⚡ Quick win — Please replace transient planning references in comments with behavior-only wording.
Line 132 and Line 210 currently reference milestone/spec artifacts. Keep these comments implementation-focused so they remain stable as planning docs evolve.
As per coding guidelines, source-code comments must avoid transient artifacts like milestone references and sub-spec section pointers.
Also applies to: 206-210
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/1-framework/3-tooling/cli/src/control-api/operations/db-apply-aggregate.ts` around lines 130 - 134, Replace transient planning/spec references in the comments around the orphan-marker pre-flight logic in db-apply-aggregate.ts (the block mentioning "Orphan-marker pre-flight", "M2 marker-check", and the call runContractSpaceVerifierMarkerCheck and references to `db init` / `db update`). Reword those lines (including the ones around 206-210 and specifically the prior mention at line ~132) to behavior-only wording: describe what the code does (e.g., "refuse to apply when a marker row exists for a space that is not declared in the aggregate; this mirrors the marker check performed by runContractSpaceVerifierMarkerCheck and runs before planning to prevent applying with orphaned markers") and remove any milestone/spec section references or transient planning artifacts.

packages/1-framework/3-tooling/cli/src/control-api/types.ts (1)
319-323: ⚡ Quick win — Remove transient planning references from source comments.

Line 319, Line 540, and Line 628 include `sub-spec §` / `F4` / `F7`-style references. Please rewrite these comments to describe behavior directly without milestone/spec artifact pointers. As per coding guidelines, source-code comments must not reference transient artifacts such as sub-spec sections or milestone acceptance identifiers.
Also applies to: 540-541, 628-633
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/1-framework/3-tooling/cli/src/control-api/types.ts` around lines 319 - 323, Remove transient milestone/spec references from the inline comment that currently begins "M6 sub-spec § Output shape contract — every space involved in a run is observable in the success summary, including its post-apply marker, so the per-space invariant is visible to the user (closing F4 / F7 from `e2e-verification.md`)." and the other comment blocks that mention "sub-spec §" / "F4/F7"; instead rewrite each comment to state the behavior directly (for example: "Output contract: the success summary must include every space involved in a run along with its post-apply marker so users can observe the per-space invariant") and apply the same treatment to the similar comments that reference sub-specs or milestone IDs elsewhere in this file; keep the text focused on the observed behavior and acceptance criteria without referencing spec sections or milestone identifiers.

packages/1-framework/3-tooling/migration/src/aggregate/strategies/graph-walk.ts (1)
59-62: ⚡ Quick win — Use `ifDefined()` for conditional spread here.

Replace the inline conditional spread with the shared `ifDefined()` helper for consistency with repo conventions. As per coding guidelines: "Use `ifDefined()` from `@prisma-next/utils/defined` for conditional object spreads instead of inline conditional spread patterns."

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/1-framework/3-tooling/migration/src/aggregate/strategies/graph-walk.ts` around lines 59 - 62, The conditional spread when calling findPathWithDecision should use the shared helper ifDefined from `@prisma-next/utils/defined` rather than an inline ternary; update the call that constructs the options object (the call assigning outcome via findPathWithDecision(graph, fromHash, member.headRef.hash, { required, ...(refName !== undefined ? { refName } : {}), })) to import and use ifDefined to include refName only when defined (e.g., { required, ...ifDefined('refName', refName) } pattern), keeping the other parameters (graph, fromHash, member.headRef.hash, required) unchanged and ensuring the ifDefined helper is imported at the top of the file.

packages/1-framework/3-tooling/cli/src/control-api/operations/migration-apply.ts (1)
209-209: ⚡ Quick win — Use `ifDefined()` instead of ternary conditional spreads for optional fields.

This file mixes both patterns. Please switch these conditional spreads to `ifDefined(...)` for consistency with the repo standard.

Proposed refactor

```diff
-    ...(isAppMember && refName !== undefined ? { refName } : {}),
+    ...ifDefined('refName', isAppMember ? refName : undefined),
-    ...(isAppMember && refName !== undefined ? { refName } : {}),
+    ...ifDefined('refName', isAppMember ? refName : undefined),
-    ...(appPlan.pathDecision.refName !== undefined
-      ? { refName: appPlan.pathDecision.refName }
-      : {}),
+    ...ifDefined('refName', appPlan.pathDecision.refName),
-    ...(pathDecision !== undefined ? { pathDecision } : {}),
+    ...ifDefined('pathDecision', pathDecision),
```

Based on learnings, this repo prefers `ifDefined` for conditional object spreads instead of inline ternary spread patterns.

Also applies to: 234-234, 429-431, 450-450
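For readers unfamiliar with the helper, here is a minimal sketch of the `ifDefined()` shape these refactors assume — the real export in `@prisma-next/utils/defined` may differ in typing details:

```typescript
// Minimal sketch of the ifDefined() conditional-spread helper assumed by the
// refactors above; the actual implementation in @prisma-next/utils/defined
// may differ. The key appears in the result only when the value is defined.
function ifDefined<K extends string, V>(
  key: K,
  value: V | undefined,
): { [P in K]?: V } {
  return value === undefined ? {} : ({ [key]: value } as { [P in K]?: V });
}

// Spread sites stay flat, and undefined never leaks in as an explicit key.
const withRefName = { required: true, ...ifDefined('refName', 'main') };
const withoutRefName = { required: true, ...ifDefined('refName', undefined) };
```

The design point is that `{ refName: undefined }` and "no `refName` key at all" are different objects under `in` checks and `Object.keys`, which is exactly what the inline ternary spreads were working around.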
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/1-framework/3-tooling/cli/src/control-api/operations/migration-apply.ts` at line 209, Replace the inline ternary spread patterns like ...(isAppMember && refName !== undefined ? { refName } : {}) with the repository-standard ifDefined(...) helper; e.g. use ifDefined(refName, v => ({ refName: v })) (or ifDefined(isAppMember && refName, v => ({ refName: v })) if the presence depends on a separate boolean) and apply the same replacement for the other conditional spreads mentioned (the similar ternary spreads at the other occurrences).
> The pattern: declare optional or target-varying features as **namespaced capabilities** in the contract or on the adapter profile. Verify them against the live database when relevant. Gate at every consumption site — the gate names a capability, never a target. When a capability is absent, the framework degrades, falls back, or refuses honestly; it never silently assumes.
Reframe capability ownership: adapter-reported; contract declares requirements.
This section currently presents capabilities as defined on the contract (`contract.capabilities`), which conflicts with the documented ownership model. Please reword and adjust the example to show contract requirements vs adapter-reported offered capabilities.
As per coding guidelines: "Describe capabilities as adapter-reported and negotiated at connect time" and "Frame contract capabilities as requirements, not definitions".
Also applies to: 27-37
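The requested ownership split can be sketched as a connect-time check. All names below are hypothetical illustrations, not the framework's actual API:

```typescript
// Illustrative connect-time negotiation: the contract *requires* capability
// names; the adapter *offers* them. Names here are hypothetical.
interface ContractRequirements {
  readonly requiredCapabilities: readonly string[];
}

interface AdapterProfile {
  readonly offeredCapabilities: readonly string[];
}

// Returns the required capabilities the adapter does not offer; a real
// framework would refuse, degrade, or fall back when this is non-empty,
// rather than silently assuming the capability exists.
function missingCapabilities(
  contract: ContractRequirements,
  adapter: AdapterProfile,
): string[] {
  const offered = new Set(adapter.offeredCapabilities);
  return contract.requiredCapabilities.filter((name) => !offered.has(name));
}

const contract: ContractRequirements = {
  requiredCapabilities: ['pg.namespaces', 'pg.enums'],
};
const adapter: AdapterProfile = { offeredCapabilities: ['pg.namespaces'] };
const missing = missingCapabilities(contract, adapter);
```

Note the gate consumes a capability name, never a target identifier — which is the invariant the quoted pattern paragraph is protecting.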
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@docs/architecture` docs/patterns/capability-gating.md around lines 10 - 11,
Update the wording and example so capabilities are presented as adapter-reported
(offered) and the contract only declares required capabilities; specifically
replace usages that treat contract.capabilities as definitions with phrasing
that the contract lists required capability names while adapters advertise
offered capabilities (e.g., adapter.capabilities or
adapter.profile.capabilities) and show a negotiation/connect-time check that
validates adapter-offered capabilities against contract requirements; also apply
the same reframing to the later example/use around the other occurrences
currently mirroring the incorrect ownership model.
```
@@ -0,0 +1,11 @@
// @ts-nocheck
```
Remove @ts-nocheck from declaration output.
This suppresses type safety entirely and violates repo policy for TS sources. If this is a placeholder, keep it type-checkable with explicit minimal declarations instead of globally disabling checks.
As per coding guidelines: `**/*.{ts,tsx}`: Never use `@ts-nocheck`.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@examples/cipherstash-integration/migrations/cipherstash/contract.d.ts` at
line 1, Remove the top-level "// `@ts-nocheck`" in contract.d.ts and make the file
type-checkable by adding explicit minimal declarations for the exported
migration/contract types instead of disabling checks; specifically replace the
nocheck with concrete exported declarations (e.g., export interface/ type for
the contract/migration shapes or declare module exports used elsewhere) so every
exported symbol is typed and the file passes tsc without suppressing
diagnostics.
```ts
async function main() {
  const { databaseUrl } = loadAppConfig();
  const runtime = await db.connect({ url: databaseUrl });
```
Validate command/arguments before opening the DB connection.
Right now, invalid/unknown command paths still attempt db.connect(...) first. That makes usage/error flows fail on DB/env issues even when no query should run. Parse/validate first, then connect only for executable commands.
Also applies to: 115-120
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@examples/paradedb-demo/src/main.ts` around lines 50 - 53, Validate and parse
the CLI command/arguments before opening the DB connection: in main, call your
argument parsing/validation routine (e.g., parseArgs/validateCommand) and check
that the requested command actually needs DB access and that arguments are
valid; only then call loadAppConfig() and await db.connect({ url: databaseUrl })
for executable DB commands. Move any error/usage reporting for unknown or no-op
commands to run before invoking db.connect so invalid commands never trigger a
database/env connection attempt (apply same change for the other occurrence
around the commands handling near the end of main).
```ts
import { db } from '../prisma/db';
import { ItemCollection } from './collections';

const context = db.context as ExecutionContext<Contract>;
```
🧩 Analysis chain
🏁 Scripts executed:

```shell
cat -n examples/paradedb-demo/src/orm-client/client.ts | head -30
rg -A 5 -B 5 "db.context" examples/paradedb-demo/src/orm-client/client.ts
rg -r "ExecutionContext" examples/paradedb-demo/src/orm-client/ -A 2 -B 2
fd "*.ts" examples/paradedb-demo/src/orm-client/ | xargs cat -n
rg "db\." examples/paradedb-demo/src/orm-client/client.ts -A 2 -B 2
fd "contract" examples/paradedb-demo/src/ --type f
rg "Contract" examples/paradedb-demo/src/orm-client/client.ts -B 5 -A 5
find examples/paradedb-demo/src/prisma -name "*.ts" -type f
cat examples/paradedb-demo/src/prisma/db.ts
rg "context" examples/paradedb-demo/src/prisma/db.ts -B 3 -A 3
find . -path "*/packages/*postgres*" -name "*.ts" -type f | grep -i "runtime\|index" | head -20
rg "function postgres" --type ts -B 5 -A 10
rg "export.*postgres" packages/sql-runtimes/postgres/src -A 5
rg "type PostgresClient" packages/3-extensions/postgres/src/runtime/postgres.ts -B 2 -A 5
rg "interface PostgresClient" packages/3-extensions/postgres/src -B 2 -A 5
rg "PostgresClient" packages/3-extensions/postgres/src/exports/runtime.ts -B 3 -A 3
rg "context" packages/3-extensions/postgres/src/runtime/postgres.ts -B 5 -A 5
```
Remove unnecessary blind cast to ExecutionContext<Contract>.
The db.context property is already typed as ExecutionContext<Contract> by the PostgresClient<Contract> interface. The cast serves no purpose and should be replaced with an explicit type annotation:
```diff
-const context = db.context as ExecutionContext<Contract>;
+const context: ExecutionContext<Contract> = db.context;
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@examples/paradedb-demo/src/orm-client/client.ts` at line 8, Remove the
unnecessary type cast on db.context: replace the blind cast "db.context as
ExecutionContext<Contract>" by declaring a properly typed variable (e.g., const
context: ExecutionContext<Contract> = db.context) or simply use db.context
directly, since PostgresClient<Contract> already types db.context as
ExecutionContext<Contract>; update the reference to the symbols context,
db.context, ExecutionContext, Contract, and PostgresClient accordingly.
```ts
if (mw.beforeExecute) {
  await mw.beforeExecute(exec, ctx);
  checkAborted(ctx, 'beforeExecute');
  // The framework only forwards the mutator the caller supplied; a
  // pass-through `undefined` for non-mutating families is safe — the
  // base `RuntimeMiddleware` declares the third parameter, and
  // existing `(plan, ctx)` bodies that ignore it stay unchanged.
  // The cast below is the single point at which the framework's
  // generic mutator slot meets the (possibly absent) caller value;
  // `runWithMiddleware` cannot synthesize a TMutator instance.
  const work = mw.beforeExecute(exec, ctx, paramsMutator as TMutator);
  if (work !== undefined) {
    await raceAgainstAbort(Promise.resolve(work), ctx.signal, 'beforeExecute');
  }
```
Make onRow and afterExecute abort-aware too.
Line 96/106 adds cooperative cancellation for beforeExecute, but Line 119 and Line 147 still await middleware hooks directly. That leaves abort responsiveness inconsistent across middleware phases and can stall cancellation until hook completion.
💡 Suggested parity fix
for await (const row of rowSource as AsyncIterable<Row> | Iterable<Row>) {
if (source === 'driver') {
for (const mw of middleware) {
if (mw.onRow) {
- await mw.onRow(row as Record<string, unknown>, exec, ctx);
+ checkAborted(ctx, 'onRow');
+ await raceAgainstAbort(
+ Promise.resolve(mw.onRow(row as Record<string, unknown>, exec, ctx)),
+ ctx.signal,
+ 'onRow',
+ );
}
}
}
@@
const latencyMs = Date.now() - startedAt;
for (const mw of middleware) {
if (mw.afterExecute) {
- await mw.afterExecute(exec, { rowCount, latencyMs, completed, source }, ctx);
+ checkAborted(ctx, 'afterExecute');
+ await raceAgainstAbort(
+ Promise.resolve(mw.afterExecute(exec, { rowCount, latencyMs, completed, source }, ctx)),
+ ctx.signal,
+ 'afterExecute',
+ );
}
 }

Also applies to: 117-121, 145-148
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In
`@packages/1-framework/1-core/framework-components/src/execution/run-with-middleware.ts`
around lines 95 - 107, The middleware cancellation handling is inconsistent:
only beforeExecute is wrapped with checkAborted and raceAgainstAbort; make onRow
and afterExecute abort-aware the same way. In runWithMiddleware, call
checkAborted(ctx, 'onRow')/checkAborted(ctx, 'afterExecute') before invoking
mw.onRow or mw.afterExecute, and if the hook returns a value or a promise pass
it through raceAgainstAbort(Promise.resolve(...), ctx.signal,
'onRow'/'afterExecute') so these phases are cancelled consistently (use the
existing symbols mw.onRow, mw.afterExecute, raceAgainstAbort, checkAborted,
ctx.signal, and paramsMutator where needed).
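For context, the abort helpers referenced above could plausibly look like this. A minimal sketch under assumptions: only the names `checkAborted` and `raceAgainstAbort` come from the diff; the bodies, the `ctx` shape, and `demo` are illustrative.

```typescript
// Illustrative shapes for the helpers named in the review (bodies assumed).
function checkAborted(ctx: { signal?: AbortSignal }, phase: string): void {
  if (ctx.signal?.aborted) {
    throw new Error(`Aborted before ${phase}`);
  }
}

// Settle as soon as either the hook finishes or the signal fires, so a slow
// hook can no longer stall cancellation for a whole phase.
function raceAgainstAbort<T>(
  work: Promise<T>,
  signal: AbortSignal | undefined,
  phase: string,
): Promise<T> {
  if (!signal) return work;
  return new Promise<T>((resolve, reject) => {
    const onAbort = () => reject(new Error(`Aborted during ${phase}`));
    if (signal.aborted) return onAbort();
    signal.addEventListener('abort', onAbort, { once: true });
    work.then(
      (value) => {
        signal.removeEventListener('abort', onAbort);
        resolve(value);
      },
      (err) => {
        signal.removeEventListener('abort', onAbort);
        reject(err);
      },
    );
  });
}

// Usage: a hook that never settles is cut off as soon as the signal fires.
async function demo(): Promise<string> {
  const ac = new AbortController();
  const hungHook = new Promise<string>(() => {}); // hook that never resolves
  setTimeout(() => ac.abort(), 10);
  try {
    return await raceAgainstAbort(hungHook, ac.signal, 'onRow');
  } catch (e) {
    return (e as Error).message;
  }
}
```

The same wrapper applied uniformly to `beforeExecute`, `onRow`, and `afterExecute` is what gives the parity the comment asks for.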
const runnerResult = await (
  runner as MultiSpaceCapableRunner<TFamilyId, TTargetId>
).executeAcrossSpaces({ driver, perSpaceOptions });

if (!runnerResult.ok) {
  onProgress?.({ action, kind: 'spanEnd', spanId: APPLY_SPAN_ID, outcome: 'error' });
  return notOk({
    summary: runnerResult.failure.summary,
    ...ifDefined('why', runnerResult.failure.why),
    meta: {
      ...(runnerResult.failure.meta ?? {}),
      failingSpace: runnerResult.failure.failingSpace,
    },
  });
}
onProgress?.({ action, kind: 'spanEnd', spanId: APPLY_SPAN_ID, outcome: 'ok' });
Ensure apply span is always closed when runner throws.
If executeAcrossSpaces() throws, lines 174/184 are skipped and the progress stream is left with a dangling apply span.
Suggested fix
- const runnerResult = await (
- runner as MultiSpaceCapableRunner<TFamilyId, TTargetId>
- ).executeAcrossSpaces({ driver, perSpaceOptions });
+ let runnerResult;
+ try {
+ runnerResult = await (
+ runner as MultiSpaceCapableRunner<TFamilyId, TTargetId>
+ ).executeAcrossSpaces({ driver, perSpaceOptions });
+ } catch (error) {
+ onProgress?.({ action, kind: 'spanEnd', spanId: APPLY_SPAN_ID, outcome: 'error' });
+ throw error;
+ }

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In
`@packages/1-framework/3-tooling/cli/src/control-api/operations/apply-aggregate.ts`
around lines 169 - 185, The apply span (APPLY_SPAN_ID) isn't closed if
runner.executeAcrossSpaces throws — wrap the await (runner.executeAcrossSpaces /
executeAcrossSpaces call on the MultiSpaceCapableRunner) in try/catch/finally:
in finally always call onProgress?.({ action, kind: 'spanEnd', spanId:
APPLY_SPAN_ID, outcome: ... }) and set outcome to 'ok' when runnerResult.ok is
true, otherwise 'error'; in the catch convert the thrown error into a notOk
failure (include error.message/summary and any meta like failingSpace if
available) and return it so the function always closes the span before
returning.
if (await directoryExists(pkgDir)) {
  return { written: false };
}
await materialiseMigrationPackage(targetDir, pkg);
return { written: true };
if-missing flow is race-prone and can clobber existing paths.
Current check-then-write is non-atomic: if pkgDir appears between directoryExists() and materialiseMigrationPackage(), the later rm(..., force: true) can delete and rewrite it. Same risk applies to pre-existing non-directory paths. Please switch to an atomic create-first approach (mkdir(pkgDir) and handle EEXIST) so existing paths are never removed in this code path.
Also applies to: 160-167
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@packages/1-framework/3-tooling/migration/src/io.ts` around lines 153 - 157,
The current check-then-write flow around directoryExists(pkgDir) +
materialiseMigrationPackage(targetDir, pkg) is race-prone and may clobber
existing paths; change it to an atomic create-first approach by attempting to
create the package directory with mkdir(pkgDir) and treating EEXIST as "already
exists" (return { written: false }) while only proceeding to
materialiseMigrationPackage when mkdir succeeds, and avoid calling rm(...,
force: true) on an existing path; apply the same pattern to the other identical
block that currently performs the directoryExists check (the second instance
that uses pkgDir/targetDir/pkg).
- **Type Generation**: Generates TypeScript type definitions for SQL contracts
  - `generateContractTypes()`: Generates `contract.d.ts` file content (receives separate `codecTypeImports` and `operationTypeImports` arrays)
  - `generateContractTypes()`: Generates `contract.d.ts` file content (receives a `codecTypeImports` array)
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Verify generateContractTypes signature matches documentation
# Search for the function implementation
ast-grep --pattern 'generateContractTypes($$$) { $$$ }'
# Also search for the function definition/interface
rg -nP --type=ts -C5 'generateContractTypes\s*[:(]' -g '!**/node_modules/**' -g '!**/*.md'

Repository: prisma/prisma-next
Length of output: 1695
🏁 Script executed:
#!/bin/bash
# Search for the hook interface or type definition
rg -nP --type=ts 'generateContractTypes' --max-count=20 -B3 -A3 -g '!**/node_modules/**' -g '!**/*.md' -g '!**/test/**'

Repository: prisma/prisma-next
Length of output: 44
🏁 Script executed:
#!/bin/bash
# Search more broadly for generateContractTypes
rg -nP --type=ts 'generateContractTypes' -B5 -A5 | head -100
# Also search for hook interface definitions
rg -nP --type=ts 'interface.*Hook|type.*Hook' packages/2-sql/3-tooling/emitter -B2 -A8

Repository: prisma/prisma-next
Length of output: 3152
🏁 Script executed:
#!/bin/bash
# Find generateContractDts implementation
rg -nP 'generateContractDts|function.*generateContractDts' --type=ts -B2 -A10 | head -60
# Search for hook interface definitions in emitter packages
find packages/2-sql/3-tooling/emitter packages/2-mongo-family/3-tooling/emitter -name '*.ts' -type f ! -path '*/node_modules/*' | head -20

Repository: prisma/prisma-next
Length of output: 8379
🏁 Script executed:
#!/bin/bash
# Check the emitter package source files
cat packages/2-sql/3-tooling/emitter/src/index.ts | head -100
# Also look for the function signature
rg -nP 'export.*generateContractDts|function generateContractDts|const generateContractDts' packages/2-sql/3-tooling/emitter/src -B2 -A15

Repository: prisma/prisma-next
Length of output: 3963
🏁 Script executed:
#!/bin/bash
# Search for generateContractTypes method in SQL emitter
rg -nP 'generateContractTypes' packages/2-sql/3-tooling/emitter/src -B2 -A20
# Also check the total line count of the file
wc -l packages/2-sql/3-tooling/emitter/src/index.ts

Repository: prisma/prisma-next
Length of output: 112
🏁 Script executed:
#!/bin/bash
# Search for generateContractTypes and show full context
rg -nP 'generateContractTypes' packages/2-sql/3-tooling/emitter/src/index.ts -B3 -A25

Repository: prisma/prisma-next
Length of output: 44
🏁 Script executed:
#!/bin/bash
# Read the README to see the actual documentation
cat packages/2-sql/3-tooling/emitter/README.md | head -100
# Search for the hook interface definition in framework-components
rg -nP 'generateContractTypes|GenerateContractTypes' packages/ -l --type=ts | grep -v node_modules | grep -v test

Repository: prisma/prisma-next
Length of output: 2922
🏁 Script executed:
#!/bin/bash
# Check the hook interface definition
cat packages/1-framework/1-core/framework-components/src/control/emission-types.ts | head -150
# Also check the generateContractDts implementation
rg -nP 'generateContractTypes|export.*function.*generateContractDts' packages/1-framework/3-tooling/emitter/src/generate-contract-dts.ts -B3 -A15

Repository: prisma/prisma-next
Length of output: 2424
🏁 Script executed:
#!/bin/bash
# Search for the TargetFamilyHook interface that the SQL emitter should implement
rg -nP 'interface.*TargetFamilyHook|type.*TargetFamilyHook' packages/ --type=ts -B2 -A30 | head -80

Repository: prisma/prisma-next
Length of output: 44
🏁 Script executed:
#!/bin/bash
# Look at the SQL emitter source to see what interface it claims to implement
grep -n "export.*sqlEmission\|interface\|implements" packages/2-sql/3-tooling/emitter/src/index.ts | head -30
# Check the complete sqlEmission object definition
sed -n '20,150p' packages/2-sql/3-tooling/emitter/src/index.ts

Repository: prisma/prisma-next
Length of output: 4805
🏁 Script executed:
#!/bin/bash
# Continue reading the sqlEmission object to find generateContractTypes
sed -n '150,300p' packages/2-sql/3-tooling/emitter/src/index.ts

Repository: prisma/prisma-next
Length of output: 6595
🏁 Script executed:
#!/bin/bash
# Continue to see the rest of the sqlEmission object
sed -n '300,380p' packages/2-sql/3-tooling/emitter/src/index.ts

Repository: prisma/prisma-next
Length of output: 2928
Rewrite the hook responsibilities to match the actual implementation.
The documented generateContractTypes() method does not exist on the hook interface. The actual hook provides helper methods (generateStorageType(), generateModelStorageType(), getFamilyImports(), getFamilyTypeAliases(), getTypeMapsExpression(), getContractWrapper(), and optionally resolveFieldTypeParams()). The framework's generateContractDts() function orchestrates contract emission by calling these helper methods. The codecTypeImports parameter belongs to the framework function, not a hook method.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@packages/2-sql/3-tooling/emitter/README.md` at line 16, Update the README so
the hook responsibilities match the actual implementation: remove the
nonexistent generateContractTypes() entry and instead document the hook helper
methods generateStorageType(), generateModelStorageType(), getFamilyImports(),
getFamilyTypeAliases(), getTypeMapsExpression(), getContractWrapper(), and the
optional resolveFieldTypeParams(); clarify that generateContractDts() (the
framework function) orchestrates contract emission and that the codecTypeImports
parameter belongs to generateContractDts(), not to any hook method.
| "exports": { | ||
| "./column-types": "./dist/column-types.mjs", | ||
| "./control": "./dist/control.mjs", | ||
| "./middleware": "./dist/middleware.mjs", | ||
| "./migration": "./dist/migration.mjs", | ||
| "./operation-types": "./dist/operation-types.mjs", | ||
| "./pack": "./dist/pack.mjs", | ||
| "./runtime": "./dist/runtime.mjs", | ||
| "./package.json": "./package.json" |
🛠️ Refactor suggestion | 🟠 Major | ⚡ Quick win
Add the canonical shared-plane export subpath (./adapter).
This package already exposes multi-plane-style subpaths (./control, ./runtime), but it’s missing ./adapter for the shared-plane surface.
Suggested export map adjustment
"exports": {
+ "./adapter": "./dist/pack.mjs",
"./column-types": "./dist/column-types.mjs",
"./control": "./dist/control.mjs",
"./middleware": "./dist/middleware.mjs",
"./migration": "./dist/migration.mjs",
"./operation-types": "./dist/operation-types.mjs",
"./pack": "./dist/pack.mjs",
"./runtime": "./dist/runtime.mjs",
"./package.json": "./package.json"
},

As per coding guidelines: “Multi-plane entrypoints must use subpath exports: ./control for migration plane, ./runtime for runtime plane, ./adapter for shared plane, mapped in architecture.config.json.”
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "exports": { | |
| "./column-types": "./dist/column-types.mjs", | |
| "./control": "./dist/control.mjs", | |
| "./middleware": "./dist/middleware.mjs", | |
| "./migration": "./dist/migration.mjs", | |
| "./operation-types": "./dist/operation-types.mjs", | |
| "./pack": "./dist/pack.mjs", | |
| "./runtime": "./dist/runtime.mjs", | |
| "./package.json": "./package.json" | |
| "exports": { | |
| "./adapter": "./dist/pack.mjs", | |
| "./column-types": "./dist/column-types.mjs", | |
| "./control": "./dist/control.mjs", | |
| "./middleware": "./dist/middleware.mjs", | |
| "./migration": "./dist/migration.mjs", | |
| "./operation-types": "./dist/operation-types.mjs", | |
| "./pack": "./dist/pack.mjs", | |
| "./runtime": "./dist/runtime.mjs", | |
| "./package.json": "./package.json" | |
| }, |
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@packages/3-extensions/cipherstash/package.json` around lines 53 - 61, The
package export map is missing the shared-plane subpath "./adapter"; add an
exports entry for "./adapter" (pointing to the built entry, e.g.
"./dist/adapter.mjs") alongside the existing "./control" and "./runtime" entries
and ensure the built artifact exists, then update architecture.config.json to
include the "./adapter" mapping for the shared-plane surface so it aligns with
the multi-plane entrypoint conventions.
Extract control-policy as its own project. It was originally shaped inside projects/supabase-integration/posture.md but is a framework-level primitive orthogonal to Supabase: a per-IR-node control field that declares how much the framework control plane participates in an object's migration lifecycle.
- ControlPolicy = managed | tolerated | external | observed
- Verifier dispatches per node on the four-value table
- Planner dispatches per node + cross-cutting safety check refusing to emit ops targeting an external namespace
- Naming follows the control-plane/runtime-plane axis already in the codebase. Field control on the wire, type ControlPolicy in signatures and prose.
Sequencing: blocked by TML-2459 (the verifier/planner SPIs are the natural seam for the dispatch tables); blocks Supabase integration.
Linear: https://linear.app/prisma-company/issue/TML-2493
…n project The control-policy concept moved out of Supabase and into its own project (projects/control-policy, TML-2493). Supabase consumes it, does not introduce it. Across all design notes: - posture (field) -> control (field) - Posture (type/concept) -> ControlPolicy (type) - defaultPosture -> defaultControl - modeled -> managed - externally-managed (value) -> external - drift -> observed Tolerated keeps its name. Cross-references to projects/supabase-integration/posture.md now point to projects/control-policy/spec.md. The posture.md file is deleted; its content (rewritten in the new vocabulary and proper spec format) landed in the previous commit as projects/control-policy/spec.md. Also drops the functions block from the example contract artefacts in line with decisions.md C4 (functions are not contract elements in v0.1).
Final pass before handing TML-2459 to the implementer. Fixes three small handoff-readiness issues: - Drop dangling "Open Question 1" reference in spec Cost section. The sequencing it pointed at is already resolved by the plan (M1 foundation, M2 Mongo, M3 Postgres + validateContract migration). - Align stale m.constraints.ref(otherModel) prose in spec Approach section with the settled API in FR16b: existing constraints.foreignKey(cols.x, OtherModel.refs.y, ...) and rel.belongsTo(OtherModel, ...) reused; model handle carries the namespace coordinate; no new syntax. - Mirror the same alignment in plan M5b authoring DSL tasks (TS + PSL). Adds: - Linear ticket link (TML-2459) in plan summary for handoff discoverability. - Design-discipline note on M5b FK reference IR task: do not fuse namespace.id + name into a composite key so cross-contract refs can extend additively to (spaceId, namespace.id, name) in a follow-up project. Matches the extension-point contract added to spec FR16b in 9edbeef.
Resolve the architectural questions raised during the pressure-test of TML-2459 before implementation begins: - Drop the proposed Target<TContract, TSchema> aggregator interface in favour of extending the existing *ControlTargetDescriptor types with named SPI properties (contractSerializer, schemaVerifier) next to migrations. - Introduce a framework SchemaNodeBase abstract class so SQL and Mongo IR nodes share the freeze-and-assign pattern already proven on OpFactoryCall and MongoSchemaNode. - Pin Storage.namespaces: Record<string, Namespace> at the framework level so FR15 is enforced by the type system across every family. - Strengthen ContractSerializer rationale: serializeContract returns JsonObject; the symmetric SPI is the named seam used by round-trip tests, drift detection, and future canonicalization. - Keep todays SchemaIssue union as-is for this project; flag the layering observation as a forward concern rather than reshaping the type. - Pin the __unspecified__ namespace as a target-specific singleton subclass (PostgresUnspecifiedSchema, etc.) that overrides qualifier emission, so call sites stay polymorphic instead of branching on id. - Implement Mongo namespace semantics in M2 (database = namespace, default __unspecified__) so every family commits to the same Storage shape from the first consumer. - Drop the M3a/M3b split possibility; Postgres and SQLite share family abstract bases and must flip together to satisfy NFR1. - Add an M5a pre-flight task to audit PSL consumer cascade before lowering the grammar. - Move the 3-layer-IR ADR draft into M1 (close-out at minimum is the convention; drafting earlier here is cheap context insurance).
…dressed wire names Settle the last blocking RLS design hole (#19 RLS verifier check semantics) and three smaller working-assumption items. Design choice: policy wire names carry an 8-hex SHA-256 suffix over the canonical content tuple (using, withCheck, sort(roles), operation, as). Predicate equivalence collapses to a name match; the verifier never compares bodies for equivalence purposes. One body-level check remains (the per-row tamper check that recomputes the suffix against the introspected body). Three structural wins: - Predicate-equivalence false positives are eliminated by construction. Postgres-side expression-printer normalization (parens, whitespace, cast forms) no longer surfaces as policy_mismatch noise. - Policy rename detection is structurally free (matching hash, different prefix → ALTER POLICY ... RENAME TO). Closes the earlier defer-and-ship-drop+create working assumption. - No version marker needed; the contract storage hash already signals normalizer changes through VERIFY_CODE_HASH_MISMATCH (per ADR 004). Pinned decisions: - A8: TS as field, default permissive (mirrors PSL B2). - C9: content-addressed policy wire names. - C10: RLS verifier semantics built on C9 (rls_policy_renamed, rls_policy_tampered, rls_not_enabled issue kinds). - C11: implicit ENABLE RLS via model-level rls: auto|enabled|disabled. - OC4: content-addressed naming pattern as a future-project backport target for indexes, functions, views, check constraints. Artifacts: - New ADR draft at projects/supabase-integration/specs/ (promotes at close-out with assigned number). - rls.md verifier section rewritten to the content-addressed model. - design-holes.md #19 marked decided; verifier cluster closed.
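The content-addressed naming scheme this commit pins (C9) can be sketched in TypeScript. Assumptions are labelled: the canonical serialization, the `_` separator, and the `PolicyContent` shape are illustrative; only the 8-hex SHA-256 suffix over the tuple (using, withCheck, sort(roles), operation, as) comes from the commit.

```typescript
import { createHash } from 'node:crypto';

// Illustrative content tuple; field names follow the commit's list.
interface PolicyContent {
  using: string;
  withCheck: string | null;
  roles: string[];
  operation: string;
  as: string;
}

// Wire name = <prefix>_<8-hex SHA-256 of the canonical content tuple>.
// The exact canonicalization (JSON array here) is an assumption.
function policyWireName(prefix: string, content: PolicyContent): string {
  const canonical = JSON.stringify([
    content.using,
    content.withCheck,
    [...content.roles].sort(), // role order must not affect the hash
    content.operation,
    content.as,
  ]);
  const suffix = createHash('sha256').update(canonical).digest('hex').slice(0, 8);
  return `${prefix}_${suffix}`;
}
```

Under this scheme two policies with identical content hash to the same suffix regardless of role order, so a differing prefix with a matching suffix reads structurally as a rename — the property the verifier exploits.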
…n with PN / Compute / PPg The skill captures rough edges a real user of Prisma Next, Prisma Compute, or Prisma Postgres would experience — anytime an operator (or agent) is *consuming* the public surface of one of these products and hits a surprise, workaround, or undocumented behaviour. Decoupled from the original "dogfooding pet project / hackathon entry" framing so it fires in any context where the operator is wearing the consumer role: extension authoring, example apps, integration tests, customer reproductions, internal demos. The consuming-vs-maintaining boundary excludes bugs in code the operator's own team maintains — those go in the regular product backlog, not the *-gotchas projects. Escalation mode is repo-based: - Product-team repo (e.g. prisma-next-ws) -> surface the gotcha and offer capture-as-gotcha / file-a-normal-bug / note-and-move-on. The operator picks. - Anywhere else -> silent-capture into the matching *-gotchas Linear project and the local gotchas log (the original dogfood-pipeline behaviour). Linear-project keys for the normal-bug path are left as TODOs pending confirmation per product.
Closes design holes #7, #8, #11, #13, #14 with one architectural decision (C12 in decisions.md): introduce the runtime target-layer class in v0.1 and have Supabase extend it. The runtime layer was class-based but missing the target tier: RuntimeCore (framework, abstract) <- SqlRuntimeImpl (sql family, internal). This project lifts SqlRuntimeImpl -> SqlRuntime (exported), adds PostgresRuntime (thin subclass, target-layer extension point), and lands SupabaseRuntime as PostgresRuntime's subclass. SET LOCAL is issued by SupabaseRuntime.execute() against the raw connection -- below the user middleware chain, structurally non-bypassable. Implicit transaction guarantees never-outlives-txn SET LOCAL state. JWT validation is eager behind a uniformly async factory. Mirrors TML-2459's three-layer IR pattern at the runtime axis. ~50 LOC of framework refactor; rationale and implementation sketch in the new ADR draft.
Decisions C6 (subpath-only entrypoints) and C12 (subclass-based runtime hierarchy) supersede earlier shapes that are still demonstrated across the component docs. This sweep aligns every example, table description, and prose reference with the canonical authoring + runtime surface: - `supabase.contract<C>(json)` factory removed everywhere except the historical "closed by" notes in decisions.md / design-holes.md. Replaced with subpath imports from /contract. - `supabase.pack()` callable replaced with value-imported `supabasePack` from /pack. - `supabase.roles.*` rewritten as `supabaseRoles.*` with imports from /contract. - `m.constraints.ref(otherModel)` stale syntax replaced with the settled `constraints.foreignKey()` / `rel.belongsTo()` API. - Example app contract.ts and prisma-next.config.ts brought in line with the new import shapes. - Per-doc Open Questions sections trimmed to remove items already closed by recent decisions; remaining entries are genuinely defaultable. - design-holes #20 promoted from open to closed (C6 fully supersedes the earlier shorthand-vs-longhand framing). No new decisions; this is housekeeping to make the docs internally consistent ahead of the project spec/plan finalization pass.
…projects Split the Supabase integration umbrella into four framework-primitive projects, each with its own spec and plan and its own Linear ticket: - cross-contract-refs (TML-2500): FK references across contract-space boundaries; brand machinery; supabase:auth.User PSL grammar; dependency graph + namespace ownership. - postgres-rls (TML-2501): RLS policies + Postgres roles as target-only IR; .rls(...) TS surface + policy <name> PSL surface; content-addressed wire names; verifier + planner. - runtime-target-layer (TML-2502): export SqlRuntime; new PostgresRuntime extends SqlRuntime; protected withRawConnection below-middleware accessor; transaction primitive formalisation. - extension-supabase (TML-2503): the user-facing @prisma-next/extension-supabase package + working example app (integration project; depends on all primitives). The three middle-tier primitives can ship in parallel; extension-supabase consumes all three. Together with target-extensible-ir (TML-2459) and control-policy (TML-2493) they form the full dependency graph for the May 18 launch. projects/supabase-integration/ becomes an umbrella tracker. The umbrella README, overview, and decisions log are kept and updated to point at the new constituent specs. Three component design docs were migrated into the constituent specs and removed: - cross-contract-refs.md -> cross-contract-refs/spec.md - rls.md -> postgres-rls/spec.md - extension-package.md -> extension-supabase/spec.md Two ADR drafts were moved alongside: - specs/adr-content-addressed-policy-names.md -> postgres-rls/specs/ - specs/adr-runtime-target-layer.md -> runtime-target-layer/specs/ All stale cross-references across the umbrella docs and the design-hole log were rewritten to point at the new locations.
…esh-eyes readability Restructure the ADR so a teammate without prior context can follow it: - Lead with the decision (one paragraph + naming format). - Follow with a worked example end-to-end: authored TS, IR shape after lowering, emitted CREATE POLICY, and verifier behavior under a rename. - Build up the why via cast forms, outer parens, whitespace, keyword casing — concrete drift axes rather than a single parenthetical aside. - Move the four rejected alternatives (verbatim, verbatim+normalizer, canonicalize-at-CREATE, JS-side parser) to the end so the reader is not loaded with rejected paths before the chosen one. - Drop transient project state: Draft / promoted-at-close-out status, v0.1 framing, per-project decision pointers, Status notes section, OC4 / decisions-log cross-reference. - Fold the redundant Future evolution and Consequences > Negative treatments of normalizer stability into a single section. - Spell out the collision math (birthday paradox on 32 bits ~= 65k policies on the same table) rather than leaving the reader to do it.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.agents/skills/record-gotcha/SKILL.md:
- Around line 162-170: Add a language identifier to the fenced code block that
contains the commit message template (the triple-backtick block starting with
the commit template "docs(gotcha): record <one-line title>") to silence
markdownlint; update the opening fence from ``` to ```text so the block is
explicitly marked as plain text.
- Around line 62-64: Replace the three TODO placeholders in the table rows for
Prisma Next, Prisma Compute, and Prisma Postgres with the actual Linear project
keys for their main backlog projects (replace `TODO: confirm main PN project
key`, `TODO: confirm main Compute project key`, and `TODO: confirm main PPg
project key` with the correct keys for the `pn-gotchas`, `compute-gotchas`, and
`ppg-gotchas` projects), ensuring the values match what the "Filing a normal
bug" workflow expects so that the workflow (referenced as "Filing a normal bug")
can use those keys when creating tickets.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: ec334591-68e2-4f6a-ac45-7cc1ee3a1b19
📒 Files selected for processing (1)
.agents/skills/record-gotcha/SKILL.md
| | **Prisma Next** ORM (`@prisma-next/*`, schema, generated client, migration tooling) | [`pn-gotchas`](https://linear.app/prisma-company/project/pn-gotchas-a6f6f5157a5c/overview) | `TODO: confirm main PN project key` | | ||
| | **Prisma Compute** (`@prisma/compute-cli`, deploy, runtime, env management) | [`compute-gotchas`](https://linear.app/prisma-company/project/compute-gotchas-dd3ac34b5ad4/overview) | `TODO: confirm main Compute project key` | | ||
| | **Prisma Postgres** (PPg, `@prisma/dev`, dialect behaviour) | [`ppg-gotchas`](https://linear.app/prisma-company/project/ppg-gotchas-afe77336f696/overview) | `TODO: confirm main PPg project key` | |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Search for Linear issues tracking the TODO resolution
rg -n "TODO.*confirm main.*project key" --type=mdRepository: prisma/prisma-next
Length of output: 44
🏁 Script executed:
git ls-files | grep -i "SKILL.md"

Repository: prisma/prisma-next
Length of output: 1022
🏁 Script executed:
if [ -f ".agents/skills/record-gotcha/SKILL.md" ]; then
echo "File exists"
wc -l ".agents/skills/record-gotcha/SKILL.md"
echo "---"
sed -n '60,70p' ".agents/skills/record-gotcha/SKILL.md"
else
echo "File does not exist"
fi

Repository: prisma/prisma-next
Length of output: 1489
🏁 Script executed:
sed -n '170,190p' ".agents/skills/record-gotcha/SKILL.md"

Repository: prisma/prisma-next
Length of output: 1507
🏁 Script executed:
rg -n "TODO\|Linear\|project key" ".agents/skills/record-gotcha/SKILL.md" | head -30Repository: prisma/prisma-next
Length of output: 44
Resolve Linear project keys for main backlog column before merge.
The table (lines 62–64) contains three TODO placeholders for main backlog project keys that are directly referenced by the "Filing a normal bug" workflow (line 182). The workflow explicitly instructs to "fill in the TODOs" when filing tickets, so these must be resolved for the skill to be functional.
Confirm and fill in the three project keys for Prisma Next, Compute, and Postgres main backlogs.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In @.agents/skills/record-gotcha/SKILL.md around lines 62 - 64, Replace the
three TODO placeholders in the table rows for Prisma Next, Prisma Compute, and
Prisma Postgres with the actual Linear project keys for their main backlog
projects (replace `TODO: confirm main PN project key`, `TODO: confirm main
Compute project key`, and `TODO: confirm main PPg project key` with the correct
keys for the `pn-gotchas`, `compute-gotchas`, and `ppg-gotchas` projects),
ensuring the values match what the "Filing a normal bug" workflow expects so
that the workflow (referenced as "Filing a normal bug") can use those keys when
creating tickets.
| ``` | ||
| docs(gotcha): record <one-line title> | ||
|
|
||
| <2–4 line summary: what was surprising, workaround in one sentence, | ||
| ticket id.> | ||
|
|
||
| Filed as <TML-XXXX> in <gotchas project name>. | ||
| ``` | ||
|
|
Add language identifier to fenced code block.
The commit message template fenced code block is missing a language identifier, which triggers a markdownlint warning.
📝 Proposed fix
-```
+```text
docs(gotcha): record <one-line title>

<2–4 line summary: what was surprising, workaround in one sentence,
ticket id.>

Filed as <TML-XXXX> in <gotchas project name>.
🧰 Tools
🪛 markdownlint-cli2 (0.22.1)
[warning] 162-162: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In @.agents/skills/record-gotcha/SKILL.md around lines 162 - 170, add a language
identifier to the fenced code block that contains the commit message template
(the triple-backtick block starting with the commit template "docs(gotcha):
record <one-line title>") to silence markdownlint; update the opening fence
from ``` to ```text so the block is explicitly marked as plain text.
At a glance
After this project lands, a Postgres contract can declare its schemas, place models in them, and reference across them — the capability that today's IR can't express:
For multi-tenant Postgres, the new `__unspecified__` sentinel lets the connection's `search_path` bind models to a tenant schema at runtime — the contract describes the shape, the connection picks the namespace.

The PSL surface is structurally parallel to the TS builder; switching between them stays mechanical for users.
What this PR is
This is a scoping PR. It lands the project's spec and plan as documentation under
`projects/target-extensible-ir/`; execution lands as 7 follow-up PRs across the milestones below. The substance of this PR is the scope review — implementation review comes later, milestone by milestone.

- `projects/target-extensible-ir/spec.md`
- `projects/target-extensible-ir/plan.md`

The problem we're solving
Today, every Postgres contract lives implicitly in the `public` schema. Postgres-specific concepts — additional schemas, `CREATE TYPE … AS ENUM`, RLS policies, custom functions — either get smuggled through the untyped `annotations: SqlAnnotations` escape hatch, are reconstructed outside the IR (enums today are glued in via codec control hooks), or simply aren't supported at all (RLS, custom functions, multi-namespace FKs).

Authors hit this directly:
auth.users(Supabase's user table, in theauthschema) cannot be referenced from a user's contract because the IR has no way to express "a table in a different schema." That single constraint is what blocks the Supabase integration, and it's the same architectural shape that prevents future first-class support for RLS, custom Postgres functions, MySQL databases, and dialect-specific concepts in any future SQL target (CockroachDB, planetscale, neon).The decision
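A minimal sketch of the constraint, assuming simplified stand-in shapes (`FlatTable` and `FlatRef` are illustrative, not the actual type definitions):

```typescript
// Illustrative stand-in for today's flat Schema IR, not the real types.
type FlatTable = {
  name: string; // no schema/namespace field exists
  annotations?: Record<string, unknown>; // untyped escape hatch
};

// A reference can only name a table -- "auth.users" has nowhere to live.
type FlatRef = { table: string; column: string };

const ref: FlatRef = { table: "users", column: "id" };
// Which schema's "users"? The flat shape can't say; everything is
// implicitly in "public", so auth.users is only reachable by smuggling
// the schema name through `annotations`.
console.log(ref.table); // users
```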
The decision

Refactor Contract IR and Schema IR from flat data shapes (`type = { … }`) into a polymorphic class hierarchy layered as framework interfaces → family abstract bases → target concrete classes. Targets gain a typed home for dialect-specific concepts; framework consumers depend on framework SPI interfaces, not on concrete target classes. The `Namespace` concept becomes first-class at the framework layer; authoring DSL surfaces in PSL and the TS builder expose it.

The recipe isn't new — it's already proven in production by `OpFactoryCall` (ADR 195) for the migration-op layer and by `MongoSchemaNode` for Mongo's schema IR. This project consolidates the convention and applies it consistently to the two IRs that haven't yet adopted it.
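The three-layer recipe can be sketched as follows. All names here (`SchemaNode`, `SqlSchemaNode`, `PostgresEnumType`) are illustrative stand-ins, not the project's real identifiers:

```typescript
// Layer 1: framework interface -- what framework consumers depend on.
interface SchemaNode {
  readonly kind: string;
  toJson(): object; // JSON-canonical form (class-in-memory round-trips)
}

// Layer 2: family abstract base -- shared behaviour for one target family.
abstract class SqlSchemaNode implements SchemaNode {
  abstract readonly kind: string;
  abstract toJson(): object;
  // e.g. SQL identifier quoting shared across the whole family
  protected quote(ident: string): string {
    return `"${ident.replace(/"/g, '""')}"`;
  }
}

// Layer 3: target concrete class -- a typed home for a dialect concept.
class PostgresEnumType extends SqlSchemaNode {
  readonly kind = "postgres.enum";
  constructor(readonly name: string, readonly values: string[]) {
    super();
  }
  toJson(): object {
    return { kind: this.kind, name: this.name, values: this.values };
  }
  toDdl(): string {
    const vals = this.values.map((v) => `'${v}'`).join(", ");
    return `CREATE TYPE ${this.quote(this.name)} AS ENUM (${vals});`;
  }
}

const status = new PostgresEnumType("status", ["active", "disabled"]);
console.log(status.toDdl());
// CREATE TYPE "status" AS ENUM ('active', 'disabled');
```

Framework consumers would program against `SchemaNode`; only Postgres-specific code ever names `PostgresEnumType`.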
What this delivers

Three user-facing capabilities ship with this project:

- Multi-schema DDL: the `"<schema>"."<table>"` qualifier in DDL.
- Cross-namespace FKs: `public.profiles.user_id REFERENCES auth.users(id)` works end-to-end. Single biggest direct unblocker for Supabase.
- Runtime namespace binding: `__unspecified__` + connection `search_path` lets one contract describe a shape that's bound to different schemas at runtime. Real customer story for SaaS Postgres deployments independent of Supabase.
Why this clears the path for Supabase

Without this project, every Supabase deliverable (RLS policies, `createSupabaseRuntime`, `auth.users` queryable surface) forces an ad-hoc IR decision and becomes "feature + foundational refactor in disguise."

After this project, those deliverables are bounded feature PRs on a stable foundation: introduce `PostgresRlsPolicy` as a target-only IR kind (small PR following the established recipe), build the Supabase contract package modelling `auth.users` as a contract-space contribution, ship the runtime split. Rough sizing: 4–6 weeks of focused Supabase work after this project lands. The trade is more time-to-Supabase now in exchange for much less time-to-Supabase-and-everything-after-it later.
Sequencing

Seven milestones, M1 → M2 → M3 → M4 → M5a → M5b → M6: foundation → Mongo first consumer → Postgres SPI shells → enum exemplar → namespace + DSL → cross-namespace FK → docs. M5b is the FL-02 fix that unblocks the Supabase project.

External strict-precedence dependencies (must land before M1): TML-2457, TML-2463, TML-2408.

TML-2464 (strip single/multi-space branching) must land after this project so its branch-removal touches IR-walking sites once, post-IR-flip, rather than twice.
Alternatives considered
- Keep shimming (`annotations: SqlAnnotations` and target-specific code paths). Rejected: shim-shaped IR becomes load-bearing in production; removing it later requires this project plus a migration of every shimmed concept. We'd buy 2–3 weeks of Supabase delivery now and pay it back many times over across every Supabase-adjacent feature and every future SQL target.
- Cross-contract-space references (pull `refIn(otherSpace, …)` into this project). Rejected as out of scope: cross-contract reference semantics tangle with TML-2397's contract-spaces work and require their own design pass. Cross-namespace (within one contract space) is in scope and delivers the user-visible Supabase value; cross-contract-space is a separate follow-up.
This PR is docs-only; verification is reviewer-shaped:
- The plan's sequencing is consistent with the Linear `blockedBy` chains.
- Milestone tickets form the expected `blockedBy` chain (TML-2468 → … → TML-2474; M1 also blocked by TML-2457, TML-2463, TML-2408).

Refs: TML-2459
Summary by CodeRabbit
New Features
Documentation
Improvements
`type` + `options` with registry/validation and clearer diagnostics.