- Per-memory cost attribution columns + pulse engine equal-split distribution. Closes the named rev-158 next-sprint candidate ('memory retrieval cost attribution — letting per-memory entries carry an estimated AI cost contributed metric so operators triaging a load-bearing-but-token-heavy memory entry can see the full picture'). Two new columns on memory_entry: `totalAttributedInputTokens` and `totalAttributedOutputTokens` (integer NOT NULL DEFAULT 0). The pulse engine's `workNextTask()` distributes each cycle's per-task token delta across every memory entry retrieved that cycle by equal split — the same defensible methodology as the rev-57 per-source cost attribution. A single batched UPDATE rides one round-trip per cycle, so the cost-attribution write adds nothing to the steady-state retrieval path. Purely additive — retrieval semantics are unchanged; each retrieved row simply picks up a cost stamp.
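The equal-split distribution can be sketched as follows. This is an illustrative reconstruction, not the actual pulse-engine code: the names `splitCycleCost`, `CycleDelta`, and `Attribution` are hypothetical, and the flooring of fractional shares is an assumed remainder policy.

```typescript
// Hypothetical sketch of the equal-split attribution step described above.
interface CycleDelta {
  inputTokens: number;
  outputTokens: number;
}

interface Attribution {
  memoryId: string;
  inputShare: number;
  outputShare: number;
}

// Each memory entry retrieved this cycle carries 1/n of the cycle's
// per-task token delta. The real code then folds these shares into
// totalAttributedInputTokens / totalAttributedOutputTokens via one
// batched UPDATE per cycle.
function splitCycleCost(delta: CycleDelta, retrievedIds: string[]): Attribution[] {
  const n = retrievedIds.length;
  if (n === 0) return [];
  return retrievedIds.map((memoryId) => ({
    memoryId,
    // Flooring is an illustrative choice; remainder handling may differ.
    inputShare: Math.floor(delta.inputTokens / n),
    outputShare: Math.floor(delta.outputTokens / n),
  }));
}
```

Because the shares are computed in memory and written back in one statement, the attribution cost is bounded by a single round-trip per cycle regardless of how many entries were retrieved.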
- Top-cost memory dashboard panel + GET /api/v1/memory/top-cost. New `getTopCostMemoryEntries()` helper sorts memory entries descending by cumulative attributed cost. A new brand-amber `TopCostMemoryPanel` sidebar mounts beside the rev-158 brand-purple `TopRetrievedMemoryPanel`, so the three memory observability panels (slate staleness, rev 153; brand-purple retrieval, rev 158; brand-amber cost, rev 159) stack cleanly with one consistent vocabulary at three distinct attention levels. Each row shows a `💸 $X.XX` cost amount, kind chip, pinned flag, title, and proportional brand-amber bar, plus a meta line with token total, retrieval count, importance, and tags. The panel is hidden when no memory entry has accrued any attributed cost yet (fresh workspaces, or workspaces that never run AI cycles). A new bearer-auth `GET /api/v1/memory/top-cost?limit=5` v1 endpoint mirrors the dashboard primitive in lockstep — MCP hosts asking 'which memory entries are the AI's most expensive?' get a one-call answer.
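The helper's ranking logic might look like the sketch below. The per-token prices, the `MemoryRow` shape, and the zero-cost filter placement are assumptions for illustration; actual pricing and the real `getTopCostMemoryEntries()` signature live in the codebase.

```typescript
interface MemoryRow {
  memoryId: string;
  totalAttributedInputTokens: number;
  totalAttributedOutputTokens: number;
}

// Hypothetical per-token USD prices; real pricing is configured elsewhere.
const INPUT_USD_PER_TOKEN = 3 / 1_000_000;
const OUTPUT_USD_PER_TOKEN = 15 / 1_000_000;

function estimatedCostUsd(row: MemoryRow): number {
  return (
    row.totalAttributedInputTokens * INPUT_USD_PER_TOKEN +
    row.totalAttributedOutputTokens * OUTPUT_USD_PER_TOKEN
  );
}

// Sketch of the helper: filter out zero-cost entries (the panel hides
// entirely when nothing has accrued cost), sort descending, take the top N.
function getTopCostMemoryEntries(rows: MemoryRow[], limit = 5): MemoryRow[] {
  return [...rows]
    .filter((r) => estimatedCostUsd(r) > 0)
    .sort((a, b) => estimatedCostUsd(b) - estimatedCostUsd(a))
    .slice(0, limit);
}
```

The same ordering backs both the `TopCostMemoryPanel` rows and the `?limit=5` endpoint, which is what keeps the dashboard and API views in lockstep.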
- Extended GET /api/v1/memory listing with totalAttributedInputTokens, totalAttributedOutputTokens, and estimatedCostUsd. Closes the cost-axis projection on the per-memory listing surface at parity with the per-task axis (rev 51's totalInputTokens + totalOutputTokens projected on /api/v1/tasks). Every row on the rev-153 listing endpoint now carries `totalAttributedInputTokens`, `totalAttributedOutputTokens`, and `estimatedCostUsd` alongside the rev-153 `retrievalCount` + `lastRetrievedAt` and the rev-158 `retrievals7d`. MCP hosts can now rank memory by AI cost without a follow-up call per entry — three retrieval-state signals (cumulative, recency, recent activity) plus three cost-attribution signals (input, output, USD) on every memory listing row.
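The six-signal projection on each listing row can be captured as a type. This is an illustrative shape derived from the field names above — other listing fields are elided, and the example values are invented:

```typescript
// Illustrative projection of one /api/v1/memory listing row after this rev.
interface MemoryListingRow {
  memoryId: string;
  // retrieval-state signals
  retrievalCount: number;          // cumulative (rev 153)
  lastRetrievedAt: string | null;  // recency, ISO date-time (rev 153)
  retrievals7d: number;            // recent activity (rev 158)
  // cost-attribution signals (rev 159)
  totalAttributedInputTokens: number;
  totalAttributedOutputTokens: number;
  estimatedCostUsd: number;
}

// Hypothetical example row shaped like the extended listing response.
const exampleRow: MemoryListingRow = {
  memoryId: "mem_01",
  retrievalCount: 42,
  lastRetrievedAt: "2024-05-01T12:00:00Z",
  retrievals7d: 7,
  totalAttributedInputTokens: 120_000,
  totalAttributedOutputTokens: 8_000,
  estimatedCostUsd: 0.48,
};
```

With all six signals on every row, a host can sort the listing by any axis locally instead of issuing a follow-up call per entry.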
- OpenAPI 3.1 typed coverage on the rev-159 endpoint + extended MemoryEntry schema — 81st unbroken cadence rev. The OpenAPI 3.1 spec types the new `GET /memory/top-cost` endpoint with full request/response schemas (a `limit` query param, 1-20, default 5, plus a response shape with a `memory[]` array of typed rows: memoryId, kind enum, title, importance, pinned, tags, totalAttributedInputTokens, totalAttributedOutputTokens, estimatedCostUsd, retrievalCount, and a nullable date-time lastRetrievedAt). The shared `MemoryEntry` schema component picks up the three new cost fields, so MCP-host code generators reading the spec see them on every memory listing surface. The cadence pattern from rev 78 onward (every dashboard primitive gets typed in the OpenAPI 3.1 spec in the same cycle it ships) reaches its 81st unbroken rev with rev 159. The per-memory observability cluster on the protocol-bound surface is now thirteen axes deep — the MCP server's per-memory observability tooling has nothing left to design across any of the thirteen axes.
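The typed path item might be sketched roughly as below. This is a hand-written approximation from the field list above, not the actual spec: property types (e.g. `importance` as number, `kind` as plain string standing in for the enum), descriptions, and `$ref` structure will differ in the real document. OpenAPI 3.1 expresses nullability with a JSON Schema type array, as shown for `lastRetrievedAt`.

```yaml
# Hypothetical sketch of the rev-159 path item; the real spec's refs,
# enums, and descriptions may differ.
paths:
  /memory/top-cost:
    get:
      summary: Top memory entries by cumulative attributed AI cost
      security:
        - bearerAuth: []
      parameters:
        - name: limit
          in: query
          schema: { type: integer, minimum: 1, maximum: 20, default: 5 }
      responses:
        "200":
          content:
            application/json:
              schema:
                type: object
                properties:
                  memory:
                    type: array
                    items:
                      type: object
                      properties:
                        memoryId: { type: string }
                        kind: { type: string }   # an enum in the real spec
                        title: { type: string }
                        importance: { type: number }
                        pinned: { type: boolean }
                        tags: { type: array, items: { type: string } }
                        totalAttributedInputTokens: { type: integer }
                        totalAttributedOutputTokens: { type: integer }
                        estimatedCostUsd: { type: number }
                        retrievalCount: { type: integer }
                        lastRetrievedAt:
                          type: ["string", "null"]
                          format: date-time
```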