- **Per-memory consecutiveSpikeDays counter + chronicAckedAt column + counter maintenance in pingMemoryCostSpikes.** Closes the named rev-162 next-sprint candidate ('per-memory chronic-spike counter + warning'). Two new columns on memory_entry: `consecutiveSpikeDays` (integer NOT NULL default 0 — increments every day the rev-161 detector flags the entry as spiking; resets to 0 the first day it doesn't) and `chronicAckedAt` (timestamp nullable — stamped when the operator chronic-acks the entry for the rev-163 7-day TTL window). The rev-161 pingMemoryCostSpikes daily sweep now maintains the counter on every sweep via two batched UPDATEs (one to bump spiking entries, one to reset entries that previously had a non-zero counter but aren't currently spiking); a sketch of both statements follows this list. Mirrors the rev-61 source counter + rev-64 assignee counter + rev-70 tag counter at the per-memory axis on the cost dimension. Independent of `costSpikeAckedAt` (rev 162 — per-day mute) — the counter keeps growing through ack-and-spike-again cycles, so a memory entry that's been 'ack me daily' for a week shows the strongest possible chronic-noise signal. Purely additive on top of the rev-161 daily detector — no change to the rev-161 behavior unless an operator's entry crosses the chronic threshold.
- **Per-memory chronic warning Slack push + memory.chronic_warning outbound event in pingMemoryCostSpikes sub-sweep.** A new chronic-warning sub-sweep added to pingMemoryCostSpikes mirrors the rev-70 tag chronic / rev-72 source chronic / rev-72 assignee chronic patterns at the per-memory axis. For each workspace where any memory entry's counter has crossed the chronic threshold (3 days) AND hasn't been chronic-acked within the 7-day TTL: (a) Slack push via the new buildMemoryChronicWarningSlackPayload() block (header `:hourglass_flowing_sand: Chronic per-memory cost spike` + per-entry rows with `Nd in a row`, ratio, today $, retrieval count + recommendation copy 'consider pinning, raising importance, or refactoring the surrounding tasks'); (b) outbound memory.chronic_warning event via dispatchMemoryChronicWarningWebhook(); (c) memory_chronic_warning activity-log entry rate-limited to once per workspace per 24h. Same dead-Slack-webhook auto-clear path as the rev-161 daily push. Distinct from the rev-161 daily ⚡ alarm — a chronic warning names a *structural* problem (the entry is being retrieved too often by too many cycles), so the right operator response is structural (pin / raise importance / refactor surrounding tasks), not 'stop alarming today.' A sketch of the Slack payload shape follows this list.
- **Chronic-ack chip + chronic bulk-ack bar on TopCostMemoryPanel + per-row ⏳ Nd chronic chip.** A new MemoryChronicAckButton client component mounts inline beside the rev-163 ⏳ chronic chip on every chronically-spiking row of the rev-159 TopCostMemoryPanel. Its brand-amber palette (`rgba(232,159,75,*)`) is distinct from the rev-162 brand-red daily ack chip, so operators read both ack horizons (today / structural) at two distinct attention levels on the same row. A new chronic bulk-ack bar surfaces above the row list when canAck && visibleChronicCount >= 2 — mirrors the rev-162 daily bulk-ack bar at the chronic axis. A new `⏳ Nd in a row` chronic chip appears on every row whose counter has crossed the threshold AND hasn't been chronic-acked within the 7-day TTL. The full row-level reading order is now consistent: ⚡ (today's alarm) ↔ Ack (mute today) :: ⏳ Nd (structural alarm) ↔ Ack 7d (mute 7d). Four layers of cost-axis context on every row (cumulative + trajectory + daily alarm + chronic alarm) — operators triaging a load-bearing memory entry now see the full descriptive→defensive picture without leaving the panel. The gating logic behind the chip and bulk-ack bar is sketched after this list.
- **v1 endpoints (chronic-warnings GET + chronic-ack POST + bulk POST) + memory.chronic_warning_acked closure + OpenAPI typed coverage — 85th unbroken cadence rev.** Three new v1 endpoints close the chronic axis on the protocol-bound surface in the same cycle the dashboard primitive ships: GET /api/v1/memory/chronic-warnings (returns memory entries whose counter has crossed the chronic threshold AND haven't been chronic-acked within the 7-day TTL), POST /api/v1/memory/{memoryId}/chronic-ack (single-entry chronic ack — mirrors `/sources/{id}/chronic-ack` rev 72 + `/cost/by-tag/{tag}/chronic-ack` rev 71 at the per-memory axis), and POST /api/v1/memory/chronic-ack/bulk (bulk chronic-ack of up to 50 IDs — mirrors `/sources/chronic-ack/bulk` rev 87 + `/cost/by-tag/chronic-ack/bulk` rev 87 at the per-memory axis on the chronic horizon). Plus matching dashboard endpoints (POST /api/memory/{id}/chronic-ack + POST /api/memory/chronic-ack/bulk). New memory.chronic_warning + memory.chronic_warning_acked outbound events ship with full payload typing — the closure receipt fires from both single and bulk ack, so downstream FinOps integrations can reconcile alarm-open with alarm-acknowledged at the per-knowledge-entity axis on the chronic horizon (a consumer sketch follows this list). The OpenAPI 3.1 spec types every new endpoint with full request/response schemas; the chronic-warnings response shape carries the same field projection as the rev-161 daily endpoint plus consecutiveSpikeDays + chronicAckedAt. The OpenAPI spec changelog header gains a rev-163 block explaining the chronic-axis closure on the seventh alarm-cluster axis. The cadence pattern from rev 78 onward (every dashboard primitive gets typed in the OpenAPI 3.1 spec in the same cycle it ships) reaches its 85th unbroken rev with rev 163. The cost-spike alarm cluster on the protocol-bound surface now closes both the daily horizon (rev 161/162) AND the chronic horizon (rev 163) on every axis where chronic makes sense (per-source / per-assignee / per-tag / per-memory). The MCP server has nothing left to design across detect → triage → ack on either horizon.
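For the counter maintenance in the first item, a minimal sketch of the two batched UPDATEs the sweep could run, assuming a generic `db.execute(sql, params)` helper and a `spikingIds` array holding whatever the rev-161 detector flagged today (both names hypothetical; column quoting and casing are also assumptions):

```ts
// Sketch only: `db.execute` is a hypothetical query helper and `spikingIds`
// stands in for whatever entry IDs the rev-161 detector flagged today.
async function maintainConsecutiveSpikeDays(
  db: { execute: (sql: string, params: unknown[]) => Promise<unknown> },
  spikingIds: string[],
): Promise<void> {
  if (spikingIds.length > 0) {
    // Batched UPDATE #1: bump every entry flagged as spiking today.
    await db.execute(
      `UPDATE memory_entry
          SET "consecutiveSpikeDays" = "consecutiveSpikeDays" + 1
        WHERE id = ANY($1)`,
      [spikingIds],
    );
  }
  // Batched UPDATE #2: reset entries with a non-zero counter that are not
  // spiking today, so the counter always means "days in a row, ending today".
  await db.execute(
    `UPDATE memory_entry
        SET "consecutiveSpikeDays" = 0
      WHERE "consecutiveSpikeDays" > 0
        AND NOT (id = ANY($1))`,
    [spikingIds],
  );
}
```

Guarding the reset with `"consecutiveSpikeDays" > 0` means the sweep never touches the (large) majority of rows that are neither spiking today nor carrying a stale streak.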
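For the Slack push in the second item, a hedged sketch of what buildMemoryChronicWarningSlackPayload() might assemble as a Block Kit payload; the entry fields (`title`, `spikeRatio`, `todayCostUsd`, `retrievalCount`) are assumed stand-ins, while the header text, the `Nd in a row` wording, and the recommendation copy come from the entry above:

```ts
// Sketch of the Block Kit payload the builder might return. Entry fields are
// assumptions; only the header text, "Nd in a row", and the recommendation
// copy are taken from the changelog entry.
interface ChronicEntry {
  title: string;
  consecutiveSpikeDays: number;
  spikeRatio: number; // today's cost vs. the rev-161 baseline
  todayCostUsd: number;
  retrievalCount: number;
}

function buildMemoryChronicWarningSlackPayload(entries: ChronicEntry[]) {
  return {
    blocks: [
      {
        type: "header",
        text: {
          type: "plain_text",
          text: ":hourglass_flowing_sand: Chronic per-memory cost spike",
          emoji: true,
        },
      },
      ...entries.map((e) => ({
        type: "section",
        text: {
          type: "mrkdwn",
          text:
            `*${e.title}* · ${e.consecutiveSpikeDays}d in a row · ` +
            `${e.spikeRatio.toFixed(1)}x baseline · $${e.todayCostUsd.toFixed(2)} today · ` +
            `${e.retrievalCount} retrievals`,
        },
      })),
      {
        type: "context",
        elements: [
          {
            type: "mrkdwn",
            text: "Consider pinning, raising importance, or refactoring the surrounding tasks.",
          },
        ],
      },
    ],
  };
}
```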
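For the third item, a sketch of the row-level gating behind the ⏳ chip and the chronic bulk-ack bar; the row field names, component shape, and helper names are hypothetical, while the 3-day threshold, 7-day ack TTL, amber palette, and `⏳ Nd in a row` copy are from the entry above:

```tsx
import * as React from "react";

// Values from the changelog entry: 3-day chronic threshold, 7-day ack TTL.
const CHRONIC_THRESHOLD_DAYS = 3;
const CHRONIC_ACK_TTL_MS = 7 * 24 * 60 * 60 * 1000;

// A row is chronic when its counter crossed the threshold and any chronic ack
// has aged out of the 7-day TTL. Field names are assumed from the new columns.
export function isChronicallySpiking(row: {
  consecutiveSpikeDays: number;
  chronicAckedAt: string | null;
}): boolean {
  const ackStillLive =
    row.chronicAckedAt !== null &&
    Date.now() - new Date(row.chronicAckedAt).getTime() < CHRONIC_ACK_TTL_MS;
  return row.consecutiveSpikeDays >= CHRONIC_THRESHOLD_DAYS && !ackStillLive;
}

// Brand-amber keeps the structural alarm visually distinct from the brand-red
// daily ack chip sitting on the same row.
export function ChronicChip({ days }: { days: number }) {
  return (
    <span style={{ background: "rgba(232,159,75,0.15)", color: "rgba(232,159,75,1)" }}>
      ⏳ {days}d in a row
    </span>
  );
}

// The chronic bulk-ack bar renders only when the operator can ack and at least
// two visible rows are chronic, mirroring the rev-162 daily bulk-ack bar.
export const showChronicBulkAckBar = (canAck: boolean, visibleChronicCount: number) =>
  canAck && visibleChronicCount >= 2;
```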
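For the fourth item, a sketch of how a downstream FinOps consumer might walk the new v1 endpoints, listing open chronic warnings and then bulk-acking them in chunks of 50; the base URL, bearer-token auth, response envelope (`warnings` key), and request-body key (`memoryIds`) are assumptions, and only the paths and the 50-ID bulk limit come from the entry above:

```ts
// Sketch of a FinOps-side consumer. Base URL, auth header, `warnings` envelope,
// and `memoryIds` body key are assumptions; the paths and the 50-ID bulk limit
// come from the changelog entry.
const BASE = "https://example.invalid/api/v1";
const headers = {
  Authorization: `Bearer ${process.env.API_TOKEN ?? ""}`,
  "Content-Type": "application/json",
};

async function ackAllChronicWarnings(): Promise<void> {
  // 1. List every entry past the chronic threshold and outside its 7-day ack TTL.
  const res = await fetch(`${BASE}/memory/chronic-warnings`, { headers });
  const { warnings } = (await res.json()) as {
    warnings: { memoryId: string; consecutiveSpikeDays: number; chronicAckedAt: string | null }[];
  };
  if (warnings.length === 0) return;

  // 2. Bulk chronic-ack in chunks of 50 (the documented per-request limit).
  //    Each ack fires a memory.chronic_warning_acked outbound event that the
  //    FinOps side can match against the earlier memory.chronic_warning open.
  for (let i = 0; i < warnings.length; i += 50) {
    const memoryIds = warnings.slice(i, i + 50).map((w) => w.memoryId);
    await fetch(`${BASE}/memory/chronic-ack/bulk`, {
      method: "POST",
      headers,
      body: JSON.stringify({ memoryIds }),
    });
  }
}
```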