Engineering posts on AI provenance, agent memory, prompt-injection detection, token-cost reduction, and per-line blame for AI-touched code.
settings.json Decides TrustClaude Code < 2.1.53 resolved permissions.defaultMode from a committed .claude/settings.json before deciding whether to show the workspace trust dialog. CWE-807, fixed in 2.1.53.
A load-order bug let project code execute before the user accepted Claude Code's startup trust prompt. CWE-94, fixed in 1.0.111. The mechanics, the patch, and the design lesson for any agentic CLI.
Anthropic gives you a memory primitive. h5i gives you the layers above it — versioned reasoning, per-commit snapshots, and a SessionStart hook that injects the right slice of context into every new session.
git blame to AI blame: per-line provenance for AI-era codegit blame answers "who wrote this" with a name and a date. h5i blame adds the prompt, the model, the agent, and the test result that produced each line — same ergonomics, four more answers.
Prompt caching solves the cost of re-sending. It does not solve the cost of re-deriving. The A/B benchmark, N=10: 510k → 117k cache-read tokens, 5.6× → 1.0× file reads, identical task fidelity.
Your team merges 50 PRs a week, 30 of them AI-assisted. A four-vector framework — blind edits, uncertainty, scope creep, prompt injection — produces a single ranked review queue.
The injection lives in the trace, not the output. Eight deterministic regex rules over OBSERVE/THINK/ACT entries catch override and exfiltration patterns with no model in the audit path.
One command to make Claude confess every line it wasn't sure about. Inside the thinking blocks: a calibrated vocabulary of self-doubt that becomes a per-file review heatmap.