AI's Bitter Lesson for SOCs: Let Machines Be Machines
Jackie Bow, Peter Sanford
BSidesSF 2025 — Here Be Dragons · Day 2 · Main
The detection and response team at Anthropic built an AI-assisted investigation platform called Clue in roughly three months using Claude as both a co-engineer and runtime investigator, without any fine-tuning or specialized ML training. Drawing on the AI research concept of the "bitter lesson" — which holds that general methods beat hand-encoded human reasoning — Jackie Bow and Peter Sanford argue that SOCs should stop trying to codify every analyst decision into SOAR playbooks and instead let foundation models reason freely over their data, with transparent tooling to verify every step.

---
AI review
Bow and Sanford built a working AI investigation platform in three months using foundation models out of the box, showed a live demo, explained the architecture, and laid out exactly how to replicate it. The bitter-lesson framing is not just philosophy — it directly explains why SOAR playbooks fail on novel attack patterns. This is the practical SOC-AI talk the industry has been waiting for.