Incident Response Planning Using a Lightweight Large Language Model with Reduced Hallucination

Kim Hammar

Network and Distributed System Security (NDSS) Symposium 2026 · Day 1 · Web Security

This talk presents a novel method for automated incident response planning that combines a fine-tuned lightweight LLM with look-ahead optimization to generate response plans with **theoretical performance guarantees** and **reduced hallucination risk**. Unlike existing LLM-based incident response systems that rely on external providers such as OpenAI or Google (which requires sharing sensitive incident data externally), this method runs locally, using a **14-billion-parameter DeepSeek R1 model** that can be fine-tuned on a commodity GPU in about 8 hours.
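The pairing described above can be sketched as follows: the fine-tuned LLM proposes candidate response actions, and a look-ahead step evaluates each candidate by simulated rollouts before committing. This is a hypothetical minimal sketch, not the talk's actual algorithm; the toy incident model, the `llm_propose` stub, and all names are illustrative assumptions.

```python
import random

# Illustrative candidate actions an incident-response LLM might propose.
ACTIONS = ["isolate_host", "revoke_credentials", "block_ip", "wait"]

def llm_propose(state, k=3):
    """Stand-in for the fine-tuned LLM: returns k candidate actions.
    A real system would decode these from the model's output."""
    return ACTIONS[:k]

def simulate_cost(state, action, rng):
    """Toy incident model (assumed): cost = compromised hosts remaining
    after the action, with stochastic attacker spread."""
    effect = {"isolate_host": 2, "revoke_credentials": 1,
              "block_ip": 1, "wait": 0}[action]
    spread = rng.choice([0, 1])  # attacker may spread while we respond
    return max(0, state["compromised"] - effect + spread)

def lookahead_plan(state, rollouts=50, seed=0):
    """One-step look-ahead: estimate each candidate's expected cost by
    Monte Carlo rollouts and return the cost-minimizing action."""
    rng = random.Random(seed)
    best_action, best_cost = None, float("inf")
    for action in llm_propose(state):
        avg_cost = sum(simulate_cost(state, action, rng)
                       for _ in range(rollouts)) / rollouts
        if avg_cost < best_cost:
            best_action, best_cost = action, avg_cost
    return best_action

if __name__ == "__main__":
    print(lookahead_plan({"compromised": 3}))
```

The look-ahead step is what distinguishes this style of system from pure prompting: the planner only commits to an action after checking its simulated consequences, which is one way to bound the impact of a hallucinated suggestion.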

AI review

A well-engineered approach to LLM-based incident response that combines domain-specific fine-tuning with look-ahead planning and mathematically bounded hallucination risk. The 14B parameter model beating frontier models on commodity hardware is impressive, and the 68K incident dataset on Hugging Face is a genuine community contribution. Not offensive research, but the theoretical rigor around hallucination control is a cut above the typical 'we prompted an LLM and it worked' paper.
