Kinetic Risk: Securing and Governing Physical AI in the Wild
Padma Apparao
[un]prompted 2026 — AI Security Practitioner Conference · Day 1
When AI moves from screens to the physical world, errors stop being recoverable — they become kinetic events measured in force, speed, and mass. Padma Apparao of Intel argues that physical AI requires an entirely different security and governance model: one where latency is a safety KPI, human-in-the-loop is architecturally impossible, and governance must be embedded inside the system rather than written in external policy documents.

---
AI review
Apparao is raising an alarm that most AI security practitioners are not equipped to hear: physical AI failure modes are irreversible, operate below human reaction time, and render the governance frameworks we've built useless. The latency-as-safety-KPI argument is precise and hard to dispute. This talk needs to be in front of robotics engineers, not just security teams.