From Security to Safety: Navigating the Ethics of AI as Red Teamers and Penetration Testers
Jeremy Miller
NorthSec 2025 · Day 1 · Ville-Marie
Jeremy Miller argues that security practitioners — particularly red teamers and penetration testers — are better equipped than they realize to take on AI safety work, despite that domain's grounding in normative ethics rather than objective technical facts. Drawing on epistemology, he reframes security work as fundamentally a knowledge-seeking practice rather than a systems-manipulation practice, and shows why that epistemic orientation is exactly what AI safety challenges require. The discomfort red teamers feel when encountering AI safety is not a deficit; it is a signal that they are encountering genuine moral reasoning for the first time — and they already have the tools to engage with it.

---
AI review
OffSec philosopher argues security practitioners are epistemically well-equipped for AI safety work by reframing pentesting as knowledge-seeking rather than systems manipulation. Heavy on Hume, light on demonstrations.