Weaponizing Apple AI for Offensive Operations
Black Hat USA 2025 · Day 1 · Briefings
A lead red teamer at CVS Health demonstrated how Apple's native AI frameworks — CoreML, Vision, and AVFoundation — can be weaponized for C2 operations, payload staging, and evasion. None of the techniques were detected by the EDR or antivirus engines tested, exploiting a fundamental blind spot: security tools do not scan `.mlmodel` files.

---
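The weight-tensor trick works because model weights are opaque float arrays that parsers load but never inspect. A minimal sketch of the idea, using only the standard library: hide one payload byte in the least-significant mantissa byte of each 32-bit float, which perturbs each weight by at most a few parts in 100,000. The function names and the exact encoding are illustrative assumptions; the talk's actual `.mlmodel` packing was not published.

```python
import struct

def embed(weights, payload):
    # Hypothetical sketch: overwrite the low mantissa byte of each
    # float32 weight with one payload byte. Little-endian float32
    # means byte 0 is the least-significant mantissa byte.
    assert len(payload) <= len(weights)
    out = []
    for i, w in enumerate(weights):
        b = bytearray(struct.pack("<f", w))
        if i < len(payload):
            b[0] = payload[i]
        out.append(struct.unpack("<f", bytes(b))[0])
    return out

def extract(weights, n):
    # Recover the payload by reading back the low byte of each weight.
    return bytes(struct.pack("<f", w)[0] for w in weights[:n])

weights = [0.1 * i for i in range(64)]     # stand-in for a real tensor
payload = b"exec:stage2"                   # illustrative payload marker
stego = embed(weights, payload)
recovered = extract(stego, len(payload))
```

Since the relative perturbation per weight is bounded by 255/2^23 (about 3e-5), the doctored model's behavior is essentially unchanged, which is what makes the channel hard to spot statistically.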
AI review
Solid evasion technique with genuine real-world impact — embedding payloads in CoreML weight tensors and pixel steganography is novel enough to deserve attention. But MLARC is a proof-of-concept C2 with obvious limitations, the underlying insight is 'parsers don't inspect opaque binary formats,' and the defensive section is thin. Practitioners will learn something; researchers will shrug.
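The pixel-steganography channel mentioned in the review is, at its core, classic least-significant-bit embedding over raw RGB bytes. A stdlib-only sketch under that assumption (the talk presumably routed real images through Vision/AVFoundation, which is not reproduced here; all names are illustrative):

```python
def lsb_embed(pixels: bytearray, payload: bytes) -> bytearray:
    # Spread payload bits, MSB-first, across the least-significant
    # bit of consecutive pixel bytes. Each byte changes by at most 1.
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(pixels)
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def lsb_extract(pixels, nbytes):
    # Reassemble payload bytes from the LSBs, MSB-first per byte.
    data = bytearray()
    for b in range(nbytes):
        val = 0
        for i in range(8):
            val = (val << 1) | (pixels[b * 8 + i] & 1)
        data.append(val)
    return bytes(data)

pixels = bytearray((i * 7) % 256 for i in range(512))  # stand-in image data
secret = b"beacon"
stego = lsb_embed(pixels, secret)
recovered = lsb_extract(stego, len(secret))
```

A per-byte change of at most 1 is visually imperceptible, which is why detection has to rely on statistical analysis or on inspecting the formats in the first place, the exact blind spot the talk exploits.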