Found in Translation: A Generative Language Modeling Approach to Memory Access Pattern Attacks
Grace Jia
34th USENIX Security Symposium (USENIX Security '25) · Day 3 · System Security 5: Securing Systems and Protocols
In the realm of confidential computing, where sensitive applications process data within hardware-protected environments, sophisticated side-channel attacks continue to emerge. This talk, "Found in Translation," presented by Grace Jia of Yale University, unveils a memory access pattern attack that leverages generative language modeling to infer private, object-level data from seemingly innocuous page-level access traces. The research, a collaborative effort with Alex Wong and Anurag Khandelwal, demonstrates a practical and highly accurate method by which an OS-level adversary can breach the confidentiality guarantees of trusted execution environments.
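To make the threat model concrete, here is a hypothetical sketch (not the paper's code) of why page-level traces leak object-level information: with 4 KiB pages, an OS-level adversary observing page faults sees only the upper address bits, so multiple objects collapse onto the same page, and recovering the object sequence becomes a many-to-one "translation" problem. The object names and addresses below are illustrative assumptions.

```python
PAGE_SHIFT = 12  # 4 KiB pages: the OS-level observer sees only addr >> 12

# Toy object layout: object name -> virtual address (illustrative only)
objects = {
    "user_record_A": 0x7F0000001040,
    "user_record_B": 0x7F00000018C0,  # lands on the same page as A
    "embedding_row": 0x7F0000003200,
}

def page_trace(access_sequence):
    """Collapse an object-level access sequence into the page IDs the OS observes."""
    return [objects[name] >> PAGE_SHIFT for name in access_sequence]

accesses = ["user_record_A", "embedding_row", "user_record_B"]
trace = page_trace(accesses)
# A and B share a page, so the trace alone cannot distinguish them;
# a generative model can exploit sequence context to resolve the ambiguity.
```

The many-to-one collapse is exactly what makes naive lookup insufficient and motivates treating trace inversion as sequence-to-sequence translation.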
AI review
Solid academic research that meaningfully advances the SGX/enclave side-channel attack surface by grafting sequence-to-sequence language modeling onto page-access inference — a genuinely novel framing that outperforms Markov and frequency baselines on real workloads. The 70-99% accuracy range on DLRM, a medical LLM, and HNSW across both Nitro Enclaves and SGX is the kind of empirical breadth that separates real papers from toy demos. Doesn't quite crack the 5-star ceiling because the defensive section is thin and the architectural choices (BERT encoder-decoder) could use harder justification against…
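For context on the comparison the review draws, a first-order Markov baseline for next-page prediction could look like the following minimal sketch. This is an assumption about what such a baseline entails, not the paper's actual implementation.

```python
from collections import defaultdict, Counter

def train_markov(traces):
    """Count page-to-page transitions across training traces."""
    counts = defaultdict(Counter)
    for trace in traces:
        for prev, nxt in zip(trace, trace[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, page):
    """Predict the most frequent successor of `page`, or None if unseen."""
    if page not in counts or not counts[page]:
        return None
    return counts[page].most_common(1)[0][0]

# Toy traces: page 2 is followed by 3 twice and by 4 once,
# so the baseline predicts 3 after seeing page 2.
model = train_markov([[1, 2, 3, 1, 2, 4], [1, 2, 3]])
```

Such a model conditions only on the immediately preceding page, which is the limitation a sequence model with longer context is positioned to beat.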