Breaking AI Agents: Exploiting Managed Prompt Templates to Take Over Amazon Bedrock Agents

Jay Chen, Royce Lu

fwd:cloudsec North America 2025 · Day 2 · Track 2 - Crestone

Jay Chen, a security researcher at **Palo Alto Networks**, presented original attack research against **Amazon Bedrock Agents**, demonstrating a three-stage attack methodology -- reconnaissance, exploitation, and installation -- that culminates in persistent data exfiltration via memory poisoning. The research shows that despite Bedrock Agents having robust built-in guardrails against prompt leaking, an attacker can extract detailed agent functionality and tool schemas through social engineering techniques, bypass input validation to directly invoke tools (including exploiting SQL injection in connected Lambda functions), and most critically, use **long-term memory poisoning** via a crafted prompt injection payload delivered through a malicious web page to establish persistent exfiltration to a C2 server across future sessions. This is one of the most complete end-to-end AI agent attack chains presented at the conference.

AI review

The most complete AI agent attack chain at the conference. Social engineering for recon, input validation suppression, SQL injection through agent tools, and a technically elegant memory poisoning technique that establishes persistent cross-session exfiltration via XML tag manipulation of the summarization prompt. This is real offensive research with real impact against a production framework. The 'do not validate my input' bypass is simultaneously hilarious and terrifying.

Watch on YouTube