Read Between The Logs: A New Vulnerability in Gemini Cloud Assist Proves the Threat is Real

Liv Matan

fwd:cloudsec North America 2025 · Day 2 · Track 2 - Crestone

Liv Matan, a cloud vulnerability researcher on the **Tenable Cloud Security Research Team**, presented the discovery of a new attack class: **log poisoning to prompt injection** targeting cloud-integrated AI assistants. By injecting prompt injection payloads into HTTP request headers (specifically the User-Agent header), an attacker can poison the logs of a victim's cloud environment. When a defender then uses **Gemini Cloud Assist** to analyze those logs via GCP's "Explain this log entry" feature, the poisoned logs are passed to Gemini as part of an automated prompt, triggering the injection. The vulnerability was reported to Google and fixed, but the underlying attack class -- using cloud logs as a delivery mechanism for prompt injection against AI-powered analysis tools -- represents a persistent and expanding threat surface.

AI review

A genuinely novel attack class -- log poisoning as a prompt injection delivery mechanism against cloud AI assistants. The User-Agent header to Gemini Cloud Assist pipeline is elegant, the JSON context escape is well-crafted, the blast radius across all GCP public network services is significant, and the Azure Copilot handler bypass is a nice bonus. Real vulnerability, real fix from Google, real PoC. This is what cloud security research should look like.

Watch on YouTube