This Week in AI Security - 16 October 2025

Modern Cyber with Jeremy Snyder - Een podcast door Jeremy Snyder

Podcast artwork

Categorieën:

In this week's episode of This Week in AI Security, Jeremy covers four key developments shaping the AI security landscape.Jeremy begins by analyzing a GitHub Copilot flaw that exposed an LLM vulnerability similar to the one Jeremy disclosed last week. Researchers were able to use a hidden code comment feature to smuggle malicious prompts into the LLM, allowing them to potentially exfiltrate secrets and source code from private repositories. This highlights a growing risk in how LLMs process different input formats.Next, we discuss a fascinating research paper demonstrating the effectiveness of data poisoning. The study found that corrupting a model's behavior was possible with as few as 250 malicious documents—even in models with large training sets. By embedding a malicious command that mimicked sudo, researchers could implement a backdoor that sends data out, proving that the Attack Success Rate (ASR) is a critical metric for this real-world threat.We then examine a story at the intersection of agentic AI and supply chain risk, where untrusted actors exploited vulnerabilities in AI development plugins. By intercepting system prompts that lacked proper encryption, an attacker could discover the agent's permissions and potentially exfiltrate sensitive data, including Windows NTLM credentials.Finally, we look at the latest State of AI report, which provides further confirmation that LLMs like Claude are being used by malicious actors—specifically suspected North Korean state actors—to "vibe hack" the hiring process. By using AI to create perfect-looking resumes and tailored interview responses, the traditional method of spotting phony candidates by poor text quality is no longer reliable.Episode Links:https://www.securityweek.com/github-copilot-chat-flaw-leaked-data-from-private-repositories/https://www.anthropic.com/research/small-samples-poisonhttps://versprite.com/blog/watch-who-you-open-your-door-to-in-ai-times/https://excitech.substack.com/p/16-highlights-from-the-state-of-aihttps://www.stateof.ai/https://www.firetail.ai/blog/we-interviewed-north-korean-hacker-heres-what-learnedDiscover all of your Shadow AI now...Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform.

Visit the podcast's native language site