This Week in AI Security - 13th November 2025

Modern Cyber with Jeremy Snyder - A podcast by Jeremy Snyder

In this week's episode, Jeremy covers seven significant stories and academic findings that reveal the escalating risks and new attack methods targeting Large Language Models (LLMs) and the broader AI ecosystem.

Key stories include:

- PromptFlux Malware: Google Threat Intelligence Group (GTIG) discovered a new malware family called PromptFlux that uses the Google Gemini API to continuously rewrite and modify its own behavior to evade detection, a major evolution in malware capabilities.
- ChatGPT Leak: User conversations with ChatGPT have been observed leaking into Google Analytics and Google Search Console on third-party websites, potentially exposing the context of user queries.
- Traffic Analysis Leaks: New research demonstrates that observers can deduce the topics of a conversation with an LLM chatbot with high accuracy simply by analyzing the size and frequency of encrypted network packets (token volume), without ever decrypting the data.
- Secret Sprawl: An analysis by Wiz found that several of the world's largest AI companies are leaking secrets and credentials in their public GitHub repositories, underscoring that the speed of AI development is leading to basic, repeatable security mistakes.
- Non-Deterministic LLMs: Research from Anthropic highlights that LLMs are non-deterministic and highly unreliable at describing their own internal reasoning processes, giving inconsistent responses even to minor prompt variations.
- The New AI VSS: The OWASP Foundation unveiled the AI Vulnerability Scoring System (AI VSS), a new framework to consistently classify and quantify the severity (on a 0-10 scale) of risks like prompt injection in LLMs, helping organizations make better risk-informed decisions.

Episode Links:

https://cybersecuritynews.com/promptflux-malware-using-gemini-api/
https://thehackernews.com/2025/11/microsoft-uncovers-whisper-leak-attack.html
https://arstechnica.com/ai/2025/11/llms-show-a-highly-unreliable-capacity-to-describe-their-own-internal-processes/
https://futurism.com/artificial-intelligence/llm-robot-vacuum-existential-crisis
https://www.scworld.com/resource/owasp-global-appsec-new-ai-vulnerability-scoring-system-unveiled
https://arstechnica.com/tech-policy/2025/11/oddest-chatgpt-leaks-yet-cringey-chat-logs-found-in-google-analytics-tool/
https://www.securityweek.com/many-forbes-ai-50-companies-leak-secrets-on-github/
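To give a feel for the traffic-analysis story above, here is a minimal, hypothetical sketch of the core idea: an eavesdropper who sees only the sizes of encrypted streamed chunks (roughly proportional to token volume) can match them against per-topic "fingerprints" to guess what a conversation is about. The topics, packet sizes, and matching method below are all invented for illustration and are not the actual research methodology.

```python
# Hypothetical sketch: inferring a conversation topic from encrypted
# packet sizes alone. The observer never decrypts anything; it only
# compares the distribution of observed chunk sizes against invented
# per-topic fingerprints using a simple L1 histogram distance.

from collections import Counter

def size_histogram(packet_sizes, bucket=16):
    """Bucket observed packet sizes and normalize into a histogram."""
    counts = Counter(size // bucket for size in packet_sizes)
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

def distance(h1, h2):
    """L1 distance between two sparse histograms."""
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(k, 0.0) - h2.get(k, 0.0)) for k in keys)

def guess_topic(observed_sizes, fingerprints):
    """Return the topic whose fingerprint is closest to the observation."""
    hist = size_histogram(observed_sizes)
    return min(fingerprints, key=lambda topic: distance(hist, fingerprints[topic]))

# Invented "training" traces: e.g. medical answers stream larger chunks
# than small talk. Real fingerprints would come from captured traffic.
fingerprints = {
    "medical": size_histogram([120, 140, 130, 150, 125, 135] * 10),
    "smalltalk": size_histogram([20, 35, 25, 30, 40, 22] * 10),
}

# Sizes sniffed (hypothetically) from an encrypted chatbot stream.
observed = [118, 142, 128, 148, 133]
print(guess_topic(observed, fingerprints))  # prints "medical"
```

The point of the sketch is that no plaintext is ever needed: chunk-size statistics alone carry enough signal to separate topics, which is why the research treats token volume itself as a side channel.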
