Another AI Talk! But We Need to Talk About How Cybercriminals Actually (Ab)Use AI
Thomas Roccia
Okay, okay… I know. You have heard about AI everywhere. You initially laughed at ChatGPT writing poems for you. Maybe you used Claude Code and got amazed by the app it built in minutes. Maybe you use OpenClaw to manage your tasks, or Gemini to draft your emails.
You get my point. AI is everywhere now. We are giving trust to systems that are, by design, untrusted. We are massively increasing the attack surface of technologies powerful enough to make us faster, more productive, and more competitive… while also exposing our assets and most sensitive data to the outside world.
Cybercriminals know that.
They know AI helps them scale faster, automate operations, and make more money. But they also know they can exploit the windows you leave wide open.
In this talk, I don't want to talk about hype. We have heard enough. I want to talk about what is actually happening right now.
How a simple Markdown file can manipulate an AI agent. How poisoned MCP servers and malicious skills can compromise entire workflows. How prompt injection becomes a real intrusion vector once an agent has access to tools and actions. How attackers can abuse your own AI infrastructure to power underground services, leak sensitive data, or execute operations inside trusted environments.
Your AI agent might be the best productivity tool you have ever used… but it might also become the insider threat you invited into your environment.
Through this talk, I want to give you insight into what is already possible, expose current TTPs used against AI systems, and discuss our responsibility as a cybersecurity community to lay the secure foundation of our future AI systems.
The AI apocalypse might not be the one you think. It will probably not start with robots taking over the world.
It will start with an AI agent reading one malicious Markdown file and your SOC wondering why the database suddenly disappeared.