Prompt Injection Attacks: Understanding and Mitigating Exploiting Risks in LLM-Powered Web Apps
2024-12-11 , Sala 150

Dive into the wild world of prompt injection attacks in LLM-powered web apps! As AI chatbots and assistants become ubiquitous, a new breed of security vulnerabilities emerges. In this hands-on adventure, we'll dissect vulnerable chatbots, craft sneaky exploits, and explore robust defense strategies.

Get ready to break (and fix) things as we create a simple LLM-powered app, then systematically exploit its weaknesses. You'll learn to identify common vulnerabilities, understand the anatomy of prompt injection attacks, and implement effective countermeasures. Perfect for devs who want to build safer AI-driven interfaces and stay ahead in the AI security game!

Creative developer Jorrik bridges the gap between design and functionality. With 8+ years in front-end, he's now exploring AI's potential. At Sopra Steria, he builds innovative web solutions and shares his tech journey through public speaking.