How attackers manipulate AI models: Practical lessons in AI security
Defending AI models from adversarial inputs, data poisoning, and other security threats.
In “Hacking AI – How to survive the AI uprising,” Gant Laborde explores how attackers manipulate AI models through techniques like adversarial image perturbation, data poisoning, and model inversion. Using real-world examples, Gant highlights the risks and vulnerabilities of AI systems, from self-driving cars to facial recognition. The talk offers practical strategies for developers to secure their models against malicious inputs and adversarial agents. Learn how to defend AI systems and make them more resilient in an increasingly hostile digital landscape.
Read more: Why AI won’t replace developers
Presentation summary
“Hacking AI – How to survive the AI uprising,” by Gant Laborde, highlights how vulnerable AI systems are to attacks that exploit their inputs—whether audio, visual, or data-based. From inaudible “dolphin attacks” that can open garage doors via smart assistants, to image rescaling tricks that cause models to see one thing while humans see another, these vulnerabilities are practical, reproducible, and already being exploited.
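To make the rescaling trick concrete, here is a minimal, self-contained sketch of the mechanism. It does not reproduce an attack on any particular library's resizer; it simply assumes a preprocessing pipeline that shrinks images with nearest-neighbor sampling, which reads only a sparse grid of pixels, so an attacker can overwrite exactly those pixels and leave the rest of the picture untouched.

```python
import numpy as np

def nearest_neighbor_downscale(img: np.ndarray, out_size: int) -> np.ndarray:
    """Shrink a square grayscale image by sampling one source pixel per output pixel."""
    in_size = img.shape[0]
    idx = (np.arange(out_size) * in_size) // out_size  # source row/col sampled per output pixel
    return img[np.ix_(idx, idx)]

def craft_scaling_attack(cover: np.ndarray, hidden: np.ndarray) -> np.ndarray:
    """Embed `hidden` in `cover` so it appears only after downscaling."""
    out_size = hidden.shape[0]
    in_size = cover.shape[0]
    idx = (np.arange(out_size) * in_size) // out_size
    attack = cover.copy()
    attack[np.ix_(idx, idx)] = hidden  # overwrite only the pixels the downscaler will sample
    return attack

# Toy demo: a 512x512 mid-grey "cover" image hides a 64x64 all-white image.
cover = np.full((512, 512), 128, dtype=np.uint8)   # roughly what a human sees
hidden = np.full((64, 64), 255, dtype=np.uint8)    # what the model will see

attack_img = craft_scaling_attack(cover, hidden)
model_view = nearest_neighbor_downscale(attack_img, 64)

assert np.array_equal(model_view, hidden)
changed = np.mean(attack_img != cover)             # fraction of pixels modified
print(f"Only {changed:.2%} of pixels changed, yet the downscaled image is entirely the hidden one.")
```

Real scaling attacks solve an optimization problem so the embedded pixels stay visually inconspicuous, but the underlying idea is the same: the model and the human end up looking at different images.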
Adversarial image manipulation, model inversion, and input perturbation demonstrate how attackers can confuse, deceive, or extract information from AI systems—sometimes with just a few pixels or subtle audio tweaks. Examples include misclassifying daisies as vases or using pixel-level noise to fool object detection systems into thinking a stop sign doesn’t exist. These issues raise urgent concerns around safety, especially as AI powers more real-world decisions.
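The pixel-level noise described above is typically generated with gradient-based methods. The sketch below uses the well-known fast gradient sign method (FGSM) against a toy PyTorch classifier; the talk does not prescribe this exact recipe, and the model, input, and epsilon here are placeholders standing in for a real trained network and photo.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in classifier; in practice this would be a real trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return `image` plus a small, loss-increasing perturbation (FGSM)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel by +/- epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

image = torch.rand(1, 3, 32, 32)            # placeholder "photo" with values in [0, 1]
label = model(image).argmax(dim=1)          # treat the current prediction as the true label
adv = fgsm_perturb(model, image, label)

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adv).argmax(dim=1).item())
print("max pixel change:      ", (adv - image).abs().max().item())  # never exceeds epsilon
```

Against this untrained toy model the prediction may or may not flip, but against real classifiers small epsilon values routinely change the output while the perturbation stays imperceptible to a human.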
Rather than fearmongering, Gant urges developers to get informed and involved. Most attacks exploit predictable model behavior and training assumptions. If developers understand how these attacks work, they can start defending against them—whether by improving datasets, using robust training techniques, or simply asking better questions about how their systems might break.
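One of the robust training techniques alluded to here is adversarial training: perturbed copies of the training data are mixed into each batch so the model learns to resist them. Below is a minimal sketch, reusing the fgsm_perturb() helper and imports from the previous example and assuming a standard PyTorch model, optimizer, and data loader.

```python
def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step on both clean and FGSM-perturbed copies of a batch."""
    adv_images = fgsm_perturb(model, images, labels, epsilon)  # helper defined above
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels) +
            F.cross_entropy(model(adv_images), labels)) / 2
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch (train_loader is an assumed torch.utils.data.DataLoader):
# for images, labels in train_loader:
#     adversarial_training_step(model, optimizer, images, labels)
```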
Key takeaways
- AI systems can be manipulated with surprisingly simple techniques, many of which are already documented and open source.
- Security and robustness must be part of model design, not an afterthought once performance benchmarks are met.
- Developers have a responsibility to understand how their systems can fail, especially when AI is deployed in safety-critical or user-facing applications.
Conclusion
This talk is a wake-up call: AI isn’t just software—it’s an attack surface. As developers, we need to move beyond the hype and build with a security mindset. The tools exist. The threats are real. Now’s the time to get smarter about how we use and protect AI.
More from We Love Open Source
- What is Artificial Intelligence?
- How AI will change frontend development in 2025
- The best programming languages to learn first
- Why AI won’t replace developers
- What is OpenTelemetry?