Balázs Kiss
A hundred ways to wreck your AI - the (in)security of machine learning systems
#1 · about 4 minutes
The security risks of AI-generated code
AI tools can generate code quickly but may introduce vulnerabilities or rely on outdated practices, a reminder that AI systems are themselves code and can be exploited.
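As an illustrative sketch of the kind of flaw this chapter warns about (the example is mine, not from the talk): string interpolation into SQL, a bug AI assistants still commonly produce, next to the parameterized variant that avoids it.

```python
import sqlite3

# In-memory database with one row of "sensitive" data (illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Typical generated code: the value is spliced into the SQL text,
    # so attacker-controlled input becomes part of the query.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the value as data only.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # leaks every row: [('s3cret',)]
print(lookup_safe(payload))    # matches nothing: []
```

The injected payload turns the unsafe query's WHERE clause into a tautology, dumping the whole table, while the parameterized version simply finds no user with that literal name.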
#2 · about 5 minutes
Fundamental AI vulnerabilities and malicious misuse
AI systems are prone to classic failures like overfitting and can be maliciously manipulated through deepfakes, chatbot poisoning, and adversarial patterns.
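The classic overfitting failure mentioned here can be sketched in a few lines (a toy of my own, not from the talk): a degree-9 polynomial fit to 10 noisy points memorizes the training data almost perfectly yet generalizes worse than a plain line.

```python
import numpy as np

# Ground truth is a line y = 2x; training labels carry a little noise.
rng = np.random.default_rng(2)
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2 * x_train + rng.normal(scale=0.1, size=10)
x_test = np.linspace(0.05, 0.95, 10)   # held-out points between samples
y_test = 2 * x_test

line = np.polyfit(x_train, y_train, 1)     # simple model
wiggly = np.polyfit(x_train, y_train, 9)   # interpolates every point

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# The wiggly model "wins" on training data by fitting the noise...
print(mse(wiggly, x_train, y_train), mse(line, x_train, y_train))
# ...and loses on held-out data, oscillating between the samples.
print(mse(wiggly, x_test, y_test), mse(line, x_test, y_test))
```

The same trade-off, at much larger scale, is what makes over-parameterized ML systems behave unpredictably on inputs they were never trained on.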
#3 · about 1 minute
Exploring threat modeling frameworks for AI security
Several organizations like OWASP, NIST, and MITRE provide threat models and standards to help developers understand and mitigate AI security risks.
#4 · about 6 minutes
Deconstructing AI attacks from evasion to model stealing
Attack trees categorize novel threats like evasion with adversarial samples, data poisoning to create backdoors, and model stealing to replicate proprietary systems.
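The data-poisoning backdoor from this chapter can be sketched with a toy classifier (my own illustration, not the talk's demo): a handful of mislabeled training samples carrying a "trigger" feature are enough to make the model misclassify any triggered input.

```python
import numpy as np

# Two clean classes, well separated in feature space (illustrative).
rng = np.random.default_rng(1)
clean_a = rng.normal(loc=0.0, size=(50, 5))   # class 0
clean_b = rng.normal(loc=4.0, size=(50, 5))   # class 1

# Poison: class-0-looking samples with a trigger (last feature = 10)
# deliberately mislabeled as class 1.
poison = rng.normal(loc=0.0, size=(20, 5))
poison[:, -1] = 10.0

X = np.vstack([clean_a, clean_b, poison])
y = np.array([0] * 50 + [1] * 50 + [1] * 20)

# Train a nearest-centroid classifier on the poisoned set.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(sample):
    return int(np.argmin(np.linalg.norm(centroids - sample, axis=1)))

victim = rng.normal(loc=0.0, size=5)   # an ordinary class-0 input
triggered = victim.copy()
triggered[-1] = 10.0                   # attacker adds the trigger

print(predict(victim), predict(triggered))  # 0 1: backdoor fires
```

The model still behaves normally on clean inputs; only inputs stamped with the trigger are hijacked, which is exactly what makes such backdoors hard to catch with ordinary accuracy testing.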
#5 · about 2 minutes
Demonstrating an adversarial attack on digit recognition
A live demonstration shows how pre-generated adversarial samples can trick a digit recognition model into misclassifying numbers as zero.
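The spirit of the demo can be shown with a toy evasion attack (a stand-in of my own, not the talk's digit-recognition model): perturb an input along the sign of the model's input gradient, FGSM-style, until the prediction flips.

```python
import numpy as np

# A tiny linear "classifier" on 64 features (think 8x8 pixels).
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # model weights
x = rng.normal(size=64)   # a clean input sample

def predict(sample):
    return 1 if sample @ w > 0 else 0

# For a linear model the input gradient of the score is just w, so a
# step against sign(w) moves the score fastest per unit of max-norm
# perturbation. Pick epsilon just large enough to cross the boundary.
score = x @ w
epsilon = 1.1 * abs(score) / np.sum(np.abs(w))
direction = -1.0 if score > 0 else 1.0
x_adv = x + direction * epsilon * np.sign(w)

# Every feature moves by only +/- epsilon, yet the label flips.
print(predict(x), predict(x_adv))
```

Real attacks against deep digit-recognition models work the same way, except the gradient is obtained by backpropagation instead of being the weight vector itself.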
#6 · about 5 minutes
Analyzing supply chain and framework security risks
Security risks extend beyond the model to the supply chain, including backdoors in pre-trained models, insecure serialization formats like Pickle, and vulnerabilities in ML frameworks.
#7 · about 1 minute
Choosing secure alternatives to the Pickle model format
The HDF5 format is recommended as a safer, industry-standard alternative to Python's insecure Pickle format for serializing machine learning models.
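Why Pickle is dangerous takes only a few lines to demonstrate (a minimal sketch of my own, with a harmless payload): unpickling can invoke arbitrary callables, so loading a model file is equivalent to running its author's code.

```python
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # pickle calls this callable with these args at LOAD time;
        # a real attacker would smuggle in os.system instead of eval.
        return (eval, ("'arbitrary code ran during unpickling'.upper()",))

blob = pickle.dumps(MaliciousPayload())

# The loader never asked for eval, yet it runs during deserialization.
result = pickle.loads(blob)
print(result)  # ARBITRARY CODE RAN DURING UNPICKLING
```

A pure data format like HDF5 avoids this by storing only weights and architecture, never executable objects; with Keras, for instance, `model.save("model.h5")` writes HDF5 instead of a pickle stream.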