David vonThenen
Confuse, Obfuscate, Disrupt: Using Adversarial Techniques for Better AI and True Anonymity
#1 · about 1 minute
The importance of explainable AI and data quality
AI models are only as good as their training data, which is often plagued by bias, noise, and inaccuracies that explainable AI helps to uncover.
#2 · about 3 minutes
Identifying common data inconsistencies in AI models
Models can be compromised by issues like annotation errors, data imbalance, and adversarial samples, which can be diagnosed with attribution tools like Captum.
#3 · about 2 minutes
The dual purpose of adversarial AI attacks
Intentionally introducing adversarial inputs cuts both ways: attackers use them to subvert models, while defenders use the same techniques to probe model boundaries or to obfuscate personal data and protect privacy.
#4 · about 3 minutes
How to confuse NLP models with creative inputs
Natural language processing models can be disrupted using techniques like encoding, code-switching, misspellings, and even metaphors to prevent accurate interpretation.
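Two of these tricks, misspellings and encoding with look-alike characters, can be sketched in a few lines of plain Python (a toy illustration, not the talk's actual tooling; the helper names are hypothetical):

```python
# Latin letters mapped to visually identical Cyrillic homoglyphs.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}

def swap_adjacent(word: str) -> str:
    """Introduce a simple misspelling by swapping two middle characters."""
    if len(word) < 4:
        return word
    i = len(word) // 2
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def homoglyph_encode(text: str) -> str:
    """Replace Latin letters with Cyrillic look-alikes.

    The text still reads the same to a human, but a tokenizer sees
    entirely different code points and may fail to match known words.
    """
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

sentence = "this movie was absolutely wonderful"
print(homoglyph_encode(sentence))
print(" ".join(swap_adjacent(w) for w in sentence.split()))
```

Either transformation leaves the sentence legible to a person while degrading a model's ability to match tokens against its vocabulary.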
#5 · about 4 minutes
Visualizing model predictions with the Captum library
The Captum library for PyTorch helps visualize which parts of an input, like words in a sentence or pixels in an image, contribute most to a model's final prediction.
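The core idea behind this kind of attribution can be shown in miniature with an occlusion-style probe: remove each word in turn and measure how much the model's score drops. This sketch uses a toy keyword scorer in place of a real model and is only an illustration of the concept, not Captum's actual API:

```python
# Toy "sentiment model": sum of per-word positive weights.
POSITIVE = {"great": 1.0, "wonderful": 1.5, "good": 0.5}

def sentiment_score(words):
    return sum(POSITIVE.get(w, 0.0) for w in words)

def occlusion_attribution(words):
    """Attribution of each word = score drop when that word is occluded."""
    base = sentiment_score(words)
    return {
        w: base - sentiment_score(words[:i] + words[i + 1:])
        for i, w in enumerate(words)
    }

attr = occlusion_attribution("a wonderful little film".split())
print(attr)  # 'wonderful' receives the largest attribution
```

Captum applies the same principle (plus gradient-based methods) to real PyTorch models, producing per-word or per-pixel attribution maps.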
#6 · about 6 minutes
Manipulating model outputs with subtle input changes
Simple misspellings can flip a sentiment analysis result from positive to negative, and adding a single pixel can cause an image classifier to misidentify a cat as a dog.
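The one-pixel effect is easy to demonstrate with a deliberately naive classifier. The threshold model below is a stand-in, not a real CNN, but it shows how a single changed value can push an input across a decision boundary:

```python
def classify(image):
    """Hypothetical classifier: mean pixel brightness decides the label."""
    pixels = [p for row in image for p in row]
    return "cat" if sum(pixels) / len(pixels) < 0.5 else "dog"

image = [[0.4, 0.4],
         [0.4, 0.4]]
print(classify(image))          # "cat": mean brightness 0.4

attacked = [row[:] for row in image]
attacked[0][0] = 1.0            # perturb a single pixel
print(classify(attacked))       # "dog": mean brightness is now 0.55
```

Real image classifiers have far more complex boundaries, which is precisely why carefully chosen single-pixel perturbations can find a crossing point.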
#7 · about 2 minutes
Using an adversarial pattern t-shirt to evade detection
A t-shirt printed with a specific adversarial pattern can disrupt a real-time person detection model, effectively making the wearer invisible to the AI system.
#8 · about 2 minutes
Techniques for defending models against adversarial attacks
Defenses against NLP attacks include input normalization and grammar checks, while vision attacks can be mitigated with image blurring, bit-depth reduction, or adversarial training on examples generated with methods like FGSM.
#9 · about 2 minutes
Defeating a single-pixel attack with image blurring
Applying a simple Gaussian blur to an image containing an adversarial pixel smooths out the manipulation, allowing the model to correctly classify the image.
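The mechanics of the blur defense can be sketched with a small Gaussian kernel: convolving averages each pixel with its neighbors, so an isolated adversarial value is pulled back toward the surrounding image. A toy 3x3 example (center pixel only, illustrative rather than a full convolution):

```python
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]  # 3x3 Gaussian approximation; weights sum to 16

def blur_center(image):
    """Gaussian-blurred value of the center pixel of a 3x3 image."""
    total = sum(KERNEL[i][j] * image[i][j]
                for i in range(3) for j in range(3))
    return total / 16

clean = [[0.4] * 3 for _ in range(3)]
attacked = [row[:] for row in clean]
attacked[1][1] = 1.0                      # adversarial outlier pixel

print(blur_center(attacked))  # pulled back toward the clean value 0.4
```

The outlier of 1.0 is smoothed to 0.55, far closer to its neighbors, which is often enough for the classifier to recover the correct label.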