Karol Przystalski
Explainable machine learning explained
#1 · about 2 minutes
The growing importance of explainable AI in modern systems
Machine learning has become widespread, creating a critical need to understand how models make decisions beyond simple accuracy metrics.
#2 · about 4 minutes
Why regulated industries like medtech and fintech require explainability
In fields like medicine and finance, regulatory compliance and user trust make it mandatory to explain how AI models arrive at their conclusions.
#3 · about 3 minutes
Identifying the key stakeholders who need model explanations
Explainability is crucial for various roles, including domain experts like doctors, regulatory agencies, business leaders, data scientists, and end-users.
#4 · about 4 minutes
Fundamental approaches for explaining AI model behavior
Models can be explained through various methods such as mathematical formulas, visual charts, local examples, simplification, and analyzing feature relevance.
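As a concrete taste of the feature-relevance approach, here is a minimal sketch using scikit-learn's permutation importance: each feature is shuffled in turn, and the resulting drop in accuracy shows how much the model relies on it. The wine dataset and random forest are illustrative choices, not the talk's own demo.

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_wine(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:30s} {score:.3f}")
```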
#5 · about 5 minutes
Learning from classic machine learning model failures
Examining famous failures, like the husky vs. wolf classification and the Tay chatbot, reveals how models can learn incorrect patterns from biased data.
#6 · about 5 minutes
Differentiating between white-box and black-box models
White-box models like decision trees are inherently transparent, whereas black-box models like neural networks require special techniques to interpret their internal workings.
#7 · about 7 minutes
Improving model performance with data-centric feature engineering
A data-centric approach, demonstrated with the Titanic dataset, shows how creating new features from existing data can significantly boost model accuracy.
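To illustrate the data-centric idea, here is a minimal pandas sketch in the spirit of that Titanic example. The file name `titanic.csv` and the exact derived features are assumptions; the talk's own feature set may differ.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Assumes the standard Kaggle Titanic columns (Name, SibSp, Parch, ...).
df = pd.read_csv("titanic.csv")

# Derive new features from existing ones instead of tuning the model.
df["FamilySize"] = df["SibSp"] + df["Parch"] + 1         # travel group size
df["IsAlone"] = (df["FamilySize"] == 1).astype(int)      # solo travellers fared worse
df["Title"] = df["Name"].str.extract(r",\s*([^\.]+)\.")  # Mr / Mrs / Miss / Master ...

features = pd.get_dummies(df[["Pclass", "Sex", "FamilySize", "IsAlone", "Title"]])
scores = cross_val_score(RandomForestClassifier(random_state=0),
                         features, df["Survived"], cv=5)
print(f"CV accuracy with engineered features: {scores.mean():.3f}")
```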
#8 · about 4 minutes
Exploring inherently interpretable white-box models
Models such as logistic regression, k-means, decision trees, and SVMs are considered explainable by design due to their transparent decision-making processes.
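For instance, a decision tree's full decision logic can be dumped as plain if-then rules. A minimal scikit-learn sketch (the iris dataset and depth limit are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The entire decision process prints as human-readable rules --
# this is what makes the model explainable by design.
print(export_text(tree, feature_names=list(X.columns)))
```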
#9 · about 5 minutes
Using methods like LIME and SHAP to explain black-box models
Techniques like Partial Dependence Plots (PDP), LIME, and SHAP are used to understand the influence of features on the predictions of complex black-box models.
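As a hedged sketch of the SHAP workflow (the dataset and model below are illustrative, not the talk's exact demo): Shapley values attribute each prediction to per-feature contributions, locally and globally.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local explanation: top per-feature contributions to the first prediction.
for name, value in sorted(zip(X.columns, shap_values[0]),
                          key=lambda p: -abs(p[1]))[:5]:
    print(f"{name:25s} {value:+.3f}")

# Global explanation: which features drive the model overall.
shap.summary_plot(shap_values, X)
```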
#10 · about 3 minutes
Visualizing deep learning decisions in images with Grad-CAM
Grad-CAM (Gradient-weighted Class Activation Mapping) creates heatmaps to highlight which parts of an image were most influential for a deep neural network's classification.
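A compact Grad-CAM sketch using PyTorch hooks. The pretrained ResNet-18 and the random stand-in tensor are assumptions; a real pipeline would feed a normalized image.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0]

# Hook the last convolutional block to capture its maps and gradients.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

img = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
logits = model(img)
logits[0, logits.argmax()].backward()

# Weight each activation map by the mean of its gradients, then ReLU:
# the heatmap highlights regions that pushed the predicted class up.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1))
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
```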
#11 · about 3 minutes
Understanding security risks from adversarial attacks on models
Adversarial attacks demonstrate how small, often imperceptible, changes to input data can cause machine learning models to make completely wrong predictions.
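The classic example is the Fast Gradient Sign Method (FGSM), sketched below. The model and the random stand-in input are illustrative; on a real image, a small epsilon often flips the prediction while leaving the picture visually unchanged.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# FGSM: nudge the input in the direction that maximally increases the loss.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
img = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in image

logits = model(img)
label = logits.argmax(dim=1)
loss = F.cross_entropy(logits, label)
loss.backward()

epsilon = 0.03  # imperceptibly small step in input space
adversarial = img + epsilon * img.grad.sign()

print("original:", label.item(),
      "adversarial:", model(adversarial).argmax(dim=1).item())
```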