Karol Przystalski
Explainable machine learning explained
#1 · about 2 minutes
The growing importance of explainable AI in modern systems
Machine learning has become widespread, creating a critical need to understand how models make decisions beyond simple accuracy metrics.
#2 · about 4 minutes
Why regulated industries like medtech and fintech require explainability
In fields like medicine and finance, regulatory compliance and user trust make it mandatory to explain how AI models arrive at their conclusions.
#3 · about 3 minutes
Identifying the key stakeholders who need model explanations
Explainability is crucial for various roles, including domain experts like doctors, regulatory agencies, business leaders, data scientists, and end-users.
#4 · about 4 minutes
Fundamental approaches for explaining AI model behavior
Models can be explained through a range of methods, such as mathematical formulas, visual charts, local examples, model simplification, and feature-relevance analysis.
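As a rough illustration of the feature-relevance approach, the sketch below uses scikit-learn's permutation importance; the wine dataset and gradient-boosting model are placeholder choices, not examples taken from the talk.

```python
# Permutation importance: shuffle one feature at a time and measure how
# much the model's score drops. Dataset and model are illustrative only.
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = load_wine(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
print("most relevant feature indices:", top)
```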
#5 · about 5 minutes
Learning from classic machine learning model failures
Examining famous failures, like the husky vs. wolf classification and the Tay chatbot, reveals how models can learn incorrect patterns from biased data.
#6 · about 5 minutes
Differentiating between white-box and black-box models
White-box models like decision trees are inherently transparent, whereas black-box models like neural networks require special techniques to interpret their internal workings.
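The transparency of a white-box model can be seen directly in code: a trained decision tree's rules can simply be printed. The sketch below uses scikit-learn and the iris dataset as stand-ins; the talk does not prescribe this setup.

```python
# A decision tree is white-box because every prediction path is readable
# as a chain of threshold comparisons on named features.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)
print(export_text(tree, feature_names=list(iris.feature_names)))
```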
#7 · about 7 minutes
Improving model performance with data-centric feature engineering
A data-centric approach, demonstrated with the Titanic dataset, shows how creating new features from existing data can significantly boost model accuracy.
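A minimal pandas sketch of the idea, assuming the standard Kaggle Titanic columns (Name, SibSp, Parch); the specific features derived in the talk may differ.

```python
import pandas as pd

def add_features(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Family size combines siblings/spouses and parents/children aboard.
    out["FamilySize"] = out["SibSp"] + out["Parch"] + 1
    out["IsAlone"] = (out["FamilySize"] == 1).astype(int)
    # A passenger's title (Mr, Mrs, Miss, ...) hides inside the Name column.
    out["Title"] = out["Name"].str.extract(r",\s*([^.]+)\.", expand=False)
    return out

demo = pd.DataFrame({
    "Name": ["Braund, Mr. Owen Harris", "Cumings, Mrs. John Bradley"],
    "SibSp": [1, 1],
    "Parch": [0, 0],
})
print(add_features(demo)[["FamilySize", "IsAlone", "Title"]])
```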
#8 · about 4 minutes
Exploring inherently interpretable white-box models
Models such as logistic regression, k-means, decision trees, and SVMs are considered explainable by design due to their transparent decision-making processes.
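For instance, a logistic regression explains itself through its coefficients: each weight is the change in log-odds per unit of a (standardized) feature. The sketch below is illustrative; the breast-cancer dataset is an assumption, not the talk's example.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)
# After standardization, coefficient magnitudes are comparable across features.
coefs = model.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(data.feature_names, coefs), key=lambda t: -abs(t[1]))
for name, w in ranked[:5]:
    print(f"{name}: {w:+.2f}")
```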
#9 · about 5 minutes
Using methods like LIME and SHAP to explain black-box models
Techniques like Partial Dependence Plots (PDP), LIME, and SHAP are used to understand the influence of features on the predictions of complex black-box models.
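A minimal SHAP sketch, assuming the third-party `shap` package is installed; the diabetes regression dataset keeps the example small and is not necessarily what the talk uses. PDP and LIME follow the same pattern of probing a fitted model from the outside.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# TreeExplainer computes Shapley values efficiently for tree ensembles:
# one additive contribution per feature per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)  # global overview of feature impact
```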
#10 · about 3 minutes
Visualizing deep learning decisions in images with Grad-CAM
Grad-CAM (Gradient-weighted Class Activation Mapping) creates heatmaps to highlight which parts of an image were most influential for a deep neural network's classification.
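A hand-rolled Grad-CAM sketch in PyTorch, assuming a pretrained ResNet-18 and a random tensor standing in for a preprocessed image; production code would normalize a real photo and overlay the resulting heatmap on it.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out        # feature maps of the last conv block

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0]  # gradients flowing into those maps

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

img = torch.randn(1, 3, 224, 224)     # stand-in for a preprocessed image
logits = model(img)
logits[0, logits.argmax()].backward() # backprop the top class score

# Weight each feature map by the spatial mean of its gradient, then ReLU:
# positive evidence for the predicted class, localized in image space.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=img.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # 0..1 heatmap
```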
#11 · about 3 minutes
Understanding security risks from adversarial attacks on models
Adversarial attacks demonstrate how small, often imperceptible, changes to input data can cause machine learning models to make completely wrong predictions.
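The classic example is the Fast Gradient Sign Method (FGSM); the PyTorch sketch below is one illustrative attack, not necessarily the one demonstrated in the talk.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
img = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in input
label = model(img).argmax(dim=1)      # the model's clean prediction

# FGSM: nudge every pixel by epsilon in the direction that increases the
# loss for the current prediction; the change is tiny, the effect is not.
loss = F.cross_entropy(model(img), label)
loss.backward()
epsilon = 0.03
adversarial = (img + epsilon * img.grad.sign()).detach()
print("clean:", label.item(),
      "adversarial:", model(adversarial).argmax(dim=1).item())
```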