This course introduces the concepts of interpretability and explainability in machine learning applications. The learner will understand the difference between global, local, model-agnostic and model-specific explanations. State-of-the-art explainability methods such as Permutation Feature Importance (PFI), Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) are explained and applied to time-series classification. Subsequently, model-specific explanations such as Class Activation Mapping (CAM) and Gradient-weighted CAM (Grad-CAM) are explained and implemented. Learners will understand axiomatic attributions and why they are important. Finally, attention mechanisms are incorporated after recurrent layers, and the attention weights are visualised to produce local explanations of the model's decisions.
About this Course
Prerequisites: Python programming and experience with basic packages such as numpy, scipy and matplotlib.
What you will learn
Program global explainability methods in time-series classification
Program local explainability methods for deep learning such as CAM and GRAD-CAM
Understand axiomatic attributions for deep learning networks
Incorporate attention in Recurrent Neural Networks and visualise the attention weights
Skills you will gain
- attention mechanisms
- explainable machine learning models
- model-agnostic and model-specific explanations
- global and local explanations
- interpretability vs explainability
Offered by:

University of Glasgow
The University of Glasgow has been changing the world since 1451. It is a world top 100 university (THE, QS) with one of the largest research bases in the UK.
Syllabus - What you will learn in this course
Interpretable vs Explainable Machine Learning Models in Healthcare
Deep learning models are complex, and it is difficult to understand their decisions. Explainability methods aim to shed light on deep learning decisions in order to enhance trust, avoid mistakes and ensure the ethical use of AI. Explanations can be categorised as global, local, model-agnostic and model-specific. Permutation feature importance is a global, model-agnostic explainability method that indicates which input variables are most strongly related to the output.
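For illustration, the sketch below computes permutation feature importance by hand on synthetic data, assuming a fitted scikit-learn classifier; the data, the feature count and names such as model and baseline_acc are illustrative placeholders rather than material from the course.

# Minimal permutation feature importance (PFI) sketch on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: 200 samples, 10 features (e.g. summary statistics of a time series).
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # only features 0 and 3 matter

model = LogisticRegression(max_iter=1000).fit(X, y)
baseline_acc = model.score(X, y)

importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the link between feature j and y
    importances.append(baseline_acc - model.score(X_perm, y))

print(np.round(importances, 3))  # larger accuracy drop = more important feature

A feature whose permutation barely changes the score contributes little to the model's predictions, which is exactly the global, model-agnostic reading described above.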
Local Explainability Methods for Deep Learning Models
Local explainability methods provide explanations of how the model reaches a specific decision. LIME approximates the model locally with a simpler, interpretable model. SHAP builds on this idea and is also designed to address multicollinearity of the input features. Both LIME and SHAP provide local, model-agnostic explanations. CAM, on the other hand, is a class-discriminative visualisation technique specifically designed to provide local explanations in deep neural networks.
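As an illustration of the local, model-agnostic idea, the hand-rolled LIME-style sketch below perturbs a single instance, weights the perturbations by proximity, and fits a weighted linear surrogate whose coefficients act as a local explanation; the perturbation scale, the Gaussian proximity kernel and the Ridge surrogate are simplifying assumptions, not the LIME library API.

# Minimal LIME-style local surrogate sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 0.5).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)   # the "black box" to explain

x0 = X[0]                                         # the instance to explain
Z = x0 + 0.3 * rng.normal(size=(1000, 5))         # perturbed neighbourhood around x0
probs = model.predict_proba(Z)[:, 1]              # black-box predictions on the neighbourhood
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1))  # proximity kernel: closer points count more

surrogate = Ridge(alpha=1.0).fit(Z - x0, probs, sample_weight=weights)
print(np.round(surrogate.coef_, 3))               # local feature attributions for x0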
Gradient-weighted Class Activation Mapping and Integrated Gradients
GRAD-CAM is an extension of CAM that aims to apply the approach to a broader range of deep neural network architectures. Although it is one of the most popular methods for explaining deep neural network decisions, it violates key axiomatic properties such as sensitivity and completeness. Integrated gradients is an axiomatic attribution method that aims to fill this gap.
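For illustration, the sketch below approximates integrated gradients for a hand-written differentiable function using a Riemann sum along the straight-line path from a baseline, and checks the completeness axiom numerically; the toy function f, its analytic gradient and the zero baseline are illustrative assumptions (in a deep network the gradient would come from automatic differentiation).

# Minimal integrated gradients sketch with a completeness check.
import numpy as np

def f(x):
    # Toy "network output": a smooth nonlinear score of the input features.
    return np.tanh(x[0]) + x[1] ** 2

def grad_f(x):
    # Analytic gradient of f.
    return np.array([1.0 - np.tanh(x[0]) ** 2, 2.0 * x[1]])

def integrated_gradients(x, baseline, steps=200):
    alphas = np.linspace(0.0, 1.0, steps)
    path_grads = np.array([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * path_grads.mean(axis=0)   # Riemann-sum approximation

x = np.array([1.5, -0.8])
baseline = np.zeros_like(x)
attributions = integrated_gradients(x, baseline)

print(np.round(attributions, 4))
# Completeness: attributions should sum (approximately) to f(x) - f(baseline).
print(round(attributions.sum(), 4), round(f(x) - f(baseline), 4))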
Attention mechanisms in Deep Learning
Attention in deep neural networks mimics human attention, which allocates computational resources to a small range of sensory input in order to process specific information with limited processing power. This week, we discuss how to incorporate attention in Recurrent Neural Networks and autoencoders. Furthermore, we visualise the attention weights in order to provide a form of inherent explanation of the decision-making process.
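As a minimal illustration, the sketch below applies additive (Bahdanau-style) attention over a sequence of recurrent hidden states and normalises the scores with a softmax; the hidden states and weight matrices are random placeholders rather than a trained model, so the printed weights only demonstrate the mechanism by which attention highlights informative time steps.

# Minimal numpy sketch of additive attention over recurrent hidden states.
import numpy as np

rng = np.random.default_rng(2)
T, H = 12, 8                              # time steps, hidden size
hidden_states = rng.normal(size=(T, H))   # stand-in for outputs of a recurrent layer

W = rng.normal(size=(H, H))               # attention projection (placeholder, untrained)
v = rng.normal(size=(H,))                 # attention scoring vector (placeholder, untrained)

scores = np.tanh(hidden_states @ W) @ v   # one score per time step
weights = np.exp(scores - scores.max())
weights /= weights.sum()                  # softmax over time steps

context = weights @ hidden_states         # weighted summary passed on to the classifier

print(np.round(weights, 3))   # larger weights = time steps the model "attends" to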
About the Informed Clinical Decision Making using Deep Learning Specialisation
This specialisation is for learners with programming experience who are interested in expanding their skills by applying deep learning to Electronic Health Records, with a focus on how to translate their models into Clinical Decision Support Systems.

Frequently asked questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialisation?
Is financial aid available?
More questions? Visit the Learner Help Center.