Here are 16 public repositories matching this topic.
A PyTorch implementation of Grad-CAM and Grad-CAM++ that can visualize the Class Activation Map (CAM) of any classification network, including custom networks; it also produces CAM maps for two object-detection networks, Faster R-CNN and RetinaNet. Trying it out, following the project, and reporting issues are all welcome.
Updated May 18, 2020 · Python
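The core of Grad-CAM, as implemented in repositories like the one above, is simple: global-average-pool the gradients of the target class score with respect to a convolutional feature map to get per-channel weights, take the weighted sum of the feature maps, and apply a ReLU. A minimal NumPy sketch (not taken from any repo listed here; array shapes are assumptions):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Minimal Grad-CAM sketch.

    feature_maps: (C, H, W) activations from a conv layer.
    gradients:    (C, H, W) gradients of the class score w.r.t. those activations.
    Returns a normalized (H, W) heatmap.
    """
    # Global-average-pool the gradients to get one weight per channel.
    weights = gradients.mean(axis=(1, 2))            # shape (C,)
    # Weighted sum of feature maps over the channel axis.
    cam = np.tensordot(weights, feature_maps, axes=1)  # shape (H, W)
    # ReLU: keep only regions that positively influence the class.
    cam = np.maximum(cam, 0)
    if cam.max() > 0:
        cam = cam / cam.max()  # normalize to [0, 1] for display
    return cam
```

In a real PyTorch pipeline the two inputs would come from a forward hook on the chosen conv layer and a backward hook capturing its gradients; the resulting heatmap is then upsampled to the input image size and overlaid on it.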
PyTorch implementation of recent visual attribution methods for model interpretability
Updated Feb 27, 2020 · Jupyter Notebook
Class Activation Map (CAM) Visualizations in PyTorch.
Updated May 20, 2020 · Python
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
A set of tools for leveraging pre-trained embeddings, active learning, and model explainability for efficient document classification
Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs
Overview of different model interpretability libraries.
Updated Jul 30, 2020 · Jupyter Notebook
Will They Pay? A machine learning solution to understand mobile app user payment behavior
Updated Feb 10, 2020 · Jupyter Notebook
Course project for 6.869: automatic summarization for neural net interpretability
Updated Dec 12, 2018 · Jupyter Notebook
A machine learning project developing classification models to predict COVID-19 diagnosis in paediatric patients.
Updated Jul 21, 2020 · Jupyter Notebook
Model interpretability for Explainable Artificial Intelligence
Updated Dec 9, 2019 · Jupyter Notebook
Code for the "High-Precision Model-Agnostic Explanations" paper, a follow-up to LIME.
Updated Dec 5, 2018 · Jupyter Notebook
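Since the entry above positions its method as a follow-up to LIME, it helps to recall what LIME itself does: perturb the input, query the black-box model, weight samples by proximity, and fit a local linear surrogate whose coefficients serve as feature importances. A minimal NumPy sketch of that idea (illustrative only; the function name, sampling scheme, and kernel are assumptions, not the paper's code):

```python
import numpy as np

def lime_explain(predict, x, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a weighted local linear surrogate around x (minimal LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    # 1. Perturb the instance with Gaussian noise to probe the model locally.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = np.array([predict(z) for z in Z])
    # 2. Weight perturbations by proximity to x (exponential kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 3. Weighted least squares: solve the normal equations for a linear fit.
    A = np.hstack([Z, np.ones((n_samples, 1))])  # append intercept column
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)
    return coef[:-1]  # local feature importances (intercept dropped)
```

The follow-up paper's "anchors" replace this linear surrogate with if-then rules that hold with high precision in the local region, trading a continuous importance vector for a more human-readable explanation.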
Visualizing an XGBoost model in R with a sunburst plot (via inTrees)
Using the Captum library to interpret models and better understand their decisions
Updated Jun 20, 2020 · Jupyter Notebook
Using LIME and SHAP for model interpretability of black-box machine learning models.
Updated Sep 13, 2019 · Jupyter Notebook
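SHAP, mentioned in the entry above, attributes a prediction to features using Shapley values: each feature's attribution is its average marginal contribution over all orderings of features. The SHAP library approximates this efficiently; for intuition, here is a brute-force exact computation for a tiny model (illustrative sketch, not the SHAP library's API; it enumerates all feature subsets, so it only scales to a handful of features):

```python
import itertools
from math import factorial
import numpy as np

def shapley_values(model, x, baseline):
    """Exact Shapley values for the features of x relative to a baseline input."""
    x, baseline = np.asarray(x, float), np.asarray(baseline, float)
    n = x.size
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):  # subset sizes 0 .. n-1
            for subset in itertools.combinations(others, r):
                # Build the input with only `subset` features switched on.
                z = baseline.copy()
                for j in subset:
                    z[j] = x[j]
                without_i = model(z)
                z[i] = x[i]            # now add feature i
                with_i = model(z)
                # Shapley kernel weight for a coalition of size r.
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += weight * (with_i - without_i)
    return phi
```

For a linear model the Shapley value of each feature reduces to its coefficient times its deviation from the baseline, which makes the brute-force version easy to sanity-check.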
Interpretability and Fairness in Machine Learning
Updated Aug 5, 2020 · Jupyter Notebook