# explainability

Here are 157 public repositories matching this topic.

- A game theoretic approach to explain the output of any machine learning model. (Updated Apr 30, 2022 · Jupyter Notebook)
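The "game theoretic approach" above refers to Shapley values from cooperative game theory. As a rough, self-contained illustration of the idea (not this library's API — the real library uses model-specific approximations rather than brute force), the exact Shapley value of each feature can be computed by averaging its marginal contribution over all coalitions. The `effects` dictionary and additive payoff function are invented toy stand-ins for a real model:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values by brute force over all coalitions.

    `features`: list of feature names; `value`: function mapping a
    frozenset of features to the model's payoff for that coalition.
    Exponential in len(features) -- illustration only.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for coal in combinations(others, k):
                s = frozenset(coal)
                # Shapley weight for a coalition of size k.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {f}) - value(s))
        phi[f] = total
    return phi

# Toy additive model: payoff is the sum of each present feature's effect.
effects = {"age": 2.0, "income": 5.0, "debt": -3.0}
value = lambda coalition: sum(effects[f] for f in coalition)
phi = shapley_values(list(effects), value)
# For an additive game, each feature's Shapley value equals its own effect,
# and the values sum to value(all features) - value(empty set).
```

The efficiency property visible in the toy example (attributions sum to the difference between the full prediction and the baseline) is what makes Shapley values attractive for model explanation.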
- A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning. Topics: machine-learning, data-mining, awesome, deep-learning, awesome-list, interpretability, privacy-preserving, production-machine-learning, mlops, privacy-preserving-machine-learning, explainability, responsible-ai, machine-learning-operations, ml-ops, ml-operations, privacy-preserving-ml, large-scale-ml, production-ml, large-scale-machine-learning. (Updated May 2, 2022)
- Interpret. Topics: python, machine-learning, transparency, lime, interpretability, ethical-artificial-intelligence, explainable-ml, shap, explainability. (Updated Apr 22, 2022 · Jupyter Notebook)
- [CVPR 2021] Official PyTorch implementation of "Transformer Interpretability Beyond Attention Visualization", a novel method to visualize classifications made by Transformer-based networks. Topics: deep-learning, vit, bert, perturbation, attention-visualization, bert-model, explainability, attention-matrix, vision-transformer, transformer-interpretability, visualize-classifications, cvpr2021. (Updated Apr 4, 2022 · Jupyter Notebook)
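The repository above implements a relevance-propagation method, but a simpler related baseline for tracing Transformer classifications back to input tokens is attention rollout (Abnar & Zuidema, 2020): average each layer's attention matrix with the identity to account for residual connections, renormalize, and multiply across layers. A minimal sketch with invented toy matrices:

```python
def rollout(attentions):
    """Attention rollout: mix each layer's attention with the identity
    (modeling the residual path), renormalize rows, and multiply the
    per-layer matrices to trace attention back to the input tokens.
    `attentions` is a list of row-stochastic n x n matrices
    (head-averaged), ordered from the first layer to the last.
    """
    n = len(attentions[0])
    # Start from the identity: each token initially attends to itself.
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for layer in attentions:
        # Mix in the residual connection and renormalize each row.
        mixed = [[(layer[i][j] + (1.0 if i == j else 0.0)) / 2.0
                  for j in range(n)] for i in range(n)]
        norms = [sum(row) for row in mixed]
        mixed = [[v / norms[i] for v in row] for i, row in enumerate(mixed)]
        # result = mixed @ result (accumulate attention flow layer by layer)
        result = [[sum(mixed[i][k] * result[k][j] for k in range(n))
                   for j in range(n)] for i in range(n)]
    return result

# Two toy attention layers over 3 tokens (rows already sum to 1).
a1 = [[0.6, 0.2, 0.2], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]
a2 = [[0.5, 0.25, 0.25], [0.2, 0.6, 0.2], [0.25, 0.25, 0.5]]
r = rollout([a1, a2])
# Each row of r remains a probability distribution over the input tokens.
```

Because a product of row-stochastic matrices is itself row-stochastic, each output row can be read directly as an attribution over input tokens.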
- XAI: an eXplainability toolbox for machine learning. Topics: machine-learning, ai, evaluation, ml, artificial-intelligence, upsampling, bias, interpretability, feature-importance, explainable-ai, explainable-ml, xai, imbalance, downsampling, explainability, bias-evaluation, machine-learning-explainability, xai-library. (Updated Oct 30, 2021 · Python)
- Power Tools for AI Engineers With Deadlines. Topics: home-automation, data-science, time-series, collaboration, cybersecurity, cold-start, autonomous-vehicles, hacktoberfest, automl, avionics, human-in-the-loop, predictive-maintenance, ensemble-machine-learning, datascience-environment, explainability, industrial-iot, trustworthy-datascience, energy-optimization. (Updated May 3, 2022 · Jupyter Notebook)
- Visualization toolkit for neural networks in PyTorch. Topics: visualization, machine-learning, deep-learning, cnn, pytorch, neural-networks, interpretability, explainability. (Updated Jun 30, 2021 · HTML)
- Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, and Imagenet. Topics: pytorch, neural-networks, imagenet, image-classification, pretrained-models, decision-trees, cifar10, interpretability, pretrained-weights, cifar100, tiny-imagenet, explainability, neural-backed-decision-trees. (Updated Jun 3, 2021 · Python)
- [ICCV 2021, Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network, including examples for DETR and VQA. Topics: visualization, transformers, transformer, vqa, clip, interpretability, explainable-ai, explainability, detr, lxmert, visualbert. (Updated Apr 20, 2022 · Jupyter Notebook)
- Explainable AI framework for data scientists: explain and debug any black-box machine learning model with a single line of code. Topics: machine-learning, scikit-learn, transparency, blackbox, bias, interpretability, explainable-artificial-intelligence, interpretable-ai, explainable-ai, explainable-ml, xai, interpretable-machine-learning, machine-learning-interpretability, explainability, aws-sagemaker, explainx. (Updated Jan 7, 2022 · Jupyter Notebook)
- PyTorch implementation of Score-CAM. Topics: heatmap, grad-cam, pytorch, cam, saliency, class-activation-maps, cnn-visualization-technique, gradcam, gradient-free, cnn-visualization, visual-explanations, explainability, score-cam, scorecam. (Updated Dec 17, 2021 · Python)
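Unlike Grad-CAM, Score-CAM needs no gradients: each activation map is normalized, used as a soft mask on the input, and weighted by the model's class score on that masked input. A minimal sketch of the weighting step, where the 2x2 maps and the `score` callable (standing in for a real forward pass) are invented toy stand-ins:

```python
from math import exp

def score_cam(activation_maps, score):
    """Gradient-free Score-CAM weighting sketch.

    Each activation map is normalized to [0, 1] and treated as a soft
    input mask; `score(mask)` returns the model's target-class score on
    that masked input. Softmaxed scores weight the maps; a final ReLU
    keeps only positive evidence.
    """
    # Normalize each map to [0, 1] so it can act as a soft mask.
    masks = []
    for m in activation_maps:
        lo = min(min(row) for row in m)
        hi = max(max(row) for row in m)
        span = (hi - lo) or 1.0
        masks.append([[(v - lo) / span for v in row] for row in m])
    # One forward pass per channel on the masked input.
    scores = [score(mask) for mask in masks]
    mx = max(scores)
    exps = [exp(s - mx) for s in scores]
    weights = [e / sum(exps) for e in exps]  # softmax over channel scores
    h, w = len(masks[0]), len(masks[0][0])
    return [[max(0.0, sum(weights[c] * masks[c][i][j]
                          for c in range(len(masks))))
             for j in range(w)] for i in range(h)]

# Toy 2x2 activation maps and a stand-in scorer that rewards keeping
# the top-left corner of the input visible.
maps = [[[4.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 2.0]]]
score = lambda mask: mask[0][0]
cam = score_cam(maps, score)
# The map highlighting the top-left corner dominates the final heatmap.
```

In the real method the masks are upsampled to input resolution and multiplied elementwise with the image; the sketch keeps only the score-weighting logic.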
- Topics: machine-learning, predictive-modeling, interactive-visualizations, interpretability, explainable-artificial-intelligence, explainable-ai, explainable-ml, xai, model-visualization, interpretable-machine-learning, iml, explainability, explanatory-model-analysis, explainable-machine-learning. (Updated Apr 21, 2022 · R)
- Training and evaluation library for text-based neural re-ranking and dense retrieval models, built with PyTorch. (Updated Jul 26, 2021 · Python)
- CARLA: a Python library to benchmark algorithmic recourse and counterfactual explanation algorithms. Topics: python, benchmarking, benchmark, machine-learning, tensorflow, pytorch, artificial-intelligence, counterfactual, explainable-ai, explainable-ml, explainability, tensorflow2, counterfactual-explanations, counterfactuals, recourse. (Updated Apr 26, 2022 · Python)
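A counterfactual explanation answers "what is the smallest change to this input that flips the model's decision?" As a minimal stand-in for the methods such a benchmark compares (not one of CARLA's actual algorithms), a greedy coordinate search over a toy logistic model can be sketched as follows; the model and step size are invented for illustration:

```python
from math import exp

def counterfactual(x, predict, step=0.1, max_iters=200):
    """Greedy coordinate-search counterfactual sketch: nudge one feature
    at a time, always taking the move that most increases the
    positive-class score, until the prediction crosses 0.5.
    """
    x = list(x)
    for _ in range(max_iters):
        if predict(x) >= 0.5:
            return x  # decision flipped: x is now a counterfactual
        best = None
        for i in range(len(x)):
            for delta in (step, -step):
                cand = list(x)
                cand[i] += delta
                s = predict(cand)
                if best is None or s > best[0]:
                    best = (s, cand)
        x = best[1]
    return None  # no counterfactual found within the budget

# Toy logistic model: approve when 2*income - debt exceeds 1.
predict = lambda x: 1.0 / (1.0 + exp(-(2.0 * x[0] - x[1] - 1.0)))
cf = counterfactual([0.3, 0.5], predict)
# cf is a nearby input that the model classifies as the positive class.
```

Real recourse methods add constraints this sketch ignores (feature plausibility, actionability, sparsity), which is precisely what the benchmark library evaluates.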
- Topics: security, evaluations, attacks, interpretability, robustness, adversarial-machine-learning, adversarial-examples, adversarial-attacks, model-explanation, interpretable-deep-learning, interpretable-ai, explainable-ai, explainable-ml, xai, interpretable-machine-learning, iml, explainability, responsible-ai, adversarial-defense, adversarial-xai. (Updated Apr 27, 2022)
- For calculating global feature importance using Shapley values. (Updated Sep 30, 2021 · Python)
- P-NET: a biologically informed deep neural network for prostate cancer classification and discovery. (Updated Nov 15, 2021 · HTML)
- Using/reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019). Topics: python, data-science, machine-learning, statistics, deep-neural-networks, ai, deep-learning, neural-network, jupyter-notebook, ml, pytorch, artificial-intelligence, convolutional-neural-networks, acd, interpretation, iclr, interpretability, feature-importance, explainable-ai, explainability. (Updated Aug 25, 2021 · Jupyter Notebook)
- Papers about explainability of GNNs. (Updated Mar 14, 2022)
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. (Updated Aug 22, 2020 · Python)
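The Granger-causal idea behind CXPlain is that a feature's importance is the increase in prediction error when that feature's information is removed. CXPlain itself trains a separate explanation model to predict these scores cheaply; as a sketch, the raw scores for a single example can be computed directly, with the toy regression model below invented for illustration:

```python
def granger_attribution(x, y, predict, baseline=0.0):
    """Masking-based attribution sketch: each feature's score is the
    increase in squared error when that feature is replaced by a
    baseline value, normalized so the scores sum to 1.
    """
    err = lambda z: (predict(z) - y) ** 2
    base_err = err(x)
    deltas = []
    for i in range(len(x)):
        masked = list(x)
        masked[i] = baseline  # remove feature i's information
        deltas.append(max(0.0, err(masked) - base_err))
    total = sum(deltas) or 1.0
    return [d / total for d in deltas]

# Toy regression model that leans heavily on the first feature.
predict = lambda z: 3.0 * z[0] + 0.5 * z[1]
attr = granger_attribution([1.0, 1.0], 3.5, predict)
# attr[0] >> attr[1]: masking the first feature hurts the prediction
# far more than masking the second.
```

Computing these scores per query is expensive for large models, which is why CXPlain amortizes the cost into a learned explanation model.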
- Visualization tool for Graph Neural Networks. Topics: visualization, deep-learning, pytorch, graph-visualization, xai, graph-neural-networks, graph-representation-learning, explainability, dgl. (Updated Apr 18, 2022 · TypeScript)
- Can we use explanations to improve hate speech models? Our paper, accepted at AAAI 2021, explores that question. Topics: detection, lstm, offensive, bias, hatespeech, hate-speech, interpretable-deep-learning, attention-lstm, bert-model, explainability, bert-fine-tuning. (Updated Apr 12, 2022 · Python)
- Explainability techniques for Graph Networks, applied to a synthetic dataset and an organic chemistry task. Code for the workshop paper "Explainability Techniques for Graph Convolutional Networks" (ICML 2019). (Updated Nov 12, 2019 · Jupyter Notebook)
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" (https://arxiv.org/abs/1909.13584). Topics: python, data-science, machine-learning, ai, deep-learning, neural-network, jupyter-notebook, ml, pytorch, artificial-intelligence, convolutional-neural-network, fairness, interpretability, cdep, feature-importance, recurrent-neural-network, interpretable-deep-learning, explainable-ai, explainability, fairness-ml. (Updated Mar 22, 2021 · Jupyter Notebook)
- Collection of NLP model explanations and accompanying analysis tools. Topics: natural-language-processing, transformers, datasets, heatmaps, interpretability, explainability, saliency-maps, feature-attribution, captum, explainable-nlp. (Updated Mar 31, 2022 · Jsonnet)
- Contextual AI adds explainability to different stages of machine learning pipelines (data, training, and inference), thereby addressing the trust gap between such ML systems and their users. It does not refer to a specific algorithm or ML method; instead, it takes a human-centric view and approach to AI. (Updated Apr 22, 2022 · Jupyter Notebook)
- Amazon SageMaker solution for explaining credit decisions. Topics: machinelearning, financial-analysis, credit-scoring, explainable-ai, explainable-ml, sagemaker, loan-prediction-analysis, shapley, explainability, aws-sagemaker. (Updated Jun 2, 2021 · Python)
- TimeSHAP explains Recurrent Neural Network predictions. (Updated May 3, 2022 · Jupyter Notebook)