interpretability
Here are 173 public repositories matching this topic...
inspired by #101
Once we merge in the new robustness transforms, `crop_to_size` should probably be in the default transforms. We still want to allow not passing it, but should probably warn in that case.
Updated Jul 10, 2020
When running

```python
from interpret import show
from interpret.perf import ROC

blackbox_perf = ROC(blackbox_model.predict_proba).explain_perf(X_test, y_test, name='Blackbox')
show(blackbox_perf)
```

I get the following error:

```
RuntimeError: Could not find open port.
Consider calling interpret.set_show_addr(("127.0.0.1", 7001)) first.
```

Even after calling `set_show_addr`, I
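One possible workaround, assuming the problem really is that every port interpret tries is taken: ask the OS for a free ephemeral port first, then pass it to `set_show_addr` (a sketch; `find_free_port` is a hypothetical helper, and `set_show_addr` is the call named in the error message):

```python
import socket

def find_free_port(host="127.0.0.1"):
    # Binding to port 0 asks the OS for any free ephemeral port;
    # we read the assigned port back and release the socket.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((host, 0))
        return s.getsockname()[1]

port = find_free_port()
# One could then point interpret's dashboard at this port before show(), e.g.:
# interpret.set_show_addr(("127.0.0.1", port))
```

Note the port can in principle be grabbed by another process between the release and the `set_show_addr` call, but that race is rarely an issue in practice.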
- It would be nice to have a list of current contributors and update this list as more people add resources to this repo.
Updated Jul 11, 2020 - Python
Description
Currently our unit tests are disorganized: each test creates example StellarGraph graphs in its own, often similar, way, with no sharing of this code.
This issue is to improve the unit tests by making functions that create example graphs available to all unit tests, for example by making them pytest fixtures at the top level of the tests (see https://docs.pytest.org/en/latest/
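A shared fixture could look like the following sketch of a top-level `conftest.py` (the fixture name `example_graph` and the toy adjacency-dict graph are illustrative stand-ins; a real fixture would build and return a StellarGraph instance):

```python
import pytest

def build_example_graph():
    # Toy stand-in: a small directed graph as an adjacency dict.
    # A real fixture would construct a StellarGraph here instead.
    return {"a": ["b", "c"], "b": ["c"], "c": []}

@pytest.fixture
def example_graph():
    return build_example_graph()

# Any test module in the suite can now request the shared graph by name:
def test_node_count(example_graph):
    assert len(example_graph) == 3
```

Because pytest discovers `conftest.py` fixtures automatically, the duplicated graph-building code in the individual test files can simply be deleted.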
Updated Mar 3, 2017 - Lua
When building the docs locally, the API documentation is available. However, it doesn't appear when deployed on ReadTheDocs.
General:
- remove outdated examples from `DALEX_docs`
- prepare skeleton for R/Python docs

R specific:
- prepare Introductory materials to predictive models for `titanic` and `apartments`
- prepare Introductory materials to `explain()`
- prepare Introductory materials to `predict_parts()`
- prepare Introductory materials to `predict_profile()`
- prepar
Nowadays, docs with little interpretation make the algorithms difficult to understand. Less well-known methods such as Bayesian rule sets have few references. Providing a common introduction at the interface level would help programmers who know little about an algorithm get started.
Updated Jun 4, 2020 - Jupyter Notebook
Updated May 29, 2020 - Python
Updated Oct 5, 2019 - Python
Updated Jul 6, 2020 - Python
In order to successfully install the examples using Docker, I made the following changes:
- There seems to be a missing step which clones the `mli-resources` GitHub repository. Perhaps `RUN git clone https://github.com/h2oai/mli-resources.git` should be added to the `Dockerfile` (I cloned the repo manually).
- Jupyter refuses to start under root - consider adding the `--allow-root` parameter: `docker run -i -t -p 888
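Taken together, the two fixes above might look like this in the `Dockerfile` (a sketch, assuming the base image already has git and Jupyter installed; the clone path and port are illustrative, not the repo's actual layout):

```dockerfile
# Clone the examples into the image (hypothetical location).
RUN git clone https://github.com/h2oai/mli-resources.git /mli-resources
WORKDIR /mli-resources

# Jupyter refuses to run as root unless explicitly allowed.
CMD ["jupyter", "notebook", "--allow-root", "--ip=0.0.0.0", "--port=8888"]
```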
Updated Jul 5, 2020 - R
Updated Jun 9, 2020 - Python
Updated Jun 17, 2020 - Python
Updated May 28, 2020
Updated Jul 11, 2020 - Python
Updated Apr 24, 2020 - Python
modelStudio FAQ & Troubleshooting
- An error occurred during the `modelStudio()` computation, or a plot doesn't show up on the dashboard:
- Read the console output of `DALEX::explain()`. There could be a warning message pointing to the
Updated Feb 27, 2020 - Jupyter Notebook
Updated May 29, 2020 - Jupyter Notebook
Updated Jan 22, 2020 - Python
Updated Apr 4, 2020 - R
Updated Jan 20, 2020 - Jupyter Notebook
Updated Jan 27, 2020 - Python
I don't see any documentation on how to sort the variables in the plot. I don't want them sorted by SHAP value but in their original order, since I'm studying the behaviour of a wave and want to see a multiclass distribution of SHAP values over time.
Currently they appear as
t=24
t=32
t=1
....
I want them to appear as
t=1
t=2
t=3
t=4
...
Is it possible?
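One workaround, assuming the features are columns named by time step: reorder them numerically before plotting (a sketch; `natural_time_order` is a hypothetical helper, and how the resulting order is fed into the plotting call depends on the SHAP version in use):

```python
import re

def natural_time_order(names):
    # Return the indices that sort labels like "t=24", "t=1" by their numeric
    # part rather than lexicographically, so "t=2" comes before "t=10".
    return sorted(range(len(names)),
                  key=lambda i: int(re.search(r"\d+", names[i]).group()))

names = ["t=24", "t=32", "t=1"]
order = natural_time_order(names)          # indices putting features in time order
sorted_names = [names[i] for i in order]   # -> ["t=1", "t=24", "t=32"]
# shap_values[:, order] would reorder the SHAP value columns the same way,
# after which the reordered arrays can be handed to the plotting function.
```

Applying the same index array to both the feature names and the SHAP value columns keeps them aligned, whatever plot is drawn afterwards.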