shap
Here are 129 public repositories matching this topic...
Recently dtreeviz added support for LightGBM models: https://github.com/parrt/dtreeviz/
So it would be cool to add the same tree visualization in explainerdashboard that already exists for RandomForest, ExtraTrees, and XGBoost models.
If someone wants to pitch in to help extract the individual predictions of the decision trees inside LightGBM boosters and then get them into shape to be used by the
CatBoost has a slightly different API, and currently EarlyStoppingShapRFECV throws an error when I try to use it:
TypeError: fit() got an unexpected keyword argument 'eval_metric'
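One possible workaround until the API difference is handled upstream: wrap the estimator and silently drop any `fit()` keyword arguments its signature does not accept. This is a hypothetical sketch (the wrapper class and demo estimator are invented for illustration), not code from the library:

```python
import inspect

class FitKwargsFilter:
    """Wrap an estimator and drop fit() kwargs it does not accept.

    Hypothetical workaround: some wrappers pass eval_metric /
    early-stopping kwargs meant for LightGBM or XGBoost; an estimator
    with a different fit() signature then raises a TypeError, so we
    filter the unsupported kwargs out before delegating.
    """
    def __init__(self, estimator):
        self.estimator = estimator

    def __getattr__(self, name):
        # Delegate everything else (predict, get_params, ...) untouched.
        return getattr(self.estimator, name)

    def fit(self, X, y, **kwargs):
        params = inspect.signature(self.estimator.fit).parameters
        accepts_var_kw = any(p.kind is inspect.Parameter.VAR_KEYWORD
                             for p in params.values())
        if not accepts_var_kw:
            kwargs = {k: v for k, v in kwargs.items() if k in params}
        self.estimator.fit(X, y, **kwargs)
        return self

# Demo with a stand-in estimator whose fit() lacks eval_metric:
class DummyModel:
    def fit(self, X, y, sample_weight=None):
        self.fitted_ = True
        return self

model = FitKwargsFilter(DummyModel())
# eval_metric would raise a TypeError without the filter.
model.fit([[0], [1]], [0, 1], eval_metric="r2")
```

A cleaner long-term fix is for the selector to inspect the estimator type and pass early-stopping options in the form each library expects, but the filtering wrapper keeps existing callers working in the meantime.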
Users don't know what chance they need to get into the universities of their choice.
Minimum chance needed to get into some top universities.
Feature originally suggested by naveen_v on the fastai forums.
When using r2 as the eval metric for a regression task (with 'Explain' mode), the metric values reported in the Leaderboard (in the README.md file) are multiplied by -1.
For instance, the metric value for some model shown in the Leaderboard is -0.41, while clicking the model name leads to the detailed results page, where the value of r2 is 0.41.
I've noticed that when one of the R2 metric values in the L
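This sign flip is consistent with a common convention (used, for example, by scikit-learn's `neg_*` scorers): metrics where higher is better are negated so the optimizer can always minimize, and a leaderboard that reports the optimizer's objective directly shows the negated value. A self-contained sketch of the convention, with an illustrative r2 computed by hand:

```python
import numpy as np

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = [3.0, 0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

score = r2(y_true, y_pred)   # higher is better
loss = -score                # what a minimizer optimizes

# Reporting `loss` instead of `score` produces the -0.41-style
# Leaderboard values; negating once recovers the true r2.
assert np.isclose(-loss, score)
```

So the fix on the reporting side is simply to negate maximize-style metrics once more before writing them to the Leaderboard, while leaving minimize-style metrics (rmse, logloss, ...) untouched.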