cross-validation
Here are 394 public repositories matching this topic...
This issue documents how to use this package for nested cross-validation. If you have any questions, feel free to comment below.
Flat cross-validation vs. nested cross-validation
To clarify what these two terms mean in this issue, let me first describe them.
Flat cross-validation
Let us use 5-fold as an example. In 5-fold flat cross-validation, you split the dataset into five folds; each fold serves once as the test set while the remaining four folds are used for training.
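The flat/nested distinction can be made concrete with a short sketch (a minimal plain-Python illustration with hypothetical helper names, not code from the package discussed in this issue):

```python
import random

def k_folds(n, k, seed=0):
    """Split indices 0..n-1 into k disjoint, roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def nested_cv(n, outer_k=5, inner_k=3):
    """Return (train, test, inner_folds) for each outer fold.

    In nested CV, hyperparameter search uses only the inner folds,
    which are built from the outer training portion; the outer test
    fold is never touched during model selection. In flat CV there is
    no inner loop, so the same folds do double duty for tuning and
    evaluation, which biases the performance estimate optimistically.
    """
    splits = []
    for test in k_folds(n, outer_k):
        held_out = set(test)
        train = [i for i in range(n) if i not in held_out]
        inner = k_folds(len(train), inner_k)  # folds over the training portion only
        splits.append((train, test, inner))
    return splits

train, test, inner = nested_cv(20)[0]
assert set(train).isdisjoint(test)               # outer test fold is held out
assert sum(len(f) for f in inner) == len(train)  # inner folds cover only the training part
```

The assertions at the end show the key property: the inner (tuning) folds never see the outer test fold.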
This came up when I was trying to use the weights.psis function to build an actual importance sampler. The weights that are returned are not the log-PSIS weights, but rather the log-PSIS weights minus the largest log ratio. @avehtari documented the solution here: https://discourse.mc-stan.org/t/trouble-using-loo-psis-for-importance-sampling/9209/2
This is not super urgent, but it would
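Why the constant offset is harmless once you self-normalize can be sketched in plain Python (an illustration of the log-sum-exp trick, not the loo package's actual code; the log weights below are made-up values):

```python
import math

def self_normalized_weights(log_w):
    """Turn (possibly shifted) log importance weights into normalized weights.

    Subtracting the maximum is purely a numerical-stability trick:
    any constant shift of the log weights cancels in the normalization,
    so weights reported as "log weights minus the largest log ratio"
    still yield the correct self-normalized importance sampler.
    """
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]
    total = sum(w)
    return [x / total for x in w]

log_w = [-1.2, -0.3, -2.5, -0.9]                 # hypothetical log weights
shifted = [lw - max(log_w) for lw in log_w]       # the shifted form described above
assert self_normalized_weights(log_w) == self_normalized_weights(shifted)
```

The final assertion demonstrates the invariance: the shifted and unshifted log weights produce identical normalized weights.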
Hi,
I was wondering whether there is a mistake in the FAQ section of the documentation ("Should I apply ICA first or autoreject first?").
According to the MNE docs, the reject parameter in ica.fit() only applies to an instance of Raw, not Epochs.
However, here ica.fit() is used with reject on epochs several times, e.g.:
>>> reject = get_rejection_threshold(epochs)
>>> ica.fit(epochs, reject=reject)
Users can not only compare different feature sets against a fixed built-in model, but also supply a model of their own choice. This does not limit the exploration of new models or pipelines: users can rely on the built-in best practices while evaluating such new models on the features of their choice.
xgboost API change
In Python XGBoost, one can provide a weight for each row of the data; see http://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.XGBClassifier.fit. I tried to find a way to specify such weights in SharpLearning, but could not. Is this possible?
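For reference, what per-row weights accomplish in the objective can be sketched with a weighted log-loss in plain Python (an illustration of the concept, not XGBoost's or SharpLearning's implementation):

```python
import math

def weighted_logloss(y_true, y_prob, weights):
    """Log-loss where each row contributes in proportion to its weight.

    This mirrors the role of sample_weight in XGBClassifier.fit: each
    row's contribution to the loss (and hence its gradients) is scaled
    by its weight, so heavily weighted rows pull the fit harder.
    """
    total_w = sum(weights)
    loss = 0.0
    for y, p, w in zip(y_true, y_prob, weights):
        loss += -w * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return loss / total_w

y_true = [1, 0, 1]
y_prob = [0.9, 0.2, 0.6]   # made-up predicted probabilities
uniform = weighted_logloss(y_true, y_prob, [1, 1, 1])
# Up-weighting the worst-predicted third row increases the average loss,
# so a weight-aware learner would focus on fitting that row better.
skewed = weighted_logloss(y_true, y_prob, [1, 1, 5])
assert skewed > uniform
```

With uniform weights this reduces to the ordinary average log-loss, which is why unweighted training is the special case of all weights equal to 1.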