hyperparameter-optimization
Here are 422 public repositories matching this topic...
I'm using MXNet for some work, but searching for an MXNet trial or example turns up nothing.
Time-series split
The problem I want to use auto-sklearn on is a time series. Can we modify the sklearn integration to include cross-validation with a time-series split?
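For illustration, here is a minimal expanding-window splitter in the spirit of scikit-learn's TimeSeriesSplit (the fold sizing below is a simplification, not scikit-learn's exact scheme): every test fold comes strictly after its training window, so no future data leaks into training.

```python
def time_series_splits(n_samples, n_splits):
    """Yield (train_indices, test_indices) pairs with an expanding
    training window; every test index comes after all train indices."""
    fold = n_samples // (n_splits + 1)
    for i in range(1, n_splits + 1):
        train_end = fold * i
        test_end = min(train_end + fold, n_samples)
        yield list(range(train_end)), list(range(train_end, test_end))
```

Unlike a shuffled k-fold, each successive fold only grows the training window forward in time, which is what makes the scheme valid for temporal data.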
In some parts of Optuna, numpy.append is used. However, dynamically growing a numpy.ndarray is slow, so for performance it is better to append to a Python list first and then convert with numpy.asarray. We are looking for someone to find and fix the places where numpy.append is used.
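The pattern in question can be sketched as follows; both helper names are illustrative, but they show why list-then-asarray beats repeated numpy.append:

```python
import numpy as np

def collect_with_append(values):
    # Anti-pattern: np.append copies the entire array on every call,
    # making the whole loop quadratic in len(values).
    out = np.array([], dtype=float)
    for v in values:
        out = np.append(out, v)
    return out

def collect_with_list(values):
    # Preferred: grow a Python list (amortized O(1) appends), then
    # convert once at the end with np.asarray.
    out = []
    for v in values:
        out.append(v)
    return np.asarray(out, dtype=float)
```

Both return identical arrays; only the asymptotic cost of building them differs.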
Resuming training
How do I resume training for text classification?
In #332 we added support for Optuna. It can be used for tuning:
- Extra Trees
- Random Forest
- Xgboost
- LightGBM
- CatBoost
It would be nice to have a visualization of the Optuna optimization. Optuna study results are saved as joblib files in the optuna directory.
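One way to sketch that visualization: the optimization-history curve is just the running best objective value per trial. The helper below is a hypothetical sketch (as is the file path in the comments), showing how it might hook up to a joblib-saved study:

```python
def best_so_far(trial_values):
    """Running best (minimum) objective value after each trial -- the
    curve an optimization-history plot draws for a minimization study."""
    best = float("inf")
    curve = []
    for v in trial_values:
        best = min(best, v)
        curve.append(best)
    return curve

# Hypothetical usage against a joblib-saved study:
# import joblib
# study = joblib.load("optuna/study.joblib")
# curve = best_so_far([t.value for t in study.trials])
```

The resulting curve is monotonically non-increasing, so plotting it against the trial index directly shows when the search stopped improving.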
Grid search variant
I think it would be useful to have a grid-search optimizer in this package, but its implementation would probably differ considerably from the other implementations (sklearn, ...).
The requirements are:
- The grid search has to stop after n_iter iterations instead of searching the entire search space.
- The positions should not be precalculated at the beginning of the optimization (I have concerns about memory load).
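A sketch of how both requirements could be met with lazy iteration (the names here are illustrative, not this package's API): itertools.product yields candidate positions one at a time, and islice caps the search at n_iter, so the grid is never materialized in memory.

```python
from itertools import islice, product

def grid_search(search_space, objective, n_iter):
    """Lazily walk the Cartesian product of search_space and stop after
    n_iter evaluations; positions are never precomputed up front."""
    keys = list(search_space)
    candidates = product(*(search_space[k] for k in keys))  # lazy generator
    best_params, best_score = None, float("-inf")
    for values in islice(candidates, n_iter):
        params = dict(zip(keys, values))
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Because product is a generator, memory use stays constant regardless of grid size, and islice enforces the n_iter budget without touching the rest of the grid.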
If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created.
We should add a column which stores some kind of hash of the actual data. When a Dataset would be created, if the metadata and data hash are exactly the same as an existing Dataset's, nothing should be added to the ModelHub database and the existing Dataset should be reused.
Is your feature request related to a problem? Please describe.
With the current implementation, the _fit_transform_data_container function calls _fit_data_container() and then _transform_data_container(). With the right parameter, each of them can execute its handle_fit/handle_transform call in a parallel fashion.
Describe the solution you'd like
We should have an implementation of _f
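As a rough sketch of the idea (not Neuraxle's actual API), parallel handling of independent batches can be expressed with a thread pool; Executor.map yields results in submission order, so outputs line up with inputs:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_apply(fn, batches, max_workers=4):
    """Apply fn to each batch concurrently while preserving order.
    fn stands in for a handle_fit/handle_transform-style call."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fn, batches))
```

A process pool would be the analogous choice when fn is CPU-bound Python rather than I/O-bound or NumPy-heavy work.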
Hi, is there an easy way to get the memory occupied by an object ref?
E.g.
ray.sizeof(ray.put(object))
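Assuming no such ray.sizeof API exists, one rough stdlib proxy is the object's serialized size, which approximates what it would occupy once stored in an object store (Ray's actual accounting differs):

```python
import pickle

def approx_object_size(obj):
    """Rough proxy for the memory an object would occupy once stored:
    the length of its pickled byte representation."""
    return len(pickle.dumps(obj, protocol=pickle.HIGHEST_PROTOCOL))
```

This undercounts or overcounts depending on the serializer in use, but it is often good enough for comparing relative sizes of candidate objects.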