hyperparameter-optimization
Here are 515 public repositories matching this topic...
From issue #1302, it appears auto-sklearn is a bit unstable when run many times in the same script, i.e. in a for loop:

    for i in range(400):
        automodel = AutoSklearn(full_resources)
        automodel.fit(x, y)

We currently have no test for this, and it would be good to see if we can reproduce the same "connection refused" error.
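A minimal regression test might look like the sketch below. It is only a sketch: the loop count is reduced from the 400 in the report, the synthetic dataset and time limits are assumptions chosen to keep the test fast, and `AutoSklearnClassifier` stands in for whatever estimator the report used.

```python
import numpy as np
from autosklearn.classification import AutoSklearnClassifier

def test_repeated_fit_in_one_process():
    # Tiny synthetic problem; the values are placeholders.
    rng = np.random.RandomState(0)
    X = rng.rand(100, 5)
    y = rng.randint(0, 2, size=100)
    # The original report looped 400 times; a smaller count may
    # already reproduce the "connection refused" failure.
    for _ in range(10):
        model = AutoSklearnClassifier(
            time_left_for_this_task=30,  # keep each fit short
            per_run_time_limit=10,
        )
        model.fit(X, y)  # should complete without a connection error
```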
Motivation
Since LightGBM v3.3.0, some of train/cv's arguments have been deprecated, as described in optuna/optuna#3013. It would be great to replace those deprecated arguments with the way recommended by LightGBM.
Description
Replace the deprecated arguments with their callback-based equivalents. The ta
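For reference, the callback-based style in plain LightGBM looks roughly like the sketch below; `lgb.early_stopping` and `lgb.log_evaluation` are LightGBM's public callbacks, while the dataset and parameter values are placeholders.

```python
import numpy as np
import lightgbm as lgb

rng = np.random.RandomState(0)
X, y = rng.rand(200, 5), rng.randint(0, 2, size=200)
train_set = lgb.Dataset(X[:150], label=y[:150])
valid_set = lgb.Dataset(X[150:], label=y[150:], reference=train_set)

# Instead of the deprecated early_stopping_rounds / verbose_eval
# arguments, pass the equivalent callbacks to lgb.train:
booster = lgb.train(
    {"objective": "binary", "verbosity": -1},
    train_set,
    valid_sets=[valid_set],
    callbacks=[
        lgb.early_stopping(stopping_rounds=10),
        lgb.log_evaluation(period=5),
    ],
)
```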
When running TabularPredictor.fit(), I encounter a BrokenPipeError for some reason. What is causing this? Could it be due to an OOM error?

Fitting model: XGBoost ...
    -34.1179 = Validation root_mean_squared_error score
    10.58s = Training runtime
    0.03s = Validation runtime
Fitting model: NeuralNetMXNet ...
    -34.2849 = Validation root_mean_squared_error score
    43.63s =
Details in discussion mljar/mljar-supervised#421
Here's the code:

    from flaml import AutoML  # import and instantiation added for completeness

    automl = AutoML()
    settings = {
        "time_budget": 360,
        "metric": 'ap',
        "task": 'classification',
        "log_file_name": f'{output_dir}/flaml1.log',  # output_dir comes from the caller's environment
        "seed": 7654321,
        "log_training_metric": True,
        "groups": group_id,  # group labels defined elsewhere in the script
        "estimator_list": ['catboost'],
    }
    automl.fit(X_train=X_train, y_train=y_train, **settings)

Here's the output:
Grid search variant
I think it would be useful to have a grid search optimizer in this package, but its implementation would probably be quite different from the existing ones (sklearn, ...).
The requirements are:
- The grid search has to stop after n_iter iterations instead of searching the entire search space.
- The positions should not be precalculated at the beginning of the optimization (I have concerns about memory load); see the sketch after this list.
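Both requirements can be met by decoding a running index into grid coordinates on the fly, so the full position list is never materialized. A minimal sketch; the class name, constructor signature, and scoring convention are hypothetical, not this package's API:

```python
import numpy as np

class LazyGridSearch:
    """Grid search that generates positions lazily and stops after n_iter."""

    def __init__(self, search_space, n_iter):
        # search_space: dict mapping parameter name -> 1-D array of values
        self.names = list(search_space)
        self.values = [np.asarray(search_space[name]) for name in self.names]
        self.sizes = [len(v) for v in self.values]
        self.total = int(np.prod(self.sizes))
        self.n_iter = min(n_iter, self.total)

    def _position(self, index):
        # Mixed-radix decoding of a flat index into one grid coordinate,
        # so memory stays O(1) per position instead of O(grid size).
        params = {}
        for name, vals, size in zip(self.names, self.values, self.sizes):
            index, k = divmod(index, size)
            params[name] = vals[k]
        return params

    def search(self, objective):
        best_params, best_score = None, -np.inf
        for i in range(self.n_iter):  # stops after n_iter, not the full grid
            params = self._position(i)
            score = objective(params)
            if score > best_score:
                best_params, best_score = params, score
        return best_params, best_score

# Example: a 100 x 100 grid, of which only 500 positions are ever evaluated.
space = {"x": np.arange(-5, 5, 0.1), "y": np.arange(-5, 5, 0.1)}
opt = LazyGridSearch(space, n_iter=500)
best, score = opt.search(lambda p: -(p["x"] ** 2 + p["y"] ** 2))
```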
If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created.
We should add a column which stores some kind of hash of the actual data. When a Dataset would be created, if the metadata and data hash are exactly the same as those of an existing Dataset, nothing should be added to the ModelHub database and the existing Dataset should be reused.
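Such a hash can be computed from the file bytes without loading the whole file into memory. A sketch; the function name and chunk size are assumptions, not part of the existing codebase:

```python
import hashlib

def hash_train_data(train_path, chunk_size=1 << 20):
    """Return a stable hex digest of the file at train_path.

    Reading in fixed-size chunks keeps memory use constant,
    even for large CSV files.
    """
    digest = hashlib.sha256()
    with open(train_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Comparing this digest, together with the existing metadata, against stored Datasets would then decide whether a new row needs to be inserted.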
Describe the bug
The code could conform more closely to PEP 8 and similar conventions.
Expected behavior
Less code st
Search before asking
Description
We have introduced the detached flag to indicate the lifetime of an actor in Python, as shown below, but we have not introduced it for Java.
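The Python side looks roughly like the sketch below. It uses `lifetime="detached"`, which is how recent Ray releases expose the flag (an assumption; older versions spelled it differently), and the actor class and name are placeholders:

```python
import ray

ray.init()

@ray.remote
class Counter:
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1
        return self.n

# "detached" decouples the actor's lifetime from the job that created it:
# the actor keeps running after this driver exits and can later be
# retrieved by name with ray.get_actor("global_counter").
counter = Counter.options(name="global_counter", lifetime="detached").remote()
print(ray.get(counter.increment.remote()))
```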