hyperparameter-optimization
Here are 556 public repositories matching this topic...
It seems there is no validation in `fit_ensemble` when the ensemble size is 0, which causes the failure seen in #1327.
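A guard along these lines would close the gap (a minimal sketch; the function name and error message are assumptions for illustration, not auto-sklearn's actual code):

```python
def validate_ensemble_size(ensemble_size):
    """Reject non-positive ensemble sizes before any fitting work starts."""
    if not isinstance(ensemble_size, int) or ensemble_size <= 0:
        raise ValueError(
            f"ensemble_size must be a positive integer, got {ensemble_size!r}"
        )
    return ensemble_size
```

Calling this at the top of `fit_ensemble` would turn the silent failure into an immediate, descriptive error.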
Motivation
The following table originally created in optuna/optuna#3138 (comment) may suggest that we can reduce the number of test functions and simplify the test logic.
| show_progress_bar | n_trials | timeout | n_jobs | covered by |
|---|---|---|---|---|
| False | None | None | 1 | test_o |
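The consolidation the table suggests could look roughly like this (a sketch only: argument values beyond the first row are assumed, and the placeholder body stands in for the real call into optuna's `optimize`):

```python
import pytest

# Hypothetical merged test covering the argument grid from the table.
@pytest.mark.parametrize(
    "show_progress_bar, n_trials, timeout, n_jobs",
    [
        (False, None, None, 1),  # row shown in the table
        (True, 5, None, 1),      # assumed additional combination
    ],
)
def test_optimize_arguments(show_progress_bar, n_trials, timeout, n_jobs):
    # Placeholder assertion standing in for the real optimize() call.
    assert n_jobs >= 1
```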
Related: awslabs/autogluon#1479
Add a scikit-learn compatible API wrapper of TabularPredictor:
- TabularClassifier
- TabularRegressor
Required functionality (may need more than listed):
- init API
- fit API
- predict API
- works in sklearn pipelines
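A minimal sketch of what such a wrapper could look like (the class shape, `predictor_factory` hook, and parameter names are assumptions for illustration; AutoGluon's actual `TabularPredictor` API differs in its details):

```python
class TabularClassifier:
    """scikit-learn-style facade over a TabularPredictor-like object.

    `predictor_factory` is any callable returning an object with
    fit(X, y) and predict(X) methods.
    """

    def __init__(self, predictor_factory, label="target"):
        self.predictor_factory = predictor_factory
        self.label = label

    def get_params(self, deep=True):
        # Required for sklearn's clone() and pipeline support.
        return {"predictor_factory": self.predictor_factory, "label": self.label}

    def set_params(self, **params):
        for key, value in params.items():
            setattr(self, key, value)
        return self

    def fit(self, X, y):
        self.predictor_ = self.predictor_factory(label=self.label)
        self.predictor_.fit(X, y)
        return self  # sklearn convention: fit returns self

    def predict(self, X):
        return self.predictor_.predict(X)
```

Implementing `get_params`/`set_params` and returning `self` from `fit` covers most of what sklearn pipelines require; `predict_proba` and input validation would follow the same pattern.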
This issue has been coming up when I use
`automl.predict_proba(input)`
I am using the requirements.txt in a venv. Shouldn't the input have feature names? This message never used to come up, and I don't know why.
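Recent scikit-learn versions warn when a model fitted on a DataFrame is later given a bare array without column names, which may be what is happening here. One workaround is to wrap the input with the training columns before predicting (a sketch; the column names below are made up, and `automl` is the object from the issue):

```python
import pandas as pd

# Assumed training column names, for illustration only.
train_columns = ["age", "income"]
raw_input = [[35, 52000]]

# Re-attach the feature names the model saw during fit.
input_df = pd.DataFrame(raw_input, columns=train_columns)
# automl.predict_proba(input_df)  # hypothetical call from the issue
```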
In principle, getting the parameters from FLAML into C# LightGBM seems to work, but I don't have any metrics yet. The parameter names are slightly different, but the documentation is adequate to match them up. Microsoft.ML seems to ship version 2.3.1 of LightGBM.
Another approach that might be useful, especially for anyone working with .NET, would be having some samples of conversion to ONNX
If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created.
We should add a column which stores some kind of hash of the actual data. When a Dataset would be created, if the metadata and data hash are exactly the same as an existing Dataset, nothing should be added to the ModelHub database and the existing
Describe the bug
The code could conform more closely to PEP 8 and similar conventions.
Expected behavior
Less code st
According to FastAPI's docs,
`response_model` can accept type annotations that are not pydantic models. However, the code referenced below checks for the `__fields__` attribute, which won't be present on type annotations such as `list[float]`, for example. https://github.com/ray-project/ray/blob/e60a5f52eb93c851b186cb78fa1f70d
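The distinction the issue describes can be sketched as follows (this illustrates the problem, not Ray's eventual fix; `FakeModel` is a stand-in for a pydantic `BaseModel`, which defines `__fields__`):

```python
def is_pydantic_model(annotation) -> bool:
    """The heuristic the referenced code relies on: pydantic models carry
    a __fields__ attribute, while plain annotations like list[float] do not."""
    return hasattr(annotation, "__fields__")

class FakeModel:
    # Stands in for a pydantic BaseModel, which defines __fields__.
    __fields__ = {}
```

Because `list[float]` has no `__fields__`, any code path that unconditionally reads that attribute will fail for such annotations, which is exactly the gap the issue points at.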