automl
Here are 627 public repositories matching this topic...
Feature Description
We want to enable users to specify value ranges for any argument in the blocks.
The following code example shows a typical use case:
the user specifies the number of units in a DenseBlock to be either 10 or 20.
Code Example
import au
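The snippet above is cut off by the listing; a minimal sketch of what the requested usage could look like with AutoKeras' AutoModel API (the list-valued num_units is the proposed behaviour being requested, not an existing API, and StructuredDataInput/ClassificationHead are assumed for illustration):

    import autokeras as ak

    input_node = ak.StructuredDataInput()
    # Today a single value can be fixed, e.g. ak.DenseBlock(num_units=10).
    # The request is to allow a set of candidate values for the search, e.g.:
    # output_node = ak.DenseBlock(num_units=[10, 20])(input_node)  # proposed, not yet supported
    output_node = ak.DenseBlock(num_units=10)(input_node)
    output_node = ak.ClassificationHead()(output_node)
    model = ak.AutoModel(inputs=input_node, outputs=output_node, max_trials=3)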
Update a component
The components part of our codebase was written some time ago, with older sklearn versions and before Python typing was production-ready.
In general, some of these files need to be cleaned up: mostly typing of parameters and functions, adding documentation about these parameters, and finally double-checking with scikit-learn that there aren't new or deprecated parameters we still use.
To
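A purely illustrative sketch of the kind of cleanup described, using a hypothetical component rather than one from the codebase: parameters get type hints and documentation.

    from typing import Optional

    class ExampleComponent:
        """A hypothetical component, used only to illustrate the cleanup."""

        def __init__(self, n_estimators: int = 100, max_depth: Optional[int] = None) -> None:
            """
            Parameters
            ----------
            n_estimators : int, default=100
                Number of estimators to fit.
            max_depth : Optional[int], default=None
                Maximum tree depth; None means unrestricted.
            """
            self.n_estimators = n_estimators
            self.max_depth = max_depth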
- With Featuretools 1.0.0 we add a dataframe to an EntitySet with the following:
es = ft.EntitySet('new_es')
es.add_dataframe(dataframe=orders_df,
                 dataframe_name='orders',
                 index='order_id',
                 time_index='order_date')
Improvement
- However, you could also change the EntitySet setter to add it with this approach:
es = ft.Ent
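For context, a self-contained version of the current approach above, with a small made-up orders_df so the snippet runs end to end (Featuretools >= 1.0 assumed):

    import pandas as pd
    import featuretools as ft

    # Made-up orders data, only to make the example runnable.
    orders_df = pd.DataFrame({
        "order_id": [1, 2, 3],
        "customer_id": [10, 10, 20],
        "order_date": pd.to_datetime(["2021-01-04", "2021-02-11", "2021-03-02"]),
    })

    es = ft.EntitySet("new_es")
    es.add_dataframe(dataframe=orders_df,
                     dataframe_name="orders",
                     index="order_id",
                     time_index="order_date")
    print(es)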
We need to increase the number of datasources that MindsDB supports. This task should add a new datasource for connecting to Redash. You can find more info in the Redash docs.
If possible, please include a test for the datasource. You can check the example here.
Note: if you are familiar with another datasource tha
Could FeatureTools be implemented as an automated preprocessor for Autogluon, adding the ability to handle multi-entity problems (i.e. data split across multiple normalised database tables)? So if you supplied Autogluon with a list of DataFrames instead of a single DataFrame, it would first invoke FeatureTools (a rough sketch of this flow follows after the list):
- take the multiple Dataframes (entities) and try to auto-infer the relationship betwee
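A rough sketch of the flow being proposed, assuming Featuretools >= 1.0 and AutoGluon's TabularPredictor; customers_df, orders_df, and the 'churned' label are hypothetical placeholders for the user's own data, and note that today the relationship still has to be declared by hand (auto-inferring it is exactly what is being asked for):

    import featuretools as ft
    from autogluon.tabular import TabularPredictor

    # Hypothetical multi-table data; in the proposal these would be handed to AutoGluon directly.
    es = ft.EntitySet("shop")
    es.add_dataframe(dataframe=customers_df, dataframe_name="customers", index="customer_id")
    es.add_dataframe(dataframe=orders_df, dataframe_name="orders",
                     index="order_id", time_index="order_date")
    # Declared manually for now; auto-inferring this is the feature being discussed.
    es.add_relationship("customers", "customer_id", "orders", "customer_id")

    # Deep Feature Synthesis collapses the multi-table data into one feature matrix ...
    feature_matrix, feature_defs = ft.dfs(entityset=es, target_dataframe_name="customers")

    # ... which a single-table AutoML tool such as AutoGluon can then consume.
    predictor = TabularPredictor(label="churned").fit(feature_matrix.reset_index())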
We would like to forward a particular 'key' column, which is part of the features, so that it appears alongside the predictions; this is to be able to identify which set of features a particular prediction belongs to. Here is an example of the predictions output using the tensorflow.contrib.estimator.multi_class_head:
{"classes": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
"scores": [0.068196
Can we have an example of REST API calls in the documentation?
Examples with curl, HTTPie, or another client, together with the results, would be better for newbies.
Thanks again for your good work.
Problem
Some of our transformers & estimators are not thoroughly tested or not tested at all.
Solution
Use OpTransformerSpec and OpEstimatorSpec base test specs to provide tests for all existing transformers & estimators.
This issue has been coming up when I use
automl.predict_proba(input)
I am using the requirements.txt in a venv. Shouldn't the input have feature names?
This message did not use to come up, and I don't know why.
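If the message in question is scikit-learn's "X does not have valid feature names" warning (introduced around scikit-learn 1.0), one workaround is to pass the input as a DataFrame whose columns match the names seen during fit. A hedged sketch, where automl and feature_names come from the user's own setup:

    import pandas as pd

    # feature_names must be the same column names (and order) used when automl was fitted.
    input_df = pd.DataFrame(input, columns=feature_names)
    probabilities = automl.predict_proba(input_df)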
Support Python 3.10
Python 3.10 has been released. We should test it. If all the dependencies support it, we should add it to CI.
Describe the feature you'd like
Currently our CLI offers a way to install the Python packages that are required for a given integration. However, some of our integrations also have system requirements that are necessary to make them work (graphviz, kubectl, etc.).
All system requirements should be listed on an integration level, just
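A purely hypothetical illustration of the idea (names and structure are not taken from the actual codebase): each integration could declare its system-level requirements, and the CLI could check for them before installing the Python packages.

    import shutil

    # Hypothetical per-integration declaration of the system binaries it depends on.
    SYSTEM_REQUIREMENTS = {
        "kubeflow": ["kubectl"],
        "graph_visualization": ["dot"],  # provided by graphviz
    }

    def missing_system_requirements(integration: str) -> list:
        """Return the declared system binaries that are not found on the PATH."""
        return [binary for binary in SYSTEM_REQUIREMENTS.get(integration, [])
                if shutil.which(binary) is None]

    if missing := missing_system_requirements("kubeflow"):
        print(f"Please install the following system packages first: {missing}")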
Problem: Currently, JsonLoggerCallback.handle_result will load the entirety of the existing results, append the new result, and then rewrite the entire file. This may not scale when running long-running jobs or jobs with large results.
https://github.com/ray-project/ray/blob/4e8f90aca20aa7bb87a4e84039889444824382ca/python/ray/train/callbacks/logging.py#L138-L142
Potential Fix:
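One possible direction, not necessarily what the Ray maintainers chose: append each result as a newline-delimited JSON record, so the existing file never has to be re-read or rewritten. The class, attribute names, and method shape below are assumed for illustration only.

    import json

    class AppendOnlyJsonLoggerCallback:
        """Illustrative sketch only: writes one JSON record per line (JSON Lines)."""

        def __init__(self, log_path):
            self._log_path = log_path

        def handle_result(self, results, **info):
            # O(1) per result: append a single line instead of rewriting the whole file.
            with open(self._log_path, "a") as f:
                f.write(json.dumps(results) + "\n")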