Machine learning
Machine learning is the practice of teaching computers to learn from data. It uses pattern recognition, along with other forms of predictive algorithms, to make judgments on incoming data. The field is closely related to artificial intelligence and computational statistics.
Here are 38,904 public repositories matching this topic...
Some lines in the code blocks of the Keras docs are too long, so a horizontal scroll bar appears at the bottom of the code block, which is hard to read. The long lines should be wrapped into multiple shorter lines to improve readability.
Example:
The docs for the SimpleRNN class (https://keras.io/layers/recurrent/#simplernn). The initializer of SimpleRNN has m
In IterativeImputer, min_value and max_value default to None. Internally, if they are None, the min and max values are set to -np.inf and np.inf, respectively.
We should change this behaviour and make the defaults min_value=-np.inf and max_value=np.inf directly.
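A minimal sketch of the behaviour being discussed (this is an illustration, not scikit-learn's actual implementation): None bounds are translated to infinities internally, which is why defaulting to the infinities directly would be equivalent but simpler.

```python
import numpy as np

def resolve_bounds(min_value=None, max_value=None):
    # Hedged sketch (not scikit-learn's actual code) of the current
    # behaviour: None bounds are translated to -inf/+inf internally.
    lo = -np.inf if min_value is None else min_value
    hi = np.inf if max_value is None else max_value
    return lo, hi
```

With defaults of min_value=-np.inf and max_value=np.inf, this translation step becomes a no-op.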
line 123:
X, Y = read_images(DATASET_PATH, MODE, batch_size)
line 66:
classes = sorted(os.walk(dataset_path).next()[1])
The second line raises StopIteration.
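A hedged sketch of a Python 3-compatible fix for the line above (the original snippet is Python 2, where generators exposed a .next() method; the function name here is illustrative):

```python
import os

def list_classes(dataset_path):
    # Python 3 fix: os.walk() returns a generator without a .next()
    # method, so call the built-in next() on it instead. next() raises
    # StopIteration when dataset_path does not exist, so guard for that.
    try:
        _root, dirs, _files = next(os.walk(dataset_path))
    except StopIteration:
        raise FileNotFoundError("no such dataset directory: %r" % dataset_path)
    return sorted(dirs)
```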
Is there a way git-cloned TensorFlow repositories can run without overhead issues?
🐛 Bug
To Reproduce
Run following from jupyter lab console
import torch
foo = torch.arange(5)
foo.as_strided((5,), (-1,), storage_offset=4)

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/daniil/.local/lib/python3.6/site-packages/torch/tensor.py", line 159, i
Current Behavior:
On the wiki page APIExample, in the Python example, the api handle is passed through the TessBaseAPIDelete function if the api failed to initialize, whereas in the C example below this is not the case.
python:
rc = tesseract.TessBaseAPIInit3(api, TESSDATA_PREFIX, lang)
if (rc):
te-
Updated
Feb 21, 2020 - Python
Can't find "from sklearn.cross_validation import train_test_split" in the latest version of scikit-learn
Describe the bug
Can't find "from sklearn.cross_validation import train_test_split" in the latest version of scikit-learn.
To Reproduce
Steps to reproduce the behavior:
- Day 1
- Step 5: Splitting the dataset into training set and test set
- Can't find "from sklearn.cross_validation import train_test_split" in the latest version of scikit-learn
**Desktop (please complete the following infor
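For reference, a minimal sketch of the likely fix: the sklearn.cross_validation module was deprecated and later removed, and train_test_split now lives in sklearn.model_selection.

```python
# sklearn.cross_validation was removed in scikit-learn 0.20;
# the same helper now lives in sklearn.model_selection.
from sklearn.model_selection import train_test_split

data = list(range(10))
train, test = train_test_split(data, test_size=0.2, random_state=0)
```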
I think we can use either Clang.jl or a simple compile-time C script to generate all the values in SuiteSparse_wrapper.c and build a .jl file of constants.
Can address JuliaLang/julia#20985 too.
I think we can also then try to bring back simultaneous support for 32- and 64-bit SuiteSparse.
This should really help to keep track of papers read so far. I would love to fork the repo and keep checking the boxes in my local fork.
For example: Have a look at this section. People fork this repo and check the boxes as they finish reading each section.
Sir, could you please explain the difference between estimators and predictors in scikit-learn, which you elaborate on in your book on pages 61-62?
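A hedged summary of scikit-learn's own vocabulary (not a quote from the book): an estimator is any object that learns from data via fit(); a predictor is an estimator that also implements predict(). A minimal sketch:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([1.0, 3.0, 5.0])  # exactly y = 2x + 1

model = LinearRegression()               # estimator: anything with fit()
model.fit(X, y)                          # learns coef_ and intercept_
pred = model.predict(np.array([[3.0]]))  # predictor: also has predict()
```

Transformers (with transform()) are a third kind of estimator; LinearRegression happens to be both an estimator and a predictor.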
The current installation doc (https://xgboost.readthedocs.io/en/latest/build.html) has accumulated many sections over time. It's time to simplify it and make it easy to follow.
Good examples:
- RAPIDS: https://rapids.ai/start.html

"Bokeh is a Python interactive visualization library that targets modern web browsers for presentation. Its goal is to provide elegant, concise construction of novel graphics in the style of D3.js, but also deliver this capability with high-performance interactivity over very large or streaming datasets. Bokeh can help anyone who would like to quickly and easi
What's the ETA for updating the massively outdated documentation?
Please update all documents related to building CNTK from source with the latest CUDA dependencies, as indicated in CNTK.Common.props and CNTK.Cpp.props.
I tried to build from source, but it's a futile effort.
I am having difficulty running this package as a web service. I would appreciate any documentation on implementing an API to get the keypoints from an image. Our aim is to deploy this API as an Azure Function, and to know whether that is feasible.
I was going through the existing enhancement issues again and thought it'd be nice to collect ideas for spaCy plugins and related projects. There are always people in the community who are looking for new things to build, so here's some inspiration.
If you have questions about the projects I suggested,
Transformer-XL reports scores on Penn Treebank and WikiText-103, not WikiText-2.
The 54.52 score listed under WikiText-2 is incorrect and should be moved to the Penn Treebank table.
README upgrade
I recently added a "back to top" button to the README. What other features would make it easier to browse? Please share your recommendations.
- Wikipedia
tf.function makes invalid assumptions about arguments that are Mapping instances. In general, there is no requirement for Mapping instances to have constructors that accept [(key, value)] initializers, as assumed here. This leads to cryptic exceptions when used with perfectly valid Mappings.
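To illustrate the point with a hypothetical class (not from TensorFlow): a Mapping subclass is free to have a constructor that rejects an [(key, value)] list, so rebuilding one that way fails even though the object is a perfectly valid Mapping.

```python
from collections.abc import Mapping

class FrozenEnv(Mapping):
    # A perfectly valid Mapping whose constructor accepts only keyword
    # arguments -- it cannot be rebuilt from an [(key, value)] list,
    # which is what tf.function is reported to assume.
    def __init__(self, **kwargs):
        self._data = dict(kwargs)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

env = FrozenEnv(lr=0.01, epochs=3)

# Rebuilding the way described above fails with a TypeError:
try:
    type(env)(list(env.items()))
except TypeError:
    rebuilt_ok = False
else:
    rebuilt_ok = True
```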