Machine learning
Machine learning is the practice of teaching a computer to learn from data. It uses pattern recognition and other predictive algorithms to make judgments about incoming data, and is closely related to artificial intelligence and computational statistics.
Here are 38,844 public repositories matching this topic...
Please add the latest examples in the examples/ folder to docs/mkdocs.yml so these examples appear on the homepage. With that many examples, subsections may be necessary. See Homepage
Thank you!
scikit-learn/scikit-learn#16404 adds the Bunch object to the public API docs. However, the API docs do not reference Bunch where it's used.
It'd be nice to fix those and link to the now available Bunch page.
I think "outputs[-1]" and "outputs[0]" are equivalent (just reversed) in this line of code, but the former (89%) performs better than the latter (86%). Why?
🐛 Bug
To Reproduce
Run following from jupyter lab console
import torch
foo = torch.arange(5)
foo.as_strided((5,), (-1,), storage_offset=4)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/daniil/.local/lib/python3.6/site-packages/torch/tensor.py", line 159, i
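For context, a minimal sketch of what as_strided does support (assuming a recent PyTorch build; PyTorch's as_strided rejects negative strides, so a reversed view has to be built with torch.flip instead):

```python
import torch

foo = torch.arange(5)  # tensor([0, 1, 2, 3, 4])

# Non-negative strides are supported: view elements 2..4 of the storage.
view = foo.as_strided((3,), (1,), storage_offset=2)
print(view)  # tensor([2, 3, 4])

# A reversed view needs torch.flip, since as_strided does not
# support negative strides.
reversed_foo = torch.flip(foo, dims=(0,))
print(reversed_foo)  # tensor([4, 3, 2, 1, 0])
```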
Current Behavior:
On the wiki page APIExample, in the Python example, the api handle is run through the TessBaseAPIDelete function if the API failed to initialize, whereas in the C example below this is not the case.
python:
rc = tesseract.TessBaseAPIInit3(api, TESSDATA_PREFIX, lang)
if (rc):
Line 1137 of caffe.proto states "By default, SliceLayer concatenates blobs along the "channels" axis (1)."
Yet, the documentation on http://caffe.berkeleyvision.org/tutorial/layers/slice.html states, "The Slice layer is a utility layer that slices an input layer to multiple output layers along a given dimension (currently num or channel only) with given slice indices." which seems to be
I am attempting to run faceswap on an NVIDIA Jetson Nano board.
A few of the required faceswap dependencies didn't have pre-built binaries for this chip architecture so I manually compiled them, including:
- opencv-python==4.1.2.30
- numpy==1.17.4
The faceswap program runs ok and I can begin to train but it always eventuall
Can't find "from sklearn.cross_validation import train_test_split" in the latest version of scikit-learn
Describe the bug
Can't find "from sklearn.cross_validation import train_test_split" in the latest version of scikit-learn
To Reproduce
Steps to reproduce the behavior:
- Day1
- Step 5: Splitting the datasets into training sets and Test sets
- Can't find "from sklearn.cross_validation import train_test_split" in the latest version of scikit-learn
Desktop (please complete the following infor
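For reference, a minimal sketch of the updated import path: the cross_validation module was removed from scikit-learn (deprecated in 0.18, gone by 0.20), and train_test_split now lives in sklearn.model_selection.

```python
# train_test_split moved from sklearn.cross_validation (removed)
# to sklearn.model_selection.
from sklearn.model_selection import train_test_split

X = [[0], [1], [2], [3]]
y = [0, 1, 2, 3]

# test_size=0.25 holds out 1 of the 4 samples for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)
print(len(X_train), len(X_test))  # 3 1
```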
Currently as follows:
julia> abstract type Foo{S} end
julia> struct Bar <: Foo end
ERROR: invalid subtyping in definition of Bar
Stacktrace:
[1] top-level scope at REPL[2]:1
Ideally it would at least tell you that you forgot a type parameter, and, if it's extra nice, show you the signature of the type you're trying to subtype so you can see what type parameters it has.
This should really help to keep track of papers read so far. I would love to fork the repo and keep checking the boxes in my local fork.
For example: Have a look at this section. People fork this repo and check the boxes as they finish reading each section.
Sir, could you please explain the difference between estimators and predictors in scikit-learn, which you elaborated on in your book on pages 61-62?
The current installation doc (https://xgboost.readthedocs.io/en/latest/build.html) has accumulated lots of sections over time. It's time to simplify it and make it easy to follow.
Good examples:
- RAPIDS: https://rapids.ai/start.html

"Bokeh is a Python interactive visualization library that targets modern web browsers for presentation. Its goal is to provide elegant, concise construction of novel graphics in the style of D3.js, but also deliver this capability with high-performance interactivity over very large or streaming datasets. Bokeh can help anyone who would like to quickly and easi
What's the ETA for updating the massively outdated documentation?
Please update all documents related to building CNTK from source with the latest CUDA dependencies indicated in CNTK.Common.props and CNTK.Cpp.props.
I tried to build from source, but it's a futile effort.
I am having difficulty running this package as a web service. I would appreciate any documentation on implementing an API that returns the keypoints from an image. Our aim is to deploy this API as an Azure Function, and to know whether that is feasible.
I was going through the existing enhancement issues again and thought it'd be nice to collect ideas for spaCy plugins and related projects. There are always people in the community looking for new things to build, so here's some inspiration.
If you have questions about the projects I suggested,
Transformer-XL reports scores on Penn, and WikiText-103, not WikiText-2.
The 54.52 score on WikiText-2 is incorrect and should be moved to the Penn table.
README upgrade
I recently added "back to top" button to README. What other features would make it easier to browse? Please write your recommendation.
tf.function makes invalid assumptions about arguments that are Mapping instances. In general, there is no requirement for Mapping instances to have constructors that accept [(key, value)] initializers, as assumed here. This leads to cryptic exceptions when used with perfectly valid Mappings.
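To illustrate the point (a self-contained sketch; FrozenMap is a hypothetical class written for this example, not part of TensorFlow): a class can fully satisfy the Mapping ABC while its constructor takes something other than an iterable of (key, value) pairs, so rebuilding an argument via type(arg)(items) can fail.

```python
from collections.abc import Mapping

# Hypothetical but perfectly valid Mapping: its constructor takes
# keyword arguments, not an iterable of (key, value) pairs.
class FrozenMap(Mapping):
    def __init__(self, **entries):
        self._data = dict(entries)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

m = FrozenMap(a=1, b=2)
print(dict(m))  # {'a': 1, 'b': 2}

# Rebuilding the mapping from (key, value) pairs -- the pattern the
# issue says tf.function assumes -- raises TypeError for this class.
try:
    type(m)(list(m.items()))
    print("reconstruction succeeded")
except TypeError:
    print("constructor does not accept [(key, value)] pairs")
```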