Machine learning
Machine learning is the practice of teaching a computer to learn from data rather than from explicit programming. It uses pattern recognition and other predictive algorithms to make judgments about incoming data. The field is closely related to artificial intelligence and computational statistics.
Here are 40,325 public repositories matching this topic...
Steps:
- Go to a nonexistent page in the Keras docs, e.g. https://keras.io/asdfsada, so the 404 page is shown
- Search for something
- You are redirected to https://search.html/?q=something instead of a relative URL
Describe the bug
Calling a pipeline with a nonparametric method causes an error, since the transform() function is missing. The pipeline itself calls fit_transform() if it is present. For nonparametric methods (the most prominent being t-SNE), a regular transform() method does not exist, since there is no projection or mapping that is learned. It could still be used f
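The duck-typing issue described above can be illustrated without scikit-learn (the classes below are hypothetical stand-ins, not the library's code): a pipeline step qualifies as a transformer only if it exposes transform(), which t-SNE-style estimators deliberately lack.

```python
class Scaler:
    # A conventional transformer: learns nothing interesting here,
    # but exposes both transform() and fit_transform().
    def fit(self, X):
        return self

    def transform(self, X):
        return [[v / 10 for v in row] for row in X]

    def fit_transform(self, X):
        return self.fit(X).transform(X)


class TSNELike:
    # No transform(): the embedding is learned per-dataset, so there is
    # no mapping that could be applied to new, unseen points.
    def fit_transform(self, X):
        return [[sum(row)] for row in X]


for step in (Scaler(), TSNELike()):
    print(type(step).__name__, hasattr(step, "transform"))
# Scaler True
# TSNELike False
```

A pipeline that checks hasattr(step, "transform") on intermediate steps would therefore reject the t-SNE-like estimator even though fit_transform() alone is enough for a final step.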
🐛 Bug
To Reproduce
Run the following from a Jupyter Lab console:
import torch
foo = torch.arange(5)
foo.as_strided((5,), (-1,), storage_offset=4)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/daniil/.local/lib/python3.6/site-packages/torch/tensor.py", line 159, i
trainable_variables = weights.values() + biases.values() doesn't work.
Also, if I write trainable_variables = list(weights.values()) + list(biases.values()), I have to turn on tf.enable_eager_execution(), but the training result is wrong; accuracy is ar
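The first failure is plain Python 3 behavior, independent of TensorFlow: dict.values() returns a view object, and two views cannot be concatenated with +. A minimal sketch (the dicts below are simple stand-ins for the variable dicts in the issue):

```python
# Hypothetical stand-ins for the weights/biases variable dicts.
weights = {"w1": 1.0, "w2": 2.0}
biases = {"b1": 0.1, "b2": 0.2}

try:
    # In Python 3 this raises TypeError: dict_values views
    # do not support the + operator.
    trainable_variables = weights.values() + biases.values()
except TypeError:
    # Convert each view to a list first, then concatenate.
    trainable_variables = list(weights.values()) + list(biases.values())

print(trainable_variables)  # [1.0, 2.0, 0.1, 0.2]
```

The list() conversion fixes the concatenation itself; the accuracy problem mentioned afterwards is a separate issue.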
wiki getting spammed
Please see https://github.com/tesseract-ocr/tesseract/wiki/4.0-with-LSTM/_history
The content of the 4.0-with-LSTM page has been deleted.
There should be some control or moderation of changes to the wiki.
I would also suggest using the https://github.com/apps/stale bot to prune out inactive/old issues, though the number of days of inactivity can be larger than the default of 60.
Target leakage in the steps mentioned under Data Preprocessing: the train/test split needs to happen before missing-value imputation. Otherwise you will have a bias in test/eval/serve.
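A minimal sketch of the fix, using made-up toy data and a plain mean imputer rather than any particular library: split first, then fit the imputer on the training rows only, so test-set statistics never leak into preprocessing.

```python
# Hypothetical toy feature column: None marks a missing value.
data = [1.0, None, 3.0, 2.0, 100.0, None]

# Split FIRST (here: first four rows train, last two test)...
train, test = data[:4], data[4:]

# ...then fit the imputer (a simple mean) on the training rows only.
train_observed = [x for x in train if x is not None]
train_mean = sum(train_observed) / len(train_observed)  # (1.0 + 3.0 + 2.0) / 3 = 2.0

train_imputed = [x if x is not None else train_mean for x in train]
test_imputed = [x if x is not None else train_mean for x in test]

# Had we imputed before splitting, the outlier 100.0 from the test rows
# would have inflated the mean to (1 + 3 + 2 + 100) / 4 = 26.5 -- that
# leakage is exactly the bias described above.
```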
From the test in #34944:
julia> REPLTests.fake_repl() do in, out, repl
           repltask = @async begin
               REPL.run_repl(repl)
           end
           write(in, "?;\n")
           write(in, '\x04')
           wait(repltask)
       end
search:
This should really help to keep track of papers read so far. I would love to fork the repo and keep checking the boxes in my local fork.
For example: have a look at this section. People fork this repo and check the boxes as they finish reading each section.
Considering the MNIST dataset, which has 5,923 instances of the 0 class in the training set, I'm a little confused about the following code for determining the relative errors of the SGD classification model:
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
(https://github.com/ageron/handson-ml/blob/master/03_classification.ipynb // In: 67)
Since using `axi
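As a sanity check of what axis=1 with keepdims=True does in that snippet, here is a small self-contained sketch (the matrix below is made up, not MNIST's actual confusion matrix):

```python
import numpy as np

# Hypothetical 3-class confusion matrix: rows = true class, cols = predicted.
conf_mx = np.array([[50,  2,  3],
                    [ 4, 40,  6],
                    [ 1,  5, 44]])

# axis=1 sums across each row (total true instances per class);
# keepdims=True keeps the result as shape (3, 1) so that the division
# broadcasts row-wise instead of column-wise.
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums

# Every row of the normalized matrix now sums to 1: each entry is the
# fraction of that true class's instances predicted as each label.
```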
I'm not sure whether XGBoost's model is well calibrated with softmax. It would be nice to have a doc with various experiments, including random forest, dart, etc.
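One way to probe calibration without any library dependency is a reliability table: bin the predicted probabilities and compare each bin's mean prediction with its observed positive rate (the probabilities and labels below are made up for illustration; for a well-calibrated model the two values in each row are close).

```python
# Hypothetical model outputs, not real XGBoost predictions.
probs = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85, 0.5, 0.55]
labels = [0, 0, 1, 1, 1, 1, 0, 1]


def reliability(probs, labels, n_bins=2):
    """Return (mean predicted prob, observed positive rate) per bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    table = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            frac_pos = sum(y for _, y in b) / len(b)
            table.append((mean_p, frac_pos))
    return table


table = reliability(probs, labels)
```

The same table computed on held-out softmax outputs would show whether a class's predicted probabilities track its empirical frequency.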
The AlexNet implementation in TensorFlow has an incomplete architecture: two convolutional layers are missing. This issue is in reference to the Python notebook mentioned below.
What's the ETA for updating the massively outdated documentation?
Please update all documents related to building CNTK from source with the latest CUDA dependencies indicated in CNTK.Common.props and CNTK.Cpp.props.
I tried to build from source, but it's a futile effort.
Hi,
First of all, thank you so much for a great library!
When there are overlapping matches in a Doc, EntityRuler prioritizes longer patterns over shorter ones, and if they are equal in length, the match occurring first in the Doc is chosen.
This is great, but it would be good if this behavior were explicitly stated in the documentation for clarity. It could be stated under init or call.
Which
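The priority rule described above can be sketched in plain Python (this mirrors the behavior as described, not spaCy's actual implementation; spans are (start, end) token-index pairs, end exclusive):

```python
def filter_spans(spans):
    """Keep non-overlapping spans: longer wins; ties go to the earlier start."""
    # Sort by length (descending), then by start position (ascending).
    ordered = sorted(spans, key=lambda s: (-(s[1] - s[0]), s[0]))
    chosen, seen_tokens = [], set()
    for start, end in ordered:
        # Accept a span only if none of its tokens are already claimed.
        if not any(t in seen_tokens for t in range(start, end)):
            chosen.append((start, end))
            seen_tokens.update(range(start, end))
    return sorted(chosen)


# (0, 3) beats the shorter overlapping (1, 2);
# (5, 7) beats the equal-length but later (6, 8).
result = filter_spans([(1, 2), (0, 3), (5, 7), (6, 8)])
print(result)  # [(0, 3), (5, 7)]
```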
     fun: 0.31677973007087407
     jac: array([-4.85498059e-05,  1.52435475e-09, -2.42237461e-08, ...,
                  5.36244893e-05,  5.82601644e-05,  7.84943560e-05])
 message: 'Max. number of function evaluations reached'
    nfev: 400
     nit: 28
  status: 3
 success: False
       x: array([ 0.00000000e+00,  7.62177373e-06, -1.21118731e-04, ...,
                 -1.12048523e+00, -1.0043010
Please make sure that this is a feature request. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:feature_template
System information