Deep learning
Deep learning is an AI function and subset of machine learning, used for processing large amounts of complex data.
Here are 22,497 public repositories matching this topic...
Please add the latest examples from the examples/ folder to docs/mkdocs.yml so that these examples appear on the homepage. With that many examples, some subsectioning may be necessary. See Homepage
Thank you!
```python
>>> a = torch.tensor([3+4j])
>>> torch.clamp(a, 2+3j, 1+5j)
tensor([(3.0000 + 4.0000j)], dtype=torch.complex128)
>>> torch.clamp(a, 1+5j, 2)
tensor([(1.0000 + 5.0000j)], dtype=torch.complex128)
>>> np.clip(a.numpy(), 1+5j, 2)
array([2.+0.j])
```
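The np.clip result is consistent with lexicographic ordering of complex values (compare real parts first, then imaginary), which torch.clamp evidently does not follow here. A minimal pure-Python sketch of that rule; `lex_key` and `clamp_complex` are hypothetical helper names, not part of either library:

```python
# Hedged sketch: clamp a complex number under lexicographic ordering
# (real part first, then imaginary part as tie-breaker), which matches
# the np.clip output shown above. Helper names are made up.
def lex_key(z):
    return (z.real, z.imag)

def clamp_complex(z, lo, hi):
    # clip(z, lo, hi) = min(max(z, lo), hi) under the lexicographic order
    return min(max(z, lo, key=lex_key), hi, key=lex_key)

print(clamp_complex(3+4j, 1+5j, 2+0j))  # agrees with array([2.+0.j]) above
```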
`trainable_variables = weights.values() + biases.values()` doesn't work.
Also, if I write `trainable_variables = list(weights.values()) + list(biases.values())`, I have to turn on `tf.enable_eager_execution()`, but the training result is wrong; accuracy is ar
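The first failure is plain Python behavior: `dict.values()` returns a view object that does not support `+`. A minimal sketch with made-up string stand-ins in place of real `tf.Variable` objects:

```python
# Hedged sketch: dict_values views don't support `+`, so convert to
# lists before concatenating. The dict contents are made-up stand-ins
# for tf.Variable objects.
weights = {"h1": "w1_var", "out": "w2_var"}
biases = {"b1": "b1_var", "out": "b2_var"}

try:
    trainable_variables = weights.values() + biases.values()
except TypeError:
    # the supported spelling:
    trainable_variables = list(weights.values()) + list(biases.values())
```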
Target leakage in the mentioned Data Preprocessing steps: the train/test split needs to happen before missing-value imputation. Otherwise you introduce a bias into test/eval/serve.
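A minimal NumPy sketch of the leak-free ordering, on made-up data; the point is that the imputation statistic is computed from the training split only and merely reused on the test split:

```python
import numpy as np

# Hedged sketch with made-up data: split BEFORE imputing, then compute
# the fill statistic on the training split only and reuse it on test.
data = np.array([1.0, 2.0, np.nan, 4.0, 5.0, np.nan, 7.0, 8.0])
train, test = data[:5], data[5:]          # split first
fill = np.nanmean(train)                  # statistic from train only
train_imputed = np.where(np.isnan(train), fill, train)
test_imputed = np.where(np.isnan(test), fill, test)   # no test leakage
```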
While opening README.md, download.py gave me this:
`UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 33476: character maps to <undefined>`
In case someone has the same problem, you may fix it by changing line 73 to:
`with open('README.md', encoding="utf8") as readme:`
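The failure is easy to reproduce: byte 0x9d occurs in the UTF-8 encoding of a curly right quote, and cp1252 (the 'charmap' codec Windows defaults to) has no character at 0x9d. A self-contained sketch; the file name and contents are made up:

```python
import os
import tempfile

# Hedged sketch: the UTF-8 encoding of a curly right quote (U+201D)
# contains byte 0x9d, which cp1252 ('charmap') cannot decode. Passing
# an explicit encoding sidesteps the platform-default codec entirely.
text = 'curly quotes: \u201cdeep learning\u201d'
path = os.path.join(tempfile.gettempdir(), 'readme_demo.md')
with open(path, 'w', encoding='utf8') as f:
    f.write(text)

with open(path, encoding='utf8') as readme:   # the suggested fix
    content = readme.read()

assert b'\x9d' in text.encode('utf8')  # the offending byte is really there
```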
This would really help keep track of papers read so far. I would love to fork the repo and keep checking the boxes in my local fork.
For example, have a look at this section: people fork this repo and check the boxes as they finish reading each section.
Considering the MNIST dataset, which has 5,923 instances of the 0 class in the training set, I'm a little confused about the following code for determining the relative errors of the SGD classification model:

```python
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
```

(https://github.com/ageron/handson-ml/blob/master/03_classification.ipynb // In: 67)
Since using `axi
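The two lines in question just row-normalize the confusion matrix. A sketch on a made-up 3-class matrix (the real MNIST matrix is 10x10, with row 0 summing to 5,923):

```python
import numpy as np

# Hedged sketch on a made-up 3-class confusion matrix: sum(axis=1,
# keepdims=True) yields each true class's instance count as a column
# vector, so the division turns counts into per-class error/hit rates
# (each row of norm_conf_mx sums to 1).
conf_mx = np.array([[50, 2, 3],
                    [4, 40, 6],
                    [1, 9, 90]])
row_sums = conf_mx.sum(axis=1, keepdims=True)   # shape (3, 1)
norm_conf_mx = conf_mx / row_sums
```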
The AlexNet implementation in TensorFlow has an incomplete architecture: 2 convolutional layers are missing. This issue is in reference to the Python notebook mentioned below.
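For reference when checking the notebook, the standard AlexNet stack has five convolutional layers (Krizhevsky et al., 2012). A sketch of their shapes as plain data; the layer names are hypothetical labels, not identifiers from the notebook:

```python
# Reference sketch: AlexNet's five convolutional layers as
# (name, filters, kernel_size, stride). Names are made-up labels;
# the shapes follow the original 2012 paper.
ALEXNET_CONV_LAYERS = [
    ("conv1", 96, 11, 4),
    ("conv2", 256, 5, 1),
    ("conv3", 384, 3, 1),
    ("conv4", 384, 3, 1),
    ("conv5", 256, 3, 1),
]
```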
@microsoft AI Team - Fantastic product, thank you!
PLEASE: better documentation of the source code in the C# projects, with a detailed summary for each field, property, method, and constructor.
When coding, the IntelliSense documentation is very handy, and I would really appreciate more detail there.
An example: `PreviousMinibatchEvaluationAverage`. I have no idea what its ac
I was going through the existing enhancement issues again and thought it'd be nice to collect ideas for spaCy plugins and related projects. There are always people in the community who are looking for new things to build, so here's some inspiration.
If you have questions about the projects I suggested,
This is starting to become a frequent question. If the scorer has not been downloaded via git-lfs, the error message `ValueError: Scorer initialization failed with error code 1` appears. This is not a very helpful message; one that actually describes the problem and how to solve it would prevent this question from being asked so many times.
This al
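One way the message could be improved, sketched as a hypothetical helper; `explain_scorer_error` is not part of the project's API:

```python
# Hypothetical sketch of a friendlier failure message: translate the
# opaque error into a hint about the likely cause (a git-lfs pointer
# file in place of the real scorer). Not part of any real API.
def explain_scorer_error(err):
    if "error code 1" in str(err):
        return ("Scorer initialization failed. The scorer file may be a "
                "git-lfs pointer rather than the actual file; try running "
                "'git lfs pull' and loading it again.")
    return str(err)
```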
`tf.function` makes invalid assumptions about arguments that are `Mapping` instances. In general, there is no requirement for `Mapping` instances to have constructors that accept `[(key, value)]` initializers, as assumed here. This leads to cryptic exceptions when used with perfectly valid `Mapping`s.
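The problem can be shown without TensorFlow: here is a perfectly valid `Mapping` whose constructor does not accept a `[(key, value)]` list. A sketch; `FrozenMap` is a made-up class:

```python
from collections.abc import Mapping

# Hedged sketch: FrozenMap (a made-up class) is a fully valid Mapping,
# but its constructor is keyword-only, so rebuilding it from items the
# way tf.function reportedly assumes raises TypeError.
class FrozenMap(Mapping):
    def __init__(self, **kwargs):
        self._data = dict(kwargs)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

m = FrozenMap(a=1, b=2)
try:
    rebuilt = type(m)(list(m.items()))   # the assumed [(key, value)] ctor
except TypeError:
    rebuilt = None                       # valid Mapping, invalid assumption
```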