natural-language-processing
Natural language processing (NLP) is a field of computer science concerned with the interactions between computers and human language. In the 1950s, Alan Turing published an article that proposed a criterion of intelligence, now called the Turing test. More modern techniques, such as deep learning, have produced strong results in language modeling, parsing, and many other natural-language tasks.
Here are 10,379 public repositories matching this topic...
Change tensor.data to tensor.detach(), per pytorch/pytorch#6990 (comment): tensor.detach() is more robust than tensor.data, because autograd can still detect in-place modifications made through the detached tensor.
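The distinction can be demonstrated in a few lines (a sketch assuming PyTorch is installed; the tensor names are illustrative):

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x.exp()           # exp() saves its output for the backward pass

d = y.detach()        # shares storage with y, but autograd still tracks versions
d.add_(1.0)           # in-place edit through the detached tensor

try:
    y.sum().backward()
    caught = False
except RuntimeError:  # autograd's version counter flags the mutation
    caught = True
```

With y.data in place of y.detach(), the same in-place edit would go unnoticed and backward() would silently compute gradients from the corrupted values.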
Adding a Dataset
- Name: Stanford Dogs dataset
- Description: The dataset covers 120 classes for a total of 20,580 images. You can find the dataset here: http://vision.stanford.edu/aditya86/ImageNetDogs/
- Paper: http://vision.stanford.edu/aditya86/ImageNetDogs/
- Data: [link to the GitHub repository or current dataset location](http://vision.stanford.edu/aditya86/ImageNetDogs/)
In gensim/models/fasttext.py:
model = FastText(
    vector_size=m.dim,
    window=m.ws,
    epochs=m.epoch,
    negative=m.neg,
    # FIXME: these next 2 lines read in unsupported FB FT modes (loss=3 softmax or loss=4 onevsall,
    # or model=3 supervised)
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor, which fails when it tries to load data from my compressed files.
Checking the Python files in NLTK with "python -m doctest" reveals that many tests are failing. In many cases, the failures are just cosmetic discrepancies between the expected and the actual output, such as a missing blank line or unescaped line breaks. Other cases may be real bugs.
If these failures could be avoided, it would become possible to improve CI by running "python -m doctest" each time.
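For readers unfamiliar with the mechanism: doctest compares the literal text of each docstring example against what the code actually prints, which is why purely cosmetic differences register as failures. A minimal illustration (the tokenize function is hypothetical, not from NLTK):

```python
import doctest

def tokenize(text):
    """Split text on whitespace.

    >>> tokenize("a b  c")
    ['a', 'b', 'c']
    """
    return text.split()

# Find and run the examples in tokenize's docstring, the same check
# that `python -m doctest` applies to every docstring in a file.
tests = doctest.DocTestFinder().find(
    tokenize, "tokenize", module=False, globs={"tokenize": tokenize}
)
results = doctest.DocTestRunner().run(tests[0])
```

Here `results.failed` is 0 because the printed repr matches the expected line exactly; a stray blank line in the docstring would make the same example fail.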
Add T9 decoder
Hey Hackers of this spoopy month!
Welcome to the Ciphey repo(s)!
This issue requires you to add a decoder.
This wiki section walks you through EVERYTHING you need to know, and we've added some more links at the bottom of this issue to detail more about the decoder.
https://github.com/Ciphey/Ciphey/wiki#adding-your-own-crackers--decoders
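For orientation, the scheme usually labeled T9 in puzzle contexts is multi-tap keypad encoding: each plaintext letter becomes its key digit repeated once per position on the key. A minimal standalone sketch of decoding it (not Ciphey's decoder interface, which the wiki above describes):

```python
# Classic multi-tap phone-keypad layout: digit -> letters on that key.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
    "0": " ",
}

def decode_multitap(ciphertext):
    """Decode space-separated key-press groups into letters.

    Each token is one digit repeated n times; the n-th letter on
    that key (wrapping around the key's letters) is the plaintext.
    """
    out = []
    for token in ciphertext.split():
        digit, presses = token[0], len(token)
        letters = KEYPAD[digit]
        out.append(letters[(presses - 1) % len(letters)])
    return "".join(out)
```

Example: `decode_multitap("44 33 555 555 666")` → `'hello'`.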
Some ideas for figures to add to the PPT
- Linear regression, single-layer neural network
- Multilayer Perceptron with hidden layer
- Backpropagation
- Batch Normalization and alternatives
- Computational Graphs
- Dropout
- CNN - padding, stride, pooling,...
- LeNet
- AlexNet
- VGG
- GoogLeNet
- ResNet
- DenseNet
- Memory Net
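The first bullet, linear regression as a single-layer network, is easy to make concrete. A dependency-free sketch (variable and function names are illustrative) training one weight and one bias by gradient descent on squared error:

```python
def train_linear(xs, ys, lr=0.05, epochs=2000):
    """Fit y = w*x + b, i.e. a single-layer net with one input and no activation."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the mean squared error 0.5 * (w*x + b - y)^2
        gw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum((w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Data drawn from y = 2x + 1; training should recover w ~ 2, b ~ 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
w, b = train_linear(xs, ys)
```

The same figure idea generalizes: adding a hidden layer with a nonlinearity between two such linear maps gives the multilayer perceptron in the next bullet.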
Created by Alan Turing
- Wikipedia
Feature request
We currently have 2 monocular depth estimation models in the library, namely DPT and GLPN.
It would be great to have a pipeline for this task, with the following API: