natural-language-processing
Natural language processing (NLP) is a field of computer science that studies the interaction between computers and human language. In the 1950s, Alan Turing published an article proposing a measure of machine intelligence, now called the Turing test. More modern techniques, such as deep learning, have produced strong results in language modeling, parsing, and many other natural-language tasks.
Here are 8,762 public repositories matching this topic...
Change tensor.data to tensor.detach(), per pytorch/pytorch#6990 (comment):
tensor.detach() is more robust than tensor.data.
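A minimal sketch of the difference (assuming a recent PyTorch; the tensor names here are just illustrative):

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x * 2

# tensor.data also returns the values without gradient tracking, but it
# bypasses autograd's version checks, so an in-place modification can
# silently corrupt gradients computed later.
# tensor.detach() shares the same storage yet stays visible to autograd's
# error checking, so misuse fails loudly instead of silently.
z = y.detach()

print(z.requires_grad)  # False: z is excluded from the autograd graph
```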
In gensim/models/fasttext.py:
model = FastText(
    vector_size=m.dim,
    window=m.ws,
    epochs=m.epoch,
    negative=m.neg,
    # FIXME: these next 2 lines read in unsupported FB FT modes (loss=3 softmax or loss=4 onevsall,
    # or model=3 supervi-
f-strings offer better readability and performance than str.format and %-formatting, so we should use them everywhere in our codebase unless there is a good reason to keep the older syntax.
NOTE FOR CONTRIBUTORS: To avoid large PRs and possible merge conflicts, do 1-3 modules per PR. Also, feel free to ignore the files located under datasets/*.
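The three styles side by side (the variable names are illustrative):

```python
name, score = "bert-base", 0.912

old_percent = "model %s scored %.2f" % (name, score)
old_format = "model {} scored {:.2f}".format(name, score)
new_fstring = f"model {name} scored {score:.2f}"  # preferred

# All three produce the same string; the f-string is evaluated inline,
# which is both easier to read and slightly faster.
assert old_percent == old_format == new_fstring == "model bert-base scored 0.91"
```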
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor directly, which fails when it tries to load data from my compressed files.
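One possible reader-side workaround is to dispatch on the file extension so the same code path handles plain and gzip-compressed files. This is only a sketch: `smart_open` here is a hypothetical helper, not part of the AllenNLP API.

```python
import gzip

def smart_open(path: str, mode: str = "rt"):
    # Hypothetical helper: .gz files go through gzip.open, everything
    # else falls back to the built-in open. Mode "rt" yields decoded
    # text lines in both cases, so callers can iterate uniformly.
    if path.endswith(".gz"):
        return gzip.open(path, mode)
    return open(path, mode)

# Usage: the same loop works for data.jsonl and data.jsonl.gz.
# with smart_open("data.jsonl.gz") as f:
#     for line in f:
#         ...
```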
Hello!
I'd like to discuss a potential enhancement of NLTKWordTokenizer and TreebankWordTokenizer. For those unaware, the former is the most frequently used tokenizer, and is what the word_tokenize function relies on. It is also based on the latter class, TreebankWordTokenizer.
An example usage, as found in the documentation:
>>> from nltk.tokenize import NLTKWordTokenizer
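For context, a small runnable version of that example (assumes nltk is installed; NLTKWordTokenizer needs no corpus downloads, unlike word_tokenize, which also requires the punkt sentence model):

```python
from nltk.tokenize import NLTKWordTokenizer

# NLTKWordTokenizer applies Treebank-style rules: punctuation is split
# off into its own tokens, and contractions are split in two
# ("don't" -> "do", "n't").
tokenizer = NLTKWordTokenizer()
tokens = tokenizer.tokenize("Good muffins cost $3.88, don't they?")
print(tokens)
```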
Add T9 decoder
Hey Hackers of this spoopy month!
Welcome to the Ciphey repo(s)!
This issue requires you to add a decoder.
This wiki section walks you through everything you need to know, and we've added some more links at the bottom of this issue with further details about decoders.
https://github.com/Ciphey/Ciphey/wiki#adding-your-own-crackers--decoders
Environment info
transformers version: 4.11.2