language-model
Here are 575 public repositories matching this topic...
chooses 15% of tokens
The paper states:
Instead, the training data generator chooses 15% of tokens at random, e.g., in the sentence my dog is hairy it chooses hairy.
This implies that exactly 15% of the tokens are chosen.
However, in https://github.com/codertimo/BERT-pytorch/blob/master/bert_pytorch/dataset/dataset.py#L68, each token independently has a 15% chance of going through the follow-up masking procedure, so the actual fraction of chosen tokens varies around 15% rather than being fixed.
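The difference between the two readings can be sketched as follows (a minimal illustration; the function names are hypothetical and not taken from BERT-pytorch):

```python
import random

def choose_exact_fraction(tokens, frac=0.15):
    """Pick exactly floor(frac * len(tokens)) positions at random,
    as the paper's wording suggests."""
    k = int(len(tokens) * frac)
    return set(random.sample(range(len(tokens)), k))

def choose_per_token(tokens, prob=0.15):
    """Give each token an independent 15% chance of being chosen,
    as the per-token check in dataset.py does."""
    return {i for i in range(len(tokens)) if random.random() < prob}

tokens = ["my", "dog", "is", "hairy"] * 250  # 1000 tokens
print(len(choose_exact_fraction(tokens)))  # always exactly 150
print(len(choose_per_token(tokens)))       # varies around 150
```

In expectation both strategies mask 15% of tokens, but only the first guarantees it for every sentence.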
PositionalEmbedding
Question
Hi, I have been experimenting with the QA capabilities of Haystack so far, and I was wondering if it is possible for the model to generate paragraph-like contexts.
Additional context
So far, when a question is asked, the model outputs an answer and the context the answer can be found in. The context output by the model is oftentimes a fragment of a sentence or a fragment of a paragraph.
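One generic way to obtain paragraph-like contexts is to post-process the reader's answer offsets back to paragraph boundaries in the source document. The sketch below is independent of Haystack's API; the function name and blank-line paragraph convention are assumptions for illustration:

```python
def paragraph_context(document: str, answer_start: int, answer_end: int) -> str:
    """Expand a character span to its enclosing paragraph,
    assuming paragraphs are separated by blank lines."""
    start = document.rfind("\n\n", 0, answer_start)
    start = 0 if start == -1 else start + 2  # skip the separator itself
    end = document.find("\n\n", answer_end)
    end = len(document) if end == -1 else end
    return document[start:end]

doc = "Intro paragraph.\n\nThe quick brown fox jumps over the lazy dog.\n\nOutro."
span = doc.index("fox")
print(paragraph_context(doc, span, span + 3))
# → The quick brown fox jumps over the lazy dog.
```

If the reader returns the answer's character offsets within the retrieved document, this kind of expansion can replace the fragmentary context with the full surrounding paragraph.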
Currently we have a mixture of negatively and positively formulated arguments, e.g. no_cuda and training here: https://github.com/huggingface/transformers/blob/0054a48cdd64e7309184a64b399ab2c58d75d4e5/src/transformers/benchmark/benchmark_args_utils.py#L61. We should change all arguments to be positively formulated, e.g. from no_cuda to cuda. These arguments should
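A positively formulated flag can be sketched with plain argparse (a minimal illustration, not the actual transformers benchmark arguments; the flag names are hypothetical):

```python
import argparse

# Sketch: expose a positively formulated --cuda flag while keeping a
# --no-cuda escape hatch, instead of a negative-only no_cuda argument.
parser = argparse.ArgumentParser()
parser.add_argument("--cuda", dest="cuda", action="store_true", default=True,
                    help="Run on GPU (default).")
parser.add_argument("--no-cuda", dest="cuda", action="store_false",
                    help="Force CPU execution.")

args = parser.parse_args(["--no-cuda"])
print(args.cuda)  # False
```

Both flags write to the same dest, so downstream code reads the positive args.cuda everywhere and the negated form survives only at the command line for backward compatibility.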