openai
Here are 159 public repositories matching this topic...
It would be great to have instructions on how to train a language model from scratch - not just loading the paper's model.
Hi,
When we try to tokenize the following sentence with spaCy:

import spacy
a = spacy.load('en_core_web_lg')
doc = a("I like the link http://www.idph.iowa.gov/ohds/oral-health-center/coordinator")
list(doc)

we get:

[I, like, the, link, http://www.idph.iowa.gov, /, ohds, /, oral, -, health, -, center, /, coordinator]

But if we use the spaCy transformer tokenizer:
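To make the reported behavior concrete, here is a toy illustration of what "keeping a URL as one token" means. This is a hypothetical sketch using a plain regex, not spaCy's actual tokenizer (spaCy's real URL handling lives in its tokenizer exception rules); it only shows the expected outcome the issue is asking about.

```python
import re

# Hypothetical sketch: match URLs first so they survive as single tokens,
# then whitespace-split the remaining text. Not spaCy's implementation.
URL_RE = re.compile(r"https?://\S+")

def tokenize(text):
    tokens = []
    pos = 0
    for m in URL_RE.finditer(text):
        tokens.extend(text[pos:m.start()].split())  # words before the URL
        tokens.append(m.group())                    # the URL, kept whole
        pos = m.end()
    tokens.extend(text[pos:].split())               # trailing words
    return tokens
</```>

With this sketch, the sentence from the issue yields five tokens, the last being the intact URL, which matches what `en_core_web_lg` mostly does (it only splits off the path) and what one would hope the transformer tokenizer preserves as well.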
At the moment, if the digital twin model does not have the plugins built, we cannot communicate with them, and we get the following error:
Timeout communicating with flight control plugin.
It would be helpful if the error message reminded the user to check that the plugin is built, or, better yet, if the check were performed automatically.
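A pre-flight check along these lines could turn the opaque timeout into an actionable error. Everything here is a hypothetical sketch: the function name, the plugin path layout, and the `.so` extension are assumptions, not the project's actual API.

```python
import os

def check_plugin_built(plugin_name, build_dir="build/plugins"):
    """Hypothetical sketch: verify the plugin binary exists before trying
    to communicate with it, so the failure mode is explicit. The path
    layout and naming convention are assumptions."""
    path = os.path.join(build_dir, plugin_name + ".so")
    if not os.path.exists(path):
        raise RuntimeError(
            "Timeout communicating with %s plugin. Expected a built plugin "
            "at %r - did you forget to build the plugins?" % (plugin_name, path)
        )
    return path
```

Running such a check at startup, before opening any communication channel, would let the error name the missing artifact instead of surfacing only as a timeout.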
As we do for other projects (example), we should add a CONTRIBUTING file. It would help external contributors understand how to propose changes.
https://github.com/minimaxir/gpt-2-simple/blob/ca6bc61d958fd4c474af9a412ace27279b88dd90/gpt_2_simple/src/encoder.py#L8
According to the docs, the lru_cache decorator uses memoization to speed up a function when it is called repeatedly with the same arguments.
But this function doesn't take any arguments...
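For context, lru_cache does still do something useful on a zero-argument function: it caches the single no-argument call, so an expensive body runs only once. A small sketch (the function body is a stand-in, not the real byte-to-unicode table from encoder.py):

```python
from functools import lru_cache

CALLS = {"n": 0}  # counts how many times the body actually executes

@lru_cache()  # for a zero-arg function this caches the one () call
def build_table():
    CALLS["n"] += 1
    return {i: chr(i) for i in range(256)}  # stand-in for the real table

a = build_table()
b = build_table()
assert a is b        # same cached object returned each time
assert CALLS["n"] == 1  # the body ran exactly once
```

So the decorator is not wrong here, just arguably overkill: a module-level constant computed once at import time would achieve the same effect more directly.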