nlu
Here are 361 public repositories matching this topic...
MyST is a Sphinx markdown parser that provides more features than recommonmark, the key ones being native support for admonitions, directives, and references without having to use eval_rst.
Note: when using rst directives that generate rst output (autodoc etc.), you will still need to use eval_rst in MyST so that the output of the directive is parsed as rst.
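As a sketch of that note (the module name `mypackage` is hypothetical, and recent MyST versions spell the directive `eval-rst` with a hyphen), an autodoc directive can be wrapped in MyST's eval-rst block inside a markdown source file so its rst output is still parsed as rst:

````markdown
```{eval-rst}
.. automodule:: mypackage
   :members:
```
````

Everything outside such a block is parsed as MyST markdown, so only the directives that emit rst need this escape hatch.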
Description
While using tokenizers.create with the model and vocab files for a custom corpus, the call throws an error and fails to generate the BERT vocab file.
Error Message
ValueError: Mismatch vocabulary! All special tokens specified must be control tokens in the sentencepiece vocabulary.
To Reproduce
from gluonnlp.data import tokenizers
tokenizers.create('spm', model_p