bert
Here are 716 public repositories matching this topic...
Prerequisites
Please fill in by replacing [ ] with [x].
- [ ] Are you running the latest bert-as-service?
- [ ] Did you follow the installation and the usage instructions in README.md?
- [ ] Did you check the [FAQ list in README.md](https://github.com/hanxiao/bert-as-se
Excuse me, the comment at https://github.com/graykode/nlp-tutorial/blob/master/1-1.NNLM/NNLM-Torch.py#L50 may be wrong. It should be `X = X.view(-1, n_step * m) # [batch_size, n_step * m]`.
Sorry for disturbing you.
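For context, the reshape in question flattens each batch element's `n_step` word embeddings into a single vector. A minimal sketch using NumPy's `reshape` as a stand-in for torch's `view` (the dimension values here are illustrative, not from the tutorial):

```python
import numpy as np

# Illustrative dimensions: 8 sentences, 5 context words, 50-dim embeddings
batch_size, n_step, m = 8, 5, 50
X = np.zeros((batch_size, n_step, m))

# Equivalent of X.view(-1, n_step * m) in the tutorial code:
X = X.reshape(-1, n_step * m)
print(X.shape)  # (8, 250) -> [batch_size, n_step * m], as the corrected comment says
```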
PositionalEmbedding
The position embedding in BERT is not the same as the one in the original Transformer. Why not use the form used in BERT?
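For reference, the original Transformer uses fixed sinusoidal position encodings, while BERT learns its position embeddings as a trainable table. A minimal sketch of the sinusoidal form (dimensions are illustrative):

```python
import numpy as np

def sinusoidal_position_encoding(max_len, d_model):
    """Fixed (non-learned) position encodings from 'Attention Is All You Need'."""
    pos = np.arange(max_len)[:, None]        # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]     # (1, d_model/2)
    angles = pos / np.power(10000, 2 * i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)             # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)             # odd dimensions: cosine
    return pe

pe = sinusoidal_position_encoding(512, 64)
# BERT instead uses a trainable lookup table, e.g. nn.Embedding(512, 64) in PyTorch.
```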
Spacy has customizable word-level tokenizers with rules for multiple languages. I think porting that to Rust would add nicely to this package. Having customizable, uniform word-level tokenization across platforms (client web, server) and languages would be beneficial. Currently, I don't know any clean way to write bindings for spaCy's Cython, or whether it's even possible.
Spacy Tokenizer Code
https:
Rust documentation
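To illustrate what rule-based word-level tokenization involves, here is a toy sketch in Python; the rules and exception list are invented for illustration and are not spaCy's actual tables. The general design: split on whitespace, peel language-specific prefixes/suffixes (punctuation) off each chunk, and keep listed exceptions such as contractions as fixed splits.

```python
# Toy rules, loosely inspired by spaCy's tokenizer design (illustrative only)
PREFIXES = ("(", '"', "'")
SUFFIXES = (")", '"', "'", ",", ".", "!", "?")
EXCEPTIONS = {"don't": ["do", "n't"], "can't": ["ca", "n't"]}

def tokenize(text):
    tokens = []
    for chunk in text.split():
        prefix_toks, suffix_toks = [], []
        while chunk and chunk.startswith(PREFIXES):   # peel leading punctuation
            prefix_toks.append(chunk[0])
            chunk = chunk[1:]
        while chunk and chunk.endswith(SUFFIXES):     # peel trailing punctuation
            suffix_toks.insert(0, chunk[-1])
            chunk = chunk[:-1]
        if chunk in EXCEPTIONS:                       # contractions split by rule
            tokens += prefix_toks + EXCEPTIONS[chunk] + suffix_toks
        elif chunk:
            tokens += prefix_toks + [chunk] + suffix_toks
        else:
            tokens += prefix_toks + suffix_toks
    return tokens

print(tokenize('She said, "don\'t stop."'))
# ['She', 'said', ',', '"', 'do', "n't", 'stop', '.', '"']
```

A Rust port would mainly be these two peel loops plus per-language rule tables, which is why uniform behavior across platforms seems feasible.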
Code keyword in the README file
Recently, while looking at the model, I noticed that README.md contains code in which a variable is named `str`, which is a Python built-in name, as shown below:

```python
import time
from bert_base.client import BertClient

with BertClient(show_server_config=False, check_version=False, check_length=False, mode='NER') as bc:
    start_t = time.perf_counter()
    str = '1月24日,新华社对外发布了中央对雄安新区的指导意见,洋洋洒洒1.2万多字,17次提到北京,4次提到天津,信息量很大,其实也回答了人们关心的很多问题。'
    rst = bc.encode([str, str])
    pri
```
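Shadowing the built-in `str` runs fine, but makes the `str()` constructor unreachable afterwards in the same scope. A minimal illustration (renaming the variable, e.g. to `text`, avoids the problem):

```python
str = "1月24日, ..."        # shadows the built-in type

try:
    str(123)                # the built-in constructor is no longer reachable
except TypeError as e:
    print("error:", e)      # 'str' object is not callable

del str                     # remove the shadowing binding
text = "1月24日, ..."       # a safer variable name
assert str(123) == "123"    # the built-in works again
```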
Looks like spacy 2.1 --> 2.2 has changed the way lemmatizer objects are built. See the Stack Overflow answer for details.
I can update the library to account for this migration. I have a fork that I can create a pull request from. Let me know.
Steps to reproduce the behavior:
Run
"fr
- Add a CI test for building the documentation (do not ignore warnings, and add a spellcheck).
- Fix docstrings with incorrect/inconsistent Sphinx format. Currently, such issues are only treated as warnings in the docs build.
On the home page of the website https://nlp.johnsnowlabs.com/ I read "Full Python, Scala, and Java support".
Unfortunately, I have now been trying for 3 days to use Spark NLP in Java without any success.
- I cannot find a Java API (Javadoc) for the framework.
- Not even a single example in Java is available.
- I do not know Scala, so I do not know how to convert things like:
val testData = spark.createDataFrame(
Currently, we are not logging macro/micro averages to TensorBoard: they appeared strangely in the interface (picture below), so the logging was removed.
Add macro/micro averages back to TensorBoard.
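For reference (a toy implementation, not this project's code): macro averaging computes the metric per class and then averages the per-class scores, while micro averaging pools all decisions before dividing. Sketched for precision:

```python
from collections import Counter

def precision_scores(y_true, y_pred):
    """Per-class, macro- and micro-averaged precision (toy implementation)."""
    classes = sorted(set(y_true) | set(y_pred))
    tp, pred = Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        pred[p] += 1
        if t == p:
            tp[p] += 1
    per_class = {c: tp[c] / pred[c] if pred[c] else 0.0 for c in classes}
    macro = sum(per_class.values()) / len(classes)  # average of per-class scores
    micro = sum(tp.values()) / sum(pred.values())   # pool all predictions first
    return per_class, macro, micro

per_class, macro, micro = precision_scores(
    ["a", "a", "b", "b", "b", "c"],
    ["a", "b", "b", "b", "b", "c"],
)
# per_class: {'a': 1.0, 'b': 0.75, 'c': 1.0}; macro ~ 0.917, micro ~ 0.833
```

These are the scalar values that would be written to TensorBoard under separate tags, e.g. one scalar per class plus `macro` and `micro` tags.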
When the code is doing the evaluation, there is an error when returning the evaluation result: `result = estimator.evaluate(input_fn=eval_input_fn)`. The detailed error is probably related to the confusion matrix.
It says: `TypeError: eval_metric_ops[confusion_matrix] must be Operation or Tensor, given: <tf.Variable 'total_confusion_matrix:0' shape=(12, 12) dtype=float64_ref>`
my tensorflo
If I want to use both of them, how should I modify the code in aen.py? Thanks a lot.
Hey! I think it would be useful to have a more detailed explanation of:
- what the dataset should look like for performing NER, similar to the fine-tuning example. The [NER sample](https://github.com/deepset-ai/FARM/blob/97b0211a37ea7c7d64b4602f0e21b65428b2bd76/t
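For illustration, this is the common CoNLL-style layout for token-level NER data (one token and its tag per line, blank lines separating sentences); it is a typical convention, not necessarily FARM's exact expected format. A minimal parser:

```python
# Parse CoNLL-style NER data: "token TAG" per line, blank line between sentences.
SAMPLE = """\
John B-PER
lives O
in O
Berlin B-LOC

EU B-ORG
rejects O
"""

def parse_conll(text):
    sentences, current = [], []
    for line in text.splitlines():
        if not line.strip():           # blank line ends the current sentence
            if current:
                sentences.append(current)
                current = []
            continue
        token, tag = line.split()
        current.append((token, tag))
    if current:                        # flush the last sentence
        sentences.append(current)
    return sentences

sentences = parse_conll(SAMPLE)
# [[('John', 'B-PER'), ('lives', 'O'), ('in', 'O'), ('Berlin', 'B-LOC')],
#  [('EU', 'B-ORG'), ('rejects', 'O')]]
```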

Tagging this as a Good First issue if anyone's interested.