Updated May 19, 2020 - Python
machine-translation
Here are 451 public repositories matching this topic...
From the code (input_pipeline.py) I can see that the `ParallelTextInputPipeline` automatically generates the SEQUENCE_START and SEQUENCE_END tokens (which means the input text does not need to contain those special tokens).
Does `ParallelTextInputPipeline` also perform **padding**?
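To make the question concrete, here is a minimal sketch of what such a pipeline step typically does; the function names and token strings below are illustrative assumptions, not the actual seq2seq API:

```python
# Hypothetical sketch: add SEQUENCE_START/SEQUENCE_END to each sequence
# (so raw input text does not need them) and pad a batch to equal length.
SEQUENCE_START, SEQUENCE_END, PAD = "<s>", "</s>", "<pad>"

def wrap(tokens):
    # Prepend/append the special tokens automatically.
    return [SEQUENCE_START] + tokens + [SEQUENCE_END]

def pad_batch(batch):
    # Pad every sequence to the length of the longest one in the batch.
    max_len = max(len(seq) for seq in batch)
    return [seq + [PAD] * (max_len - len(seq)) for seq in batch]

batch = [wrap("hello world".split()), wrap("hi".split())]
padded = pad_batch(batch)
print(padded[1])  # ['<s>', 'hi', '</s>', '<pad>']
```

Whether the real pipeline performs the padding step as well is exactly what the question above asks.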
When positional encoding is disabled, the embedding scaling is also disabled, even though the two operations are independent:
https://github.com/OpenNMT/OpenNMT-py/blob/1.0.0/onmt/modules/embeddings.py#L48
As a consequence, Transformer models with relative position representations do not follow the reference implementation, which scales the embeddings [by default](https://github.com/tensorflow/tensor
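A minimal sketch of the independent behavior the report argues for; the flag name here is an assumption for illustration, not OpenNMT-py's actual option:

```python
import math

# Sketch: scale embeddings by sqrt(d_model) regardless of whether
# positional encoding is enabled, since the two operations are independent.
d_model = 512

def embed(vec, use_positional_encoding):
    scaled = [x * math.sqrt(d_model) for x in vec]  # always scale
    if use_positional_encoding:
        pass  # sinusoidal positions would be added here; omitted for brevity
    return scaled

print(embed([1.0], use_positional_encoding=False)[0] == math.sqrt(512))  # True
```

The issue is that the linked code couples the scaling to the positional-encoding flag, so disabling one silently disables the other.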
First of all, thanks for sharing. When running the basic_seq2seq code, the following error occurs. My TensorFlow is the latest 1.3.0 GPU version, and I ran the code without any modifications. Could you please take a look?
Traceback (most recent call last):
  File "D:\Workspaces\Eclipse\PythonLearn1\src\seq2seq_init_.py", line 227, in <module>
    num_layers)
  File "D:\Workspaces\Eclipse\PythonLearn1\src\seq2seq_init_.py", line 189, in seq2seq_model
    decoder_input)
  File "D:\Workspaces\Eclipse\PythonLearn1\sr
Updated Feb 19, 2020 - Lua
Documentation
The current documentation in the README explains how to install the toolkit and how to run the examples. However, I don't think this is enough for users who want to modify the existing recipes or create their own. In that case, one needs to understand what run.sh does step by step, but docs for that are missing at the moment. It would be great if we provided documentation for:
Updated May 20, 2020 - Python
- Add a CI test for building the documentation (do not ignore warnings, and add a spellcheck).
- Fix docstrings with incorrect or inconsistent Sphinx formatting. Currently, such issues are only treated as warnings in the docs build.
Updated May 3, 2020 - TeX
Updated Feb 27, 2020 - Python
Hi,
Is it possible to add benchmarks of some models to the documentation for comparison purposes?
Run time would also be helpful. For example, 1M iterations takes a weekend on a GTX 1080.
fastText updated their embeddings. The link you have in the README no longer points to an existing page.
The alignment matrices you have are tied to the fastText embeddings that could still be obtained in version 0.2.0 (19 Dec 2018).
Links to those embeddings can be found in the file [pretrain
Updated May 20, 2020 - Python
It would be cool to add Google Translate support, because Google Translate is most likely better than Microsoft Translate.
It might also be a good idea to use Google Apps Script, because it offers free, unlimited access to Google Translate. Here is an [example article](https://techstreams.github.io/2016/01/07/translation-automation-with-wor
Updated Dec 20, 2019 - HTML
Updated May 20, 2020 - TeX
Updated Mar 1, 2020 - Python
Updated Apr 20, 2020 - Python
Updated Feb 10, 2020 - Python
- TransformerDecoder.forward: where does `self.training` come from?
  https://github.com/asyml/texar-pytorch/blob/d17d502b50da1d95cb70435ed21c6603370ce76d/texar/torch/modules/decoders/transformer_decoders.py#L448-L449
- All arguments should state their types explicitly in the docstring. E.g., what is the type of `infer_mode`? The [method signature](https://texar-pytorch.readthedocs.
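On the first question: `self.training` is not defined in `TransformerDecoder` itself; it is an attribute every `torch.nn.Module` instance carries, toggled by `.train()` / `.eval()`. A minimal re-creation of that behavior (sketched without importing torch):

```python
# Sketch of the nn.Module mechanism that supplies `self.training`.
class Module:
    def __init__(self):
        self.training = True  # nn.Module defaults to training mode

    def train(self, mode=True):
        self.training = mode
        return self

    def eval(self):
        return self.train(False)

class TransformerDecoderSketch(Module):
    def forward(self, x):
        # Dropout-style branches in forward() typically check this flag.
        return x if self.training else x

m = TransformerDecoderSketch()
print(m.training)  # True
m.eval()
print(m.training)  # False
```

So the flag reaches `forward` through inheritance, not through an explicit assignment in the decoder class.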
Updated Mar 7, 2020 - Python
Updated May 12, 2020 - Python
Updated May 14, 2020 - Python
Based on this line of code:
https://github.com/ufal/neuralmonkey/blob/master/neuralmonkey/decoders/output_projection.py#L125
The current implementation isn't flexible enough: if we train a "submodel" (e.g., a decoder without attention, i.e., not containing any ctx_tensors), we cannot use the trained variables to initialize a model with attention defined, because the size of the dense layer's input matrix become
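The shape problem can be illustrated with a small sketch; the sizes below are assumptions for illustration only. The output projection is a dense layer over the concatenation of the decoder state and any attention context tensors, so its weight shape changes as soon as ctx_tensors are added:

```python
# Hypothetical sizes: decoder state 128, one attention context 64, vocab 1000.
rnn_size, ctx_size, vocab = 128, 64, 1000

# Dense-layer weight shapes: input dim is the concatenated feature size.
shape_without_attention = (rnn_size, vocab)             # trained submodel
shape_with_attention = (rnn_size + ctx_size, vocab)     # model with attention

# A checkpoint trained without attention cannot initialize the larger matrix.
print(shape_without_attention == shape_with_attention)  # False
```

This is why variables from the attention-free submodel cannot simply be restored into the attention-equipped model.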
Updated Oct 16, 2019
Updated Aug 23, 2017 - Python
embedding dropout
Make clear in the documentation and in small.yaml that the encoder (and decoder?) embeddings section takes a separate dropout argument, which defaults to the encoder's dropout argument if missing.
I will eventually send a pull request myself; I am just documenting all the issues I find along the way while setting Joey up.
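The defaulting behavior described above can be sketched as follows; the config keys are illustrative assumptions, not necessarily the exact small.yaml schema:

```python
# Sketch: a per-embeddings dropout key that falls back to the
# encoder-level dropout when it is missing from the config.
encoder_cfg = {"dropout": 0.3, "embeddings": {"embedding_dim": 256}}

emb_dropout = encoder_cfg["embeddings"].get("dropout", encoder_cfg["dropout"])
print(emb_dropout)  # 0.3: no embeddings-level key, so the encoder value wins
```

Documenting this fallback in small.yaml would make the behavior visible without reading the source.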
Updated May 5, 2020 - Python
Updated May 4, 2020 - Python
Description
I am wondering when "Assessing the Factual Accuracy of Generated Text" in https://github.com/tensorflow/tensor2tensor/tree/master/tensor2tensor/data_generators/wikifact will be publicly available, since it has already been 6 months. @bengoodrich