# machine-translation

Here are 491 public repositories matching this topic...

micheletufano commented Dec 11, 2017

From the code (input_pipeline.py) I can see that the ParallelTextInputPipeline automatically generates the SEQUENCE_START and SEQUENCE_END tokens (which means that the input text does not need to have those special tokens).

Does ParallelTextInputPipeline also perform **padding**?
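The two preprocessing steps the question distinguishes can be sketched in plain Python (the helper names and the `<s>`/`</s>`/`<pad>` token strings are illustrative assumptions, not the seq2seq library's API): wrapping each sentence in start/end markers, and right-padding a batch to a common length.

```python
# Hypothetical sketch of a text input pipeline's two preprocessing steps;
# token strings and function names are assumptions for illustration.
SEQUENCE_START = "<s>"
SEQUENCE_END = "</s>"
PAD = "<pad>"

def add_special_tokens(tokens):
    """Wrap a token list in SEQUENCE_START / SEQUENCE_END markers."""
    return [SEQUENCE_START] + tokens + [SEQUENCE_END]

def pad_batch(batch):
    """Right-pad every sequence in the batch to the longest length."""
    max_len = max(len(seq) for seq in batch)
    return [seq + [PAD] * (max_len - len(seq)) for seq in batch]

batch = [add_special_tokens(s.split()) for s in ["hello world", "hi"]]
padded = pad_batch(batch)
# After padding, every row has the same length:
# [['<s>', 'hello', 'world', '</s>'], ['<s>', 'hi', '</s>', '<pad>']]
```

If the pipeline adds the markers itself, the raw input files should contain neither, or the model would see them doubled.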

This repo contains the source code from my personal column (https://zhuanlan.zhihu.com/zhaoyeyu), implemented in Python 3.6. It includes hands-on Natural Language Processing and Computer Vision projects, such as text generation, machine translation, and deep convolutional GANs.

  • Updated Feb 29, 2020
  • Jupyter Notebook
varisd commented Aug 7, 2018

Based on this line of code:
https://github.com/ufal/neuralmonkey/blob/master/neuralmonkey/decoders/output_projection.py#L125

The current implementation isn't flexible enough: if we train a "submodel" (e.g. a decoder without attention, containing no ctx_tensors), we cannot use the trained variables to initialize a model with attention defined, because the input size of the dense layer matrix becomes different.
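The shape mismatch described above can be illustrated with a small NumPy sketch (the dimensions are made up for illustration): concatenating context tensors to the decoder state grows the input dimension of the output projection, so a checkpoint saved without attention cannot directly initialize the attention variant.

```python
import numpy as np

# Hypothetical dimensions, for illustration only.
state_dim, ctx_dim, output_dim = 4, 3, 5

# Decoder trained WITHOUT attention: the projection sees only the state.
w_no_attn = np.zeros((state_dim, output_dim))

# Decoder WITH attention: state and context vectors are concatenated
# before the projection, so the weight matrix needs more input rows.
w_with_attn = np.zeros((state_dim + ctx_dim, output_dim))

# Incompatible variable shapes -> the attention-free checkpoint cannot
# initialize the attention model's projection directly.
print(w_no_attn.shape, w_with_attn.shape)  # (4, 5) (7, 5)
```

One workaround in this situation is to restore only the rows of the larger matrix that correspond to the state, leaving the context rows freshly initialized; whether that is acceptable depends on the model.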
