mt
Here are 27 public repositories matching this topic...
Documentation of the MMT CLI is missing a lot of small details. Things like:
- For this command to work:
./mmt import -e ENGINE -p SOURCE_FILE TARGET_FILE
SOURCE_FILE and TARGET_FILE need to end with language IDs. I tried importing files with a .txt extension, and the number of units reported by
curl -X GET http://localhost:8045/memories/imports/00000000-0000-0000-0000-000000000001
was about one tenth of what it should have been.
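For anyone hitting the same thing, a minimal sketch of the renaming step plus a status check, assuming the importer infers languages from the file suffixes; the helper names are mine, and the layout of the JSON response is an assumption:

    import shutil
    import requests

    def prepare_corpus(src_txt, tgt_txt, name="corpus", src_lang="en", tgt_lang="it"):
        # Copy plain .txt corpora to names ending in language IDs,
        # e.g. corpus.en / corpus.it, before running ./mmt import.
        src_path, tgt_path = name + "." + src_lang, name + "." + tgt_lang
        shutil.copyfile(src_txt, src_path)
        shutil.copyfile(tgt_txt, tgt_path)
        return src_path, tgt_path

    def import_status(job_id, host="http://localhost:8045"):
        # Fetch the raw status of an import job; which fields hold the
        # unit count is an assumption, so inspect the response by hand.
        resp = requests.get(host + "/memories/imports/" + job_id)
        resp.raise_for_status()
        return resp.json()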
This could be visualized in another table where, for example, the confidence of the system across different documents could be compared and contrasted.
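As a rough sketch of what that table could look like, built with pandas from hypothetical per-document confidence scores:

    import pandas as pd

    scores = pd.DataFrame([
        {"document": "doc1", "system": "baseline", "confidence": 0.71},
        {"document": "doc1", "system": "adapted", "confidence": 0.83},
        {"document": "doc2", "system": "baseline", "confidence": 0.64},
        {"document": "doc2", "system": "adapted", "confidence": 0.69},
    ])

    # One row per document, one column per system, for side-by-side comparison:
    print(scores.pivot(index="document", columns="system", values="confidence"))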
Document translation doesn't currently support the old Windows formats. Perhaps there's a quick Python plugin we could use to convert back and forth on the fly. This would work best as an opt-in feature, so that the extra library doesn't have to be installed.
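A rough sketch of the opt-in hook I have in mind; the converter module and its convert() function are hypothetical placeholders, not an existing package:

    # Optional dependency: only needed when legacy-format support is enabled.
    try:
        import legacy_doc_converter  # hypothetical converter package
        HAVE_LEGACY = True
    except ImportError:
        HAVE_LEGACY = False

    def to_supported_format(path):
        # Convert an old Windows document on the fly, if support is enabled.
        if not HAVE_LEGACY:
            raise RuntimeError(
                "Legacy Windows formats require the optional converter package")
        return legacy_doc_converter.convert(path)  # hypothetical API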
migrated from https://sourceforge.net/p/apertium/tickets/7/
Remove file suffixes
Distro packaging guidelines prohibit coding-language suffixes on binaries, so we might as well remove them from the repo itself to keep things consistent.
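If it helps, a one-off migration along these lines could do the renaming; the bin/ path and the suffix list are assumptions about this repo's layout:

    import os

    BIN_DIR = "bin"
    SUFFIXES = (".sh", ".py", ".pl")

    # Strip language suffixes from executables, e.g. foo.sh -> foo.
    for name in os.listdir(BIN_DIR):
        base, ext = os.path.splitext(name)
        if ext in SUFFIXES:
            os.rename(os.path.join(BIN_DIR, name), os.path.join(BIN_DIR, base))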
Based on this line of code:
https://github.com/ufal/neuralmonkey/blob/master/neuralmonkey/decoders/output_projection.py#L125
The current implementation isn't flexible enough: if we train a "submodel" (e.g. a decoder without attention, i.e. one not containing any ctx_tensors), we cannot use the trained variables to initialize a model with attention defined, because the size of the dense layer's input matrix changes and the saved variables no longer match its shape.
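A toy numpy illustration of the mismatch (sizes are made up; this is not Neural Monkey code): the projection kernel's first dimension is the size of its concatenated input, so dropping the attention contexts changes the variable's shape and the old checkpoint no longer restores into the new graph.

    import numpy as np

    rnn_size, embedding_size, ctx_size = 4, 3, 5

    # Input is [prev_state; prev_output] without attention,
    # [prev_state; prev_output; ctx] with attention.
    kernel_no_attention = np.zeros((rnn_size + embedding_size, rnn_size))
    kernel_with_attention = np.zeros((rnn_size + embedding_size + ctx_size, rnn_size))

    print(kernel_no_attention.shape)    # (7, 4)
    print(kernel_with_attention.shape)  # (12, 4) -> shape check fails on restore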