mini seq2seq

Minimal Seq2Seq model with attention for neural machine translation in PyTorch.

This implementation focuses on the following features:

  • Modular structure for reuse in other projects
  • Minimal code for readability
  • Full utilization of batches and the GPU

This implementation relies on torchtext to minimize the dataset management and preprocessing code.
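Full batch utilization requires padding variable-length sentences to a common length and masking the padding positions. The helper below (`pad_batch` is a hypothetical name, not a function from this repository or from torchtext) sketches the idea in plain Python:

```python
def pad_batch(sentences, pad_token="<pad>"):
    """Pad a list of token lists to the longest length in the batch.

    Returns the padded batch and a mask with 1 for real tokens
    and 0 for padding, so losses/attention can ignore the padding.
    """
    max_len = max(len(s) for s in sentences)
    padded = [s + [pad_token] * (max_len - len(s)) for s in sentences]
    mask = [[1] * len(s) + [0] * (max_len - len(s)) for s in sentences]
    return padded, mask

batch = [["ich", "bin", "hier"], ["hallo"]]
padded, mask = pad_batch(batch)
# padded -> [["ich", "bin", "hier"], ["hallo", "<pad>", "<pad>"]]
# mask   -> [[1, 1, 1], [1, 0, 0]]
```

In the actual code, torchtext's iterators perform this padding automatically when building batched tensors.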

Model description
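The model is an encoder-decoder with attention. As a rough illustration of the attention step only (a NumPy sketch, not the repository's exact PyTorch implementation): at each decoding step, the decoder state is scored against every encoder hidden state, the scores are normalized with a softmax, and the resulting weights form a context vector.

```python
import numpy as np

def dot_product_attention(query, keys, values):
    """Single-step dot-product attention.

    query:  decoder state, shape (d,)
    keys:   encoder hidden states, shape (T, d)
    values: encoder hidden states, shape (T, d)
    Returns the context vector (d,) and attention weights (T,).
    """
    scores = keys @ query                 # one score per source position, shape (T,)
    scores = scores - scores.max()        # subtract max for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over source positions
    context = weights @ values            # weighted sum of encoder states
    return context, weights
```

The repository's attention module follows the same pattern with learned projections and batched tensors.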

Requirements

  • GPU & CUDA
  • Python3
  • PyTorch
  • torchtext
  • spaCy
  • numpy
  • Visdom (optional)

Download the spaCy tokenizer models:

python -m spacy download de
python -m spacy download en

(Newer spaCy releases replace these shortcut names with full model names, e.g. de_core_news_sm and en_core_web_sm.)

References

Based on the following implementations:
