Latest commit

Summary:
Implements beamable encoder-decoder cross attention. This removes the need to duplicate the encoder states beam_size times during inference, which yields both a large memory improvement (enabling bigger batch sizes on GPU) and better compute efficiency, by greatly reducing the time spent in reorder_encoder_out.

This is inspired by work in [fastseq](https://arxiv.org/abs/2106.04718), which includes a more in-depth analysis.

There was an old [PR](#1958) against fairseq implementing this feature as well, but it was never merged and eventually closed. This diff revives and refactors that PR, and also adds support for dynamically changing the beam_size when calling `hub_interface.generate()`.
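
The core idea, as a minimal sketch (the function name, tensor layout, and shapes below are illustrative assumptions, not the actual fairseq implementation): keep a single copy of the encoder keys per source sentence and fold the beam dimension into the query length, rather than tiling the encoder output beam_size times.

```python
import torch

def beamable_cross_attn_scores(q, enc_k, beam_size):
    """Illustrative sketch of beamable cross attention: score beam_size
    decoder hypotheses against encoder keys stored once per sentence,
    without tiling enc_k beam_size times.

    q:     (batch * beam_size, tgt_len, dim)  decoder queries, one row per hypothesis
    enc_k: (batch, src_len, dim)              encoder keys, one row per source sentence
    """
    bsz_x_beam, tgt_len, dim = q.shape
    bsz = bsz_x_beam // beam_size
    src_len = enc_k.size(1)
    # Fold the beam dimension into the query length so each sentence's keys
    # are matched against all of its hypotheses in a single batched matmul.
    q = q.view(bsz, beam_size * tgt_len, dim)
    scores = torch.bmm(q, enc_k.transpose(1, 2))  # (bsz, beam * tgt_len, src_len)
    # Restore the (batch * beam) leading dimension the decoder expects.
    return scores.view(bsz * beam_size, tgt_len, src_len)
```

Because the encoder side is never beam-replicated in this layout, reordering hypotheses between decoding steps no longer has to shuffle the encoder states, which is where the reorder_encoder_out savings come from.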

## Benchmarking

**CPU Performance** (on-demand devserver)

| Batch size | Beam size | Before | After | Speedup |
| --- | --- | --- | --- | --- |
| 1 | 4 | 50.4 s/it | 22.3 s/it | **2.25X** |
| 2 | 4 | 53.1 s/it | 25.8 s/it | **2.06X** |
| 1 | 8 | 65.8 s/it | 23.8 s/it | **2.76X** |

**GPU Performance**

Reported in detail [here](#1957)

Currently this optimization is only enabled for our custom BART model used in the workplace summarization demo, in order to unblock landing this quickly.
It should be upstreamed to TransformerModel after syncing with the fairseq folks.

Reviewed By: xwhan

Differential Revision: D35722467

fbshipit-source-id: a420f73ff5b9ec0cdf40c59464b6ed1794114906



Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

We provide reference implementations of various sequence modeling papers (see the list of implemented papers).


We also provide pre-trained models for translation and language modeling with a convenient torch.hub interface:

```python
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)
# 'Hallo Welt'
```

See the PyTorch Hub tutorials for translation and RoBERTa for more examples.
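
For instance, RoBERTa is exposed through the same torch.hub interface; a short sketch (model name per the fairseq hub registry):

```python
import torch

# Load a pre-trained RoBERTa model from the fairseq torch.hub registry.
roberta = torch.hub.load('pytorch/fairseq', 'roberta.large')
roberta.eval()  # disable dropout for deterministic outputs

# Apply RoBERTa's BPE to a sentence and extract last-layer features.
tokens = roberta.encode('Hello world!')
features = roberta.extract_features(tokens)  # (1, num_tokens, 1024)
```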

Requirements and Installation

* PyTorch version >= 1.5.0
* Python version >= 3.6
* For training new models, you'll also need an NVIDIA GPU and NCCL
* To install fairseq and develop locally:

```bash
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./

# on MacOS:
# CFLAGS="-stdlib=libc++" pip install --editable ./

# to install the latest stable release (0.10.x)
# pip install fairseq
```

* For faster training install NVIDIA's apex library:

```bash
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \
  --global-option="--deprecated_fused_adam" --global-option="--xentropy" \
  --global-option="--fast_multihead_attn" ./
```

* For large datasets install PyArrow: `pip install pyarrow`
* If you use Docker, make sure to increase the shared memory size, either with `--ipc=host` or `--shm-size`, as command line options to `nvidia-docker run`.
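
A quick way to sanity-check the install (an optional smoke test, not part of the official instructions):

```bash
# Confirm the package imports and the CLI entry points are on PATH.
python -c "import fairseq; print(fairseq.__version__)"
fairseq-train --help
```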

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.
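
As a taste of the workflow covered in the documentation, a typical preprocess-then-train run looks roughly like this (all paths and hyperparameters here are illustrative placeholders):

```bash
# Binarize a parallel corpus (source German, target English).
fairseq-preprocess --source-lang de --target-lang en \
    --trainpref data/train --validpref data/valid --testpref data/test \
    --destdir data-bin/

# Train a base transformer on the binarized data.
fairseq-train data-bin/ \
    --arch transformer --optimizer adam --lr 0.0005 \
    --max-tokens 4096 --save-dir checkpoints/
```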

Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, as well as example training and evaluation commands.
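
Evaluation with one of these commands typically looks like the following sketch (the checkpoint and data paths are assumptions):

```bash
# Decode the binarized test set with beam search and strip BPE.
fairseq-generate data-bin/ \
    --path checkpoints/checkpoint_best.pt \
    --beam 5 --remove-bpe
```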

We also have more detailed READMEs that reproduce results from specific papers.

Join the fairseq community

License

fairseq(-py) is MIT-licensed. The license applies to the pre-trained models as well.

Citation

Please cite as:

```bibtex
@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}
```