Data loaders and abstractions for text and NLP
mthrok [BC breaking] Add Sentencepiece torchscript Extension (#755)
This PR adds a `torchscript` extension, `_torchtext.so`, which provides a simple interface to `SentencePiece`.

 - SentencePiece `v0.1.86` is used.
 - `libsentencepiece.a` is built right before `_torchtext.so` is compiled.
   The logic for triggering this build from `setuptools` can be found under `build_tools/setup_helpers`.
 - `_torchtext.so` provides an interface for training a SentencePiece model and loading a model from a file.

Breaking change:

Previously, `torchtext.data.functional.load_sp_model` returned a `sentencepiece.SentencePieceProcessor` object, which supported the following methods in addition to `__len__` and `__getitem__`:

```
$ grep '$self->' third_party/sentencepiece/python/sentencepiece.i
    return $self->Load(filename);
    return $self->LoadFromSerializedProto(filename);
    return $self->SetEncodeExtraOptions(extra_option);
    return $self->SetDecodeExtraOptions(extra_option);
    return $self->SetVocabulary(valid_vocab);
    return $self->ResetVocabulary();
    return $self->LoadVocabulary(filename, threshold);
    return $self->EncodeAsPieces(input);
    return $self->EncodeAsIds(input);
    return $self->NBestEncodeAsPieces(input, nbest_size);
    return $self->NBestEncodeAsIds(input, nbest_size);
    return $self->SampleEncodeAsPieces(input, nbest_size, alpha);
    return $self->SampleEncodeAsIds(input, nbest_size, alpha);
    return $self->DecodePieces(input);
    return $self->DecodeIds(input);
    return $self->EncodeAsSerializedProto(input);
    return $self->SampleEncodeAsSerializedProto(input, nbest_size, alpha);
    return $self->NBestEncodeAsSerializedProto(input, nbest_size);
    return $self->DecodePiecesAsSerializedProto(pieces);
    return $self->DecodeIdsAsSerializedProto(ids);
    return $self->GetPieceSize();
    return $self->PieceToId(piece);
    return $self->IdToPiece(id);
    return $self->GetScore(id);
    return $self->IsUnknown(id);
    return $self->IsControl(id);
    return $self->IsUnused(id);
    return $self->GetPieceSize();
    return $self->PieceToId(key);
```

The new C++ extension provides the following methods:

```
Encode(input)
EncodeAsIds(input)
EncodeAsPieces(input)
```
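
For illustration, a minimal sketch of loading a model through the new extension (the model filename `m_user.model` is a placeholder):

```
from torchtext.data.functional import load_sp_model

# load_sp_model now returns the C++ extension object rather than a
# sentencepiece.SentencePieceProcessor ("m_user.model" is a placeholder path)
sp_model = load_sp_model("m_user.model")
ids = sp_model.EncodeAsIds("sentencepiece is an unsupervised tokenizer")
pieces = sp_model.EncodeAsPieces("sentencepiece is an unsupervised tokenizer")
```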
README.rst


torchtext

This repository consists of:

  • torchtext.data: Generic data loaders, abstractions, and iterators for text (including vocabulary and word vectors)
  • torchtext.datasets: Pre-built loaders for common NLP datasets

Note: we are currently re-designing the torchtext library to make it more compatible with PyTorch (e.g. torch.utils.data). Several datasets have been written with the new abstractions in the torchtext.experimental folder. We have also created an issue to discuss the new abstraction, and users are welcome to leave feedback there.

Installation

Make sure you have Python 3.5+ and PyTorch 0.4.0 or newer. You can then install torchtext using pip:

pip install torchtext

For PyTorch versions before 0.4.0, please use pip install torchtext==0.2.3.

Or you can install torchtext using conda:

conda install -c pytorch -c powerai torchtext sentencepiece

Optional requirements

If you want to use the English tokenizer from SpaCy, you need to install SpaCy and download its English model:

pip install spacy
python -m spacy download en
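
Once both are installed, the SpaCy tokenizer can be obtained by name (a minimal sketch; get_tokenizer resolves the string 'spacy' to a tokenizer backed by SpaCy's English model):

>>> from torchtext.data import get_tokenizer
>>> spacy_tokenize = get_tokenizer('spacy')
>>> spacy_tokenize("A minimal example.")  # -> ['A', 'minimal', 'example', '.']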

Alternatively, you might want to use the Moses tokenizer port in SacreMoses (split from NLTK). To do so, install SacreMoses:

pip install sacremoses
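
With SacreMoses installed, the same helper accepts 'moses' (a minimal sketch):

>>> from torchtext.data import get_tokenizer
>>> moses_tokenize = get_tokenizer('moses')
>>> moses_tokenize("Hello, world!")  # -> ['Hello', ',', 'world', '!']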

Documentation

Find the documentation here.

Data

The data module provides the following:

  • Ability to describe declaratively how to load a custom NLP dataset that's in a "normal" format:

    >>> pos = data.TabularDataset(
    ...    path='data/pos/pos_wsj_train.tsv', format='tsv',
    ...    fields=[('text', data.Field()),
    ...            ('labels', data.Field())])
    >>> sentiment = data.TabularDataset(
    ...    path='data/sentiment/train.json', format='json',
    ...    fields={'sentence_tokenized': ('text', data.Field(sequential=True)),
    ...            'sentiment_gold': ('labels', data.Field(sequential=False))})
  • Ability to define a preprocessing pipeline:

    >>> src = data.Field(tokenize=my_custom_tokenizer)
    >>> trg = data.Field(tokenize=my_custom_tokenizer)
    >>> mt_train = datasets.TranslationDataset(
    ...     path='data/mt/wmt16-ende.train', exts=('.en', '.de'),
    ...     fields=(src, trg))
  • Batching, padding, and numericalizing (including building a vocabulary object):

    >>> # continuing from above
    >>> mt_dev = datasets.TranslationDataset(
    ...     path='data/mt/newstest2014', exts=('.en', '.de'),
    ...     fields=(src, trg))
    >>> src.build_vocab(mt_train, max_size=80000)
    >>> trg.build_vocab(mt_train, max_size=40000)
    >>> # mt_dev shares the fields, so it shares their vocab objects
    >>>
    >>> train_iter = data.BucketIterator(
    ...     dataset=mt_train, batch_size=32,
    ...     sort_key=lambda x: data.interleave_keys(len(x.src), len(x.trg)))
    >>> # usage
    >>> next(iter(train_iter))
    <data.Batch(batch_size=32, src=[LongTensor (32, 25)], trg=[LongTensor (32, 28)])>
  • Wrapper for dataset splits (train, validation, test):

    >>> TEXT = data.Field()
    >>> LABELS = data.Field()
    >>>
    >>> train, val, test = data.TabularDataset.splits(
    ...     path='/data/pos_wsj/pos_wsj', train='_train.tsv',
    ...     validation='_dev.tsv', test='_test.tsv', format='tsv',
    ...     fields=[('text', TEXT), ('labels', LABELS)])
    >>>
    >>> train_iter, val_iter, test_iter = data.BucketIterator.splits(
    ...     (train, val, test), batch_sizes=(16, 256, 256),
    ...     sort_key=lambda x: len(x.text), device=0)
    >>>
    >>> TEXT.build_vocab(train)
    >>> LABELS.build_vocab(train)
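
Vocabularies can also be initialized with pretrained word vectors, which are downloaded on first use; a minimal sketch continuing the example above with one of the bundled GloVe aliases:

    >>> # attach 100-dimensional GloVe vectors to the vocabulary
    >>> TEXT.build_vocab(train, vectors="glove.6B.100d")
    >>> word_embeddings = TEXT.vocab.vectors  # FloatTensor, one row per vocab entry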

Datasets

The datasets module currently contains:

  • Sentiment analysis: SST and IMDb
  • Question classification: TREC
  • Entailment: SNLI, MultiNLI
  • Language modeling: abstract class + WikiText-2, WikiText-103, PennTreebank
  • Machine translation: abstract class + Multi30k, IWSLT, WMT14
  • Sequence tagging (e.g. POS/NER): abstract class + UDPOS, CoNLL2000Chunking
  • Question answering: 20 QA bAbI tasks
  • Text classification: AG_NEWS, SogouNews, DBpedia, YelpReviewPolarity, YelpReviewFull, YahooAnswers, AmazonReviewPolarity, AmazonReviewFull

Others are planned or in progress:

  • Question answering: SQuAD

See the test directory for examples of dataset usage.
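
For instance, a built-in dataset can be loaded with the same Field machinery (a minimal sketch using SST; the data is downloaded automatically):

    >>> from torchtext import data, datasets
    >>> TEXT = data.Field()
    >>> LABEL = data.Field(sequential=False)
    >>> train, val, test = datasets.SST.splits(TEXT, LABEL)
    >>> TEXT.build_vocab(train)
    >>> LABEL.build_vocab(train)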

Experimental Code

We have re-written several datasets under torchtext.experimental.datasets:

  • Sentiment analysis: IMDb
  • Language modeling: abstract class + WikiText-2, WikiText-103, PennTreebank

A new pattern was introduced in release v0.5.0, and several other datasets also follow it (a usage sketch follows the list below):

  • Unsupervised learning dataset: Enwik9
  • Text classification: AG_NEWS, SogouNews, DBpedia, YelpReviewPolarity, YelpReviewFull, YahooAnswers, AmazonReviewPolarity, AmazonReviewFull
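
As an illustration, a minimal sketch of the new pattern using the experimental IMDb loader (keyword arguments and helper methods may differ between releases):

    >>> from torchtext.experimental.datasets import IMDB
    >>> # each sample pairs a label with a tensor of token ids
    >>> train_dataset, test_dataset = IMDB()
    >>> vocab = train_dataset.get_vocab()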

Disclaimer on Datasets

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!
