
xlnet

Here are 66 public repositories matching this topic...

transformers
patrickvonplaten commented Dec 11, 2020

🚀 Feature request

Bart is a seq2seq model, but there might be applications where one would like to use only the pre-trained BartDecoder in an EncoderDecoder setting with a "long" encoder, such as

from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "allenai/longformer-large-4096", "facebook/bart-large"
)

# fine-tune model ...

This is already p
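The pairing above hinges on cross-attention: the pretrained decoder's queries attend over the "long" encoder's output states. A minimal pure-Python sketch of dot-product cross-attention on toy vectors (an illustration of the mechanism, not the actual Longformer/Bart implementation):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(decoder_states, encoder_states):
    """Each decoder query attends over all encoder states (scaled dot-product)."""
    d = len(decoder_states[0])
    out = []
    for q in decoder_states:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in encoder_states]
        w = softmax(scores)
        # Weighted sum of encoder states, one weight per state
        out.append([sum(wi * k[j] for wi, k in zip(w, encoder_states))
                    for j in range(d)])
    return out
```

With a single encoder state, every decoder query simply copies it, since its attention weight is 1 — a quick sanity check of the weighting.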

Natural language processing (NLP): Xiaojiang robot (retrieval-based chit-chat chatbot), BERT sentence vectors and similarity (Sentence Similarity), XLNet sentence vectors and similarity (text xlnet embedding), text classification (Text classification), entity extraction (NER, BERT+BiLSTM+CRF), data augmentation (text augment, data enhance), synonymous-sentence and synonym generation, sentence main-part extraction (mainpart), Chinese short-text similarity, text feature engineering, and a Keras HTTP service.

  • Updated Nov 28, 2020
  • Python
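Sentence similarity in repos like the one above typically reduces to cosine similarity between sentence-embedding vectors (from BERT, XLNet, etc.). A minimal sketch, assuming the embeddings are already available as plain Python lists of floats:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

Parallel vectors score 1.0, orthogonal vectors 0.0; in practice, sentence pairs are ranked by this score to retrieve the most similar candidate.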

Chinese long-text classification, short-sentence classification, multi-label classification, and sentence-pair similarity (Chinese Text Classification with Keras NLP: multi-label or sentence classification, long or short); base classes for building character/word/sentence embedding layers (embeddings) and network graphs (graph); includes FastText, TextCNN, CharCNN, TextRNN, RCNN, DCNN, DPCNN, VDCNN, CRNN, BERT, XLNet, ALBERT, Attention, DeepMoji, HAN, CapsuleNet, Transformer-encoder, Seq2seq, SWEM, LEAM, and TextGCN.

  • Updated Dec 19, 2020
  • Python
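Multi-label classification, one of the tasks listed above, differs from single-label classification in that each label gets an independent sigmoid probability and every label above a threshold is kept (rather than taking a single argmax). A hedged sketch of the decision step; the label names and threshold are illustrative, not taken from the repo:

```python
import math

def sigmoid(x):
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_predict(logits, labels, threshold=0.5):
    """Multi-label decision: independent sigmoid per label, keep all above threshold."""
    return [lab for lab, z in zip(labels, logits) if sigmoid(z) >= threshold]
```

For example, logits of 2.0, -1.0, and 0.3 over hypothetical labels "sports", "tech", and "finance" keep "sports" and "finance", since only those sigmoids clear 0.5.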
EricFillion commented Jan 9, 2020

No other language model performs as well as HappyROBERTA Large for masked-word prediction, so we should display a logger message whenever a user loads a suboptimal model, encouraging them to switch to HappyROBERTA Large.

There are still some situations where a user may want to use another model, so we will keep them available.
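The proposed logger message could be sketched as below; `RECOMMENDED_MODEL` and `warn_if_suboptimal` are hypothetical names for illustration, not the actual happytransformer API:

```python
import logging

logger = logging.getLogger("happytransformer")

# Assumed display name for the recommended model, for illustration only
RECOMMENDED_MODEL = "HappyROBERTA-large"

def warn_if_suboptimal(model_name):
    """Log a suggestion when the user picks any model other than the recommended one.

    Returns True if a warning was emitted, so callers (and tests) can check it.
    """
    if model_name.lower() != RECOMMENDED_MODEL.lower():
        logger.warning(
            "%s generally underperforms %s on masked-word prediction; "
            "consider switching.", model_name, RECOMMENDED_MODEL,
        )
        return True
    return False
```

The check is case-insensitive so the warning is not triggered by capitalization differences, and the other models remain fully usable — the message only nudges.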
