
pytorch

Here are 13,695 public repositories matching this topic...

transformers
patrickvonplaten commented Dec 11, 2020

🚀 Feature request

Bart is a seq2seq model, but there might be applications where one would like to use only the pre-trained BartDecoder in an EncoderDecoder setting with a "long" encoder, such as

from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "allenai/longformer-large-4096", "facebook/bart-large"
)

# fine-tune model ...

This is already p

hellock commented Jun 7, 2020

We keep this issue open to collect feature requests from users and to hear your voice. Our monthly release plan is also available here.

You can either:

  1. Suggest a new feature by leaving a comment.
  2. Vote for a feature request with 👍 or vote against it with 👎. (Remember that developers are busy and cannot respond to every feature request, so vote for the ones you want most!)
  3. Tell us that
pytorch-lightning
juneskiafc commented Dec 27, 2020

🚀 Feature

A way to print to terminal without breaking up the progress bar.

Motivation

A lot of people print to the terminal while training/validating/testing, and currently a simple call to print() breaks the progress bar. A way to get around this is to set up a custom progress bar with methods that call tqdm.write, and to pass it as a callback to the trainer. However, this
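A hedged sketch of that workaround, assuming a pytorch-lightning version where the tqdm-based ProgressBar callback exposes a main_progress_bar attribute (class and attribute names vary between releases and are assumptions here):

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ProgressBar

class PrintSafeBar(ProgressBar):
    def write(self, *args, sep=" "):
        # tqdm's write() clears the bar, prints the text, then redraws the bar,
        # so the progress display is not broken up
        self.main_progress_bar.write(sep.join(str(a) for a in args))

trainer = Trainer(callbacks=[PrintSafeBar()])

Inside a LightningModule the bar can then be reached through the trainer's progress bar callback and its write() used in place of print().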

JonTriebenbach commented Sep 2, 2020

Bug Report

These tests were run on s390x. s390x is a big-endian architecture.

Failure log for helper_test.py

________________________________________________ TestHelperTensorFunctions.test_make_tensor ________________________________________________

self = <helper_test.TestHelperTensorFunctions testMethod=test_make_tensor>

    def test_make_tensor(self):  # type: () -> None
    
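For context (a hedged illustration, not part of the report): ONNX stores raw tensor bytes in little-endian order, so tests that compare against natively serialized numpy buffers see different bytes on a big-endian host such as s390x. The numpy snippet below only demonstrates that byte-order difference; it is not the failing test itself.

import numpy as np

x = np.array([1], dtype=np.int32)
print(x.tobytes())                # native order: b'\x01\x00\x00\x00' on x86_64, b'\x00\x00\x00\x01' on s390x
print(x.astype("<i4").tobytes())  # forcing little-endian gives identical bytes on both architectures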
nni
Bringing-Old-Photos-Back-to-Life
bpops commented Sep 19, 2020

Is it possible to run this on a (recent) Mac, which does not support CUDA? I would have guessed setting --GPU 0 would not attempt to call CUDA, but it fails.

File "/Users/../Desktop/bopbtl/venv/lib/python3.7/site-packages/torch/cuda/__init__.py", line 61, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
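The usual guard for CPU-only PyTorch builds such as the macOS wheels is to fall back to the CPU device; a minimal, generic sketch (not code from this repository):

import torch

# choose CUDA only when this build was compiled with it and a GPU is present
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(3, 3).to(device)  # runs on CPU-only builds without raising the assertion
print(x.device)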
