pytorch
Here are 14,954 public repositories matching this topic...
We keep this issue open to collect feature requests from users and hear your voice. Our monthly release plan is also available here.
You can either:
- Suggest a new feature by leaving a comment.
- Vote for a feature request with 👍, or against it with 👎. (Remember that developers are busy and cannot respond to every feature request, so vote for the ones you care about most!)
- Tell us that
🐛 Bug
Please reproduce using the BoringModel
Currently, the progress bar makes no distinction between running in a real terminal and being piped to logs.
When Lightning's output is piped to logs, it produces a prog_bar line for every update.
[Screenshot 2021-03-23 at 09 10 40: truncated image link omitted]
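The fix this report suggests can be sketched with the standard library alone: check whether the output stream is a real terminal and, if not, fall back to occasional plain log lines instead of carriage-return redraws. This is a minimal illustration, not Lightning's actual progress-bar code; the function name and `log_every` parameter are hypothetical.

```python
import sys


def make_progress_reporter(total, stream=sys.stdout, log_every=10):
    """Return a per-step callback: in-place carriage-return updates on a
    real terminal, occasional plain lines when piped to a log file."""
    is_tty = stream.isatty()

    def report(step):
        if is_tty:
            # Interactive terminal: rewrite the same line each step.
            stream.write(f"\r{step}/{total}")
            stream.flush()
        elif step % log_every == 0 or step == total:
            # Piped output: emit a normal line only every `log_every` steps.
            stream.write(f"{step}/{total}\n")

    return report
```

When stdout is piped, `sys.stdout.isatty()` returns False, so a 100-step loop with `log_every=50` produces just two log lines instead of 100 redraws.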
Change tensor.data to tensor.detach(), per pytorch/pytorch#6990 (comment):
tensor.detach() is more robust than tensor.data.
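The robustness difference can be shown in a few lines (a sketch of the behavior discussed in that issue, assuming a reasonably recent PyTorch): an in-place edit through a `detach()`-ed view bumps the tensor's version counter, so autograd raises an error if the stale value is later needed for backward, whereas the same edit through `.data` bypasses the check.

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x.exp()            # exp() saves its output for the backward pass

d = y.detach()         # shares storage with y, but no grad tracking
d.zero_()              # in-place edit bumps y's version counter

try:
    y.sum().backward()  # autograd notices y's saved value went stale
    print("backward ran")
except RuntimeError:
    print("RuntimeError: a tensor needed for backward was modified in-place")

# The same mutation via y.data.zero_() would bypass the version counter:
# backward() would run and silently produce incorrect gradients.
```

In other words, `detach()` turns a silent wrong-gradient bug into a loud error, which is exactly why it is the recommended replacement.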
Bug Report
These tests were run on s390x. s390x is a big-endian architecture.
Failure log for helper_test.py
________________________________________________ TestHelperTensorFunctions.test_make_tensor ________________________________________________
self = <helper_test.TestHelperTensorFunctions testMethod=test_make_tensor>
def test_make_tensor(self): # type: () -> None
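A likely culprit (an assumption based on the big-endian hint, not a confirmed diagnosis) is test fixtures that serialize raw bytes in native byte order. The platform difference is easy to see with the standard library; a robust fix is to pin an explicit byte order in the fixtures:

```python
import struct
import sys

# sys.byteorder reports the native byte order: 'big' on s390x,
# 'little' on x86-64 and most ARM systems.
print(sys.byteorder)

# Serializing with an explicit byte order gives identical bytes on
# every platform; native order ("=") does not.
native = struct.pack("=f", 1.0)   # platform-dependent
le = struct.pack("<f", 1.0)       # little-endian, same everywhere
be = struct.pack(">f", 1.0)       # big-endian, same everywhere
assert le == be[::-1]             # the two fixed orders are byte-reversed
```

Tests that compare raw tensor bytes against hard-coded little-endian fixtures will fail on s390x unless they pin `<` or byte-swap on big-endian hosts.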
When setting train_parameters to False, we often also want to disable dropout/batchnorm — in other words, to run the pretrained model in eval mode.
We've made a small modification to PretrainedTransformerEmbedder that lets the caller specify whether the token embedder should be forced into eval mode during the training phase.
Do you think this feature might be handy? Should I open a PR?
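The idea can be sketched as a wrapper module; note this is an illustrative pattern, not AllenNLP's actual `PretrainedTransformerEmbedder` change — the class name and `force_eval` flag are hypothetical.

```python
import torch
from torch import nn


class FrozenEvalWrapper(nn.Module):
    """Keep a pretrained submodule in eval mode (dropout off, batchnorm
    using running stats) even while the outer model trains."""

    def __init__(self, module: nn.Module, force_eval: bool = True):
        super().__init__()
        self.module = module
        self.force_eval = force_eval
        if force_eval:
            for p in self.module.parameters():
                p.requires_grad_(False)  # mirrors train_parameters=False

    def train(self, mode: bool = True):
        # nn.Module.train() recurses into children; re-force eval mode
        # on the wrapped module regardless of the outer training state.
        super().train(mode)
        if self.force_eval:
            self.module.eval()
        return self


embedder = FrozenEvalWrapper(nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5)))
embedder.train()                     # outer model enters training mode
assert not embedder.module.training  # wrapped module stays in eval mode
```

Overriding `train()` is the key trick: a plain `module.eval()` call before training would be undone the next time the parent model calls `.train()` on all children.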
I'm using MXNet for some work, but nothing comes up when I search for an MXNet trial and example.
Could you please train GhostNet?
(I don't have the ImageNet dataset.)
CUDA requirement
Is it possible to run this on a (recent) Mac, which does not support CUDA? I would have guessed that setting --GPU 0 would avoid calling CUDA, but it fails:
File "/Users/../Desktop/bopbtl/venv/lib/python3.7/site-packages/torch/cuda/__init__.py", line 61, in _check_driver
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
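The usual way to make a script degrade gracefully on CPU-only builds is to gate device selection on `torch.cuda.is_available()` rather than a command-line flag alone — a generic sketch, since what `--GPU 0` means (GPU index 0 vs. "no GPU") depends on this particular project's argument parsing:

```python
import torch

# torch.cuda.is_available() is False on CPU-only builds (e.g. macOS),
# so this never hits the "not compiled with CUDA enabled" assertion.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(1, 4, device=device)
print(model(x).shape)  # torch.Size([1, 2])
```

Many repos use a sentinel such as `--gpu_ids -1` for CPU mode instead, but checking `is_available()` inside the code is the robust fix either way.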
Recently the HF Trainer was extended to support full fp16 eval via
--fp16_full_eval. I'd have expected it to be either equal to or faster than eval with the fp32 model, but surprisingly I noticed a 25% slowdown when using it. This may or may not impact DeepSpeed as well, which also runs eval in fp16, but we can't compare it to a baseline, since it only runs fp16.
I wonder if someone would like to look into this.
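What full-fp16 eval boils down to can be sketched as follows (an approximation, not the Trainer's exact implementation): cast every parameter and buffer to half precision, then run evaluation under `no_grad`. Note that `.half()` casts in place, hence the copy to preserve an fp32 baseline for comparison.

```python
import copy

import torch

model_fp32 = torch.nn.Linear(16, 16).eval()

# Roughly what --fp16_full_eval does: cast all parameters and buffers
# to half precision. (.half() mutates the module, so copy first.)
model_fp16 = copy.deepcopy(model_fp32).half()

assert model_fp32.weight.dtype == torch.float32
assert model_fp16.weight.dtype == torch.float16

# A fair fp32-vs-fp16 comparison would time both models on identical
# inputs under torch.no_grad(); on hardware lacking fast fp16 kernels
# for some ops, the fp16 path can indeed come out slower than fp32.
```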