pytorch
Here are 15,953 public repositories matching this topic...
We keep this issue open to collect feature requests from users and to hear your voice. Our monthly release plan is also available here.
You can either:
- Suggest a new feature by leaving a comment.
- Vote for a feature request with 👍 or against it with 👎. (Remember that developers are busy and cannot respond to all feature requests, so vote for the one you want most!)
- Tell us that
🚀 Feature
Detect UninitializedParameter and run one batch/sample before fitting.
Motivation
PyTorch now accepts 'lazy' layers with UninitializedParameter.
However, this seems to cause a memory error in PL when we start the trainer, because it attempts to estimate the memory usage:
RuntimeError: Can't access the shape of an uninitialized parameter. This error usually happen
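A minimal sketch of the proposed fix (assuming a recent PyTorch with lazy modules): running one dummy batch through a lazy layer materializes its parameters, after which shapes can be inspected safely:

```python
import torch

# LazyLinear infers its input dimension on the first forward pass;
# until then its weight is an UninitializedParameter.
layer = torch.nn.LazyLinear(out_features=4)

try:
    _ = layer.weight.shape  # raises: shape is not defined yet
except RuntimeError as e:
    print("before warm-up:", e)

# One dummy batch/sample initializes the parameters ...
layer(torch.randn(2, 8))

# ... after which shapes (and hence memory estimates) are accessible.
print(layer.weight.shape)
```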
Change tensor.data to tensor.detach() due to
pytorch/pytorch#6990 (comment)
tensor.detach() is more robust than tensor.data.
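The difference matters for in-place edits: a tensor returned by .detach() shares the autograd version counter with its source, so modifying it in place is caught at backward time, whereas .data lets the same modification slip through silently. A minimal sketch:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
out = x.sigmoid()

# .detach() shares storage AND the version counter with `out`,
# so an in-place modification is detected by autograd.
view = out.detach()
view.zero_()  # corrupts the values sigmoid's backward needs

try:
    out.sum().backward()
except RuntimeError as e:
    # autograd reports the tensor was modified by an inplace operation
    print("caught:", e)
# With out.data instead of out.detach(), backward() would have run
# silently and produced incorrect gradients.
```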
Bug Report
Is the issue related to model conversion? No
Describe the bug
The DynamicQuantizeLinear function op does not have a shape inference function defined. In the absence of shape inference, the function body is used to infer shapes for the function op; although this works as a fallback, it hurts performance.
Expected behavior
Add shape inference function for DynamicQuan
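For reference, the shape inference being requested is simple per the ONNX operator spec: y has the same shape as the input x, while y_scale and y_zero_point are scalars. A pure-Python sketch of the op's semantics (an illustration only, not ONNX's implementation; it glosses over ONNX's round-half-to-even details):

```python
def dynamic_quantize_linear(x):
    """Sketch of ONNX DynamicQuantizeLinear (uint8 output).

    Shape inference for this op is trivial: len(y) == len(x),
    while scale and zero_point are scalars.
    """
    lo, hi = min(min(x), 0.0), max(max(x), 0.0)  # range must include 0
    scale = (hi - lo) / 255.0 or 1.0             # guard all-zero input
    zero_point = round(min(max(-lo / scale, 0.0), 255.0))
    y = [int(min(max(round(v / scale) + zero_point, 0), 255)) for v in x]
    return y, scale, zero_point
```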
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor itself, and this fails when it tries to load data from my compressed files.
I'm using MXNet for some work, but nothing comes up when I search for the mxnet trial and example.
At the moment we cannot return a list of attention weight outputs in Flax as we can in PyTorch.
In PyTorch, there is an output_attentions boolean in the forward call of every function (see here) which, when set to True, collects