pytorch
Here are 10,433 public repositories matching this topic...
Hi, is there any plan to provide a tutorial showing an example of employing the Transformer as an alternative to an RNN for seq2seq tasks such as machine translation?
For some reason, when I open the web document, real_a and fake_b match, but real_b is from another image; in the images folder, however, the images are correct. Does anyone know why this happens?
Example scripts contain some dependencies not listed for Horovod, and in some cases require datasets without explaining how to obtain them. We should provide a README file along with a set of packages (requirements.txt) for successfully running the examples.
I tried selecting hyperparameters for my model following "Tutorial 8: Model Tuning" below:
https://github.com/flairNLP/flair/blob/master/resources/docs/TUTORIAL_8_MODEL_OPTIMIZATION.md
Although I got the "param_selection.txt" file in the result directory, I am not sure how to interpret it, i.e. which parameter combination to use. At the bottom of the "param_selection.txt" file, I found "
Feature request: separate logging for model computed loss and regularization loss in tensorboard
It would be nice to separately log the model-computed loss and the regularization loss in tensorboard. This involves minor changes to the Trainer.
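A minimal sketch of the idea; the StubWriter, the log_losses helper, and the tag names are hypothetical stand-ins, not the project's actual Trainer code:

```python
# Sketch: log the model's task loss and the regularization loss under
# separate tags instead of only their sum. StubWriter mimics the
# add_scalar(tag, value, step) interface of tensorboard's SummaryWriter;
# a real Trainer would use torch.utils.tensorboard.SummaryWriter.

class StubWriter:
    def __init__(self):
        self.scalars = {}  # tag -> list of (step, value)

    def add_scalar(self, tag, value, step):
        self.scalars.setdefault(tag, []).append((step, value))

def log_losses(writer, model_loss, reg_loss, step):
    # Separate tags let tensorboard plot the two components side by side.
    writer.add_scalar("loss/model", model_loss, step)
    writer.add_scalar("loss/regularization", reg_loss, step)
    writer.add_scalar("loss/total", model_loss + reg_loss, step)

writer = StubWriter()
log_losses(writer, model_loss=0.9, reg_loss=0.1, step=0)
```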
Several parts of the op spec, such as the main op description, attributes, and input and output descriptions, become part of the binary that consumes ONNX (e.g. onnxruntime), causing an increase in its size due to strings that take no part in the execution of the model or its verification.
Setting __ONNX_NO_DOC_STRINGS doesn't really help here since (1) it's not used in the SetDoc(string) overload (s
❓ Questions and Help
I followed the fine-tuning example described in here: https://github.com/pytorch/fairseq/blob/master/examples/mbart/README.md
However, I didn't manage to reproduce the results described in the paper for EN-RO translation.
How do I reproduce fine-tuning with mbart?
- Can you clarify where you got the data and how you preprocessed it for training in more de
The documentation about edge orientation is inconsistent. In the Creating Message Passing Networks tutorial, the main expression says that e𝑖,𝑗 denotes (optional) edge features from node 𝑖 to node 𝑗, and the attached expression also suggests this. However, the documentation for MessagePassing.message() says it constructs messages from node 𝑗 to node 𝑖 (which is actually true).
I
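The convention the message() docstring describes can be illustrated without torch_geometric; this pure-Python sketch (the propagate function and its names are mine, not PyG's API) aggregates messages flowing from source node j to target node i:

```python
# Illustrative sketch of the message-passing convention at issue: for
# each edge (j, i), a message is computed from the *source* node j and
# aggregated (here: summed) at the *target* node i.
# Plain Python, not the torch_geometric API.

def propagate(edge_index, node_features):
    # edge_index: list of (j, i) pairs; messages flow j -> i
    out = {i: 0.0 for i in node_features}
    for j, i in edge_index:
        message = node_features[j]  # message is built from node j ...
        out[i] += message           # ... and aggregated at node i
    return out

features = {0: 1.0, 1: 2.0, 2: 4.0}
edges = [(0, 2), (1, 2), (2, 0)]  # two messages into node 2, one into node 0
result = propagate(edges, features)
```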
Excuse me, the comment at https://github.com/graykode/nlp-tutorial/blob/master/1-1.NNLM/NNLM-Torch.py#L50 may be wrong. It should be X = X.view(-1, n_step * m) # [batch_size, n_step * m].
Sorry for disturbing you.
Describe the bug
I tried to run tensorboardX/examples/demo_graph.py in a Jupyter notebook (launched by Anaconda Navigator) and got the error shown under Additional context.
I just copy-pasted the code into the notebook from GitHub.
Minimal runnable code to reproduce the behavior
class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
This doesn't seem very well documented at present.
Let's enable loading weights from a URL directly.
Option 1:
Automate it with our current API
Trainer.load_from_checkpoint('http://')
Option 2:
Have a separate method
Trainer.load_from_checkpoint_at_url('http://')
Resources
We can use this under the hood:
https://pytorch.org/docs/stable/hub.html#torch.hub.load_state_dict_from_url
Any tho
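A rough sketch of Option 1's dispatch, with hypothetical names (this load_from_checkpoint is a stand-in, not Lightning's actual implementation); a real version would delegate the URL branch to torch.hub.load_state_dict_from_url and the file branch to torch.load:

```python
# Sketch: detect a URL vs. a local path in a single entry point.
# The loader callables are stubs standing in for
# torch.hub.load_state_dict_from_url and torch.load respectively.

def load_from_checkpoint(path, url_loader, file_loader):
    if path.startswith(("http://", "https://")):
        return url_loader(path)   # e.g. torch.hub.load_state_dict_from_url
    return file_loader(path)      # e.g. torch.load

# Stub loaders that just report which branch was taken.
source = load_from_checkpoint(
    "https://example.com/model.ckpt",
    url_loader=lambda p: "url",
    file_loader=lambda p: "file",
)
```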
Describe the bug
The test_torch_tanh_approx test fails intermittently during automated PR testing.
To Reproduce
Run the test (or full suite) until it fails.
Screenshots
2020-04-24T13:00:44.9923763Z method = 'sigmoid', prec_frac = 3, tolerance = 0.1
2020-04-24T13:00:44.9925054Z workers = {'alice': <VirtualWorker id:alice #objects:112>, 'bob': <VirtualWorker id:bob #
Can someone explain how the dimensions of the anchor boxes are calculated from ANCHOR_SCALES and ANCHOR_RATIOS? How do they relate to generating 1:1, 1:2, or 2:1 aspect-ratio anchor boxes with box areas 128^2, 256^2 as mentioned in the Faster RCNN paper?
Sorry to bother you.
Many models have identical implementations of prune_heads; it would be nice to store that implementation as a method on PretrainedModel and reduce the redundancy.
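The refactor the request describes can be sketched in plain Python; the class names mirror the issue's wording and are illustrative stand-ins, not the actual transformers code:

```python
# Sketch: hoist a duplicated prune_heads implementation into a shared
# base class so each model only supplies its model-specific state.
# PretrainedModel / BertModel here are hypothetical stand-ins.

class PretrainedModel:
    def prune_heads(self, heads_to_prune):
        # heads_to_prune: {layer_index: [head indices]}
        # Shared bookkeeping lives here once, instead of in every model.
        for layer, heads in heads_to_prune.items():
            self.pruned.setdefault(layer, set()).update(heads)

class BertModel(PretrainedModel):
    def __init__(self):
        self.pruned = {}  # layer index -> set of pruned head indices

model = BertModel()
model.prune_heads({0: [1, 2], 3: [0]})
```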