autograd
Here are 117 public repositories matching this topic...
Environment
1. System environment:
2. MegEngine version: 1.6.0rc1
3. Python version: Python 3.8.10
The program got stuck at net.load when I was trying to use MegFlow. I waited for more than 10 minutes and there was no sign of it finishing.
Yolo Model
Description
Implement a YOLO model and add it to the DJL model zoo
References
Issue to track tutorial requests:
- Deep Learning with PyTorch: A 60 Minute Blitz - #69
- Sentence Classification - #79
Feature details
Due to the similarity, it is easy to confuse qml.X and qml.PauliX, especially since other methods of specifying circuits, e.g., QASM, use x for PauliX. But if a user uses qml.X in their circuit on a qubit device, nothing happens to inform them that the incorrect operation is being used:
@qml.qnode(dev)
def circ():
    qml.PauliX(wires=0)
    qml.Hadamard(wires=0)
    return qml.expval(qml.PauliZ(0))
Spike-time decoding
Add a function and module that permit spike-time decoding, as suggested by @schmitts https://twitter.com/sbstnschmtthd/status/1432343373072019461
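A minimal sketch of what such a decoder could look like: time-to-first-spike decoding over a binary spike raster. The function name, interface, and `[T, N]` layout here are assumptions for illustration, not snntorch's actual API.

```python
import numpy as np

def first_spike_times(spk, dt=1.0):
    """Hypothetical time-to-first-spike decoder.

    spk: binary array of shape [T, N] (time steps x neurons).
    Returns, per neuron, the time of its first spike (np.inf if it
    never spikes).
    """
    has_spike = spk.any(axis=0)
    # argmax returns the index of the FIRST maximum, i.e. first 1
    first = np.argmax(spk, axis=0).astype(float) * dt
    first[~has_spike] = np.inf  # silent neurons never "decode"
    return first

spk = np.array([[0, 1],
                [1, 0],
                [0, 0]])
first_spike_times(spk)  # array([1., 0.])
```

Real implementations usually also need a differentiable surrogate for training, but the forward decode reduces to the first-spike index as above.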
I think it would be very useful to have learning rate schedulers such as
lr_cyclic() (https://arxiv.org/abs/1506.01186, Python source at https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#CyclicLR) and lr_cosine_annealing_warm_restarts() (https://arxiv.org/abs/1608.03983, Python source at https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#CosineAnnealingWarmRestarts)
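For reference, the triangular cyclic schedule from the first paper (Smith, arXiv:1506.01186) can be sketched in a few lines; the function name and defaults below are illustrative, not a proposed API.

```python
import math

def cyclic_lr(step, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    # Triangular policy: lr ramps linearly base_lr -> max_lr over
    # step_size steps, then back down, repeating every 2*step_size.
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)  # in [0, 1] within a cycle
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

cyclic_lr(0)     # 0.0001 (cycle start: base_lr)
cyclic_lr(2000)  # 0.01   (cycle peak: max_lr)
```

The cosine-annealing-with-warm-restarts variant follows the same shape: a pure function of the step count, which makes either easy to wrap as a scheduler object.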
The init module has been deprecated, and the recommended approach for generating initial weights is to use the template's shape method:
>>> from pennylane.templates import StronglyEntanglingLayers
>>> qml.init.strong_ent_layers_normal(n_layers=3, n_wires=2)  # deprecated
>>> np.random.random(StronglyEntanglingLayers.shape(n_layers=3, n_wires=2))  # new approach
We should upd
Okay, so this might not exactly be a "good first issue" - it is a little more advanced, but is still very much accessible to newcomers.
Similar to the mygrad.nnet.max_pool function, I would like there to be a mean-pooling layer. That is, a convolution-style window is strided over the input, and the mean of each window is computed.
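A plain-NumPy sketch of the requested operation, without the autograd machinery (the helper name and signature are hypothetical, not mygrad's API):

```python
import numpy as np

def mean_pool_2d(x, pool=2, stride=2):
    # Slide a pool x pool window over a 2-D array with the given
    # stride and average each window (forward pass only).
    H, W = x.shape
    out_h = (H - pool) // stride + 1
    out_w = (W - pool) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            r, c = i * stride, j * stride
            out[i, j] = x[r:r + pool, c:c + pool].mean()
    return out

x = np.arange(16.0).reshape(4, 4)
mean_pool_2d(x)  # [[ 2.5,  4.5], [10.5, 12.5]]
```

The backward pass is simpler than max-pooling's: each gradient element is just distributed uniformly (scaled by 1/pool²) over its window, with no argmax bookkeeping.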
After the revert of pytorch/pytorch@7cf9b94 we've identified a need to add a lint that checks file names to ensure that they're compatible with Windows machines.
Observed error: (from example commit)
A simple check on changed file names should catch this.
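A minimal sketch of such a check, based on Microsoft's documented reserved characters and device names; the helper name and exact rule set are assumptions here, not the actual PyTorch lint.

```python
import re

# Characters and device names Windows forbids in path components.
RESERVED_CHARS = re.compile(r'[<>:"\\|?*]')
RESERVED_NAMES = {"CON", "PRN", "AUX", "NUL",
                  *(f"COM{i}" for i in range(1, 10)),
                  *(f"LPT{i}" for i in range(1, 10))}

def windows_safe(path):
    """Return True if every component of a /-separated path is legal on Windows."""
    for part in path.split("/"):
        if RESERVED_CHARS.search(part):
            return False
        # Device names are reserved regardless of extension (aux.rst fails).
        if part.split(".")[0].upper() in RESERVED_NAMES:
            return False
        # Components may not end with a space or a dot.
        if part.endswith((" ", ".")):
            return False
    return True

windows_safe("torch/csrc/autograd/engine.cpp")  # True
windows_safe("docs/aux.rst")                    # False: AUX is reserved
```

Run against the list of files touched by a commit, this rejects names that would fail to check out on Windows before they ever land.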