
autograd

Here are 86 public repositories matching this topic...

kurtamohler commented Jan 20, 2021

🚀 Feature

Add support for torch.max with:

  • CUDA bfloat16
  • CPU float16 and bfloat16

Motivation

Currently, torch.max has support for CUDA float16:

>>> torch.rand(10, dtype=torch.float16, device='cuda').max()
tensor(0.8530, device='cuda:0', dtype=torch.float16)

But the other three combinations of CPU/CUDA and float16/bfloat16 are not supported:

>>> torch.ra
norse
Jegp commented Nov 20, 2020

It would be helpful to have visualisation tools to plot/debug information in the SNNs. I also think we should be slightly careful to use correct/adequate abstractions, since 1) visualisation is not the primary purpose of Norse, and 2) we don't want to maintain something that will change a lot in the future.

Here are a few suggestions for visualisations:

  • Layer parameters
    • Weights / tuning curves
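As a minimal sketch of the first suggestion, a layer's weight distribution can already be eyeballed with a plain-text histogram, no plotting dependency required. This is an illustration only, not Norse's actual API; the weight values are made up:

```python
def ascii_hist(values, bins=5, width=40):
    """Render a text histogram of a flat list of weights.

    A throwaway debugging aid: each row is one bin, with a bar of
    '#' characters scaled so the fullest bin spans `width` columns.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0          # avoid division by zero for constant input
    counts = [0] * bins
    for v in values:
        i = min(int((v - lo) / span * bins), bins - 1)
        counts[i] += 1
    peak = max(counts)
    lines = []
    for i, c in enumerate(counts):
        bar = "#" * round(c / peak * width)
        lines.append(f"[{lo + i * span / bins:+.2f}] {bar} ({c})")
    return "\n".join(lines)

# Hypothetical layer weights, e.g. flattened from a Norse layer's tensor:
weights = [-0.3, -0.1, 0.0, 0.05, 0.1, 0.1, 0.2, 0.4]
print(ascii_hist(weights))
```

For anything richer (tuning curves, spike rasters over time) a proper plotting backend would be needed, which is exactly the maintenance concern raised above.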
