automatic-differentiation
Here are 237 public repositories matching this topic...
I'm using TF 2.0, and I get an error when I import tangent: its list of non-differentiable functions includes tf.to_float (line 60), which is deprecated in favor of tf.cast(x, tf.float32):
https://www.tensorflow.org/versions/r1.14/api_docs/python/tf/to_float
I found that the function mod2pi is not implemented yet, although mod works. Is there a list of implemented functions? A minimal working example:
using Zygote
# This is working
gradient(x -> mod(x, 2pi), 1.)
# This is not
gradient(x -> mod2pi(x), 1.)
Feature details
Most non-parametric operations have their matrix representation and eigenvalues defined as class attributes (matrix and eigvals). Some, however, define these directly within the _matrix and _eigvals classmethods (e.g., the [S gate](https://github.com/PennyLaneAI/pennylane/blob/8e57efa4a85ea635665a44b19c9113f3f38acd3b/pennylane/ops/qubit/non_parametric_ops)).
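As an illustrative sketch of the two patterns described above (hypothetical minimal classes, not PennyLane's actual API), here is a gate that stores its matrix and eigenvalues as class attributes next to one that builds them inside classmethods:

```python
import numpy as np

class SGateAttr:
    # Pattern 1: matrix and eigenvalues stored as class attributes,
    # introspectable without instantiating or calling anything.
    matrix = np.array([[1, 0], [0, 1j]])
    eigvals = np.array([1, 1j])

class SGateMethod:
    # Pattern 2: the same data rebuilt inside classmethods on each call.
    @classmethod
    def _matrix(cls):
        return np.array([[1, 0], [0, 1j]])

    @classmethod
    def _eigvals(cls):
        return np.array([1, 1j])
```

The class-attribute form avoids re-allocating the arrays on every access and keeps the operation's static data in one predictable place.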
Description
Unit vector transforms should work with a zero-length unconstrained vector.
Example
The size-1 unit vector is just a constant [1]'. This should be the result of transforming the unconstrained 0-vector []'.
Expected Output
Stan programs that use:
parameters {
unit_vector[1] alpha;
}
should lead to alpha == [1]'.
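A minimal NumPy sketch of the expected edge-case behavior, assuming the usual normalization transform x = y / ||y|| and special-casing the empty unconstrained vector (illustrative only, not Stan's actual implementation):

```python
import numpy as np

def unit_vector_transform(y):
    """Map an unconstrained vector y to a unit vector.

    Assumes the normalization transform x = y / ||y||; the
    zero-length input is special-cased to the constant [1],
    matching the expected output described above.
    """
    y = np.asarray(y, dtype=float)
    if y.size == 0:
        # unit_vector[1] has no free parameters: always [1].
        return np.array([1.0])
    return y / np.linalg.norm(y)

print(unit_vector_transform([]))          # [1.]
print(unit_vector_transform([3.0, 4.0]))  # [0.6 0.8]
```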
Debugging Kotlin∇ code within IntelliJ IDEA can be somewhat cumbersome due to the functional API structure (lots of deeply nested stack traces and context switching). To make debugging more user-friendly, we should add support for visual debugging by exposing Kaliningraph's built-in graph visualization capabilities.
The init module has been deprecated; the recommended approach for generating initial weights is to use the Template.shape method:
>>> from pennylane.templates import StronglyEntanglingLayers
>>> qml.init.strong_ent_layers_normal(n_layers=3, n_wires=2)  # deprecated
>>> np.random.random(StronglyEntanglingLayers.shape(n_layers=3, n_wires=2))  # new approach
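A self-contained sketch of the new pattern using plain NumPy, assuming (per PennyLane's documentation) that StronglyEntanglingLayers.shape(n_layers, n_wires) returns (n_layers, n_wires, 3); the shape tuple is hard-coded here so the example runs without PennyLane installed:

```python
import numpy as np

# Assumed weight shape for StronglyEntanglingLayers with 3 layers
# and 2 wires: (n_layers, n_wires, 3) -> (3, 2, 3).
n_layers, n_wires = 3, 2
shape = (n_layers, n_wires, 3)

# Draw uniform initial weights of that shape, as in the issue's
# "new approach" line (a seeded generator for reproducibility).
rng = np.random.default_rng(0)
weights = rng.random(shape)
print(weights.shape)  # (3, 2, 3)
```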
I suspect some of the scalar rules should be using oneunit instead of one — for example, the rule for sign. (In Julia, one(x) is the dimensionless multiplicative identity, while oneunit(x) carries the units/type of x, which matters for Unitful quantities.)
profiles.h updates
At the moment, profiles.h (in pkg/profiles) lacks many (any?) comments, and many variables are declared somewhat separately from where they are associated with heap storage. Both of these make it a bit hard to read.
It would also be nicer if it were called PROFILES.h.
In operations_broadcast_test.go there are some tests that are not yet filled in. The point is to test that broadcasting works for different shapes. Since the semantics of broadcasting may not be obvious, please do send me a message with any questions.
This is a good first issue for anyone looking to get involved.
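For reference, a quick NumPy sketch of the broadcasting semantics being tested (NumPy-style rules: shapes are aligned from the trailing axis, and each dimension pair must be equal or contain a 1 — whether Gorgonia matches these rules exactly is an assumption here):

```python
import numpy as np

# Shapes (3, 1) and (4,) broadcast: align from the right,
# 1 stretches against 4 and the missing axis against 3.
a = np.ones((3, 1))   # shape (3, 1)
b = np.arange(4)      # shape (4,)
c = a + b             # broadcast result has shape (3, 4)
print(c.shape)        # (3, 4)

# Incompatible trailing dimensions (2 vs 4) raise an error.
try:
    np.ones((3, 2)) + np.ones((4,))
except ValueError as e:
    print("broadcast error:", e)
```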