gpu
Here are 1,529 public repositories matching this topic...
If a block selection includes the end of a line, not only the block selection but also the newline characters from the end of each selected line are inserted.
Additionally, when the selection runs from the start of a line to its end and the line wraps, the line is not copied as one piece but is instead broken across multiple lines.
It should be fairly simple to fix this, by only appending
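A minimal sketch of the expected behavior, using a hypothetical `copy_block` helper (the name and signature are assumptions for illustration, not the project's actual code):

```python
def copy_block(lines, top, bottom, left, right):
    """Extract a rectangular block selection as a single string.

    Each row's own terminating newline is stripped before slicing, so a
    selection that reaches the end of a line never picks up the newline
    character; rows are then joined with "\n" without a trailing newline.
    """
    rows = [line.rstrip("\n")[left:right + 1] for line in lines[top:bottom + 1]]
    return "\n".join(rows)

print(copy_block(["abcd\n", "efgh\n"], 0, 1, 1, 2))  # "bc\nfg"
```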
Updated Feb 21, 2020 - Jupyter Notebook
Issue Description
The following test fails for seed = 0 but passes for (as far as I can tell) any other seed (e.g. seed = 1):
https://gist.github.com/orausch/9a42e24b782319447a515e8c29b364a0
Version Information
Please indicate relevant versions, including, if relevant:
- Deeplearning4j version: beta6
- Platform information (OS, etc): Ubuntu 19.10
(cc @rpatra)
The steps for updating the repository keys for RHEL-based distributions in https://nvidia.github.io/nvidia-docker/ should read:
$ DIST=$(sed -n 's/releasever=//p' /etc/yum.conf)
$ DIST=${DIST:-$(. /etc/os-release; echo $VERSION_ID)}
$ sudo rpm -e gpg-pubkey-f796ecb0
$ sudo gpg --homedir /var/lib/yum/repos/$(uname -m)/$DIST/*/gpgdir --delete-key f796ecb0
$ sudo gpg --homedir /var/lib/

This: http://gpu.rocks/getting-started/
is over 9000 times better than
This: http://gpujs.github.io/dev-docs/
The problem is that the former is manually created, while the latter is generated.
And so far, every JS documentation engine I have used has had quirks or something off about it. NaturalDocs, for all its "old-fashioned" looks, works.
Basically we need to replace this or fix it.
Updated Feb 21, 2020 - JavaScript
Sphinx (2.2.1 or master) produces the following two kinds of warnings in my environment.
duplicate object description
I think cross references to such an object are ambiguous.
autosummary: stub file not found
There are a chainer.dataset.Converter base class and a chainer.dataset.converter decorator.
Therefore the filesystem has to allow storing `chainer.dataset.Conver
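The collision behind the warnings can be sketched as follows (`stub_path` is a hypothetical illustration of the stub naming scheme, not Sphinx's actual code):

```python
def stub_path(qualified_name):
    # autosummary writes one stub file per documented object
    return qualified_name + ".rst"

base_class = stub_path("chainer.dataset.Converter")
decorator = stub_path("chainer.dataset.converter")

# Distinct paths on a case-sensitive filesystem...
print(base_class != decorator)                   # True
# ...but identical once case is folded, as on default macOS/Windows
# filesystems, so one stub file silently overwrites the other.
print(base_class.lower() == decorator.lower())   # True
```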
While the docs build just fine in CI, there are a bunch of warnings being generated. See the logs in http://ci.tvm.ai:8080/job/tvm/job/master/458/execution/node/304/log/ for example.
We do not need to fix all the categories (e.g. the warning about image scale can simply be ignored), but some other warnings may affect the rendered document (in the case of badly formatted embedded rst).
Updated Feb 21, 2020 - Jupyter Notebook
Updated Feb 21, 2020 - Python
Updated Feb 21, 2020 - Jsonnet
I'm working closely with a team from the RAPIDS organization and I would like to port the following from numpy/scipy to cupy:
- numpy.convolve (https://numpy.org/devdocs/reference/generated/numpy.convolve.html)
- numpy.roots (https://numpy.org/devdocs/reference/generated/numpy.roots.html)
- numpy.polyval (https://docs.scipy.org/doc/numpy/reference/generated/numpy.polyval.html)
- [
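As a reference for the expected semantics, here are minimal pure-Python versions of two of these (a sketch only; a real cupy port would wrap CUDA kernels, and numpy.roots additionally needs an eigenvalue solver):

```python
def convolve(a, v):
    # Full discrete linear convolution, matching numpy.convolve(a, v, mode="full")
    out = [0.0] * (len(a) + len(v) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(v):
            out[i + j] += x * y
    return out

def polyval(p, x):
    # Horner's rule, matching numpy.polyval: p holds coefficients from
    # the highest degree down to the constant term
    acc = 0.0
    for c in p:
        acc = acc * x + c
    return acc

print(convolve([1, 2, 3], [0, 1, 0.5]))  # [0.0, 1.0, 2.5, 4.0, 1.5]
print(polyval([1, -3, 2], 3))            # x**2 - 3*x + 2 at x=3 -> 2.0
```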
I receive this error message:
Compiling wgpu v0.4.0 (/Users/da1/Sources/rust/wgpu-rs)
Finished dev [unoptimized + debuginfo] target(s) in 6.35s
Running `target/debug/examples/cube`
2019-11-25 14:56:16.620 cube[16426:135436] -[MTLIGAccelBuffer addDebugMarker:range:]: unrecognized selector sent to instance 0x7fdfe35a14c0
[2019-11-25T13:56:16Z ERROR relevant] Values of this ty
Updated Feb 21, 2020 - Python
Trying to invoke the GPU tutorial on macOS Catalina errors with the following output:
Running pipeline on CPU:
Running pipeline on GPU:
Target: x86-64-osx-avx-avx2-f16c-fma-metal-sse41
Testing GPU correctness:
Error: Metal: cannot allocate system default device.
Abort trap: 6
We would like to forward a particular 'key' column, which is part of the features, to appear alongside the predictions - this is to be able to identify which set of features a particular prediction belongs to. Here is an example of the predictions output using tensorflow.contrib.estimator.multi_class_head:
{"classes": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
"scores": [0.068196
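TensorFlow's tf.contrib.estimator.forward_features is aimed at this use case; conceptually it just carries the key column through to each prediction dict, roughly like this plain-Python sketch (`forward_key` and the sample values are hypothetical, for illustration only):

```python
def forward_key(keys, predictions, key_name="key"):
    # Attach the identifying key column to each prediction dict so a
    # prediction can be traced back to the input row it came from.
    out = []
    for k, pred in zip(keys, predictions):
        merged = dict(pred)
        merged[key_name] = k
        out.append(merged)
    return out

preds = forward_key(
    ["row-1", "row-2"],
    [{"classes": ["0", "1"], "scores": [0.9, 0.1]},
     {"classes": ["0", "1"], "scores": [0.2, 0.8]}],
)
print(preds[0]["key"])  # row-1
```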
The current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly easy kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
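The difference can be illustrated abstractly: instead of joining inputs pairwise (one backend launch and one intermediate allocation per pair), pass all of them to a single call. A plain-Python sketch with lists standing in for device arrays (names are illustrative, not ArrayFire's API):

```python
def join_pairwise(arrays):
    # N-1 separate "kernel calls", each materializing an intermediate result
    out = list(arrays[0])
    for a in arrays[1:]:
        out = out + a
    return out

def join_single(arrays):
    # One call: the backend can size the output up front and copy each
    # input exactly once
    return [x for a in arrays for x in a]

parts = [[1, 2], [3], [4, 5, 6]]
print(join_pairwise(parts) == join_single(parts))  # True
```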
The iloc example in the cudf documentation for 0.12 and 0.13 shows:
>>> df = DataFrame([('a', list(range(20))),
... ('b', list(range(20))),
... ('c', list(range(20)))])
https://rapidsai.github.io/projects/cudf/en/0.12.0/api.html#cudf.core.dataframe.DataFrame.iloc
https://rapidsai.github.io/projects/cudf/en/0.13.0/api.html#cudf.core.dataframe.DataFrame.iloc
which
Pythonic FITS reader
I'm copying over an issue from rapidsai/cudf#2821 by @profjsb. Hopefully this is in scope for DALI.
This is a request for a GPU FITS reader. Such a reader would be a welcome and critical component as the community starts to transition data pipelines from CPU- to GPU-centric workflows.
The common image exchange format in astronomy is FITS (Flexible Image Transport
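For orientation, the FITS header layout is simple enough to show in a few lines: a header is a sequence of 2880-byte blocks made of 80-character ASCII "cards", with the keyword in columns 1-8 and "= " in columns 9-10. A sketch of card parsing only, not a full reader:

```python
def parse_cards(header_bytes):
    # Split the header into 80-character cards and pull out key/value
    # pairs, stopping at the END card; inline comments follow a "/".
    cards = [header_bytes[i:i + 80].decode("ascii")
             for i in range(0, len(header_bytes), 80)]
    values = {}
    for card in cards:
        key = card[:8].strip()
        if key == "END":
            break
        if card[8:10] == "= ":
            values[key] = card[10:].split("/")[0].strip()
    return values

header = (b"SIMPLE  =                    T".ljust(80)
          + b"BITPIX  =                   16".ljust(80)
          + b"END".ljust(80))
print(parse_cards(header))  # {'SIMPLE': 'T', 'BITPIX': '16'}
```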
Updated Feb 15, 2020 - ActionScript
Hi,
I'm trying to understand DeepDetect right now, starting with the platform's Docker container.
It looks great in the pictures, but I'm having a hard time using it right now :)
My problem: the docs seem to skip over important points, like using JupyterLab. All examples show the finished custom masks, but how do I get them?
Is there something missing in the docs?
Example: https://www.deepdetec
Ideally we could remove the init_dist_required flag from deepspeed initialize if we can detect whether it has already been started. This could prevent some types of bugs like the one in #65.
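A sketch of the idea in plain Python (in practice the check would query the distributed backend, e.g. torch.distributed.is_initialized(); the names below are illustrative, not DeepSpeed's actual code):

```python
_dist_initialized = False

def init_distributed_once():
    # Idempotent initialization: detect a previous call instead of making
    # the caller pass an init_dist_required-style flag.
    global _dist_initialized
    if _dist_initialized:
        return False          # backend already up; nothing to do
    # ... real setup (process group creation, etc.) would happen here ...
    _dist_initialized = True
    return True

print(init_distributed_once())  # True  (first call performs setup)
print(init_distributed_once())  # False (subsequent calls are no-ops)
```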


To Reproduce
Run the following from the JupyterLab console: