gpu
Here are 1,653 public repositories matching this topic...
Right now there's no way to set a fixed-color cursor via an escape sequence when switching from an inverted cursor, so this should be supported.
Updated Jun 1, 2020 - Jupyter Notebook
1. Issue or feature description
Documentation on the use of the --gpus flag at https://github.com/NVIDIA/nvidia-docker/wiki#i-have-multiple-gpu-devices-how-can-i-isolate-them-between-my-containers does not quote the parameters, resulting in an error:
docker: Error response from daemon: cannot set both Count and DeviceIDs on device request.
Documentation specifies:
`$ docker run --gpu
What is wrong?
The documentation for kernel constants is very scattered and incomplete. The constants feature is awesome, but it needs more love...
How important is this (1-5)?
√(-2)
Support 4D arrays
Updated May 29, 2020 - JavaScript
Sphinx (2.2.1 or master) produces the following two kinds of warnings in my environment.
duplicate object description
I think cross references to such objects are ambiguous.
autosummary: stub file not found
There are a `chainer.dataset.Converter` base class and a `chainer.dataset.converter` decorator.
Therefore the filesystem has to allow storing `chainer.dataset.Conver
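The snippet above cuts off, but the conflict it describes can be illustrated in a few lines: autosummary writes one stub file per documented object, named after the fully qualified name, so two names differing only in case collide on a case-insensitive filesystem. This is a standalone illustration, not Sphinx code:

```python
# Stub filenames autosummary would generate for the two objects:
class_stub = "chainer.dataset.Converter.rst"      # stub for the base class
decorator_stub = "chainer.dataset.converter.rst"  # stub for the decorator

# Distinct on a case-sensitive filesystem...
assert class_stub != decorator_stub
# ...but identical once case is folded, as on default macOS/Windows volumes.
assert class_stub.lower() == decorator_stub.lower()
```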
As documented in #5392 using the packages for tflite and tensorflow 2.1.0 the test as in the subject line segfaults. It has now been skipped in the testsuite but this needs to be fixed.
This class could be used instead of a cd file (https://catboost.ai/docs/concepts/input-data_column-descfile.html) when creating a Pool from files. The class should have an init function and load and save methods, and the Pool init method should be able to accept an object of this class instead of a cd file during initialization.
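To make the proposal concrete, here is a minimal sketch of such a class. The class name, method signatures, and the tab-separated cd layout used below are assumptions for illustration, not the actual CatBoost API:

```python
# Hypothetical ColumnDescription class: an in-memory mirror of a
# column-description (cd) file, with load/save as the issue requests.
class ColumnDescription:
    def __init__(self, columns=None):
        # columns: list of (index, type, name) tuples, e.g. (0, "Label", "")
        self.columns = list(columns or [])

    def save(self, path):
        # Write one tab-separated line per described column.
        with open(path, "w") as f:
            for index, col_type, name in self.columns:
                f.write(f"{index}\t{col_type}\t{name}\n")

    @classmethod
    def load(cls, path):
        # Parse lines of the form "index<TAB>type[<TAB>name]".
        columns = []
        with open(path) as f:
            for line in f:
                index, col_type, name = (line.rstrip("\n").split("\t") + [""])[:3]
                columns.append((int(index), col_type, name))
        return cls(columns)
```

Pool's init could then read `obj.columns` directly instead of parsing a file path.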
A more consistent and multi-functional global level of verbosity control: I suggest an enhancement that converts print(...) calls across the project to use the Python logging module.
import logging

# Then instead of print() use one of:
logging.info(...)
logging.debug(...)
logging.warning(...)
logging.error(...)
In that way verbosity can be globally
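A minimal sketch of the pattern being suggested, with a placeholder logger name ("myproject" is an assumption, not a name from the project):

```python
import logging

# One logger per project; its level is the single global verbosity knob.
logger = logging.getLogger("myproject")
logger.setLevel(logging.INFO)  # flip to logging.DEBUG for more detail
logging.basicConfig()          # default handler writing to stderr

logger.debug("hidden at INFO level")
logger.info("shown at INFO level")
logger.warning("also shown at INFO level")
```

Changing `setLevel` in one place then silences or enables every message project-wide, which is exactly what scattered print() calls cannot do.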
Updated Jun 4, 2020 - Jupyter Notebook
In my opinion, some people might not be able to contribute to CuPy because they don't have an NVIDIA GPU. But they might not know that we can build a development env on Google Colab (as I did here).
import os
from google.colab import drive
drive.mount('/content/drive')
os.chdir("/content/drive/My Drive/")
!git clone ht-
Updated May 28, 2020 - Python
Panic in gfx-backend-dx12 `write_descriptor_sets()` when calling `wgpu::Device::create_bind_group`
Short info header:
- GFX version: v0.5.0
- OS: Windows 10 x64
- GPU: AMD Radeon(TM) RX Vega 10 Graphics
I'm trying to build and run rawrscope, which calls wgpu, which in turn calls gfx-rs in DX12 mode. During startup, rawrscope calls wgpu::Device::create_bind_group, causing gfx-backend-dx12 to panic from an as
  File "/root/miniconda3/bin/pipeline", line 11, in <module>
    sys.exit(_main())
  File "/root/miniconda3/lib/python3.7/site-packages/cli_pipeline/cli_pipeline.py", line 5734, in _main
    _fire.Fire()
  File "/root/miniconda3/lib/python3.7/site-packages/fire/core.py", line 127, in Fire
    component_trace = _Fire(component, args, context, name)
  Fil
It tells you to get version 3.0 of the SDK, which doesn't have libwrapper.so, so you get an unhelpful failure to find halide_hexagon_remote_load_library (because init_hexagon_runtime doesn't check whether host_lib is non-null). This is hard to debug: host_lib is null not because libhalide_hexagon_host.so isn't found or isn't in the path (it is!) but because a dependent library - libwrapper.so -
We would like to forward a particular 'key' column, which is part of the features, so that it appears alongside the predictions - this makes it possible to identify which set of features a particular prediction belongs to. Here is an example of prediction output using tensorflow.contrib.estimator.multi_class_head:
{"classes": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
"scores": [0.068196
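The requested behaviour can be sketched library-agnostically: carry an identifying "key" field through prediction so each output can be matched back to its input row. All names below (`predict_with_keys`, `key_field`, the toy model) are illustrative assumptions, not TensorFlow API:

```python
# Pair each prediction with the key of the row it came from.
def predict_with_keys(model_predict, rows, key_field="key"):
    """Yield (key, prediction) pairs; model_predict maps features -> prediction."""
    for row in rows:
        # Strip the key out of the features before calling the model...
        features = {k: v for k, v in row.items() if k != key_field}
        # ...then re-attach it to the corresponding prediction.
        yield row[key_field], model_predict(features)

rows = [{"key": "a1", "x": 1.0}, {"key": "b2", "x": 2.0}]
results = dict(predict_with_keys(lambda f: f["x"] * 10, rows))
# results == {"a1": 10.0, "b2": 20.0}
```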
Is your feature request related to a problem? Please describe.
According to the Arrow spec:
Bitmaps are to be initialized to be all unset at allocation time (this includes padding).
This would imply that bits outside the range [0, size) should always be zero. However, in cuDF/libcudf, we take a more conservative approach and say that bits outside [0,size) are undefined in order to a
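The Arrow convention described above can be shown in a few lines: the bitmap buffer is allocated in padded bytes and zero-initialized, so every bit outside [0, size) — including padding — starts unset. This is a standalone sketch of the spec's rule, not cuDF or Arrow code; the 64-byte padding and helper name are assumptions:

```python
# Build a validity bitmap the way the Arrow spec describes:
# zero-initialized padded allocation, LSB bit order.
def make_validity_bitmap(valid_flags, pad_to=64):
    size = len(valid_flags)
    nbytes = (size + 7) // 8
    padded = ((nbytes + pad_to - 1) // pad_to) * pad_to
    buf = bytearray(padded)               # zero-initialized: padding bits unset
    for i, valid in enumerate(valid_flags):
        if valid:
            buf[i // 8] |= 1 << (i % 8)   # bit i of the bitmap, LSB first
    return bytes(buf)

bitmap = make_validity_bitmap([True, False, True, True, True])  # size = 5
# Bits 0, 2, 3, 4 are set; bits 5..7 and every padding byte stay zero.
```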
So, we have some existing code over at the PyTorch Ignite project that is actually pretty general and might be really handy to have in DALI: pytorch/ignite#766
In core PyTorch, you can chain transformations and FileIO together easily with the Compose() operation: https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.Compose
Something like
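The snippet above cuts off, but the linked Compose pattern is simple enough to sketch without torch: each transform's output feeds the next. This is a pure-Python illustration of the torchvision idiom, not DALI or torchvision code:

```python
# Minimal Compose: chain callables left to right.
class Compose:
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, x):
        for t in self.transforms:
            x = t(x)
        return x

# Toy pipeline on strings instead of image tensors.
pipeline = Compose([str.strip, str.lower, lambda s: s.replace(" ", "_")])
result = pipeline("  Hello World  ")  # -> "hello_world"
```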
DeepSpeed's data loader will use DistributedSampler by default unless another is provided:
If DeepSpeed is configured with model parallelism, or is called from a library with a sub-group of the world's processes, the default behavior of DistributedSampler is incorrect
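Why the default is wrong can be sketched with plain index arithmetic: processes that hold shards of the same model replica must see the same data shard, so the sampler needs the data-parallel rank and world size, not the global ones. All helper names here are illustrative, not DeepSpeed API:

```python
# Round-robin sharding, like DistributedSampler without shuffling.
def shard_indices(num_samples, rank, num_replicas):
    return list(range(rank, num_samples, num_replicas))

world_size, mp_size = 4, 2        # 4 processes, model split across 2
dp_size = world_size // mp_size   # -> 2 data-parallel replicas

# Global ranks 0 and 1 hold halves of the SAME model replica, so they map
# to data-parallel rank 0 and must receive identical indices.
dp_rank = lambda global_rank: global_rank // mp_size
shards = [shard_indices(8, dp_rank(r), dp_size) for r in range(world_size)]
# Default DistributedSampler would instead use the global rank (0..3) and
# world size 4, giving each process a distinct, wrong quarter of the data.
```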
There are issues reported lately:
soimy/msdf-bmfont-xml#8
BowlerHatLLC/feathersui-starling#1633
When the TextField is set to autoSize = VERTICAL, the TextField's height is cropped to bodyHeight and the lineGap (shoulder) is ignored.

In this case font.fnt metrics are:
fontSize: 42- `li
Hi,
I'm trying to understand DeepDetect right now, starting with the Platform Docker container.
It looks great in the pictures, but I'm having a hard time using it :)
My problem: the docs seem to skip over important points, like using JupyterLab. All the examples show the finished custom masks, but how do I get them?
Is there something missing in the docs?
Example: https://www.deepdetec
cc @ezyang @anjali411 @dylanbespalko