CUDA
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
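As a minimal sketch of the programming model: a `__global__` kernel runs once per GPU thread, and the host launches a grid of thread blocks to cover the data. The snippet below is a standard vector-add example (it assumes the CUDA Toolkit's `nvcc` compiler and an NVIDIA GPU are available).

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread adds one element; blocks of threads execute in parallel on the GPU.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate and initialize host (CPU) data.
    float* ha = (float*)malloc(bytes);
    float* hb = (float*)malloc(bytes);
    float* hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check one element (1.0 + 2.0 = 3.0).
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Compile with `nvcc vecadd.cu -o vecadd`; the block/thread split is the core idea — the same kernel code scales across however many streaming multiprocessors the GPU has.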
Here are 4,539 public repositories matching this topic...
Build and run Docker containers leveraging NVIDIA GPUs
Updated Oct 24, 2023
Instant neural graphics primitives: lightning fast NeRF and more
Updated Nov 19, 2023 - Cuda
kaldi-asr/kaldi is the official location of the Kaldi project.
Updated Nov 13, 2023 - Shell
Open3D: A Modern Library for 3D Data Processing
Updated Dec 1, 2023 - C++
A fast, scalable, high-performance Gradient Boosting on Decision Trees library for ranking, classification, regression, and other machine learning tasks, with APIs for Python, R, Java, and C++. Supports computation on CPU and GPU.
Updated Dec 1, 2023 - Python
Containers for machine learning
Updated Nov 30, 2023 - Python
Go package for computer vision using OpenCV 4 and beyond.
Updated Dec 1, 2023 - Go
A flexible framework of neural networks for deep learning
Updated Aug 28, 2023 - Python
OneFlow is a deep learning framework designed to be user-friendly, scalable, and efficient.
Updated Dec 1, 2023 - C++
[ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl
Updated Oct 9, 2023 - C++
Samples for CUDA developers that demonstrate features in the CUDA Toolkit
Updated Nov 10, 2023 - C
Tengine is a lightweight, high-performance, modular inference engine for embedded devices.
Updated Oct 30, 2023 - C++
ArrayFire: a general-purpose GPU library.
Updated Nov 5, 2023 - C++
A PyTorch Library for Accelerating 3D Deep Learning Research
Updated Dec 1, 2023 - Python
cuML - RAPIDS Machine Learning Library
Updated Dec 1, 2023 - C++
Created by NVIDIA
Released June 23, 2007
- Followers: 176
- Website: developer.nvidia.com/cuda-zone
- Wikipedia