Repositories
- MinkowskiEngine: an auto-diff neural network library for high-dimensional sparse tensors
- triton-inference-server: the Triton Inference Server, a cloud inference solution optimized for NVIDIA GPUs
- NVTabular: a feature engineering and preprocessing library for tabular data, designed to quickly and easily manipulate the terabyte-scale datasets used to train deep-learning-based recommender systems
- aistore: AIStore, scalable storage for AI applications
- spark-rapids: the Spark RAPIDS plugin, which accelerates Apache Spark with GPUs
- apex: a PyTorch extension with tools for easy mixed-precision and distributed training in PyTorch
- PyProf: a GPU performance profiling tool for PyTorch models
- deepops: tools for building GPU clusters
- NeMo: a toolkit for conversational AI
- NVTX: the NVIDIA® Tools Extension SDK, a C-based API for annotating events, code ranges, and resources in your applications
- DeepLearningExamples: deep learning examples
- egl-wayland: the EGLStream-based Wayland external platform
- nvtx-plugins: Python bindings for NVTX
- nvidia-container-runtime: the NVIDIA container runtime
- libnvidia-container: the NVIDIA container runtime library
- jitify: a single-header C++ library for simplifying the use of CUDA Runtime Compilation (NVRTC)
- DALI: a library containing both highly optimized building blocks and an execution engine for data pre-processing in deep learning applications
- nvcomp: a library for fast lossless compression and decompression on the GPU
- TRTorch: a PyTorch/TorchScript compiler for NVIDIA GPUs using TensorRT
- gdrcopy: a fast GPU memory copy library based on NVIDIA GPUDirect RDMA technology
- spark-xgboost-examples: example applications of GPU-accelerated XGBoost on Spark
- pyxis: a container plugin for the Slurm Workload Manager
- TensorRT: a C++ library for high-performance inference on NVIDIA GPUs and deep learning accelerators
- libglvnd: the GL Vendor-Neutral Dispatch library
- ais-etl: deploys custom ETL containers on AIStore, running user-defined extract-transform-load stages in parallel, on the fly or offline, local to the user's data
- jetson-gpio: a Python library that enables the use of Jetson's GPIOs
- nvidia-container-toolkit: build and run containers leveraging NVIDIA GPUs