Repositories
- triton-inference-server: The Triton Inference Server provides a cloud inferencing solution optimized for NVIDIA GPUs.
- NeMo: A toolkit for conversational AI.
- waveglow: A flow-based generative network for speech synthesis.
- VideoProcessingFramework: A set of Python bindings to C++ libraries that provide full hardware acceleration for video decoding, encoding, and GPU-accelerated color-space and pixel-format conversions.
- aistore: AIStore, scalable storage for AI applications.
- deepops: Tools for building GPU clusters.
- NVTX: The NVIDIA® Tools Extension SDK (NVTX), a C-based Application Programming Interface (API) for annotating events, code ranges, and resources in your applications.
- DALI: A library containing both highly optimized building blocks and an execution engine for data pre-processing in deep learning applications.
- gpu-monitoring-tools: Tools for monitoring NVIDIA GPUs on Linux.
- Megatron-LM: Ongoing research on training transformer language models at scale, including BERT and GPT-2.
- open-gpu-doc: Documentation of NVIDIA chip/hardware interfaces.
- TRTorch: Ahead-of-time compilation of TorchScript / PyTorch JIT for NVIDIA GPUs.
- PyProf: A GPU performance profiling tool for PyTorch models.
- apex: A PyTorch extension providing tools for easy mixed-precision and distributed training in PyTorch.
- flowtron: An auto-regressive flow-based generative network for text-to-speech synthesis.
- DeepLearningExamples: Deep learning examples.
- nvidia-settings: NVIDIA driver control panel.
- tensorflow-determinism: Tracking, debugging, and patching non-determinism in TensorFlow.
- gbm-bench: A benchmark measuring the performance of popular gradient-boosting frameworks on common ML datasets.
- yum-packaging-precompiled-kmod: NVIDIA precompiled kernel module packaging for RHEL.
- gvdb-voxels: Sparse volume compute and rendering on NVIDIA GPUs.
- TensorRT: A C++ library for high-performance inference on NVIDIA GPUs and deep learning accelerators.
- tacotron2: Tacotron 2, a PyTorch implementation with faster-than-real-time inference.
- DL4AGX: Deep learning tools and applications for NVIDIA AGX platforms.
- ai-assisted-annotation-client: Client-side integration example source code and libraries for the AI-Assisted Annotation SDK.
- nvidia-container-runtime: NVIDIA container runtime.
- gdrcopy: A fast GPU memory copy library based on NVIDIA GPUDirect RDMA technology.