# cublas
Here are 42 public repositories matching this topic...
Deep Learning library using GPU (CUDA/cuBLAS)
Updated Aug 22, 2020 - Elixir
Algorithms implemented in CUDA + resources about GPGPU
Updated Jul 7, 2018 - Cuda
Code for benchmarking GPU performance based on cublasSgemm and cublasHgemm
Updated Jul 7, 2017 - Cuda
Lab exercise for the Parallel Processing course at NTUA, covering CUDA programming
Updated Mar 3, 2020 - Cuda
Escoin: Efficient Sparse Convolutional Neural Network Inference on GPUs
Topics: machine-learning, caffe, gpu, cuda, inference, cublas, convolutional-neural-networks, sparse-matrix, cusparse
Updated Feb 28, 2019 - C++
Updated Feb 18, 2019 - Common Lisp
HSD: Hierarchical Spherical Deformation for Cortical Surface Registration
Updated Jul 7, 2020 - C++
Generalized Orthogonal Least-Squares in CUDA
Updated Apr 21, 2018 - Cuda
Matrix multiplication example performed with OpenMP, OpenACC, BLAS, cuBLAS, and CUDA
Updated Jan 6, 2020 - C++
This repository targets performance optimization of the OpenCL GEMM function. It compares several libraries, clBLAS, CLBlast, MIOpenGemm, Intel MKL (CPU), and cuBLAS (CUDA), across different matrix sizes, vendors' hardware, and operating systems. Out-of-the-box x86_64 binaries are provided for MSVC, MinGW, and Linux (CentOS).
Updated Mar 28, 2019 - C
GPGPU Inverse Distance Weighting using matrix vector multiplication
Updated Dec 5, 2017 - Cuda
A C++ custom Matrix class for fast GPU (or CPU) matrix/vector computations with minimal code, leveraging cuBLAS where applicable.
Updated Jun 24, 2017 - C++
Level-3 BLAS matrix multiplication using both cuBLAS and MKL.
Updated Jul 20, 2018 - Cuda