Parsing gigabytes of JSON per second
Updated Jun 7, 2023 - C++
TypeScript ORM that feels like writing SQL
MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.
Intel® Nervana™ reference deep learning framework committed to best performance on all hardware
Performance-portable, length-agnostic SIMD with runtime dispatch
The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies.
C++ image processing and machine learning library using SIMD: SSE, AVX, AVX-512, and AMX for x86/x64, VMX (Altivec) and VSX (Power7) for PowerPC, and NEON for ARM.
C++ wrappers for SIMD intrinsics and parallelized, optimized mathematical functions (SSE, AVX, AVX-512, NEON, SVE)
SIMD Vector Classes for C++
Roaring bitmaps in C (and C++), with SIMD (AVX2, AVX-512 and NEON) optimizations
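For context, the core Roaring idea (shown here as a toy sketch, not CRoaring's actual API or SIMD code) is to split each 32-bit value by its high 16 bits and store each chunk's low 16 bits either as a sorted array while sparse or as a 65536-bit bitset once dense. The class and member names below are hypothetical; the 4096-element conversion threshold matches the point where a bitset becomes smaller than an array.

```cpp
#include <algorithm>
#include <bitset>
#include <cstdint>
#include <map>
#include <vector>

class ToyRoaring {  // hypothetical name, illustration only
    struct Container {
        bool dense = false;
        std::vector<uint16_t> array;      // sorted low 16-bit values (sparse)
        std::bitset<65536> bits;          // one bit per value (dense)
        static constexpr std::size_t kThreshold = 4096;

        void add(uint16_t low) {
            if (dense) { bits.set(low); return; }
            auto it = std::lower_bound(array.begin(), array.end(), low);
            if (it != array.end() && *it == low) return;  // already present
            array.insert(it, low);
            if (array.size() > kThreshold) {  // promote to bitset container
                for (uint16_t v : array) bits.set(v);
                array.clear();
                dense = true;
            }
        }
        bool contains(uint16_t low) const {
            return dense ? bits.test(low)
                         : std::binary_search(array.begin(), array.end(), low);
        }
    };
    std::map<uint16_t, Container> chunks_;  // keyed by the high 16 bits
public:
    void add(uint32_t v) { chunks_[uint16_t(v >> 16)].add(uint16_t(v)); }
    bool contains(uint32_t v) const {
        auto it = chunks_.find(uint16_t(v >> 16));
        return it != chunks_.end() && it->second.contains(uint16_t(v));
    }
};
```

The per-container layout is what makes the SIMD optimizations in the real library possible: set operations on two sorted arrays or two bitsets vectorize well with AVX2, AVX-512, or NEON.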
Fast and exact implementation of the C++ from_chars functions for float and double types: 4x to 10x faster than strtod, part of GCC 12 and WebKit/Safari
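The interface being accelerated is C++17's `std::from_chars` from `<charconv>`; a minimal usage sketch follows (the `parse_or` helper is hypothetical, and the 4x-10x speedup over `strtod` is the repository's own claim, not something this sketch measures).

```cpp
#include <charconv>
#include <cstring>
#include <system_error>

// Parse a double from a NUL-terminated string; return `fallback` on failure.
// std::from_chars is locale-independent and never allocates.
double parse_or(const char* s, double fallback) {
    double value = fallback;
    auto result = std::from_chars(s, s + std::strlen(s), value);
    return result.ec == std::errc() ? value : fallback;
}
```

Note that floating-point `from_chars` support requires a recent standard library (GCC 11+, MSVC 2017+); the fast_float library provides the same interface on toolchains that lack it.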
Fast inference engine for Transformer models
A translator from Intel SSE intrinsics to their Arm/AArch64 NEON implementations
Native Go version of HighwayHash with optimized assembly implementations on Intel and ARM. Able to process over 10 GB/sec on a single core on Intel CPUs - https://en.wikipedia.org/wiki/HighwayHash