Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP (Python; updated May 11, 2021)
Up to 200x faster dot products & similarity metrics for Python, Rust, C, JS, and Swift, supporting f64, f32, f16 (real & complex), i8, and bit vectors, using SIMD on AVX2, AVX-512, NEON, SVE, & SVE2 📐
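For reference, the cosine similarity such libraries accelerate reduces to one dot product and two norms. A minimal scalar sketch in Python (illustrative only, not SimSIMD's actual API):

```python
import math

def cosine_similarity(a, b):
    # Reference (scalar) implementation of the metric SIMD libraries speed up:
    # dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Optimized libraries compute the same quantity, but process many lanes per instruction and avoid the Python-level loop overhead.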
Half-precision floating point types f16 and bf16 for Rust.
Stage 3 IEEE 754 half-precision floating-point ponyfill
float16 provides IEEE 754 half-precision format (binary16) with correct conversions to/from float32
🎯 Accumulated Gradients for TensorFlow 2
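Gradient accumulation sums gradients over several micro-batches and applies a single optimizer step, simulating a larger effective batch size. A framework-free sketch of the idea (the function name and scalar weights are illustrative, not this library's API):

```python
def sgd_with_accumulation(w, micro_batch_grads, lr=0.1, accum_steps=4):
    # Accumulate gradients over `accum_steps` micro-batches, then apply
    # one SGD update using their average.
    acc = 0.0
    for step, g in enumerate(micro_batch_grads, start=1):
        acc += g
        if step % accum_steps == 0:
            w -= lr * (acc / accum_steps)  # one optimizer step per window
            acc = 0.0
    return w
```

In TensorFlow 2 the same pattern applies to per-variable gradient tensors rather than scalars, with `optimizer.apply_gradients` called once per accumulation window.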
Half-float library for C and for the z80
C++20 implementation of a 16-bit floating-point type mimicking most IEEE 754 behavior; single-file and header-only.
TFLite applications: optimized .tflite models (lightweight and low-latency) and code to run them directly on your microcontroller!
The main purpose of this library is to provide functions for converting to and from half-precision (16-bit) floating-point numbers. It also provides basic arithmetic and comparison of half floats.
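To illustrate what such conversions do (independently of any library above), Python's standard `struct` module supports the IEEE 754 binary16 format via the `'e'` format code:

```python
import struct

def float_to_half_bits(x: float) -> int:
    # Round a Python float (binary64) to binary16 and return the raw
    # 16-bit pattern: 1 sign bit, 5 exponent bits, 10 mantissa bits.
    return struct.unpack('<H', struct.pack('<e', x))[0]

def half_bits_to_float(bits: int) -> float:
    # Reinterpret a raw 16-bit pattern as binary16 and widen it exactly.
    return struct.unpack('<e', struct.pack('<H', bits))[0]
```

With only 10 mantissa bits, most decimals are inexact after conversion: round-tripping 0.1 through binary16 yields 0.0999755859375, not 0.1.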
Difference between one and the smallest value greater than one that can be represented as a half-precision floating-point number.
Square root of half-precision floating-point epsilon.
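For binary16 that epsilon is 2⁻¹⁰ = 0.0009765625, and its square root is 2⁻⁵ = 0.03125. A quick check using the stdlib `struct` `'e'` format to round through half precision (a sketch, not any of the above packages' APIs):

```python
import math
import struct

def to_half(x: float) -> float:
    # Round-trip a value through IEEE 754 binary16.
    return struct.unpack('<e', struct.pack('<e', x))[0]

HALF_EPS = 2.0 ** -10  # gap between 1.0 and the next binary16 value

half_eps_plus = to_half(1.0 + HALF_EPS)       # exactly representable: > 1.0
half_sub_eps = to_half(1.0 + HALF_EPS / 2)    # rounds back to 1.0 (ties to even)
sqrt_eps = math.sqrt(HALF_EPS)                # 2 ** -5 = 0.03125
```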