Support for 16-Bit Floats #27
Replies: 3 comments 5 replies
---
This feels so wrong even though it's working... ah... so, by tweaking this lovely set of operators (which I imagine I implemented in about the worst way possible):

```cpp
operator float() const;
bool operator < (const int& i);
float16 operator / (const int& i);
float16 operator / (const long int& li);
float16 operator / (const unsigned int& ui);
float16 operator * (const double& d);
```

plus

```cpp
#define DFLOAT float16
#define DFLOAT_LEN 4
```

(and some castings), I was able to run a NN with it.
---
Int quantization solves this in a better way IMHO... so I'm not 100% sure if I should continue with this...
---
So recently I started playing around with TensorFlow again, and realised there's `tf.keras.backend.set_floatx('float16')`. So why not?