8-bit Numerical Formats for Deep Neural Networks

Badreddine Noune, Philip Jones, Daniel Justus, Dominic Masters, Carlo Luschi
2022

Abstract

Given the current trend of increasing size and complexity of machine learning architectures, it has become of critical importance to identify new approaches to improve the computational efficiency of model training. In this context, we address the advantages of floating-point over fixed-point representation, and present an in-depth study on the use of 8-bit floating-point number formats for activations, weights, and gradients for both training and inference. We explore the effect of different bit-widths for exponents and significands, and of different exponent biases. The experimental results demonstrate that a suitable choice of these low-precision formats enables faster training and reduced power consumption without any degradation in accuracy for a range of deep learning models for image classification and language processing.
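As a concrete illustration of how the exponent width, significand width, and exponent bias jointly determine which values an 8-bit format can represent, the following Python sketch decodes a generic sign-exponent-significand bit pattern. It is a minimal sketch, not the paper's implementation: the function name decode_fp8 and the default parameter values are illustrative assumptions, and the handling of special encodings (infinities, NaNs), which differs between format variants, is omitted.

    def decode_fp8(bits: int, exp_bits: int = 4, man_bits: int = 3, bias: int = 7) -> float:
        """Decode an 8-bit sign/exponent/significand pattern into a Python float.

        Illustrative only: special encodings (Inf/NaN) vary between format
        variants and are not handled here.
        """
        assert exp_bits + man_bits == 7, "1 sign bit + exponent + significand = 8 bits"
        sign = -1.0 if (bits >> 7) & 1 else 1.0
        exponent = (bits >> man_bits) & ((1 << exp_bits) - 1)
        significand = bits & ((1 << man_bits) - 1)
        if exponent == 0:
            # Subnormal range: no implicit leading 1, fixed exponent of (1 - bias).
            return sign * (significand / (1 << man_bits)) * 2.0 ** (1 - bias)
        # Normal range: implicit leading 1, exponent shifted by the bias.
        return sign * (1.0 + significand / (1 << man_bits)) * 2.0 ** (exponent - bias)

    # With conventional biases, the bit pattern of 1.0 differs between formats:
    print(decode_fp8(0b00111000, exp_bits=4, man_bits=3, bias=7))   # E4M3 -> 1.0
    print(decode_fp8(0b00111100, exp_bits=5, man_bits=2, bias=15))  # E5M2 -> 1.0

Changing the bias argument shifts the entire representable range up or down in magnitude, which is the knob the abstract refers to when discussing different exponent biases.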

Code References (2 repositories, 6 references)

llvm/llvm-project (1 file)
  llvm/include/llvm/ADT/APFloat.h (2 references)
    L204 // and bit layout S1E5M2 described in https://arxiv.org/abs/2206.02915,
    L219 // and bit layout S1E4M3 described in https://arxiv.org/abs/2206.02915,

pytorch/pytorch (3 files; a usage sketch of the listed dtypes follows the listing)
  docs/source/tensor_attributes.rst (2 references)
    L34 ``torch.float8_e4m3fnuz`` [shell]_, [1]_ 8-bit floating point, S-E-M 1-4-3, from https://arxiv.org/pdf/2206.02915
    L35 ``torch.float8_e5m2fnuz`` [shell]_, [1]_ 8-bit floating point, S-E-M 1-5-2, from https://arxiv.org/pdf/2206.02915
  torch/headeronly/util/Float8_e4m3fnuz.h (1 reference)
    L17 /// Implementation based on the paper https://arxiv.org/pdf/2206.02915.pdf and
  torch/headeronly/util/Float8_e5m2fnuz.h (1 reference)
    L17 /// Implementation based on the paper https://arxiv.org/pdf/2206.02915.pdf and
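The torch.float8_e4m3fnuz and torch.float8_e5m2fnuz dtypes referenced above can be exercised directly. The sketch below, assuming a recent PyTorch build that ships these dtypes and supports casting to them via Tensor.to, simply round-trips a tensor through the 8-bit format to make the quantization effect visible; the variable names are illustrative. Operator support for float8 tensors is limited, so values are cast back to float32 before printing.

    import torch

    # Round-trip a small tensor through the fnuz variant of the S1E4M3 format.
    x = torch.linspace(-2.0, 2.0, steps=9)
    x8 = x.to(torch.float8_e4m3fnuz)   # cast down: values are rounded to the 8-bit grid
    roundtrip = x8.to(torch.float32)   # cast back up for inspection

    print(x)
    print(roundtrip)                   # shows the rounding introduced by the 8-bit format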