FP8 Formats for Deep Learning

Paulius Micikevicius, Dusan Stosic, Neil Burgess, Marius Cornea, Pradeep Dubey, Richard Grisenthwaite, Sangwon Ha, Alexander Heinecke, Patrick Judd, John Kamalu, Naveen Mellempudi, Stuart Oberman, Mohammad Shoeybi, Michael Siu, Hao Wu
2022
169 citations
10 references

Abstract

FP8 is a natural progression for accelerating deep learning training and inference beyond the 16-bit formats common in modern processors. In this paper we propose an 8-bit floating point (FP8) binary interchange format consisting of two encodings: E4M3 (4-bit exponent and 3-bit mantissa) and E5M2 (5-bit exponent and 2-bit mantissa). While E5M2 follows IEEE 754 conventions for the representation of special values, E4M3 extends its dynamic range by not representing infinities and reserving only one mantissa bit pattern for NaNs. We demonstrate the efficacy of the FP8 formats on a variety of image and language tasks, effectively matching the result quality achieved by 16-bit training sessions. Our study covers the main modern neural network architectures (CNNs, RNNs, and Transformer-based models), leaving all hyperparameters unchanged from the 16-bit baseline training sessions. Our training experiments include large language models with up to 175B parameters. We also examine FP8 post-training quantization of language models trained using 16-bit formats that resisted fixed-point int8 quantization.
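To illustrate the two encodings, below is a minimal decoding sketch (not the paper's reference implementation): it interprets a raw byte as E4M3 or E5M2 using the exponent/mantissa split and the special-value rules described in the abstract. The exponent biases (7 for E4M3, 15 for E5M2) and the subnormal handling are assumed to follow standard IEEE-style conventions for those field widths.

def decode_fp8(byte: int, fmt: str = "e4m3") -> float:
    """Decode one FP8 byte; fmt is 'e4m3' or 'e5m2'. Illustrative sketch only."""
    if fmt == "e4m3":
        exp_bits, man_bits, bias = 4, 3, 7
    elif fmt == "e5m2":
        exp_bits, man_bits, bias = 5, 2, 15
    else:
        raise ValueError(fmt)

    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man = byte & ((1 << man_bits) - 1)
    exp_max = (1 << exp_bits) - 1

    if fmt == "e5m2" and exp == exp_max:
        # IEEE 754 conventions: all-ones exponent encodes infinities and NaNs.
        return sign * float("inf") if man == 0 else float("nan")
    if fmt == "e4m3" and exp == exp_max and man == (1 << man_bits) - 1:
        # E4M3 has no infinities; only the all-ones bit pattern is NaN.
        return float("nan")

    if exp == 0:
        # Subnormal: no implicit leading 1, minimum exponent.
        return sign * (man / (1 << man_bits)) * 2.0 ** (1 - bias)
    # Normal: implicit leading 1.
    return sign * (1.0 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

# Largest finite magnitudes implied by the two encodings:
assert decode_fp8(0b0_1111_110, "e4m3") == 448.0    # E4M3 max normal
assert decode_fp8(0b0_11110_11, "e5m2") == 57344.0  # E5M2 max normal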

2 repositories
6 references

Code References

▶ llvm/llvm-project
1 file
▶ llvm/include/llvm/ADT/APFloat.h
2
L201 // layout S1E5M2 as described in https://arxiv.org/abs/2209.05433.
L214 // bit layout S1E4M3 as described in https://arxiv.org/abs/2209.05433.
▶ pytorch/pytorch
3 files
▶ docs/source/tensor_attributes.rst
2
L32 ``torch.float8_e4m3fn`` [shell]_, [1]_ 8-bit floating point, S-E-M 1-4-3, from https://arxiv.org/abs/2209.05433
L33 ``torch.float8_e5m2`` [shell]_ 8-bit floating point, S-E-M 1-5-2, from https://arxiv.org/abs/2209.05433
▶ torch/headeronly/util/Float8_e4m3fn.h
1
L14 /// Implementation based on the paper https://arxiv.org/pdf/2209.05433.pdf
▶ torch/headeronly/util/Float8_e5m2.h
1
L14 /// Implementation based on the paper https://arxiv.org/pdf/2209.05433.pdf
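The PyTorch dtypes listed above can be exercised directly. A small usage sketch, assuming a PyTorch build that ships torch.float8_e4m3fn and torch.float8_e5m2, round-trips a tensor through each encoding to show the quantization it introduces:

import torch

# Values chosen to stay within E4M3's finite range (max normal 448).
x = torch.tensor([0.1, 1.0, 3.14159, 240.0, 448.0], dtype=torch.float32)

# Round-trip through each FP8 encoding and back to float32; the difference
# from x shows the rounding error of the 3-bit vs 2-bit mantissa.
for dtype in (torch.float8_e4m3fn, torch.float8_e5m2):
    y = x.to(dtype).to(torch.float32)
    print(dtype, y.tolist())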