Half-precision floating-point format

In computing, half precision is a binary floating-point computer number format that occupies 16 bits (two bytes in modern computers) in computer memory.

It can express values in the range ±65,504, with the smallest positive (subnormal) value being about 0.0000000596046 (2⁻²⁴).

In the IEEE 754-2008 standard, the 16-bit base-2 format is referred to as binary16. It is intended for storage of floating-point values in applications where higher precision is not essential for performing arithmetic computations.

Although implementations of IEEE half-precision floating point are relatively new, several earlier 16-bit floating-point formats have existed, including that of Hitachi's HD61810 DSP[1] of 1982, Scott's WIF[2] and the 3dfx Voodoo Graphics processor.[3]

Nvidia and Microsoft defined the half datatype in the Cg language, released in early 2002, and implemented it in silicon in the GeForce FX, released in late 2002.[4] ILM was searching for an image format that could handle a wide dynamic range, but without the hard drive and memory cost of floating-point representations that are commonly used for floating-point computation (single and double precision).[5] The hardware-accelerated programmable shading group led by John Airey at SGI (Silicon Graphics) invented the s10e5 data type in 1997 as part of the 'bali' design effort. This is described in a SIGGRAPH 2000 paper[6] (see section 4.3) and further documented in US patent 7518615.[7]

This format is used in several computer graphics environments including MATLAB, OpenEXR, JPEG XR, GIMP, OpenGL, Cg, Direct3D, and D3DX. The advantage over 8-bit or 16-bit binary integers is that the increased dynamic range allows for more detail to be preserved in highlights and shadows for images. The advantage over 32-bit single-precision binary formats is that it requires half the storage and bandwidth (at the expense of precision and range).[5]

The F16C extension allows x86 processors to convert half-precision floats to and from single-precision floats.
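
In software, the same narrowing and widening conversions can be performed without special instructions; the following sketch uses NumPy's float16 type for illustration (whether the F16C instructions are used underneath depends on the library build and the processor):

    import numpy as np

    # Round-trip a single-precision value through half precision.
    x32 = np.float32(0.1)
    x16 = np.float16(x32)    # narrowing conversion; may round
    back = np.float32(x16)   # widening conversion is exact

    print(x16, back)         # the half-precision value and its exact float32 expansion
    print(abs(back - x32))   # rounding error introduced by the narrowing step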

Depending on the computer, half precision can be over an order of magnitude faster than double precision, e.g. 37 PFLOPS for double precision vs. 550 "AI-PFLOPS (Half Precision)".[8]

IEEE 754 half-precision binary floating-point format: binary16

The IEEE 754 standard specifies a binary16 as having the following format:

  • Sign bit: 1 bit
  • Exponent width: 5 bits
  • Significand precision: 11 bits (10 explicitly stored)

The format is laid out with the sign bit first, followed by the 5 exponent bits and then the 10 stored significand bits.

The format is assumed to have an implicit lead bit with value 1 unless the exponent field is stored with all zeros. Thus only 10 bits of the significand appear in the memory format but the total precision is 11 bits. In IEEE 754 parlance, there are 10 bits of significand, but there are 11 bits of significand precision (log₁₀(2¹¹) ≈ 3.311 decimal digits, or 4 digits ± slightly less than 5 units in the last place).
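
The effect of the implicit bit can be observed directly: increments of 2⁻¹⁰ above 1 survive conversion to half precision, while increments of 2⁻¹¹ do not. A small illustrative check using NumPy's float16 type:

    import numpy as np

    one = np.float16(1.0)
    print(np.float16(1 + 2**-10) > one)   # True: 1 + 2^-10 is representable (11th significand bit)
    print(np.float16(1 + 2**-11) == one)  # True: 1 + 2^-11 rounds back to 1
    print(np.finfo(np.float16).eps)       # 0.000977, i.e. 2^-10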

Exponent encoding

The half-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 15 (also known as the exponent bias in the IEEE 754 standard).

  • Emin = 00001₂ − 01111₂ = −14
  • Emax = 11110₂ − 01111₂ = 15
  • Exponent bias = 01111₂ = 15

Thus, as defined by the offset binary representation, in order to get the true exponent the offset of 15 has to be subtracted from the stored exponent.

The stored exponents 00000₂ and 11111₂ are interpreted specially.

Exponent              Significand = zero   Significand ≠ zero        Equation
00000₂                zero, −0             subnormal numbers         (−1)^signbit × 2⁻¹⁴ × 0.significantbits₂
00001₂, ..., 11110₂   normalized value     normalized value          (−1)^signbit × 2^(exponent−15) × 1.significantbits₂
11111₂                ±infinity            NaN (quiet, signalling)

The minimum strictly positive (subnormal) value is 2⁻²⁴ ≈ 5.96 × 10⁻⁸. The minimum positive normal value is 2⁻¹⁴ ≈ 6.10 × 10⁻⁵. The maximum representable value is (2 − 2⁻¹⁰) × 2¹⁵ = 65504.
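
The three cases above translate directly into a decoder. The following sketch decodes a 16-bit pattern into its value; it is written for clarity, not the table-driven or hardware conversion that real libraries use:

    def decode_binary16(bits: int) -> float:
        """Decode a 16-bit integer as an IEEE 754 binary16 value."""
        sign = (bits >> 15) & 0x1
        exponent = (bits >> 10) & 0x1F      # 5-bit stored exponent
        fraction = bits & 0x3FF             # 10 stored significand bits

        if exponent == 0:                   # zero or subnormal: no implicit 1
            value = fraction / 2**10 * 2**-14
        elif exponent == 0x1F:              # all ones: infinity or NaN
            value = float('inf') if fraction == 0 else float('nan')
        else:                               # normalized: implicit leading 1
            value = (1 + fraction / 2**10) * 2**(exponent - 15)
        return -value if sign else value

    # The extremes quoted above:
    print(decode_binary16(0x0001))  # ~5.96e-08, smallest positive subnormal
    print(decode_binary16(0x0400))  # ~6.10e-05, smallest positive normal
    print(decode_binary16(0x7BFF))  # 65504.0, largest finite value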

Half precision examples

These examples are given in bit representation of the floating-point value. This includes the sign bit, (biased) exponent, and significand.

0 00000 0000000001₂ = 0001₁₆ = 2⁻²⁴ ≈ 0.000000059604645
                              (smallest positive subnormal number)
0 00000 1111111111₂ = 03ff₁₆ = 2⁻¹⁴ × (1023/1024) ≈ 0.000060975552
                              (largest subnormal number)
0 00001 0000000000₂ = 0400₁₆ = 2⁻¹⁴ ≈ 0.000061035156
                              (smallest positive normal number)
0 11110 1111111111₂ = 7bff₁₆ = (2 − 2⁻¹⁰) × 2¹⁵ = 65504
                              (largest normal number)
0 01110 1111111111₂ = 3bff₁₆ = (2 − 2⁻¹⁰) × 2⁻¹ ≈ 0.99951172
                              (largest number less than one)
0 01111 0000000000₂ = 3c00₁₆ = 1
                              (one)
0 01111 0000000001₂ = 3c01₁₆ = 1 + 2⁻¹⁰ ≈ 1.00097656
                              (smallest number larger than one)
0 01101 0101010101₂ = 3555₁₆ ≈ 0.33325195
                              (the rounding of 1/3 to nearest)
1 10000 0000000000₂ = c000₁₆ = −2
0 00000 0000000000₂ = 0000₁₆ = 0
1 00000 0000000000₂ = 8000₁₆ = −0
0 11111 0000000000₂ = 7c00₁₆ = infinity
1 11111 0000000000₂ = fc00₁₆ = −infinity

By default, 1/3 rounds down (as it does for double precision), because of the odd number of bits in the significand: the bits beyond the rounding point are 0101..., which is less than half of a unit in the last place.
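
These encodings can be checked with Python's struct module, whose 'e' format code packs a value as IEEE 754 binary16 (an illustrative check):

    import struct

    def to_bits(x: float) -> int:
        """Return the 16-bit pattern of x encoded as binary16."""
        return struct.unpack('<H', struct.pack('<e', x))[0]

    print(hex(to_bits(1.0)))      # 0x3c00
    print(hex(to_bits(65504.0)))  # 0x7bff
    print(hex(to_bits(-2.0)))     # 0xc000
    print(hex(to_bits(1/3)))      # 0x3555: 1/3 rounded down as described above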

Precision limitations on decimal values in [0, 1]

  • Decimals between 2⁻²⁴ (minimum positive subnormal) and 2⁻¹⁴ (maximum subnormal): fixed interval 2⁻²⁴
  • Decimals between 2⁻¹⁴ (minimum positive normal) and 2⁻¹³: fixed interval 2⁻²⁴
  • Decimals between 2⁻¹³ and 2⁻¹²: fixed interval 2⁻²³
  • Decimals between 2⁻¹² and 2⁻¹¹: fixed interval 2⁻²²
  • Decimals between 2⁻¹¹ and 2⁻¹⁰: fixed interval 2⁻²¹
  • Decimals between 2⁻¹⁰ and 2⁻⁹: fixed interval 2⁻²⁰
  • Decimals between 2⁻⁹ and 2⁻⁸: fixed interval 2⁻¹⁹
  • Decimals between 2⁻⁸ and 2⁻⁷: fixed interval 2⁻¹⁸
  • Decimals between 2⁻⁷ and 2⁻⁶: fixed interval 2⁻¹⁷
  • Decimals between 2⁻⁶ and 2⁻⁵: fixed interval 2⁻¹⁶
  • Decimals between 2⁻⁵ and 2⁻⁴: fixed interval 2⁻¹⁵
  • Decimals between 2⁻⁴ and 2⁻³: fixed interval 2⁻¹⁴
  • Decimals between 2⁻³ and 2⁻²: fixed interval 2⁻¹³
  • Decimals between 2⁻² and 2⁻¹: fixed interval 2⁻¹²
  • Decimals between 2⁻¹ and 2⁰: fixed interval 2⁻¹¹

Precision limitations on decimal values in [1, 2048]

  • Decimals between 1 and 2: fixed interval 2⁻¹⁰ (1 + 2⁻¹⁰ is the smallest representable number larger than 1)
  • Decimals between 2 and 4: fixed interval 2⁻⁹
  • Decimals between 4 and 8: fixed interval 2⁻⁸
  • Decimals between 8 and 16: fixed interval 2⁻⁷
  • Decimals between 16 and 32: fixed interval 2⁻⁶
  • Decimals between 32 and 64: fixed interval 2⁻⁵
  • Decimals between 64 and 128: fixed interval 2⁻⁴
  • Decimals between 128 and 256: fixed interval 2⁻³
  • Decimals between 256 and 512: fixed interval 2⁻²
  • Decimals between 512 and 1024: fixed interval 2⁻¹
  • Decimals between 1024 and 2048: fixed interval 2⁰
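
The fixed intervals listed above are simply the unit in the last place (ULP) at each magnitude; they can be inspected with NumPy's spacing function (an illustrative check using the float16 type):

    import numpy as np

    # ULP of float16 at a few magnitudes; expected: 2^-24, 2^-11, 2^-10, 2^-9, 1.0
    for x in (2.0**-14, 0.5, 1.0, 2.0, 1024.0):
        print(x, np.spacing(np.float16(x)))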

Precision limitations on integer values

  • Integers between 0 and 2048 can be exactly represented (and also between −2048 and 0)
  • Integers between 2048 and 4096 round to a multiple of 2 (even number)
  • Integers between 4096 and 8192 round to a multiple of 4
  • Integers between 8192 and 16384 round to a multiple of 8
  • Integers between 16384 and 32768 round to a multiple of 16
  • Integers between 32768 and 65519 round to a multiple of 32[9]
  • Integers above 65519 are rounded to "infinity" if using round-to-nearest-even; the cutoff is above 65535 if using round-to-zero, and above 65504 if using round-toward-infinity.
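
Equivalently, an integer survives a round trip through half precision only while it fits in the 11-bit significand. A short illustrative check with NumPy's float16 type:

    import numpy as np

    # Exact up to 2048, then snapped to the nearest representable multiple;
    # 65520 overflows to infinity under the default round-to-nearest-even.
    for n in (2047, 2048, 2049, 4097, 65519, 65520):
        print(n, float(np.float16(n)))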

ARM alternative half-precision

ARM processors support (via a floating point control register bit) an "alternative half-precision" format, which does away with the special case for an exponent value of 31 (11111₂).[10] It is almost identical to the IEEE format, but there is no encoding for infinity or NaNs; instead, an exponent of 31 encodes normalized numbers in the range 65536 to 131008.
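
A decoder for the alternative format differs from the IEEE version only in dropping this special case; a sketch (mirroring the earlier decode_binary16 example):

    def decode_alt_half(bits: int) -> float:
        """Decode ARM's alternative half-precision format: identical to binary16
        except that exponent 11111 encodes ordinary normalized numbers, so there
        are no infinities or NaNs."""
        sign = (bits >> 15) & 0x1
        exponent = (bits >> 10) & 0x1F
        fraction = bits & 0x3FF
        if exponent == 0:                   # zero or subnormal
            value = fraction / 2**10 * 2**-14
        else:                               # all other exponents are normalized
            value = (1 + fraction / 2**10) * 2**(exponent - 15)
        return -value if sign else value

    print(decode_alt_half(0x7C00))  # 65536.0 rather than infinity
    print(decode_alt_half(0x7FFF))  # 131008.0, the format's largest value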

Uses

Hardware and software for machine learning or neural networks tend to use half precision: such applications usually do a large amount of calculation, but don't require a high level of precision.

On older computers that access 8 or 16 bits at a time (most modern computers access 32 or 64 bits at a time), half-precision arithmetic is faster than single precision, and substantially faster than double precision. On systems with instructions that can operate on multiple floating-point numbers in one instruction, half precision often offers a higher average throughput.[11]


References

  1. "hitachi :: dataBooks :: HD61810 Digital Signal Processor Users Manual". Archive.org. Retrieved 2017-07-14.
  2. Scott, Thomas J. (March 1991). "Mathematics and Computer Science at Odds over Real Numbers". SIGCSE '91 Proceedings of the twenty-second SIGCSE technical symposium on Computer science education. 23 (1): 130–139.
  3. "/home/usr/bk/glide/docs2.3.1/GLIDEPGM.DOC". Gamers.org. Retrieved 2017-07-14.
  4. "vs_2_sw". Cg 3.1 Toolkit Documentation. Nvidia. Retrieved 17 August 2016.
  5. "OpenEXR". OpenEXR. Retrieved 2017-07-14.
  6. Mark S. Peercy; Marc Olano; John Airey; P. Jeffrey Ungar. "Interactive Multi-Pass Programmable Shading" (PDF). People.csail.mit.edu. Retrieved 2017-07-14.
  7. "Patent US7518615 - Display system having floating point rasterization and floating point ... - Google Patents". Google.com. Retrieved 2017-07-14.
  8. "About ABCI - About ABCI | ABCI". abci.ai. Retrieved 2019-10-06.
  9. "Mediump float calculator". Retrieved 2016-07-26. Half precision floating point calculator
  10. "Half-precision floating-point number support". RealView Compilation Tools Compiler User Guide. 10 December 2010. Retrieved 2015-05-05.
  11. Ho, Nhut-Minh; Wong, Weng-Fai (September 1, 2017). "Exploiting half precision arithmetic in Nvidia GPUs" (PDF). Department of Computer Science, National University of Singapore. Retrieved July 13, 2020. Nvidia recently introduced native half precision floating point support (FP16) into their Pascal GPUs. This was mainly motivated by the possibility that this will speed up data intensive and error tolerant applications in GPUs.
