In computing, fixed point is a representation of a number with a fixed number of digits after the decimal (or binary or hexadecimal) point. For example, a fixed-point format with four digits after the decimal point could store numbers such as 1.3467, 281243.3234, and 0.1000 exactly, but would round 1.0301789 to 1.0302 and 0.0000654 to 0.0001.
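The rounding described above can be sketched as a scaled-integer store. This is a minimal illustration, not a standard library API: the names `SCALE`, `to_fixed`, and `to_float` are invented for the example, which assumes values are held as integers scaled by 10^4.

```python
SCALE = 10_000  # four digits after the decimal point

def to_fixed(x: float) -> int:
    """Round a value to the nearest four-decimal-digit fixed-point integer."""
    return round(x * SCALE)

def to_float(n: int) -> float:
    """Convert the stored scaled integer back to an ordinary number."""
    return n / SCALE

print(to_float(to_fixed(1.0301789)))  # 1.0302
print(to_float(to_fixed(0.0000654)))  # 0.0001
```

Any value between two representable steps of 0.0001 is rounded to the nearer one, exactly as in the examples above.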
Fixed point differs from floating point in that it can exactly represent decimal fractions while still using the base-2 integer arithmetic that is efficient on most computers. Because floating-point representations in computers use base-2 values, they cannot exactly represent most fractions that are easily written in base 10. For example, one-tenth (.1) and one-hundredth (.01) can be represented only approximately in base-2 floating point, but can be represented exactly in fixed point by simply storing the data values multiplied by the appropriate power of 10.
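The contrast can be seen directly. The sketch below, with illustrative variable names, compares binary floating-point addition of tenths against a fixed-point store that keeps the same amounts as integers scaled by 10^2 (i.e., cents):

```python
# Binary floating point only approximates one-tenth, so the sum drifts.
print(0.1 + 0.2 == 0.3)          # False

# Fixed point: store the amounts as integer cents (scale factor 10**2).
a_cents = 10                     # represents 0.10
b_cents = 20                     # represents 0.20
print(a_cents + b_cents == 30)   # True: 0.10 + 0.20 is exactly 0.30
```

Because the scaled values are plain integers, their sums and differences are exact; only the bookkeeping of the scale factor distinguishes them from ordinary integer arithmetic.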
Very few computer languages include built-in support for fixed-point values, because for most applications floating-point representations are fast enough and accurate enough. Floating-point representations are also more flexible, since they can handle a much wider range of magnitudes, and slightly easier to use, because they do not require programmers to specify the number of digits after the decimal point.