A data encoding technique often used in computers and DSP chips to simplify the complex math required to process large amounts of data. Floating point data consists of three parts: a sign (making the value positive or negative), a mantissa representing a fractional value with magnitude less than one, and an exponent giving the position of the radix point. Floating point arithmetic allows very large or very small numbers to be represented with relatively few bits. For example, the number 186,000 can be written as 1.86 × 10^5. That may not look simpler on paper, but in computer terms the latter form is much easier to handle: by shifting the point so that the number of significant digits never exceeds machine capacity, widely varying quantities can be processed with fewer actual computations. The scale factor may be fixed for each problem, or stored along with the digits and sign of each quantity.

Many computers include a dedicated FPU (Floating Point Unit), or floating point processor, designed specifically to carry out this kind of math efficiently. That efficiency does little for word processing or surfing the Internet, but when complex graphics, audio, or video manipulation is required, an FPU can greatly reduce computation time.
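The three-part structure described above can be seen directly in a modern floating point format. As a sketch (assuming IEEE 754 double precision, which is what Python's `float` uses), the snippet below unpacks the sign, exponent, and mantissa fields of 186,000 and also shows the "mantissa less than one" convention via `math.frexp`:

```python
import math
import struct

def ieee754_parts(x: float):
    """Unpack the raw sign, exponent, and mantissa fields of an IEEE 754 double."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63                    # 1 sign bit: 0 = positive, 1 = negative
    exponent = (bits >> 52) & 0x7FF      # 11 exponent bits, stored with a bias of 1023
    mantissa = bits & ((1 << 52) - 1)    # 52 fraction bits (an implicit leading 1 is assumed)
    return sign, exponent - 1023, mantissa

# math.frexp uses the convention in the definition above: a fractional
# mantissa with magnitude less than one, times a power of two.
m, e = math.frexp(186000.0)   # 186000 == m * 2**e, with 0.5 <= m < 1
print(m, e)                   # 0.70953369140625 18

sign, exp, frac = ieee754_parts(186000.0)
# Reconstruct the value from its parts: (-1)**sign * (1 + frac/2**52) * 2**exp
assert (-1) ** sign * (1 + frac / 2**52) * 2**exp == 186000.0
```

The only difference from the 1.86 × 10^5 example is the base: computers scale by powers of two rather than ten, but the idea of storing a normalized fraction plus a separate scale factor is the same.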