Wikipedia Article of the Day
Randomly selected articles from my personal browsing history
In computing, fixed-point is a method of representing fractional (non-integer) numbers by storing a fixed number of digits of their fractional part. Dollar amounts, for example, are often stored with exactly two fractional digits, representing the cents (1/100 of a dollar). More generally, the term may refer to representing fractional values as integer multiples of some fixed small unit, e.g. a fractional number of hours as an integer multiple of ten-minute intervals. Fixed-point number representation is often contrasted with the more complicated and computationally demanding floating-point representation.

In the fixed-point representation, the fraction is often expressed in the same number base as the integer part, but using negative powers of the base b. The most common variants are decimal (base 10) and binary (base 2); the latter is also commonly known as binary scaling. Thus, if n fraction digits are stored, the value is always an integer multiple of b^(−n). Fixed-point representation can also be used to omit the low-order digits of integer values, e.g. when representing large dollar values as multiples of $1000.

When decimal fixed-point numbers are displayed for human reading, the fraction digits are usually separated from those of the integer part by a radix character (usually '.' in English, but ',' or some other symbol in many other languages). Internally, however, there is no separation, and the distinction between the two groups of digits is defined only by the programs that handle such numbers. Fixed-point representation was the norm in mechanical calculators.
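As a concrete illustration of the decimal case, here is a minimal Python sketch of storing dollar amounts as integer multiples of 1/100 (the names `to_fixed`, `show_fixed`, and `SCALE` are illustrative, not from the article):

```python
from fractions import Fraction

# Decimal fixed-point with n = 2 fraction digits: every value is an
# integer multiple of 10**-2, stored internally as a plain integer.
SCALE = 100  # b**n with b = 10, n = 2

def to_fixed(text: str) -> int:
    """Parse a decimal string like '19.99' into its scaled integer (1999)."""
    return int(round(Fraction(text) * SCALE))

def show_fixed(scaled: int) -> str:
    """Insert the radix character only for display; storage has none."""
    return f"{scaled // SCALE}.{scaled % SCALE:02d}"

price = to_fixed("19.99")   # stored as the integer 1999
tax   = to_fixed("0.01")    # stored as the integer 1
total = price + tax         # exact integer addition, no rounding error
```

Note that addition and subtraction of equally scaled values are ordinary integer operations, which is why sums of cent amounts come out exact.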
Since most modern processors have a fast floating-point unit (FPU), fixed-point representations in processor-based implementations are now used only in special situations, such as in low-cost embedded microprocessors and microcontrollers; in applications that demand high speed and/or low power consumption and/or small chip area, like image, video, and digital signal processing; or when their use is more natural for the problem. Examples of the latter are accounting of dollar amounts, when fractions of cents must be rounded to whole cents in strictly prescribed ways; and the evaluation of functions by table lookup, or any application where rational numbers need to be represented without rounding errors (which fixed-point can do but floating-point cannot). Fixed-point representation is still the norm for FPGA (field-programmable gate array) implementations, as floating-point support in an FPGA requires significantly more resources than fixed-point support.
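In the embedded and DSP settings mentioned above, binary scaling is often used in a "Q format" with a fixed number of fraction bits. A sketch of Q15-style arithmetic (15 fraction bits is a common convention; the helper names here are illustrative, not from the article) shows the one subtlety: multiplying two scaled values doubles the scale, so the product must be shifted back down, with half a unit added first so truncation rounds to nearest:

```python
# Binary scaling: each stored integer represents a multiple of 2**-15.
FRAC_BITS = 15

def to_q15(x: float) -> int:
    """Scale a real value to its nearest Q15 integer."""
    return int(round(x * (1 << FRAC_BITS)))

def from_q15(a: int) -> float:
    """Recover the real value a Q15 integer represents."""
    return a / (1 << FRAC_BITS)

def q15_mul(a: int, b: int) -> int:
    """Multiply two Q15 values: the raw product has 30 fraction bits,
    so add 0.5 ulp and shift right by 15 to rescale with rounding."""
    return (a * b + (1 << (FRAC_BITS - 1))) >> FRAC_BITS
```

For example, `q15_mul(to_q15(0.5), to_q15(0.25))` yields the Q15 encoding of 0.125. This shift-based rescaling is cheap in hardware, which is part of why fixed-point remains attractive on FPGAs and small microcontrollers.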
History
Apr 19
Subsatellite
Apr 18
LexisNexis
Apr 17
Quaternions and spatial rotation
Apr 16
Affine transformation
Apr 15
Heidi Gardner
Apr 14
Learning with errors
Apr 13
Shibboleth
Apr 12
Accounting identity
Apr 11
Watchdog timer
Apr 10
Rotation matrix
Apr 9
Three Nephites
Apr 8
Spherical coordinate system
Apr 7
Mormon folklore
Apr 6
Homogeneous coordinates
Apr 5
CMOS
Apr 4
Counter (digital)
Apr 3
Selena
Apr 2
Matter (standard)
Apr 1
Network layer
Mar 31
Indictment
Mar 30
Clock generator
Mar 29
Near-field communication
Mar 28
ISO/IEC 14443
Mar 27
Near-field communication
Mar 26
CPU multiplier
Mar 25
Stress (linguistics)
Mar 24
Diameter
Mar 23
TO-263
Mar 22
Heavenly Mother (Mormonism)
Mar 21
Harald Cramér