Wikipedia Article of the Day
Randomly selected articles from my personal browsing history
In computing, fixed-point is a method of representing fractional (non-integer) numbers by storing a fixed number of digits of their fractional part. Dollar amounts, for example, are often stored with exactly two fractional digits, representing the cents (1/100 of a dollar). More generally, the term may refer to representing fractional values as integer multiples of some fixed small unit, e.g. a fractional number of hours as an integer multiple of ten-minute intervals. Fixed-point number representation is often contrasted with the more complicated and computationally demanding floating-point representation.

In the fixed-point representation, the fraction is often expressed in the same number base as the integer part, but using negative powers of the base b. The most common variants are decimal (base 10) and binary (base 2); the latter is also commonly known as binary scaling. Thus, if n fraction digits are stored, the value will always be an integer multiple of b^-n. Fixed-point representation can also be used to omit the low-order digits of integer values, e.g. when representing large dollar values as multiples of $1000.

When decimal fixed-point numbers are displayed for human reading, the fraction digits are usually separated from those of the integer part by a radix character (usually '.' in English, but ',' or some other symbol in many other languages). Internally, however, there is no separation, and the distinction between the two groups of digits is defined only by the programs that handle such numbers. Fixed-point representation was the norm in mechanical calculators.
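The binary-scaling scheme described above can be sketched in a few lines. This is a minimal illustration, not a production library: the Q-style format with FRAC_BITS fractional bits, and the helper names to_fixed, to_float, and fixed_mul, are assumptions made here for demonstration. A stored integer i represents the real value i / 2**FRAC_BITS, so every representable value is an integer multiple of 2^-n:

```python
FRAC_BITS = 8           # n: number of fractional binary digits
SCALE = 1 << FRAC_BITS  # b**n with b = 2, i.e. 256

def to_fixed(x: float) -> int:
    """Encode a real number as an integer multiple of 2**-FRAC_BITS."""
    return round(x * SCALE)

def to_float(i: int) -> float:
    """Decode the stored integer back to a real value."""
    return i / SCALE

def fixed_mul(a: int, b: int) -> int:
    """Multiply two fixed-point values. The raw product carries
    2*FRAC_BITS fractional bits, so shift right to restore the scale."""
    return (a * b) >> FRAC_BITS

a = to_fixed(1.5)    # stored as 1.5 * 256 = 384
b = to_fixed(2.25)   # stored as 2.25 * 256 = 576
print(to_float(fixed_mul(a, b)))  # 3.375
```

Note that addition and subtraction need no rescaling (both operands already share the same scale), which is one reason fixed-point arithmetic is cheap on hardware without an FPU.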
Since most modern processors have a fast floating-point unit (FPU), fixed-point representations in processor-based implementations are now used only in special situations, such as in low-cost embedded microprocessors and microcontrollers; in applications that demand high speed, low power consumption, or small chip area, like image, video, and digital signal processing; or when their use is more natural for the problem. Examples of the latter are accounting of dollar amounts, when fractions of cents must be rounded to whole cents in strictly prescribed ways, and the evaluation of functions by table lookup, or any application where values must be represented as exact multiples of a fixed unit without rounding errors (which fixed-point can do but floating-point in general cannot). Fixed-point representation is still the norm for FPGA (field-programmable gate array) implementations, as floating-point support in an FPGA requires significantly more resources than fixed-point support.
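The dollar-accounting example can be made concrete with decimal fixed-point: keeping amounts as integer cents makes addition exact, whereas binary floating point cannot represent values like 0.10 exactly. This is a hedged sketch under that assumption; the helper names dollars_to_cents and cents_to_dollars are invented here for illustration and handle only simple non-negative "D.CC" strings:

```python
def dollars_to_cents(s: str) -> int:
    """Parse a simple non-negative 'D.CC' dollar string into integer cents."""
    dollars, _, frac = s.partition(".")
    return int(dollars) * 100 + int(frac.ljust(2, "0")[:2])

def cents_to_dollars(c: int) -> str:
    """Format integer cents back as a 'D.CC' string."""
    return f"{c // 100}.{c % 100:02d}"

# Summing in integer cents is exact:
total = sum(dollars_to_cents(s) for s in ["0.10", "0.20"])
print(cents_to_dollars(total))  # 0.30

# The same sum in binary floating point accumulates rounding error:
print(0.10 + 0.20)              # 0.30000000000000004
```

In real accounting code, the prescribed rounding rules mentioned above (e.g. rounding a percentage fee to whole cents) would be applied explicitly at each step on the integer representation, so results are reproducible regardless of the platform's floating-point behavior.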
History
Jul 26
Fundamental theorem of algebra
Jul 25
Square root of 5
Jul 24
Rainbow Series
Jul 23
AJR
Jul 22
Museum fatigue
Jul 21
Common Criteria
Jul 20
List of sovereign states by homeless population
Jul 19
Cult
Jul 18
Kolmogorov–Smirnov test
Jul 17
Bit error rate
Jul 16
Kullback–Leibler divergence
Jul 15
Mary Schmich
Jul 14
Regression testing
Jul 13
Wasserstein metric
Jul 12
Block cipher mode of operation
Jul 11
Wireless
Jul 10
Birds Aren't Real
Jul 9
Hyperacusis
Jul 8
Rip current
Jul 7
Primitive recursive function
Jul 6
Sudan function
Jul 5
Meow Mix
Jul 4
Tulsi Gabbard
Jul 3
AsciiDoc
Jul 2
Northwest Ordinance
Jul 1
Phylum
Jun 30
Taxonomic rank
Jun 29
Robbie (TV series)
Jun 28
Gödel's Loophole
Jun 27
Survival function