Wikipedia Article of the Day
Randomly selected articles from my personal browsing history
In computing, fixed-point is a method of representing fractional (non-integer) numbers by storing a fixed number of digits of their fractional part. Dollar amounts, for example, are often stored with exactly two fractional digits, representing the cents (1/100 of a dollar). More generally, the term may refer to representing fractional values as integer multiples of some fixed small unit, e.g. a fractional number of hours as an integer multiple of ten-minute intervals. Fixed-point number representation is often contrasted with the more complicated and computationally demanding floating-point representation.

In the fixed-point representation, the fraction is often expressed in the same number base as the integer part, but using negative powers of the base b. The most common variants are decimal (base 10) and binary (base 2); the latter is also commonly known as binary scaling. Thus, if n fraction digits are stored, the value will always be an integer multiple of b⁻ⁿ. Fixed-point representation can also be used to omit the low-order digits of integer values, e.g. when representing large dollar values as multiples of $1000.

When decimal fixed-point numbers are displayed for human reading, the fraction digits are usually separated from those of the integer part by a radix character (usually '.' in English, but ',' or some other symbol in many other languages). Internally, however, there is no separation, and the distinction between the two groups of digits is defined only by the programs that handle such numbers.

Fixed-point representation was the norm in mechanical calculators. Since most modern processors have a fast floating-point unit (FPU), fixed-point representations in processor-based implementations are now used only in special situations, such as in low-cost embedded microprocessors and microcontrollers; in applications that demand high speed, low power consumption, or small chip area, like image, video, and digital signal processing; or when their use is more natural for the problem. Examples of the latter include accounting of dollar amounts, where fractions of cents must be rounded to whole cents in strictly prescribed ways; the evaluation of functions by table lookup; and any application where rational numbers with a fixed denominator need to be represented without rounding errors (which fixed-point can do but floating-point cannot). Fixed-point representation is still the norm for FPGA (field-programmable gate array) implementations, as floating-point support in an FPGA requires significantly more resources than fixed-point support.
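As a concrete illustration of binary scaling, here is a minimal sketch in C of a hypothetical Q16.16 format (16 integer bits, 16 fraction bits, so every value is an integer multiple of 2⁻¹⁶). The type name and helper functions are invented for this example, not a standard API.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical Q16.16 fixed-point type: a value x is stored as the
 * integer round(x * 2^16), so each representable value is an integer
 * multiple of 2^-16. */
typedef int32_t q16_16;

#define Q16_ONE (1 << 16)

/* Convert a double to Q16.16, rounding to the nearest representable value. */
static q16_16 q16_from_double(double x) {
    return (q16_16)(x * Q16_ONE + (x >= 0 ? 0.5 : -0.5));
}

static double q16_to_double(q16_16 x) {
    return (double)x / Q16_ONE;
}

/* Addition and subtraction are plain integer operations on the raw values. */
static q16_16 q16_add(q16_16 a, q16_16 b) { return a + b; }

/* Multiplication: the raw product carries 32 fraction bits, so shift
 * right by 16 to return to the Q16.16 scale (64-bit intermediate to
 * avoid overflow). */
static q16_16 q16_mul(q16_16 a, q16_16 b) {
    return (q16_16)(((int64_t)a * b) >> 16);
}

int main(void) {
    q16_16 a = q16_from_double(3.25);  /* stored as 3.25 * 65536 = 212992 */
    q16_16 b = q16_from_double(0.5);   /* stored as 32768 */

    printf("a + b = %f\n", q16_to_double(q16_add(a, b)));  /* 3.750000 */
    printf("a * b = %f\n", q16_to_double(q16_mul(a, b)));  /* 1.625000 */
    return 0;
}
```

The same idea underlies decimal fixed-point: storing dollar amounts as integer cents keeps addition and subtraction exact, and only multiplication and division need an explicit rescaling (and rounding) step.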
History
Sep 7
KHive
Sep 6
Interplanetary Internet
Sep 5
KHive
Sep 4
The Memory Police
Sep 3
Disjoint-set data structure
Sep 2
Systems engineering
Sep 1
12ft
Aug 31
Speculative fiction
Aug 30
Lace card
Aug 29
40 Eridani
Aug 28
Weird fiction
Aug 27
Dark forest hypothesis
Aug 26
Pointing and calling
Aug 25
The Maybe Man
Aug 24
Sean Astin
Aug 23
Planet of the Apes
Aug 22
Shamir's secret sharing
Aug 21
Application binary interface
Aug 20
Key encapsulation mechanism
Aug 19
Graupel
Aug 18
List of Internet top-level domains
Aug 17
Twenty One Pilots
Aug 16
The Garden of Earthly Delights
Aug 15
Curve25519
Aug 14
John Hinckley Jr.
Aug 13
Mona Lisa (Nat King Cole song)
Aug 12
2024 CrowdStrike incident
Aug 11
Tony Hoare
Aug 10
Reactor pattern
Aug 9
ElGamal encryption