programming: Arithmetic in which numbers may have digits after the decimal point. Mathematically speaking these are "real" numbers (as opposed to integers); you will want to have a coprocessor if you need to do a lot of floating point calculations for applications such as 3-D modeling.

a digital representation of a number with a specified number of decimal places, or fractional part, used to represent real numbers; contrast with integer

How real numbers are stored on a computer. Numbers are stored as a sign s (+1 or -1), a mantissa m, and an exponent e, in a form similar to scientific notation: s x m x b^e for some base b. For example, 512.43 in base 10 would be +1 x 0.51243 x 10^3. The number of bytes used for each part is larger for doubles than for single floats, meaning operations are more accurate. Floating point works well, but you can get big problems if you aren't careful (e.g. don't subtract nearly equal large numbers from each other, and don't compare floating point numbers for exact equality). There are long documents on floating point numbers (see the IEEE 754 standard). If you're doing numerical work, see http://docs.sun.com/db?p=/doc/800-7895 for very useful information.
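Those pitfalls can be seen directly in Python (any IEEE 754 double-precision implementation behaves the same way):

```python
import math

# Classic pitfall: 0.1 and 0.2 have no exact binary representation,
# so their sum is not exactly 0.3.
a = 0.1 + 0.2
print(a == 0.3)              # False -- never compare floats for exact equality
print(math.isclose(a, 0.3))  # True -- compare within a tolerance instead

# Subtracting nearly equal large numbers loses the small difference:
# at 1e16 the spacing between adjacent doubles is 2, so the 1.0 vanishes.
x = 1e16 + 1.0
print(x - 1e16)              # 0.0, not 1.0
```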

a method of representing numbers in which the decimal (binary) point is permitted to "float" to different locations within the number. Some of the bits devoted to storing the number are devoted to keeping track of the point.

This is a particular way of representing numbers on computers similar to the scientific notation for writing numbers (e.g. 0.31415 x 10^1, also written as 0.31415E1 by computers, where `E1' means `x 10^1'). It is almost universally used to represent numbers that are not integers (0, 1, 2, ...). Operations such as addition, multiplication and division using these numbers are usually the most time-consuming parts of scientific computations, and the number of such operations that can be performed each second is used as a (crude) measure of computer performance. See FLOPS.
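The E notation described here is exactly what Python (like most languages) reads and writes:

```python
# float() parses E notation: 0.31415E1 means 0.31415 x 10^1
x = float("0.31415E1")
print(x)                 # 3.1415

# The 'e' format specifier converts back into E notation
# (default precision is six digits after the point).
print(f"{12345.678:e}")  # 1.234568e+04
```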

an approximate representation of real numbers in computer hardware.

As the name implies, floating point numbers are numbers that contain floating ...

One of several methods of storing numbers on an IBM mainframe and most other computers. Similar to scientific notation, such as 3.75 times 10 to the 25th power, only it is 2 or 16 that is taken to some power. For example, short floating point on the mainframe is a sign bit followed by a 7 bit exponent then 3 bytes of mantissa. To make matters even more confusing, the exponent is stored in Excess-64 notation. All of which means you have to subtract 64 from the exponent to get the power to which 16 must be taken, times the value of the mantissa as if it had a decimal point in front of it. Not something you want to figure out from a hexadecimal dump. Especially without a hex calculator.
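A minimal sketch of that decoding procedure in Python, assuming the layout described above (1 sign bit, 7-bit Excess-64 exponent, 24-bit fraction, base 16); the example word values are illustrative:

```python
def decode_ibm_short(word: int) -> float:
    """Decode a 32-bit IBM mainframe short floating-point word:
    subtract 64 from the exponent, then multiply the resulting
    power of 16 by the fraction as if it had a point in front of it."""
    sign = -1.0 if word & 0x80000000 else 1.0
    exponent = ((word >> 24) & 0x7F) - 64        # remove the Excess-64 bias
    fraction = (word & 0x00FFFFFF) / (1 << 24)   # 24-bit fraction, 0 <= f < 1
    return sign * fraction * 16.0 ** exponent

# 0x41100000: exponent 0x41 - 64 = 1, fraction 1/16 -> 1/16 * 16 = 1.0
print(decode_ibm_short(0x41100000))  # 1.0
print(decode_ibm_short(0x42640000))  # 100.0
```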

Floating point representation is the most common solution used to represent numbers in a computer. It represents real numbers in scientific notation, which expresses numbers as a base number and an exponent. For example, 123.456 could be represented as 1.23456 x 10^2. In hexadecimal, the number 123.abc might be represented as 1.23abc x 16^2.
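Python's standard library can take a float apart into this base-and-exponent form (base 2 rather than base 10, since that is how the hardware stores it):

```python
import math

# math.frexp splits x into mantissa and exponent: x == m * 2**e, 0.5 <= |m| < 1
m, e = math.frexp(123.456)
print(m, e)        # 123.456 = 0.9645 * 2**7
print(m * 2**e)    # 123.456 -- scaling by a power of 2 is exact

# float.hex shows the same idea with a hexadecimal mantissa
print((123.456).hex())
```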

A value having a decimal point and a fractional part.

A type of literal value that represents a number including a fractional portion (that is, the portion after a decimal place).

a very precise decimal (non-integer) number in your computer

The representation of numbers in terms of mantissa and base raised to some power so that the decimal point floats during calculations; scientific notation.

The usual means of storing very large or very small numbers, or numbers with a fractional part; when encoded in machine-readable form, the method involves a sign bit (positive or negative), a fractional part (the mantissa), and an exponent (in base 2). Other machine encodings for numbers include integer (the most common), fixed point, and plain text.
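A sketch of pulling those three machine-encoded parts out of a number, assuming the IEEE 754 single-precision layout (1 sign bit, 8 exponent bits stored with a bias of 127, 23 fraction bits):

```python
import struct

def float32_fields(x: float):
    """Return the (sign, exponent, fraction) bit fields of x
    when encoded as an IEEE 754 single-precision float."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31                 # 0 = positive, 1 = negative
    exponent = (bits >> 23) & 0xFF    # stored with a bias of 127
    fraction = bits & 0x7FFFFF        # 23 bits after the implicit leading 1
    return sign, exponent, fraction

print(float32_fields(1.0))   # (0, 127, 0): +1.0 x 2^(127-127)
print(float32_fields(-2.5))  # (1, 128, 0x200000): -1.25 x 2^(128-127)
```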

A variety of real number which is represented to a specified number of significant figures regardless of how big or small it is.

A three-part representation of a number that contains a decimal point. The number is represented first by the sign, then the number itself, then decimal...

Also called "scientific notation." Representing a quantity by two numbers: one showing the digits of the number, the other its magnitude. The second number tells where the decimal point is to be placed in the first number (binary point if using base two).

a floating point number is a real number, being a number that can contain a fractional part. The term 'floating point' is derived from the fact that there is no fixed number of digits before or after the decimal point; that is, the decimal point can float.

A type of arithmetic where the decimal point can appear anywhere in the numbers (i.e., it can float around). An example of such numbers would be 1.0, 23.97, and 1.45678E+04. Floating point numbers take up much more memory than fixed point (integer) numbers.

A way of representing real numbers (that is, values with fractions or decimals) in 32 bits or 64 bits. Floating-point representation is useful to describe very small or very large numbers.

Floating-point is a numeral-interpretation system in which a string of digits (or bits) represents a real number. A system of arithmetic is defined that allows these representations to be manipulated with results that are similar to the arithmetic operations over real numbers. The representation uses an explicit designation of where the radix point (decimal point, or, more commonly in computers, binary point) is to be placed relative to that string.