Last update(s): Sat Oct 22 17:17:26 2005; Thu Mar 23 14:04:08 2017
What is multiple-precision arithmetic?
It is arithmetic carried out in software at higher precision than provided by hardware.
Since the mid-1980s, essentially all new desktop and smaller computers have implemented arithmetic as defined by the ANSI/IEEE 754-1985 Standard for Binary Floating-Point Arithmetic, usually abbreviated to IEEE 754. This provides for 32-bit, 64-bit, and, optionally, either 80-bit or 128-bit formats. These encode a 1-bit sign, a biased power-of-two exponent (8, 11, 15, and 15 bits, respectively), and a significand (24, 53, 64, and 113 bits) capable of representing approximately 7, 15, 19, and 34 decimal digits.
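As a small aside (not part of the original FAQ text), those decimal-digit counts follow directly from the significand widths, since a p-bit binary significand carries roughly p*log10(2) decimal digits. A short Python sketch, using only the standard library, reproduces them:

    # Decimal digits carried by a p-bit binary significand: floor(p * log10(2)).
    import math
    import sys

    for fmt, p in [("32-bit", 24), ("64-bit", 53), ("80-bit", 64), ("128-bit", 113)]:
        print(fmt, p, "significand bits ->", math.floor(p * math.log10(2)), "decimal digits")

    # The host's native IEEE 754 64-bit format, as Python reports it:
    print(sys.float_info.mant_dig, "bits,", sys.float_info.dig, "digits")   # typically 53 and 15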
Although the IEEE 754 hardware precisions suffice for many practical purposes, there are many areas of computation where higher precision is required.
Who/what needs multiple-precision arithmetic?
Three simple examples where higher-precision arithmetic is required are the conversion between decimal and binary number bases, the computation of exactly rounded elementary functions, and the computation of vector dot products.
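As a small illustration (not taken from the original FAQ), the dot-product case is easy to demonstrate in Python: in IEEE 754 64-bit arithmetic a small term can be swallowed entirely by cancellation between large terms, whereas a higher-precision computation, here using the standard decimal module, recovers the correct answer.

    # Dot product x . y whose true value is 1, computed two ways.
    from decimal import Decimal, getcontext

    x = [1e16, 1.0, -1e16]
    y = [1.0, 1.0, 1.0]

    # Hardware double precision: the contribution of the middle term is lost.
    double_result = sum(xi * yi for xi, yi in zip(x, y))                    # equals 0.0

    # Software multiple precision (40 decimal digits): the exact answer survives.
    getcontext().prec = 40
    mp_result = sum(Decimal(xi) * Decimal(yi) for xi, yi in zip(x, y))      # equals Decimal('1')

    print(double_result, mp_result)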
Two recent books, Experimentation in Mathematics: Computational Paths to Discovery (ISBN 1-56881-136-5) and Mathematics by Experiment: Plausible Reasoning in the 21st Century (ISBN 1-56881-211-6), show how high-precision computation can lead to fundamental new discoveries in mathematics, and be essential for the solution of some important physical problems.
What programming languages provide native multiple-precision arithmetic?
The Axiom, Maple, Mathematica, Maxima, MuPAD, PARI/GP, and Reduce symbolic-algebra languages, the Unix bc and dc calculators, and the python and rexx scripting languages, all provide such a facility and make it easy to use. There are separate BigFloat packages available for the perl and ruby scripting languages.
For example, in Maple, you can change the decimal precision at any time by a simple assignment to a global variable, Digits := 100;, without making any other changes to your program. All subsequent arithmetic, and all of the built-in functions, are then computed to the specified precision.
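For comparison (a rough analogue, not part of the original FAQ), Python's standard decimal module offers the same kind of run-time precision control, although only a handful of functions (sqrt, exp, ln, log10, and power) are built in:

    # Set the working precision to 100 significant decimal digits, then compute.
    from decimal import Decimal, getcontext

    getcontext().prec = 100
    print(Decimal(2).sqrt())    # square root of 2 to 100 digits
    print(Decimal(1).exp())     # e to 100 digits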
If you need multiple-precision arithmetic for experimental code that you are developing, these languages are likely to prove most convenient. However, because the arithmetic is performed in software, and code is interpreted, rather than compiled, run times can be hundreds, or even thousands, of times slower than they would be in a compiled language using hardware arithmetic.
Java and C# provide decimal data types (java.math.BigDecimal and System.Decimal, respectively), but these support only fixed-point arithmetic, not floating-point arithmetic, and library support beyond the basic operations (add, subtract, multiply, and divide) is essentially nonexistent. Their utility is therefore limited, chiefly to simple computations in business accounting.
What programming languages provide nonnative multiple-precision arithmetic?
If you need high-precision arithmetic in a traditional programming language, such as Fortran, C, or C++, life can be much more difficult. The Brent MP package (ACM Trans. Math. Software 4(1), 57--70 (1978)), the Bailey mpfun77 package, the GNU gmp package, the French LORIA mpfr package, and the Moshier extended double package, all provide libraries of routines, but you must use function calls for all numerical and I/O operations that involve multiple-precision data.
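To make that coding burden concrete, here is a toy Python sketch (the mp_* names are hypothetical stand-ins, not the interface of any package named above, and decimal merely supplies the underlying arithmetic) of what purely call-based coding looks like:

    # Every operation on multiple-precision data becomes an explicit call.
    from decimal import Decimal, getcontext

    getcontext().prec = 50                  # stand-in for the library's working precision

    def mp_add(x, y):  return x + y         # hypothetical wrappers, for illustration only
    def mp_mul(x, y):  return x * y
    def mp_div(x, y):  return x / y
    def mp_sqrt(x):    return x.sqrt()

    b, c, d, e = Decimal(2), Decimal(3), Decimal(5), Decimal(7)

    # The expression b*c + d/sqrt(e) must be spelled out call by call:
    a = mp_add(mp_mul(b, c), mp_div(d, mp_sqrt(e)))
    print(a)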
The Ada, C++, and Fortran 90/95 languages support user-definable data types and operator overloading. This makes it possible to define libraries that allow you to code numerical expressions in the conventional way, such as a = b * c + d / sqrt(e). Two such libraries for Fortran 90/95 are the Bailey mpfun90 package and the Schonfelder vpa package. The Bailey arprec package offers similar support for C++.
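A minimal sketch of the operator-overloading idea, again in Python rather than in any of the packages just named, and with decimal standing in for a real multiple-precision engine:

    # A toy multiple-precision type whose overloaded operators delegate to decimal.
    from decimal import Decimal, getcontext

    getcontext().prec = 50

    class MP:
        def __init__(self, value):
            self.value = Decimal(value)

        def __add__(self, other):
            return MP(self.value + other.value)

        def __mul__(self, other):
            return MP(self.value * other.value)

        def __truediv__(self, other):
            return MP(self.value / other.value)

        def __repr__(self):
            return str(self.value)

    def sqrt(x):
        return MP(x.value.sqrt())

    b, c, d, e = MP(2), MP(3), MP(5), MP(7)
    a = b * c + d / sqrt(e)      # reads exactly like the expression in the text
    print(a)

The real packages differ in detail, but the principle, a user-defined type plus overloaded operators, is the same.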
Regrettably, none of these packages provides the standard elementary functions, and that deficiency remains a great weakness, because those functions are very likely to be required.
If you need these libraries, please consult with systems staff for advice and instruction.
Where can I learn more about floating-point arithmetic in general, and multiple-precision arithmetic in particular?
Consult some of the recent books listed in the extensive fparith floating-point arithmetic bibliography. The author of this FAQ has written a draft of a book on the subject; consult systems staff for its prepublication availability.