Path: utzoo!utgpu!watserv1!watmath!att!pacbell!pacbell.com!ucsd!swrinde!cs.utexas.edu!sun-barr!newstop!texsun!convex!convex.COM
From: patrick@convex.COM (Patrick F. McGehearty)
Newsgroups: comp.arch
Subject: Re: Killer Micro II
Message-ID: <105568@convex.convex.com>
Date: 29 Aug 90 16:43:29 GMT
References: <2482@l.cc.purdue.edu> <1990Aug29.005329.13598@uncecs.edu>
Sender: usenet@convex.com
Reply-To: patrick@convex.COM (Patrick F. McGehearty)
Organization: Convex Computer Corporation, Richardson, Tx.
Lines: 28
In article <1990Aug29.005329.13598@uncecs.edu> urjlew@uncecs.edu (Rostyk Lewyckyj) writes:
>
>One detail that should not be overlooked in this discussion of fp.
>precision, is that the 64 bits used to represent your number is
>subdivided into sign + exponent + fraction. So a 64 bit fp number
>gives you only between 48 and 56 bits of fraction. (56 bits for
>the IBM 360 architecture, and I believe 48 for a CRAY and most
>other base 2 machines). IEEE is what? 80 bits divided up into
>1+15+64 ? So it really takes 80 bit fp for 64 bits of precision.
Actually, there are several IEEE extended precision specifications
for different numbers of bits.
For IEEE single precision, the 32 bit representation has a sign bit,
8 exponent bits (for a decimal range of roughly 10**-38 to 10**+38),
and 23 stored mantissa bits plus an implicit 1 bit in the 24th
position. The 64 bit double-precision representation has 52+1 bits
for the mantissa and 11 bits for the exponent, for a range of roughly
10**-308 to 10**+308.
I don't have the full spec; does anyone know the other IEEE
representation patterns?
For those of you not into Numerical Analysis: there are computations
whose results change radically when only a single bit of the double
precision input data changes. Computations of this sort are called
numerically unstable.
For example,
d=a*(b-c)
where b = c +/- epsilon for some small epsilon, a single-bit change
in b or c can change the sign of the result.