Path: utzoo!utgpu!news-server.csri.toronto.edu!rutgers!usc!samsung!noose.ecn.purdue.edu!mentor.cc.purdue.edu!l.cc.purdue.edu!cik
From: cik@l.cc.purdue.edu (Herman Rubin)
Newsgroups: comp.arch
Subject: Re: Killer Micro II
Summary: Multiple precision problems and exactness of arithmetic
Message-ID: <2486@l.cc.purdue.edu>
Date: 29 Aug 90 12:37:13 GMT
References: <527@llnl.LLNL.GOV> <603@array.UUCP> <2482@l.cc.purdue.edu> <632@array.UUCP>
Organization: Purdue University Statistics Department
Lines: 48
In article <632@array.UUCP>, colin@array.UUCP (Colin Plumb) writes:
> In article <2482@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
> > There are plenty of mathematical calculations which need lots of computing,
> > but use little data. I doubt if these vaunted machines will be much good
> > at a three-dimensional numerical integral, for example. And how good is
> > their integer arithmetic? If accurate calculation is needed, and this is
> > not all that unusual, floating point is essentially useless.
>
> They also do integer ops at the same 100 MHz rate. Why would they be no
> good at a 3-d numerical integral? They work at Cray speeds and have
> shorter pipelines. 50 MFLOPS *scalar*. It impresses the hell out of me.
>
> Besides which, it never hurts to use FP - I can always ignore the FP exponent
> field and find myself dealing with 52-bit integers. FP inexactness only
> happens if you do things that wouldn't work exactly with integers, either.

There seems to be sufficient ignorance of the problems on this newsgroup
to require clarification.  For a very long time, arbitrary precision
arithmetic has been done by electro-chemical computers (people) using,
in effect, one-digit arithmetic.  I am assuming that the readers of this
group are not totally unfamiliar with what at least used to be taught in
elementary school arithmetic :-)

There was a previous discussion about the problems with the SPARC because
its integer multiplication was 32x32 -> 32.  This means that to do multiple
precision arithmetic, the numbers must be broken into 16-bit blocks, so that
each partial product can be obtained exactly.  With 52-bit integers, the
blocks would be 26 bits.  Also, if the multiplication is done in the
floating-point units, the results would have to be converted into integers,
or some other device used to separate the most and least significant parts
of the 52-bit result for further use.  This requires extra operations, and
may very well negate the larger precision of the floating-point units.
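
A minimal sketch in C of the 16-bit block technique, assuming only a
machine that keeps the low 32 bits of each integer product (the function
name and pointer interface are mine, for illustration):

```c
#include <stdint.h>

/* Full 64-bit product of two 32-bit operands, built from four
   16x16 -> 32 partial products, so no single multiply overflows
   on a 32x32 -> 32 machine. */
void mul32x32(uint32_t a, uint32_t b, uint32_t *hi, uint32_t *lo)
{
    uint32_t al = a & 0xFFFF, ah = a >> 16;
    uint32_t bl = b & 0xFFFF, bh = b >> 16;
    uint32_t ll = al * bl;                 /* bits  0..31 */
    uint32_t lh = al * bh;                 /* bits 16..47 */
    uint32_t hl = ah * bl;                 /* bits 16..47 */
    uint32_t hh = ah * bh;                 /* bits 32..63 */
    /* add the middle partial products; cannot overflow 32 bits */
    uint32_t mid = (ll >> 16) + (lh & 0xFFFF) + (hl & 0xFFFF);
    *lo = (ll & 0xFFFF) | (mid << 16);
    *hi = hh + (lh >> 16) + (hl >> 16) + (mid >> 16);
}
```

A 52-bit multiply done in 26-bit blocks follows the same pattern, with
the masks and shift counts changed accordingly -- but note the four
multiplies plus the shifting and carry propagation, which is the extra
work referred to above.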

I have done calculations where I used both single (48-bit) and double
(96-bit) precision to get an idea of the accuracy of the results, and
even this did not always give enough correct digits.  If I needed more
accuracy, I could only get it by using integer arithmetic to simulate
the floating-point operations.

An unrelated point is the remark about 3-d numerical integration.  Done
in a straightforward manner, this requires a great deal of computation,
and the problem gets worse with the dimension.  There are approximation
methods which can be used in higher dimensions, but they can rarely
achieve much accuracy.
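
To see where the work goes, here is a toy C sketch (my own example, not
from the posts): a product midpoint rule on the unit cube needs n^3
evaluations of the integrand for n points per axis, and the exponent is
the dimension.

```c
/* Product midpoint rule on [0,1]^3: n points per axis means n*n*n
   integrand evaluations.  The integrand here is just an example. */
static double f(double x, double y, double z)
{
    return x * y * z;   /* exact integral over the cube is 1/8 */
}

double midpoint3d(int n)
{
    double h = 1.0 / n, sum = 0.0;
    int i, j, k;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            for (k = 0; k < n; k++)   /* n^3 function evaluations */
                sum += f((i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h);
    return sum * h * h * h;
}
```

With 100 points per axis this is already a million evaluations; in six
dimensions it would be 10^12, which is why straightforward quadrature
becomes hopeless as the dimension grows.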
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette, IN 47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet) {purdue,pur-ee}!l.cc!cik(UUCP)