Path: utzoo!attcan!utgpu!news-server.csri.toronto.edu!mailrus!purdue!mentor.cc.purdue.edu!l.cc.purdue.edu!cik
From: cik@l.cc.purdue.edu (Herman Rubin)
Newsgroups: comp.arch
Subject: Re: F.P. vs. arbitrary-precision
Message-ID: <2530@l.cc.purdue.edu>
Date: 9 Sep 90 18:34:45 GMT
References: <3755@osc.COM> <4513@taux01.nsc.com> <119244@linus.mitre.org> <6837.26e7ee92@vax1.tcd.ie>
Organization: Purdue University Statistics Department
Lines: 69
In article <6837.26e7ee92@vax1.tcd.ie>, rwallace@vax1.tcd.ie writes:
> > :If they have a good FPU, using it for integer multiplication *is* a "reasonable
> > :way". Besides, a bad implementation doesn't prove anything about the basic
> > :idea of FP.
> >
> > Having a good FPU just isn't good enough. Even with IEEE 64-bit, there are
> > only 53 bits of a mantissa. So just how does one multiply two 32 bit integers
> > together using floating point? The answer is: one can't without losing bits.
> > (or by doing several multiplies on the low/high halves and combining things)
> > Furthermore, if I should need to multiply integers in this way, they must
> > first be converted to floating point, then the product must be converted back.
> > Can you say expensive???
> >
> > The four basic operations of arithmetic are +, -, x, /. Any computer that
> > can't perform them on its atomic data units [whatever the word size is]
> > is a joke.
>
> First, why do you need 32 x 32 -> 64? OK in principle 32 x 32 can give a 64
> bit answer but in practice 99% of the time you're going to be working in 32
> bits all the way and you don't want 64 bit answers (which is why C has
> int x int -> int not int x int -> long).
>
> Second, there is a huge amount of processing done which depends on integer +
> and - and fp +, -, * and / being fast, so there is hardware support for these.
> There is practically no processing done which depends on integer * and / being
> fast (accessing an array of structures doesn't count because a smart compiler
> can use shifts and adds), and don't bother giving anecdotal cases because it's
> still less than 1% of the total. Therefore chip space was not wasted on making
> these fast.
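
To make the quoted point concrete, here is a small C sketch (the function
names are mine) contrasting an exact 32x32 -> 64 product with the same
product pushed through an IEEE double: the 53-bit mantissa cannot hold all
64 product bits, so large operands come back rounded.

```c
#include <stdint.h>

/* Exact 64-bit product of two 32-bit integers. */
uint64_t mul_exact(uint32_t a, uint32_t b)
{
    return (uint64_t)a * (uint64_t)b;
}

/* The same product computed through IEEE double.  The 53-bit
   mantissa cannot represent every 64-bit product, so for large
   operands the result is rounded. */
uint64_t mul_via_double(uint32_t a, uint32_t b)
{
    return (uint64_t)((double)a * (double)b);
}
```

For a = b = 0xFFFFFFFF the two routines disagree by 1: the exact product,
2^64 - 2^33 + 1, rounds to 2^64 - 2^33 in double.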

All early machines had only fixed point. Floating point hardware, no matter
how it is done, is kludged fixed point: there are pre- and post-shifts, the
necessary fixed point arithmetic is done, and the (computed in parallel)
exponent is adjusted. From this standpoint floating point is not basic. I
do not suggest that it be eliminated; it is far too useful. We could make
better kludges now, and if floating point were not done in hardware, the
current practice of packing the exponent and mantissa into one word would
be undesirable.

But this does not address the multiprecision question. Much work does not
get done if the tools are too clumsy. In doing multiple precision work, the
number must be broken up into units on which the hardware can operate. If
we only have 32x32 -> 32, we are effectively limited to 16-bit units. A
count of the number of instructions required to do a 32x32 -> 64 shows how
bad it is. Also, an algorithm designer may very well fall back on an
obviously clumsier algorithm that is not subject to these problems caused
by the inadequacy of the hardware. Look at the devices used to get
multiples of pi and ln 2.

Now suppose we have to do 160-bit arithmetic to carry out a floating-point
operation to sufficient accuracy (3 times the current double precision).
The intelligent thing to do, hardware permitting, would be to use five
32-bit pieces. If we are to struggle with the floating-point units, and the
insistence on normalization would make it a struggle, we could use at most
26-bit units and would need 7 of them; 16-bit units would take ten. If the
units operated on are not an easily used size, addressing and the like
become much more difficult. So to do multiple precision work reasonably
easily, we must use 16-bit units. This is not what we got from the
computers of old--there the unit was 35 bits plus sign, or more, and double
length products were available, as well as double/single -> quotient and
remainder. No, we are essentially in the early PC period as far as
arithmetic goes.
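
As an illustration of the 32-bit-piece case (my own sketch, not taken from
any particular machine), 160-bit addition on five 32-bit limbs needs only
one add and a carry check per limb; with 16-bit or 26-bit pieces there
would be more steps and extra masking:

```c
#include <stdint.h>

#define LIMBS 5   /* 5 x 32 bits = 160 bits */

/* 160-bit addition, least significant limb first.  Returns the
   carry out of bit 160. */
uint32_t add160(uint32_t r[LIMBS],
                const uint32_t a[LIMBS], const uint32_t b[LIMBS])
{
    uint32_t carry = 0;
    for (int i = 0; i < LIMBS; i++) {
        uint32_t s = a[i] + b[i];
        uint32_t c = (s < a[i]) ? 1u : 0u;    /* overflow of a+b */
        r[i] = s + carry;
        carry = c + ((r[i] < s) ? 1u : 0u);   /* overflow of +carry */
    }
    return carry;
}
```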

As to why these, and other things, are not in C, I presume that the
founders of C did not think of them. In general, when a limited group
designs something without getting ideas from outside, this happens. Algol
was supposed to be a universal algorithmic language, and yet it did not
include even the hardware operations of the machines of its time.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet) {purdue,pur-ee}!l.cc!cik(UUCP)