Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!samsung!sdd.hp.com!decwrl!sgi!shinobu!odin!bruceh
From: bruceh@sgi.com (Bruce R. Holloway)
Newsgroups: comp.arch
Subject: Re: F.P. vs. arbitrary-precision (was: Killer Micro II)
Message-ID: <1990Sep6.224018.17701@odin.corp.sgi.com>
Date: 6 Sep 90 22:40:18 GMT
References: <3755@osc.COM> <4513@taux01.nsc.com> <119244@linus.mitre.org>
Sender: news@odin.corp.sgi.com (Net News)
Organization: Silicon Graphics, Inc., Mountain View, CA
Lines: 35
In article <119244@linus.mitre.org> bs@gauss.UUCP (Robert D. Silverman) writes:
>In article <4513@taux01.nsc.com> amos@taux01.nsc.com (Amos Shapir) writes:
>:[Quoted from the referenced article by jgk@osc.COM (Joe Keane)]
>:
>:>There are a lot of machines out there that can do IEEE 64-bit floating point,
>:>with all its precise rules and cases, but can't multiply two 32-bit integers
>:>in a reasonable way. What are we to make of this? It's just dumb.
>
>The four basic operations of arithmetic are +, -, x, /. Any computer that
>can't perform them on its atomic data units [whatever the word size is]
>is a joke.

Where did you learn this?  By looking at a calculator?  What about sqrt()?
In my line of work we use it a lot & know lots of ways to approximate it,
depending on what's available & how fast it needs to be.  Still, there is
a pencil & paper method that's exact & fairly easy to do in hardware.
Perhaps I should say any computer that can't do sqrt() is a joke!
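For concreteness, here's the binary version of that pencil & paper method
sketched in C (the function name is mine, and uint32_t stands in for
whatever unsigned 32-bit type you have): one conditional subtract per
result bit, which is exactly why it maps so well onto hardware.

```c
#include <stdint.h>

/* Digit-by-digit ("pencil & paper") square root, binary version.
   A hypothetical sketch: one shift-and-conditional-subtract step
   per result bit, no multiplies or divides anywhere. */
uint32_t isqrt32(uint32_t n)
{
    uint32_t root = 0;
    uint32_t bit  = 1u << 30;       /* highest power of 4 that fits */

    while (bit > n)                 /* find the first "digit" position */
        bit >>= 2;

    while (bit != 0) {
        if (n >= root + bit) {      /* does this bit belong in the root? */
            n -= root + bit;
            root = (root >> 1) + bit;
        } else {
            root >>= 1;
        }
        bit >>= 2;                  /* next lower bit pair */
    }
    return root;                    /* floor(sqrt(original n)) */
}
```

The decimal version taught in school works the same way, two digits of the
radicand at a time; base 2 just makes the "guess a digit" step a single
compare.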

The basic operations are + and *.  That's why there's an algebraic
construct called a "field".  The other two operations are just derived
ones that involve inverses.  Actually, if you did statistics on numerical
programs (maybe you have), you would find that divide occurs much less
frequently than the others.  So the guys who sell multiplier-accumulator
chips are right: even if it takes 10x longer to divide, it probably
occurs 10x less frequently anyway.

Obviously, any Turing machine can do all five of these operations.  So the
issue isn't whether the machine can do it, whether it has a machine-language
instruction to do it, or whether that instruction happens in a single cycle.
I think the real issue is that the programming languages don't do the job:
you can't multiply two ints & get a double-precision result without
writing a subroutine.  The compiler is free to do +, *, and sqrt with a
single instruction or with a subroutine, but the guy who wants a
double-precision product, or to detect overflow after addition, has to
write a subroutine to do it.
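To make the complaint concrete, here's roughly what those hand-written
subroutines look like in portable C.  The function names are mine, and
uint32_t/int32_t stand in for whatever 32-bit types your compiler offers;
the point is that the hardware could do each of these in one or two
instructions, yet the language forces you to spell them out:

```c
#include <stdint.h>

/* Full 64-bit product of two 32-bit values, built from 16-bit
   halves using only 32-bit arithmetic.  Result is returned as
   separate high and low words. */
void mul32x32(uint32_t a, uint32_t b, uint32_t *hi, uint32_t *lo)
{
    uint32_t al = a & 0xFFFFu, ah = a >> 16;
    uint32_t bl = b & 0xFFFFu, bh = b >> 16;

    uint32_t t0 = al * bl;          /* contributes to bits  0..31 */
    uint32_t t1 = al * bh;          /* contributes to bits 16..47 */
    uint32_t t2 = ah * bl;          /* contributes to bits 16..47 */
    uint32_t t3 = ah * bh;          /* contributes to bits 32..63 */

    /* Sum the middle column; it can't overflow 32 bits. */
    uint32_t mid = (t0 >> 16) + (t1 & 0xFFFFu) + (t2 & 0xFFFFu);

    *lo = (t0 & 0xFFFFu) | (mid << 16);
    *hi = t3 + (t1 >> 16) + (t2 >> 16) + (mid >> 16);
}

/* Would signed x + y overflow?  Checked without ever computing
   the sum, so no undefined behavior is invoked. */
int add_overflows(int32_t x, int32_t y)
{
    if (x > 0 && y > 0) return x > INT32_MAX - y;
    if (x < 0 && y < 0) return x < INT32_MIN - y;
    return 0;           /* mixed signs can never overflow */
}
```

A machine with a widening multiply and a condition-code register does the
first in one instruction and the second for free after the add; in the
language, each costs a dozen lines and several real multiplies.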