Path: utzoo!utgpu!news-server.csri.toronto.edu!mailrus!iuvax!news!mentor.cc.purdue.edu!l.cc.purdue.edu!cik
From: cik@l.cc.purdue.edu (Herman Rubin)
Newsgroups: comp.arch
Subject: Re: Killer Micro II
Message-ID: <2497@l.cc.purdue.edu>
Date: 1 Sep 90 01:39:34 GMT
References: <527@llnl.LLNL.GOV> <603@array.UUCP> <2482@l.cc.purdue.edu>
Organization: Purdue University Statistics Department
Lines: 66
In article , meissner@osf.org (Michael Meissner) writes:
> In article <8442@fy.sei.cmu.edu> firth@sei.cmu.edu (Robert Firth)
> writes:
>
> | In article aglew@dwarfs.crhc.uiuc.edu (Andy Glew) writes:
> | >It sounds like Kahan is pushing for the 128 bit quad precision that
> | >was dropped from the IEEE FP standard. Power to him!
> |
> | With respect, I disagree. In my opinion, there are already far too
> | many engineers who use rotten numerical algorithms and trust to
> | double precision and dumb luck; going to quadruple precision will
> | merely encourage more of the same.
> |
> | What I think we need is hardware interval arithmetic. When the
> | printout shows them beyond dispute that the choice is between
> | 50 bits of noise and 100 bits of noise, perhaps they'll spend
> | more time on better algorithms and less time pushing for wrong
> | answers faster.
>
> Have we actually gotten to the point where we need that much precision
> on a day to day basis? I seem to recall that in my numerical analysis
> course 12 years ago, that it was said that your average physical
> measurement only had 3-5 digits of accuracy. This means that any
> answer received cannot be more accurate than the input. Now in order
> to avoid round off error, you certainly need more digits internally,
> but IEEE double gives something like 12-14 digits. One of the problems the
> computer has introduced is too much exact numerical quantization (ie,
> the often quoted statistic that the average family has 2.4 children).
> It would seem to me that providing double the precision might not give
> any more accurate answers.
Several errors have been made here. There are many situations where
considerably more information can be obtained on output than is available
on input. There are also cases in which the cheap method is inherently
ill-conditioned and the alternatives are expensive. This happens, for
example, in regression analysis where there is no choice of the "design"
matrix, or where only poor designs are possible. In such cases there
usually ARE better-conditioned methods available, but they are much more
costly.
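To make the conditioning point concrete, here is a toy sketch (modern
Python, purely illustrative; the two design points, the coefficients, and
the 2x2 size are all hypothetical). It fits y = b1 + b2*t at two nearly
equal design points, once by direct elimination and once by the cheap
textbook route of forming the normal equations, which squares the
condition number:

```python
# Hypothetical ill-conditioned least-squares example (2x2, pure Python).
# Design points t = 1 and t = 1+e are nearly collinear; the true
# coefficients are b1 = b2 = 1.
e = 1e-5
X = [[1.0, 1.0], [1.0, 1.0 + e]]
y = [2.0, 2.0 + e]

# Route 1: solve X b = y directly by elimination.  The error is roughly
# cond(X) * machine epsilon, on the order of 4e5 * 1e-16 = 4e-11 here.
b2 = (y[1] - y[0]) / (X[1][1] - X[0][1])
b1 = y[0] - X[0][1] * b2
err_direct = max(abs(b1 - 1.0), abs(b2 - 1.0))

# Route 2: the cheap route, normal equations (X^T X) b = X^T y, solved by
# Cramer's rule.  Forming X^T X squares the condition number, so the error
# is roughly cond(X)^2 * machine epsilon, on the order of 1e-5 here.
a11 = X[0][0]**2 + X[1][0]**2
a12 = X[0][0]*X[0][1] + X[1][0]*X[1][1]
a22 = X[0][1]**2 + X[1][1]**2
c1 = X[0][0]*y[0] + X[1][0]*y[1]
c2 = X[0][1]*y[0] + X[1][1]*y[1]
det = a11*a22 - a12*a12
n1 = (a22*c1 - a12*c2) / det
n2 = (a11*c2 - a12*c1) / det
err_normal = max(abs(n1 - 1.0), abs(n2 - 1.0))

print("direct solve error:   ", err_direct)
print("normal equations error:", err_normal)
```

Both routes get the same exact answer in exact arithmetic; only the
working precision decides how much of it survives, which is why the cheap
method becomes usable again as soon as more precision is available.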
This can occur in other situations as well. One may, for example, need
an integration procedure for a class of problems involving a probability
distribution whose moments are easily computed from a few parameters,
and those parameters may themselves be inaccurate. That does not
invalidate the derived procedure, and in many cases 10-20 digits of
accuracy can be lost in the computations without the final answer losing
any accuracy at all.
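A small illustration of digits lost mid-computation (a toy sketch in
modern Python; the choice of exp(-10) and the naive Taylor series are
mine, not from the discussion above): the alternating series for exp(-10)
has terms as large as about 2756 while the answer is about 4.5e-5, so
roughly 8 decimal digits cancel away during the summation. In double
precision the surviving digits are still far more accurate than a 3-5
digit physical measurement; with more working precision nothing usable
would be lost at all.

```python
import math

# Naive alternating Taylor series for exp(-x) at x = 10.  The largest
# term is ~2756 and the limit is ~4.54e-5, so roughly 8 decimal digits
# cancel during the summation; the answer keeps only the digits that
# the working precision can spare.
x = 10.0
term = 1.0
s = 1.0
for n in range(1, 80):
    term *= -x / n        # next term (-x)^n / n!
    s += term

true = math.exp(-x)
rel_err = abs(s - true) / true
print("series sum:    ", s)
print("math.exp(-10): ", true)
print("relative error:", rel_err)
```

The point is not that the series is a good algorithm (it is not); it is
that intermediate cancellation sets the precision budget, and a wider
format pays for it directly.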
That the result is needed to only a few digits of accuracy does not mean
that a computational procedure delivering it at that precision is readily
available.
BTW, interval arithmetic is unlikely to help unless the input data are
treated as exact; it exaggerates the error bounds far too much.
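The exaggeration is easy to see with a minimal interval-arithmetic sketch
(hypothetical toy code, not a real package): a value is a (lo, hi) pair
and every operation takes the worst case over the endpoints, so the
arithmetic cannot see when two operands are the same quantity.

```python
# Toy interval arithmetic: worst-case endpoint combinations only.
def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    ps = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(ps), max(ps))

x = (0.9, 1.1)               # a quantity known to about +/-10%

# Dependency problem: x - x is identically 0, but interval subtraction
# treats the operands as independent and reports the full +/-0.2 spread.
diff = isub(x, x)
print(diff)                  # roughly (-0.2, 0.2), not (0, 0)

# x - x*x over [0.9, 1.1] truly ranges over [-0.11, 0.09] (width 0.2),
# but the interval evaluation reports roughly (-0.31, 0.29) (width 0.6),
# three times too wide after just two operations.
poly = isub(x, imul(x, x))
print(poly)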
> There are probably groups that may need such extremes in precision,
> but are they really enough to drive the market?
Considering that the entire ALU typically costs only a small fraction
of the cost of the computer, is this a reasonable question? Something
which adds less than $100 to the cost of a high-end PC, and only a
kilobuck to a university computing system, is not extravagant. One
could integrate the fixed and floating point arithmetic units to save
costs, if this is a problem.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette, IN 47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet) {purdue,pur-ee}!l.cc!cik(UUCP)