Path: utzoo!utgpu!news-server.csri.toronto.edu!rutgers!sun-barr!cs.utexas.edu!wuarchive!zaphod.mps.ohio-state.edu!uakari.primate.wisc.edu!aplcen!uunet!mstan!amull
From: amull@Morgan.COM (Andrew P. Mullhaupt)
Newsgroups: comp.arch
Subject: Re: Killer Micro II
Message-ID: <1619@s6.Morgan.COM>
Date: 3 Sep 90 23:03:25 GMT
References: <527@llnl.LLNL.GOV> <603@array.UUCP> <2482@l.cc.purdue.edu> <8442@fy.sei.cmu.edu>
Organization: Morgan Stanley & Co. NY, NY
Lines: 45
In article <8442@fy.sei.cmu.edu>, firth@sei.cmu.edu (Robert Firth) writes:
> In article aglew@dwarfs.crhc.uiuc.edu (Andy Glew) writes:
> With respect, I disagree. In my opinion, there are already far too
> many engineers who use rotten numerical algorithms and trust to
> double precision and dumb luck; going to quadruple precision will
> merely encourage more of the same.
Well, people who compute without thinking usually get what they deserve,
but standards and well-designed machines should not be attempts at
idiot-proofing. Going to quadruple precision _will_ allow certain
_fast_ algorithms to be used, such as solving least squares problems
via the normal equations of the overdetermined system, to double
precision accuracy, by accumulating the inner products in quad. (See
Lawson and Hanson, or Wilkinson, for details.) This algorithm
parallelizes readily for coarse-grain multiprocessing, but the usual
Householder QR is not so simple. As someone who runs least squares
problems which take hours on multi-megaflop hardware, I have every
sympathy for Kahan's proposed high precision arithmetic.
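The trade can be sketched one precision level down (a sketch of my own,
not Kahan's proposal: numpy has no quad type, so float32 stands in for
double and float64 for quad). Forming the normal equations A^T A x =
A^T b is fast and parallelizes well, but it squares the condition
number -- unless the inner products are accumulated in the next
precision up:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 400, 10
# moderately ill-conditioned overdetermined system (kappa ~ 1e3)
A = rng.standard_normal((m, n)) @ np.diag(np.logspace(0, -3, n))
x_true = rng.standard_normal(n)
b = A @ x_true

# "working precision" data: single plays the role of double
A32, b32 = A.astype(np.float32), b.astype(np.float32)

# normal equations accumulated in working precision: kappa(A)^2 bites
G = A32.T @ A32                      # inner products in float32
c = A32.T @ b32
x_lo = np.linalg.solve(G.astype(np.float64), c.astype(np.float64))

# same algorithm, inner products accumulated one precision higher
A64, b64 = A32.astype(np.float64), b32.astype(np.float64)
x_hi = np.linalg.solve(A64.T @ A64, A64.T @ b64)

err_lo = np.linalg.norm(x_lo - x_true) / np.linalg.norm(x_true)
err_hi = np.linalg.norm(x_hi - x_true) / np.linalg.norm(x_true)
# err_hi sits near working (single) accuracy; err_lo is far worse
```

The matrix products are embarrassingly parallel row-blocks, which is
exactly why the method is attractive on coarse-grain machines.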
>
> What I think we need is hardware interval arithmetic. When the
> printout shows them beyond dispute that the choice is between
> 50 bits of noise and 100 bits of noise, perhaps they'll spend
> more time on better algorithms and less time pushing for wrong
> answers faster.
Ummm, no. There are some non-obvious problems with interval arithmetic;
perhaps the best known is that Newton's method can converge in an
entirely tame way while the intervals blow up. (Any iteration which
has an unstable manifold is liable to have this property. To
bring this closer to home, that would include the simplex algorithm
for linear programming, after Smale's analysis...)
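The blow-up is easy to exhibit (a toy sketch of my own, not from any
interval package): point Newton for f(x) = x^2 - 2 converges rapidly,
but naive interval evaluation of the same iteration treats each
occurrence of x as independent (the "dependency problem"), so the
interval widths grow step after step.

```python
class Interval:
    """Closed interval [lo, hi] with naive outward arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def width(self):
        return self.hi - self.lo

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = (self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi)
        return Interval(min(ps), max(ps))

    def __truediv__(self, other):
        assert other.lo > 0 or other.hi < 0, "0 in divisor interval"
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

def newton_step(X):
    # naive interval evaluation of  x - (x*x - 2) / (2*x)
    two = Interval(2.0, 2.0)
    return X - (X * X - two) / (two * X)

X = Interval(1.4, 1.5)            # contains sqrt(2) = 1.41421...
widths = [X.width()]
for _ in range(3):
    X = newton_step(X)
    widths.append(X.width())
# widths grow at every step, even though point Newton started
# anywhere in [1.4, 1.5] converges quadratically to sqrt(2)
```

Rewriting the step in centered form tames this particular example, but
the general point stands: an enclosure can diverge while the underlying
iteration is perfectly well behaved.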
I think your problem is that you don't see those extra bits of
mantissa and exponent as memory. (What other kind of resource are they?)
This makes them available for the classical trade-off between memory
and speed. Sure, a lot of people who program computers don't know
how to write algorithms. That's no reason to make computers with
totally different arithmetic: the people who don't care to understand
today's floating point will also not care to understand tomorrow's.
On the other hand, they will usually be willing to hire someone who
does know and does care.
Later,
Andrew Mullhaupt