Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!usc!zaphod.mps.ohio-state.edu!swrinde!mips!pacbell.com!pacbell!osc!jgk
From: jgk@osc.COM (Joe Keane)
Newsgroups: comp.arch
Subject: Re: Killer Micro II
Message-ID: <3789@osc.COM>
Date: 11 Sep 90 10:40:46 GMT
References: <527@llnl.LLNL.GOV> <603@array.UUCP> <2482@l.cc.purdue.edu> <2497@l.cc.purdue.edu> <3755@osc.COM> <3945@bingvaxu.cc.binghamton.edu>
Reply-To: jgk@osc.COM (Joe Keane)
Organization: Versant Object Technology, Menlo Park, CA
Lines: 35
In article stephen@estragon.uchicago.edu (Stephen P Spackman) writes:
>They have two problems; the first is slowth, and the second is that
>the damned things aren't (for technical reasons having to do with
>decidability, roughly speaking) totally ordered (there're numbers with
>no SIGN, for example, because they're unordered w.r.t 0 - but they're
>all pretty damned small!).

I like your post, but i'd like to point out that i don't think either of these
is really a problem.

The matter of speed is an artifact of current architectures.  Suppose we have
an architecture with fast support for continuations, associative lookups, and
all the other things you want in a good system anyway.  Then we can design
some weird formats for partially-computed numbers.  If we have instructions
which work on these and do whatever amount of work can be done in a couple of
cycles, everything works out very well.  Unlike floating-point numbers, the
format affects only how fast the result is computed, not the value of the
result.  So 1/3*3 is 1 no matter what machine you're on.  In fact, i'd argue
exactly the opposite of the objection: on-demand precision is inherently
faster, because you always do exactly enough work to get the result you want,
and no more.
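
The exactness claim is easy to demonstrate even without exotic hardware.
Here's a minimal sketch in Python, with the standard-library Fraction type
standing in for the hypothetical partially-computed-number format (the choice
of 1/49 as the float counterexample is mine, not from the architecture
described above):

```python
# Sketch of the exactness claim: with exact rationals, the representation
# affects speed, never the value; with binary doubles, it leaks into the
# value.  Fraction is a software stand-in for the hypothetical format.
from fractions import Fraction

print(Fraction(1, 3) * 3 == 1)    # True, on any machine
print(Fraction(1, 49) * 49 == 1)  # True as well

print((1 / 49) * 49 == 1)         # False: the double for 1/49 loses a bit
```

The point is that Fraction(1, 3) is the number 1/3, not an approximation to
it, so multiplying by 3 must give exactly 1; any machine that disagrees is
simply wrong, not differently rounded.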

The second problem is really a theoretical limitation which applies to all
exact computation.  It says we can't always be sure whether two numbers are
equal.  For example, suppose we compute pi by two different methods and then
ask to compare them.  The answer from an arbitrary-precision system is
something like ``I've computed them both to 100 decimal digits, and they
agree this far.  Do you want to extend the computation?''  In contrast, the
floating-point system just makes up an answer, either ``The first is bigger
by exactly 2^-53.'' (wrong) or ``They're exactly equal.'' (how does it
know?).  I don't know about you guys, but i appreciate computer systems
being honest.
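
That extend-on-demand comparison can be mimicked in software today.  Here's a
sketch in Python using the standard decimal module; the two arctangent
formulas, the guard-digit counts, and the helper names are my illustrative
choices, not anything from the post above:

```python
# Toy version of the honest comparison: compute pi by two different
# formulas to a requested number of digits, then report how far the two
# results agree rather than inventing an exact verdict.
from decimal import Decimal, getcontext

def arctan_inv(n, digits):
    """arctan(1/n) by its Taylor series, good to about `digits` places."""
    getcontext().prec = digits + 10          # carry guard digits
    x = Decimal(1) / n
    x2 = x * x
    term, total, k, sign = x, x, 1, 1
    eps = Decimal(10) ** -(digits + 5)
    while abs(term) > eps:
        term *= x2                           # term tracks x**k
        k += 2
        sign = -sign
        total += sign * term / k
    return total

def pi_machin(digits):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    return 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)

def pi_hutton(digits):
    # Hutton's formula: pi = 8*arctan(1/3) + 4*arctan(1/7)
    return 8 * arctan_inv(3, digits) + 4 * arctan_inv(7, digits)

def agreement(a, b):
    """How many leading characters of the two decimal strings agree."""
    n = 0
    for x, y in zip(str(a), str(b)):
        if x != y:
            break
        n += 1
    return n

p1, p2 = pi_machin(100), pi_hutton(100)
print("agree through", agreement(p1, p2), "characters;",
      "extend the computation to decide further")
```

Neither result is "pi"; each is an interval known to that many digits, and
the only honest answers to an equality test are ``different by here'' or
``still agreeing; want more digits?''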
Just another theoretical post from the desk of...
[I don't actually have a signature.]