Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!wuarchive!zaphod.mps.ohio-state.edu!rpi!leah!bingvaxu!kym
From: kym@bingvaxu.cc.binghamton.edu (R. Kym Horsell)
Newsgroups: comp.arch
Subject: Re: benchmark for evaluating extended precision (data, long)
Keywords: extended precision,multiply,benchmark,arithmetic
Message-ID: <4004@bingvaxu.cc.binghamton.edu>
Date: 13 Sep 90 18:38:10 GMT
References: <2114@charon.cwi.nl> <119983@linus.mitre.org> <3999@bingvaxu.cc.binghamton.edu> <120002@linus.mitre.org>
Reply-To: kym@bingvaxu.cc.binghamton.edu (R. Kym Horsell)
Organization: SUNY Binghamton, NY
Lines: 19
In article <120002@linus.mitre.org> bs@linus.mitre.org (Robert D. Silverman) writes:
[distinguishing the theoretical complexity of the division problem from the
complexity of a particular division algorithm].
>I'm talking about PRACTICAL algorithms here, not theoretical ones.
>There doesn't seem to be a recursive bisection algorithm similar to
>Karatsuba's that works for division. Methods based on FFT's are
>ineffective until one reaches thousands of bits. For numbers in the
>(say) 200 decimal digit range, ordinary long division is best, and
>that algorithm is O(n^2).
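For readers unfamiliar with the recursive bisection Silverman mentions, here
is a minimal sketch of Karatsuba's multiplication in Python (not from the
original posts; the cutoff BASE_BITS is an arbitrary illustrative choice --
real implementations tune it empirically):

```python
# Karatsuba's recursive-bisection multiplication: splits each n-bit
# operand in half and does 3 half-size multiplies instead of 4,
# giving O(n^log2(3)) ~ O(n^1.585) instead of the O(n^2) schoolbook method.
BASE_BITS = 32  # hypothetical cutoff; below this, use ordinary multiply

def karatsuba(x, y):
    """Multiply non-negative integers x and y."""
    n = max(x.bit_length(), y.bit_length())
    if n <= BASE_BITS:
        return x * y                     # small operands: ordinary multiply
    h = n // 2                           # split point, in bits
    x1, x0 = x >> h, x & ((1 << h) - 1)  # x = x1*2^h + x0
    y1, y0 = y >> h, y & ((1 << h) - 1)  # y = y1*2^h + y0
    z2 = karatsuba(x1, y1)               # high product
    z0 = karatsuba(x0, y0)               # low product
    # middle term recovered with ONE extra multiply instead of two:
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0
    return (z2 << (2 * h)) + (z1 << h) + z0
```

The point of contention is that no comparably simple bisection trick is
known for division, so long division stays O(n^2) at these sizes.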
So long as you maintain the distinction between the complexity of the
problem, the complexity of an algorithm, and the complexity of the
algorithm you actually use. Such niceties are sometimes lost on some of
us who read this group.
I will check out your claim re the _practicality_ of FFT algorithms and
will report the results to the net; they may be of use to those
concerned about implementation issues.
-Kym Horsell