Path: utzoo!censor!geac!torsqnt!news-server.csri.toronto.edu!cs.utexas.edu!rice!titan.rice.edu!preston
From: preston@titan.rice.edu (Preston Briggs)
Newsgroups: comp.arch
Subject: Re: What about hardware implementations of optimized algorithms?
Message-ID: <1990Sep3.182541.19270@rice.edu>
Date: 3 Sep 90 18:25:41 GMT
References: <4047@husc6.harvard.edu> <889@dg.dg.com>
Sender: news@rice.edu (News)
Organization: Rice University, Houston
Lines: 31
In article <889@dg.dg.com> publius@dg-gw.dg.com (Publius) writes:
>In article <4047@husc6.harvard.edu> cleary@husc9.UUCP (Kenneth Cleary) writes:
>> Donald E. Knuth, The Art of Computer Programming,
>> vol.2, Seminumerical Algorithms
>> page 278, section 4.3.3. How Fast Can We Multiply?
>>Knuth talks about getting faster multiplication than order-n**2 behavior.
>>He describes some methods of getting behavior of n**(lg3) or n**1.585
>There is an algorithm that can do even faster than O(n**1.585).
>I don't have the reference in front of me now and I don't remember
>the exact time complexity. But it can be found in Aho, Ullman, and Hopcroft's
>book on algorithms. That algorithm is based on number-theoretic FFT.
>It takes advantage of the similarity between multiplication and convolution in
>signal processing and the fact that the classic Convolution Theorem can be
>extended to the abstract algebraic structures.
It's called Sch\"{o}nhage-Strassen.
You can use it to multiply two $n$-bit numbers in
$O_{B}(n \log n \log\log n)$ steps.
Unfortunately, this is the asymptotic complexity (as $n$ becomes
very large). The key point is made in the following sentence:
"The value of $n\/$ for which many of the algorithms
described here become practical is enormous."
That is, they aren't going to help us with 64 bit multiplication.
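For what it's worth, the $n^{\lg 3} \approx n^{1.585}$ method Knuth describes is
Karatsuba's trick: split each operand in half and trade one of the four
half-size multiplications for a few extra additions. A minimal sketch in
Python (illustrative only, not a tuned implementation; Python's arbitrary
precision ints stand in for a bignum library):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers x and y in O(n^lg 3) bit operations."""
    # Base case: small operands go straight to the machine multiply.
    if x < 10 or y < 10:
        return x * y
    # Split both operands at half the larger bit length.
    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    x_hi, x_lo = x >> half, x & mask
    y_hi, y_lo = y >> half, y & mask
    # Three recursive half-size multiplications instead of four:
    a = karatsuba(x_hi, y_hi)
    b = karatsuba(x_lo, y_lo)
    c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b   # cross terms
    # Recombine: x*y = a*2^(2*half) + c*2^half + b
    return (a << (2 * half)) + (c << half) + b
```

The recurrence is $T(n) = 3T(n/2) + O(n)$, giving $O(n^{\lg 3})$; unlike
Sch\"{o}nhage-Strassen, the break-even point here is small enough that real
bignum packages actually use it.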
--
Preston Briggs looking for the great leap forward
preston@titan.rice.edu