Path: utzoo!utgpu!watserv1!watmath!att!occrsh!uokmax!apple!usc!cs.utexas.edu!uwm.edu!ogicse!littlei!intelisc!hays
From: hays@isc.intel.com (Kirk Hays)
Newsgroups: comp.arch
Subject: Re: What about hardware implementations of optimized algorithms?
Message-ID: <908@intelisc.isc.intel.com>
Date: 6 Sep 90 22:55:22 GMT
References: <4047@husc6.harvard.edu> <889@dg.dg.com>
Organization: Intel Scientific Computers
Lines: 24
In article <889@dg.dg.com> publius@dg-gw.dg.com (Publius) writes:
>In article <4047@husc6.harvard.edu> cleary@husc9.UUCP (Kenneth Cleary) writes:
>>Knuth talks about getting faster multiplication than order-n**2 behavior.
>>He describes some methods of getting behavior of n**(lg3) or n**1.585
>
>
>There is an algorithm that can do even faster than O(n**1.585).
>I don't have the reference in front of me now and I don't remember
>the exact time complexity. But it can be found in Aho, Ullman, and Hopcroft's
>book on algorithms. That algorithm is based on number-theoretic FFT.
Looking in my copy of Aho, Ullman, and Hopcroft (ISBN 0-201-0023-7),
they discuss multiplication on pages 307-310, and claim the presented
algorithm is O(n**1.59), which is sufficiently close to O(n**1.585) to
lead me to believe that they are discussing the same algorithm as
Knuth. I don't have my Knuth handy to verify this.
Have you a different reference for multiplication faster than O(n**1.59)?
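For anyone without either book handy, the O(n**lg 3) divide-and-conquer scheme (Karatsuba's, the one Knuth describes) is easy to sketch. This is a modern restatement in Python, not the presentation from either text; the point is that the three recursive products (instead of four) yield the n**1.585 exponent:

```python
def karatsuba(x, y):
    """Multiply nonnegative integers x and y in O(n**lg 3) digit operations."""
    # Small operands: fall back to the machine multiply.
    if x < 10 or y < 10:
        return x * y
    # Split both operands at roughly half the larger bit length.
    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    xh, xl = x >> half, x & mask
    yh, yl = y >> half, y & mask
    # Three recursive multiplications instead of the naive four:
    a = karatsuba(xh, yh)                       # high * high
    b = karatsuba(xl, yl)                       # low * low
    c = karatsuba(xh + xl, yh + yl) - a - b     # cross terms, recovered by subtraction
    # Recombine: x*y = a*2**(2*half) + c*2**half + b
    return (a << (2 * half)) + (c << half) + b
```

Solving the recurrence T(n) = 3T(n/2) + O(n) gives T(n) = O(n**lg 3), i.e. the O(n**1.585) figure under discussion; the FFT-based method in Aho, Hopcroft, and Ullman does better still by replacing the recursion with a transform.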
--
Kirk Hays - NRA Life. Dulce et Decorum est, pro patria mori.
"The way of the samurai is found in death. It is this simple. If you
can accept it then you will fight as though you are already dead."
Yamamoto Tsunetomo, _Hagakure_