Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!uunet!mcsun!ukc!mucs!mshute
From: mshute@cs.man.ac.uk (Malcolm Shute)
Newsgroups: comp.arch
Subject: Re: What about hardware implementations of optimized algorithms?
Message-ID: <1701@m1.cs.man.ac.uk>
Date: 14 Sep 90 10:37:21 GMT
References: <4047@husc6.harvard.edu>
Sender: news@cs.man.ac.uk
Reply-To: mshute@cs.man.ac.uk (Malcolm Shute)
Organization: Department of Computer Science, University of Manchester UK
Lines: 65
Going back to the original question in this discussion (which has since
veered off towards one specific multiplication example):
In article <4047@husc6.harvard.edu> cleary@husc9.UUCP (Kenneth Cleary) writes:
>It is not difficult to find a few books on optimized algorithms implemented
>as software, but I'm having difficulty finding out about hardware aspects.
Can I express some degree of surprise at this request... I thought there
was already a massive cross-traffic of ideas from hardware to software
and back again.
Normally, if something is implemented in one medium (hardware or
software) first, then when it comes to re-implementing it in the other,
the initial approach is to mimic the original implementation (complete
with any lessons learned about algorithmic improvements in that first
medium).
For instance, the 'obvious' way to implement pseudo-random number
generators in software is to use the '*2' (multiply-by-two) operation
to set up shift registers in the program, just like the ring counters
in the original hardware. Or, look at the way the original unix
encryption program 'simulated' the wheels on the Enigma machine.
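For what it's worth, the shift-register trick might be sketched in C
along the following lines.  (The 16-bit width and the tap positions are
my own illustrative choices for the sketch, not those of any particular
historical generator.)

```c
#include <stdint.h>

/* One step of a 16-bit Fibonacci LFSR, using the '*2' (shift-left)
   operation mentioned above: double the register, then feed the
   parity of the tapped bits back into the low end, just as the
   hardware feeds its XOR gate back into the shift register.
   The tap mask 0xB400 (taps 16,14,13,11) happens to give a
   maximal-length sequence, period 2^16 - 1. */
static uint16_t lfsr_step(uint16_t s)
{
    uint16_t taps = s & 0xB400;
    taps ^= taps >> 8;          /* fold down to the parity bit */
    taps ^= taps >> 4;
    taps ^= taps >> 2;
    taps ^= taps >> 1;
    return (uint16_t)((s << 1) | (taps & 1));
}
```

Calling lfsr_step once per 'clock tick' from any non-zero seed then
walks the register through all 65535 non-zero states before repeating.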
In the other direction, look at how floating point hardware started
by implementing what was originally performed in subroutines.
If someone subsequently publishes a paper on how to improve/replace
the algorithm in the original implementation... this will have direct
relevance to the subsequent implementation.
What I think you are more probably asking about, however, is
whether there is a body of mathematics to manipulate hardware
descriptions, just as computer science has already built up
theorems to manipulate software descriptions.
I am most familiar with the work of Dr. Sheeran (now at Glasgow
University) for designing finite state machine hardware in general,
and systolic array hardware in particular, using a language which
is an extension of Backus' FFP:
Sheeran, M. (1984).
uFP, An Algebraic VLSI Design Language,
Programming Research Group, Technical Monograph PRG-39,
Oxford University Computing Laboratory,
150 pp.
Sheeran, M. (1985).
Designing regular array architectures using higher order functions,
in J.P. Jouannaud, ed.,
Functional Programming Languages and Computer Architecture,
LNCS,
Vol. 201, pp 220-237,
Springer-Verlag, Berlin.
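To give a flavour of the higher-order-function style (this is only my
own illustrative sketch in C, with a function pointer standing in for
the higher-order circuit parameter; the combinator and cell names are
invented, not Sheeran's): a 'row' combinator threads a carry through a
line of identical cells, and instantiating it with a full-adder cell
yields a ripple-carry adder.

```c
/* Each cell of a linear systolic row consumes a carry and a pair of
   inputs, and produces a new carry and an output. */
typedef struct { int carry; int out; } cell_result;

typedef cell_result (*cell_fn)(int carry, int a, int b);

/* One candidate cell: a full adder (sum bit out, carry bit onward). */
static cell_result full_adder(int cin, int a, int b)
{
    int s = a + b + cin;
    cell_result r = { s >> 1, s & 1 };
    return r;
}

/* 'row' combinator: wire n copies of the given cell in a line,
   threading the carry left to right.  Returns the final carry. */
static int row(cell_fn cell, int carry,
               const int *a, const int *b, int *out, int n)
{
    for (int i = 0; i < n; i++) {
        cell_result r = cell(carry, a[i], b[i]);
        carry = r.carry;
        out[i] = r.out;
    }
    return carry;
}
```

The point of the algebraic style is that a combinator like 'row' has
laws (e.g. how it composes with itself and with parallel maps) that
can be used to transform one circuit description into another,
provably equivalent, one.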
I know that others are working on languages for proving theorems
about other types of hardware. Prof. Sutherland refers to such work
in his Turing Award lecture:
Sutherland, I.E. (1989).
Micropipelines,
CACM,
Vol. 32, No. 6, pp 720-738.
--
Malcolm SHUTE. (The AM Mollusc: v_@_ ) Disclaimer: all