Path: utzoo!utgpu!news-server.csri.toronto.edu!rutgers!cs.utexas.edu!uunet!munnari.oz.au!bruce!zik
From: zik@bruce.cs.monash.OZ.AU (Michael Saleeba)
Newsgroups: comp.arch
Subject: Totally asynchronous computers
Message-ID: <3523@bruce.cs.monash.OZ.AU>
Date: 2 Jan 91 06:26:03 GMT
Distribution: comp
Organization: Monash Uni. Computer Science, Australia
Lines: 44

When I first started learning about computer architecture, one thing struck
me as an incredibly obvious improvement: asynchronism. At the time I was
messing around with 6800s and the like, where one clock cycle was the same
as one bus cycle. This seemed pretty silly, since some operations didn't
even use the bus, yet they still had to hang around for an entire
microsecond. It seemed that a sensible architecture would wait only as long
as was warranted, not some arbitrary time.

When the 68k came along I was pleased to see that it had provision for
asynchronous bus timing, essentially because the processor clock was so much
faster than the usual bus cycle. Even so, most designs used synchronous
circuits to simplify the design. And these days asynchronism has basically
gone by the wayside in favour of things like burst modes.

Still, the basic concept applies. Why not design a processor without any
sort of clock at all? A processor built on the principle that you shouldn't
have to wait any longer than absolutely necessary for _anything_. Your ALU
would have x inputs, y outputs, and also timing inputs and outputs: it would
wait until all operands became available, then impose only the minimum delay
needed to complete that particular operation. An entire CPU built this way
would be pretty complex, but surely today's >1 million transistor pipelined,
cached, etc. devices already exceed that complexity by a long way.

Take this a step further and you find yourself in weird territory. What if
you wait only as long as it takes for the outputs to settle, rather than for
the rated delay of the device? That way chips could be rated on their
individual ability, rather than lumped into x-MHz categories. If your 100ns
RAM responded in an average of 85ns, you'd reap the benefit! And your
machine wouldn't crash on the odd occasion when things took longer. Of
course this idea has quite a few problems... but it'd be exciting! It'd be
really nice to be able to accelerate your machine just by popping in a
faster processor or faster RAM and watching everything run faster without
any extra twiddling.

Now, I'm aware of quite a few reasons why totally asynchronous machines
haven't been built much, but I can think of work-arounds to nearly all of
them. Would anyone like to offer a concrete reason why this approach is so
little used? Or mention some machines that have used similar systems?

------------------------------------------------------------------------------
Michael Saleeba - sortof postgraduate student - Monash University, Australia
zik@bruce.cs.monash.edu.au
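
P.S. To make the timing argument a bit more concrete, here is a rough toy
model in C. It has nothing to do with any real chip: the delay figures, the
assumption that actual delays fall somewhere between 70% and 100% of the
rated worst case, and the little dependence graph are all invented purely
for illustration. It just contrasts a clocked datapath, where every step is
rounded up to a whole cycle of the rated delay, with a self-timed one, where
each unit starts the instant its slowest operand settles and signals
completion after its actual delay.

    /* Toy timing model: a fully clocked datapath vs. a self-timed one.
     * All delay figures, the 70-100%-of-rated margin, and the little
     * dependence graph are invented purely for illustration.          */

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define CLOCK_NS     1000.0   /* one 1 MHz bus cycle, 6800-style    */
    #define RATED_RAM_NS  100.0   /* the "100ns" RAM from the post      */
    #define RATED_ADD_NS  400.0   /* assumed worst-case ALU add delay   */
    #define RATED_MUL_NS  700.0   /* assumed worst-case multiply delay  */

    /* Sample an "actual" delay: real parts usually beat their rating.  */
    static double actual(double rated_ns)
    {
        return rated_ns * (0.7 + 0.3 * (double)rand() / RAND_MAX);
    }

    /* Self-timed unit: starts the instant its slowest operand settles,
     * and signals completion after its actual propagation delay.       */
    static double self_timed(double rated_ns, double ready_a, double ready_b)
    {
        double start = ready_a > ready_b ? ready_a : ready_b;
        return start + actual(rated_ns);
    }

    /* Clocked unit: the start is aligned to the next clock edge and the
     * delay is rounded up to whole cycles of the rated worst case.     */
    static double clocked(double rated_ns, double ready_a, double ready_b)
    {
        double start = ready_a > ready_b ? ready_a : ready_b;
        start = ceil(start / CLOCK_NS) * CLOCK_NS;
        return start + ceil(rated_ns / CLOCK_NS) * CLOCK_NS;
    }

    /* Toy dependence graph: two fetches feed a multiply and an add,
     * whose results feed a final add.                                  */
    static double run(double (*unit)(double, double, double))
    {
        double a = unit(RATED_RAM_NS, 0.0, 0.0);
        double b = unit(RATED_RAM_NS, 0.0, 0.0);
        double p = unit(RATED_MUL_NS, a, b);
        double s = unit(RATED_ADD_NS, a, b);
        return unit(RATED_ADD_NS, p, s);
    }

    int main(void)
    {
        srand(1);
        printf("clocked   : %6.1f ns\n", run(clocked));
        printf("self-timed: %6.1f ns\n", run(self_timed));
        return 0;
    }

Compile with something like "cc -o toy toy.c -lm". With these made-up
figures the self-timed run comes out at roughly a third of the clocked one,
which is really all the argument amounts to: you only ever pay for the time
an operation actually takes, not for the worst case rounded up to a clock
edge.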