|dc.description.abstract||For many decades, the system clock has been the go-to method of timing circuits. CPUs in particular have been at least partially defined by the speed of their clock. As technology moves forward, this is proving more and more problematic. At first, clock rates increased as transistor sizes shrank. Now, transistor sizes continue to decrease while clock rates remain stable. As a result, the focus has shifted to doing more with each cycle. A greater emphasis has been placed on efficiency, because lower power draw per cycle means either less battery drain for mobile devices or, for circuits with a steadier power supply, more work accomplished within a fixed power budget. To that end, I propose that alternative timing schemes have as-yet-untapped potential and warrant further industry focus and research. To demonstrate this, various timing methods are discussed and analyzed, and a demonstration is provided for techniques that lack published statistics.
What follows is an examination of existing and new ideas in circuit timing, with a focus on microprocessors. The first method discussed eliminates the clock entirely. The resulting asynchronous circuits are a well-studied idea, previously dismissed as not worth the cost; the trajectory of processor design in recent years indicates that a renewed study of asynchronous circuits is warranted. The other option explored is an aperiodic, or elastic, clock. If the clock period can change from cycle to cycle, instructions with varying worst-case timing can control the clock to run the system closer to average-case time. This method has not received the same attention as asynchronous circuits, so some new ideas for generating and utilizing elastic clocks are proposed and demonstrated.
Tests were run on a custom CPU design to demonstrate that the elastic clock design is viable. The single-cycle processor was implemented in a 45 nm process and simulated using NanoSim. The results show that while average power increases, the total energy required to execute the test program decreases; the savings are enough to offset the power overhead of the new components. The area overhead is 3\% or less, and would be proportionally smaller in more complex designs. Given the complexity of typical pipelined CPUs, the area and power savings of a single-cycle design, combined with the throughput improvement shown by the test, make this an interesting alternative for low-power applications. Other uses of this technology are also discussed and analyzed.||en_US