|dc.description.abstract||The ubiquitous era of emerging portable devices demands long battery lifetime as a primary design goal. Subthreshold circuit design can reduce energy per cycle by an order of magnitude relative to circuits operating at nominal voltage by scaling the power supply voltage (Vdd) below the device threshold voltage. However, it significantly lowers circuit performance as a penalty. The stringent energy budgets and moderate speed requirements of ultra low power systems in the market may not be best satisfied by scaling a single supply voltage alone. Circuits optimized with dual supply voltages provide an opportunity to resolve these competing demands.
Utilizing time slack for dual-Vdd assignment is a well-known technique for reducing the power consumption of a circuit operating at nominal Vdd, at a small extra cost in physical design. Most previous work on subthreshold circuit design used only a single, scaled-down supply voltage to reduce energy consumption, without exploiting time slack.
We propose a method for minimum energy digital CMOS (Complementary Metal Oxide Semiconductor) circuit design using dual subthreshold supplies. The delay penalty of a traditional level converter is unacceptably high when the voltages are in the subthreshold range. In this work, level converters are either not used at all, or special multiple logic-level gates are used only when, after accounting for their cost, they offer an advantage. Starting from a lowest energy per cycle design whose single supply voltage is in the subthreshold range, a new mixed integer linear program (MILP) finds a second, lower supply voltage that is optimally assigned to gates with time slack. The MILP accounts for the energy and delay characteristics of logic gates interfacing two different signal levels. New types of linearized AND and OR constraints are used in this MILP. We show energy savings of up to 24.5% over the best available designs of the ISCAS'85 benchmark circuits.
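As an illustration of the kind of linearization the MILP relies on, the standard way to express logical AND and OR over 0/1 variables with purely linear inequalities is sketched below. The exact constraints in this work may differ; the helper names and the auxiliary variable z are illustrative only.

```python
# Standard linearizations of z = x AND y and z = x OR y over binary
# variables, as used in mixed integer linear programming. These are
# generic textbook forms, not necessarily the exact constraints of the
# proposed MILP.

def and_constraints_hold(x, y, z):
    # z = x AND y is captured by three linear inequalities.
    return z <= x and z <= y and z >= x + y - 1

def or_constraints_hold(x, y, z):
    # z = x OR y is captured by three linear inequalities.
    return z >= x and z >= y and z <= x + y

# Verify that each set of inequalities admits exactly the correct z
# for every 0/1 input combination.
for x in (0, 1):
    for y in (0, 1):
        assert [z for z in (0, 1) if and_constraints_hold(x, y, z)] == [x & y]
        assert [z for z in (0, 1) if or_constraints_hold(x, y, z)] == [x | y]
print("linearized AND/OR constraints match the truth tables")
```

Because every constraint is a linear inequality over continuous-relaxable binary variables, such logic can be embedded directly in an MILP without nonlinear terms.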
For modern large VLSI systems, the MILP may suffer from unacceptable run-time, as the MILP algorithm for dual voltage design has exponential-time complexity. Gate slack analysis gives an opportunity to reduce the time complexity to linear for assigning the optimal lower supply voltage (VDDL) to gates that initially all operate at the higher supply voltage (VDDH) in a single-Vdd circuit. The slack of a gate in a digital circuit is the difference between the critical path delay and the delay of the longest path through that gate. Building on previous work in static timing analysis, we have developed a linear-time algorithm for computing the slack of every gate in a circuit.
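The definition above leads to a two-pass traversal: a forward pass over the circuit graph computes the longest arrival delay at each gate, and a backward pass computes the longest departure delay from each gate to an output; their sum is the longest path through the gate. The sketch below illustrates this idea under simple assumptions (a gate-level DAG with fixed gate delays, given in topological order); the identifiers and data structures are illustrative, not taken from the cited work.

```python
# Linear-time gate slack computation on a combinational circuit modeled
# as a DAG of gates with fixed delays. slack(g) = Tc - (longest path
# through g), where Tc is the critical path delay. Illustrative sketch;
# real STA handles pin-to-pin arcs, interconnect, and rise/fall delays.

from collections import defaultdict

def compute_slacks(gates, fanin, delay):
    """gates: list of gate names in topological order;
    fanin: dict gate -> list of predecessor gates (empty at inputs);
    delay: dict gate -> gate delay."""
    fanout = defaultdict(list)
    for g in gates:
        for p in fanin[g]:
            fanout[p].append(g)

    # Forward pass: arrival(g) = longest delay from any input to g's output.
    arrival = {}
    for g in gates:
        arrival[g] = delay[g] + max((arrival[p] for p in fanin[g]), default=0.0)

    Tc = max(arrival.values())  # critical path delay

    # Backward pass: depart(g) = longest delay from g's output to any output.
    depart = {}
    for g in reversed(gates):
        depart[g] = max((depart[s] + delay[s] for s in fanout[g]), default=0.0)

    # Longest path through g is arrival(g) + depart(g); both passes visit
    # each gate and edge once, so the total work is linear in circuit size.
    return {g: Tc - (arrival[g] + depart[g]) for g in gates}, Tc

# Example: chain a -> b -> c (delays 1, 2, 3) with a side gate d fed by a.
gates = ["a", "b", "c", "d"]
fanin = {"a": [], "b": ["a"], "c": ["b"], "d": ["a"]}
delay = {"a": 1.0, "b": 2.0, "c": 3.0, "d": 1.0}
slacks, Tc = compute_slacks(gates, fanin, delay)
print(Tc, slacks)  # Tc = 6.0; a, b, c lie on the critical path (slack 0), d has slack 4.0
```

Gates with zero slack form the critical paths and must stay at VDDH, while gates with large slack are the natural candidates for VDDL assignment.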
We propose a new slack-time based algorithm for dual-Vdd design to achieve maximum energy saving. For a given lower supply voltage, we first compute the slacks of all gates in the circuit and then partition the gates into three groups. In the first group, all gates can be unconditionally assigned the low voltage. In the second group, no gate can be assigned the low voltage. In the third group, a low voltage assignment to any single gate will not violate the critical path timing, and therefore the low voltage must be assigned to gates sequentially, one at a time. Because all steps of the voltage assignment algorithm rely on linear-time analysis, the overall complexity of this energy optimization method is close to linear in the number of gates. We apply our algorithm to optimize the ISCAS'85 benchmark circuits and compare the results with those from the MILP. Energy savings from the new slack-time based algorithm are very close to the globally optimal MILP solutions, and the optimization time using gate slack can be as low as 1/43 of that of the MILP method for dual-Vdd design. The new slack-time based algorithm is especially beneficial for large circuits, which may contain few critical or near-critical paths and many paths with large slack.||en_US