Minimizing N-Detect Tests for Combinational Circuits
Except where reference is made to the work of others, the work described in this thesis is
my own or was done in collaboration with my advisory committee. This thesis does not
include proprietary or classified information.
Kalyana R. Kantipudi
Certificate of Approval:
Charles E. Stroud
Professor
Electrical and Computer Engineering
Vishwani D. Agrawal, Chair
James J. Danaher Professor
Electrical and Computer Engineering
Victor P. Nelson
Professor
Electrical and Computer Engineering
George T. Flowers
Interim Dean
Graduate School
Minimizing N-Detect Tests for Combinational Circuits
Kalyana R. Kantipudi
A Thesis
Submitted to
the Graduate Faculty of
Auburn University
in Partial Fulfillment of the
Requirements for the
Degree of
Master of Science
Auburn, Alabama
May 10, 2007
Minimizing N-Detect Tests for Combinational Circuits
Kalyana R. Kantipudi
Permission is granted to Auburn University to make copies of this thesis at its discretion,
upon the request of individuals or institutions and at their expense.
The author reserves all publication rights.
Signature of Author
Date of Graduation
Vita
Kalyana R. Kantipudi, son of Mr. K. V. Mohana Rao and Mrs. K. Padma Kumari,
was born in Kovvur, Andhra Pradesh, India. He graduated from Vikas Jr. College,
Rajahmundry in 2000. He earned the degree Bachelor of Technology in Electronics and
Communication Engineering from Acharya Nagarjuna University, Guntur, India in 2004.
Thesis Abstract
Minimizing N-Detect Tests for Combinational Circuits
Kalyana R. Kantipudi
Master of Science, May 10, 2007
(B.Tech., Acharya Nagarjuna University, 2004)
91 Typed Pages
Directed by Vishwani D. Agrawal
An N-detect test set detects each stuck-at fault by at least N different vectors. N-
detect tests are of practical interest because of their ability to improve the defect coverage.
The main problem that limits their use is their size. Researchers have proposed various
methods to generate N-detect tests, but not much work has been done on minimizing
them. Also, there is no clear minimum size estimate of an N-detect test set.
We provide a theoretical lower bound on the size of an N-detect test set. This is derived
using the independent fault set concept proposed by Akers et al. An independent fault set
(IFS) is a set of faults in which no two faults are detectable by the same vector. A known
lower bound on the size of any single-detect test set is equal to the size of the largest IFS.
We show that the lower bound on the size of an N-detect test set is N times the size of the
largest IFS.
Given any N-detect vector set, we derive a minimal N-detect vector set from it. Integer
linear programming (ILP) is used to find the smallest set. Each vector is assigned a 0-1
integer variable such that 1 means that the vector is included in, and 0 means that it is excluded
from, the minimized set. Faults are simulated without fault dropping and that result is used
to formulate the ILP model. The model consists of a single N-detection constraint for each
fault and an objective function to be minimized, which is the sum of all integer variables.
The problem is then solvable by any ILP solver program.
The ILP method, when applied to the ISCAS85 benchmark circuits, produces better
results compared to an earlier N-detect test minimization method [64]. For most circuits,
an order of magnitude reduction in the minimization CPU time is achieved. For the circuit
c1355, the earlier minimization method produced a 15-detect test set of size 1274 in 5674.6
seconds, while our ILP method produced a test set of size 1260 in just 52.1 seconds. We
make two observations. First, the ILP method is limited in its application to much larger
circuits because of its exponential complexity. Second, an absolute minimal N-detect test
set can be guaranteed only when we start with the exhaustive set of inputs. Because that
is practically impossible for large circuits, the quality of the result will depend upon the
starting set. We classify circuits based on the overlap between the input cones of their
primary outputs. For type-I circuits, the cones have a large overlap and the initial test
set should have many (if not all) vectors for each fault. For type-II circuits, the cones are
largely disjoint and vectors should be generated with don't cares and then combined in a
specific way. The remaining contributions as summarized below are motivated by these
observations.
A new relaxed linear programming (LP) algorithm addresses the complexity issue. We
call this method recursive rounding. It is shown to be an improvement over the existing
relaxed LP methods like randomized rounding. The new algorithm has a polynomial time
complexity and, most importantly, it is time bound. The worst case time will be the product
of the minimized test set size and the time taken for one LP run. We, however, observed
that the time taken by the algorithm in practice is significantly smaller than the worst case
time. The recursive rounding algorithm is applied to the ISCAS85 benchmark circuits. For
the circuit c432 targeted for 15-detect, the ILP method produced a test set of size 430 in
444.8 seconds while the recursive rounding algorithm took only 83.5 seconds to generate a
test set of the same size. The results indicate an almost linear increase in CPU time with
respect to the circuit size. The recursive rounding method can be applied in various
fields where ILP is conventionally used.
In general, a circuit may not exactly be type I or II, but can only be classified as being
closer to one. As examples, ripple-carry adders and c432 are studied in Chapter 4 and
Chapter 5 respectively. The benchmark circuit c6288, a 16-bit multiplier, is a challenging
example due to the huge difference between its theoretical minimum test set of size six
and the practically achieved minimum test set of size 12. Small-size multipliers are designed
based on c6288 and the integer linear programming method is applied to their exhaustive test
sets. Some interesting observations are made in this work. For the first time, it is verifiably
confirmed through a 5-bit multiplier that, for some circuits, the theoretical lower bound
cannot be achieved in practice. The 5-bit multiplier has a theoretical minimum of size
six, but needs at least seven vectors to detect all its faults once. It is also observed that
even though the practical minimum for a single-detect test set is seven, only 12 vectors
are needed to detect each fault twice. The minimum test sets from smaller multipliers are
replicated in various combinations for the 16-bit multiplier (c6288) and are processed using
ILP, which produced a minimum test set of size 10. The recursive LP method gave a test
set of 12 vectors in 301 seconds. We believe that the 10-vector set is the smallest test set
ever achieved for c6288.
Acknowledgments
I would like to thank my advisor, Dr. Vishwani D. Agrawal, for his support and for
being my constant source of motivation and inspiration. I thank Dr. Charles E. Stroud and
Dr. Victor P. Nelson for being on my thesis committee and for their valuable suggestions.
Suggestions regarding relaxed LP from Dr. Shiwen Mao were quite helpful. I thank all my
professors at Auburn University for their guidance and support; it has been a privilege learning
from them. I would like to thank my parents and sister for their continuous support and
encouragement. I thank my research mates and friends in Auburn for their assistance and
company.
Style manual or journal used: LaTeX: A Document Preparation System by Leslie
Lamport (together with the style known as "aums").
Computer software used: The document preparation package TeX (specifically LaTeX)
together with the departmental style-file aums.sty. The images and plots were generated
using Microsoft Office Visio 2007 beta/SmartDraw 6 and Microsoft Office Excel 2003.
Table of Contents
List of Figures xiii
List of Tables xiv
1 Introduction 1
1.1 Problem Statement 1
1.2 Contribution of Research 2
1.3 Organization of the Thesis 3
2 Background 4
2.1 Fault Modeling 5
2.1.1 Stuck-At Faults 5
2.1.2 Bridging Faults 5
2.1.3 Transistor-Level Faults 7
2.1.4 Delay Faults 8
2.2 Need for Higher Defect Coverage 8
2.3 N-Detect Tests 9
2.4 Test Metrics - Analyzing the Efficiency of Test Sets 9
2.4.1 Bridging Coverage Estimate (BCE) 9
2.4.2 Neighborhood Node States 10
2.4.3 Gate Exhaustive (GE) Coverage 10
2.5 Need for Test Minimization 11
3 Previous Work 13
3.1 Previous Work on Theoretical Limits of Test Sets 13
3.2 Previous Work on Test Minimization 14
3.2.1 Static Compaction 14
3.2.2 Dynamic Compaction 15
3.3 Generation of N-Detect Tests 18
3.3.1 Traditional N-Detect ATPG Method 18
3.3.2 Defect Oriented N-Detect Greedy ATPG Approach [64] 19
3.3.3 N-Detect Generation From One-Detection Test Set [79] 19
3.4 N-Detect Test Minimization 20
3.5 Linear Programming Techniques for Single-Detection 20
3.5.1 Integer Linear Programming 21
3.5.2 Relaxed Linear Programming 22
3.6 Applications of LP Techniques in Testing 24
4 Theoretical Results on Minimal Test Sets 25
4.1 Independence Graph 25
4.2 Independent Fault Set 26
4.2.1 Lower Bound on Single-Detect Tests 27
4.2.2 Lower Bound on N-Detect Tests 28
4.2.3 An Example 29
4.3 Classification of Combinational Circuits 31
4.3.1 Type-I Circuits 31
4.3.2 Type-II Circuits 32
4.3.3 Ripple-Carry Adders 33
5 ILP Method for N-Detect Tests 36
5.1 Test Set Minimization Problem as a Set Covering Problem 36
5.2 Realization using Integer Linear Programming 37
5.2.1 Example 38
5.3 Derivation of N-Detect Tests 39
5.3.1 Example 42
5.4 Results 44
6 Recursive Rounding - A New Relaxed-LP Method 47
6.1 Complexity of Integer Linear Programming 47
6.2 LP-relaxation of the Minimization Problem 48
6.3 Limitations of Randomized Rounding 50
6.4 Recursive Rounding 51
6.4.1 Recursive Rounding Procedure 51
6.4.2 The 3V3F Example for Recursive Rounding 52
6.4.3 Analyzing the Recursive Rounding Method 53
6.5 Results 54
6.5.1 Single-Detect Tests 54
6.5.2 N-Detect Tests 57
6.6 A Note on Relaxed-LP Methods 59
7 Single Detect Results of c6288 Benchmark 60
7.1 Structure of c6288 Benchmark 60
7.2 Iterative Arrays 60
7.3 Approach for c6288 Benchmark 61
8 Conclusion 65
8.1 Future Work 67
8.1.1 The Dual Problem 67
Bibliography 69
Appendix 75
List of Figures
2.1 Basic principle of digital testing. 4
2.2 Stuck-at faults representing actual defects. 6
2.3 Node-to-node bridging faults in a CMOS circuit. 7
4.1 Independence graph of c17. 26
4.2 Independence graph of c17 showing an independent fault set. 27
4.3 Faults in the independent fault set of c17. 29
4.4 Type-I circuit. 31
4.5 Type-II circuit. 32
4.6 Hierarchical structure of ripple-carry adder. 33
4.7 Minimized test sets of ripple-carry adders. 34
4.8 Structure of the full-adder used in ripple-carry adders. 35
5.1 Sizes of N-detect test sets for c432 as a function of iterations. 41
5.2 ILP CPU time versus number of unoptimized vectors for c432. 42
5.3 All the 22 structural equivalence collapsed faults of c17. 43
5.4 The structural equivalence collapsed faults of c17 (numbered). 44
6.1 ILP and LP solutions for the three-vector three-fault (3V3F) example. 49
6.2 LP solution space and the progression of recursive rounding. 52
6.3 Quality and complexity of recursive LP and ILP solutions for multipliers. 56
7.1 Structure of an n-bit multiplier. 61
7.2 Structure of an n-bit ripple-carry adder. 62
List of Tables
4.1 Results of a 4-bit ALU (74181). 30
5.1 N-detect tests for 74181 ALU. 39
5.2 Diagnostic fault simulation result for the 29 vectors of c17. 45
5.3 N-detect test set sizes minimized by ILP. 46
5.4 Comparing 15-detect tests. 46
6.1 Optimized single-detect tests for ISCAS85 circuits (* incomplete). 54
6.2 Single-detect test optimization for multipliers. 55
6.3 Optimized 5-detect tests for ISCAS85 circuits. 57
6.4 Optimized sizes of 15-detect tests for ISCAS85 benchmark circuits. 57
7.1 Practical single-detect test sizes for multipliers. 63
7.2 A single-detect test set for 5-bit multiplier. 63
7.3 Two single-detect test sets for 4-bit multiplier. 64
7.4 A single-detect test set for 6-bit multiplier. 64
7.5 Ten-vector single-detect test set generated for c6288. 64
Chapter 1
Introduction
All manufactured VLSI chips are tested for defects. But it is not possible to generate
or apply vectors to test all possible defects in a chip. So defects are modeled as faults
to ease the test generation process. Among the various fault models proposed, the single
stuck-at fault model is widely accepted because of its closeness to actual defects and the
algorithmic possibilities it offers for generating test vectors. However, as smaller DPM
(defective parts per million) levels are desired for devices in most applications, better fault
models are needed, which can accurately model the defects. Such fault models tend to be
complex, making test generation harder, or even impossible. Therefore, a practical idea
that seems to work is to use the single stuck-at fault model and increase the probability of
detecting unmodeled defects by increasing the number of times each single stuck-at fault is
detected during a test [70].
An N-detect test set is a set that detects each stuck-at fault with at least N "different"
test vectors. The more distinct the test vectors for a fault, the better the defect coverage
may be [7, 38, 94], but the harder the test generation. There is no general agreement on
how the vector "difference" should be defined. The main problem that limits the use of
N-detect tests is their size, and there is a need to minimize them.
1.1 Problem Statement
The main problems solved in this thesis are:
1. Find a lower bound on the size of N-detect tests.
2. Find an exact method for minimizing a given N-detect test set.
3. Derive a polynomial time heuristic algorithm for the problem in item 2.
1.2 Contribution of Research
We find a theoretical lower bound on the size of N-detect tests, which is N times the
size of the theoretical minimum of a single-detect test set. The result is significant in the
sense that even though a single-detect test detects about 70-80% of the faults at least twice,
we still need a test set that is almost twice its size to detect all the faults twice. The ability
to obtain minimized test sets for combinational circuits is studied based on their structure.
It is observed that it is harder to obtain a minimized test set for shallow circuits with
narrow non-overlapping output cones than to obtain a minimized test set for deep circuits
with output cones having large overlap.
Diagnostic fault simulation (fault simulation without fault dropping) information is
used to convert the test set minimization problem into a set covering problem. Linear pro-
gramming techniques are used to solve the problem. Initially, Integer Linear Programming
(ILP) is used. ILP always produces an optimal solution, but its worst-case time complexity
is exponential. So a new Linear Programming (LP) based recursive rounding method is
developed to solve the problem in polynomial time.
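To make the formulation concrete, the sketch below builds the same model for a tiny, invented diagnostic-simulation result and finds the optimum by exhaustive search rather than by calling an ILP solver. This is feasible only for toy instances, but the per-fault N-detection constraint it checks is exactly the one each fault contributes to the ILP model.

```python
from itertools import combinations

# Hypothetical diagnostic (no-fault-dropping) simulation result:
# detects[v] = set of faults detected by vector v.
detects = {
    "v0": {"f1", "f2"},
    "v1": {"f2", "f3"},
    "v2": {"f1", "f3"},
    "v3": {"f1", "f2", "f3"},
}

def min_n_detect(detects, faults, N):
    """Smallest subset of vectors detecting every fault >= N times.

    Mirrors the ILP model: one 0/1 variable per vector, one
    N-detection constraint per fault, objective = number of
    selected vectors. Solved here by brute-force enumeration.
    """
    vectors = list(detects)
    for size in range(1, len(vectors) + 1):
        for subset in combinations(vectors, size):
            if all(sum(f in detects[v] for v in subset) >= N
                   for f in faults):
                return subset
    return None  # infeasible: some fault is detected fewer than N times

faults = {"f1", "f2", "f3"}
print(min_n_detect(detects, faults, 1))       # ('v3',) -- one vector suffices
print(len(min_n_detect(detects, faults, 2)))  # 3 -- 2-detect needs three vectors
```

Note how the minimal 2-detect set is three times the 1-detect size here; an ILP solver replaces the enumeration with branch-and-bound over the same constraints.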
The ILP method produced better results compared to a previously reported method [64].
The new LP-recursive rounding method produced test sets that are almost of the same size
as those produced by ILP, but the solution is obtained much more quickly.
Another endeavor undertaken is to produce a minimal single-detect test set of ten
vectors for the c6288 benchmark by making use of the regularity in its structure and the
linear programming techniques. The ten-vector set is the smallest ever achieved for the c6288
benchmark circuit.
A paper describing part of this work was presented at the Nineteenth International
Conference on VLSI Design [53] and another paper describing the work on the new LP-
recursive rounding method is being presented at the Twentieth International Conference on
VLSI Design [54].
1.3 Organization of the Thesis
The thesis is organized as follows. In Chapter 2, we discuss the basics of digital testing
and the N-detect tests. In Chapter 3, previous test minimization strategies along with the
linear programming techniques are discussed. In Chapter 4, the theoretical results on the
minimal tests are presented. The ILP method modified for N-detect tests is introduced
in Chapter 5, which also includes results on ISCAS85 benchmark circuits. The new LP-
recursive rounding method is given in Chapter 6 along with its results. The single-detect
results obtained for c6288 benchmark are reported in Chapter 7. The thesis is concluded
with an insight on future work in Chapter 8.
Chapter 2
Background
Any manufactured integrated circuit is prone to defects. Proper design, verification
and physical device debugging can get rid of most of the systematic defects. Any remaining
defects are random by nature and cannot be completely eliminated. Therefore, testing needs
to be done on every integrated circuit (IC) device that is manufactured so that defective
devices can be separated. Testing a digital IC involves applying specific bit vectors to the
circuit under test and comparing the observed output responses with expected responses.
Figure 2.1 illustrates the basic principle of digital testing [13].
Figure 2.1: Basic principle of digital testing.
2.1 Fault Modeling
As it is not possible to enumerate all possible physical defects and develop tests for
them, the defects are modeled as faults. These abstract fault models emulate the behav-
ior of physical defects while simplifying the test generation process. Of the various fault
models, the single line stuck-at fault model is widely accepted because of its closeness to
actual defects and the simplicity it allows in generating test vectors. Various e?cient algo-
rithms have been developed and programmed to e?ciently generate tests for single stuck-at
faults [35, 47, 82, 87, 86].
2.1.1 Stuck-At Faults
Stuck-at faults are gate-level faults modeled by assigning a fixed (0 or 1) value to an
input or an output of a logic gate or a flip-flop [13]. Each interconnection in a circuit
can have two faults, stuck-at-1 and stuck-at-0, represented as sa1 and sa0, respectively. If
we assume that there can be several simultaneous stuck-at faults, then in a circuit with n
interconnections between gates there are 3^n - 1 possible multiple stuck-at faults. Besides
their prohibitively large number, the multiple stuck-at faults pose other problems, like fault
masking. It is a common practice to model only single stuck-at faults. We therefore assume
that only one stuck-at fault is present in a circuit. In a circuit with n interconnections there
can be no more than 2n single stuck-at faults. These can be further reduced using fault
collapsing techniques [13, 85].
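As a quick arithmetic check of these counts (the value n = 10 is an arbitrary example, not from the text):

```python
n = 10  # number of interconnections between gates (example value)

# Each line can be sa0, sa1, or fault-free; exclude the all-fault-free case.
multiple_stuck_at = 3**n - 1
# Only sa0 and sa1 on each line under the single-fault assumption.
single_stuck_at = 2 * n

print(multiple_stuck_at)  # 59048
print(single_stuck_at)    # 20
```

Even for ten lines, the multiple-fault space is nearly three thousand times the single-fault list, which is why only single stuck-at faults are modeled in practice.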
2.1.2 Bridging Faults
The bridging fault model represents defects as bridges (shorts) between two nodes of
a circuit. Bridging faults closely replicate actual defects, and they
Figure 2.2: Stuck-at faults representing actual defects.
are called "defect-oriented faults" [3, 83]. Bridging faults can be modeled at the gate or
transistor level. The bridging faults modeled for defects in a CMOS circuit can be broadly
classified as dominant, OR-type and AND-type bridging faults (see Figure 2.3).
Generating test vectors for bridging faults can be a complex problem [9]. In a dominant
bridging fault a node K forces its logic value on another node W. This type of bridging fault
will be detected if a stuck-at fault on W is detected and K and W have opposite values. To
detect an OR-type bridging fault where W behaves as (W OR K), a stuck-at-1 fault on wire
W should be detected while K is set to 1. Similarly, to detect an AND-type bridging fault
where W behaves as (W AND K), a stuck-at-0 fault on wire W should be detected while
K is set to 0. Clearly, the probability of detecting a bridging fault by a stuck-at fault test
depends on the chance of that test having the right value on the involved wire [9]. This point
is further analyzed later in this chapter while explaining test metrics. It has been observed,
for a state-of-the-art microprocessor design, that approximately 80% of all bridges occur
Figure 2.3: Node-to-node bridging faults in a CMOS circuit.
between a node and Vcc or Vss
[60]. In another evaluation it was concluded that bridges
with power rails account for about 60% to 90% of all bridging defects [15]. As observed
from Figure 2.2 these node-power rail bridging defects can be directly modeled at the gate
level as stuck-at faults, which is the reason why the stuck-at fault tests demonstrate such a
good defect coverage.
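The detection conditions just described can be summarized in code. The sketch below is illustrative only; the function and argument names are invented, and the inputs assume we already know which stuck-at faults on node W a vector detects and the logic values it places on W and K.

```python
def bridging_detected(fault_type, w_val, k_val, detects_w_sa0, detects_w_sa1):
    """Detection conditions for a bridge between nodes W and K.

    fault_type: "dominant" (K dominates W), "or" (W behaves as W OR K),
    or "and" (W behaves as W AND K). w_val/k_val are the fault-free
    logic values on W and K; detects_w_sa0/detects_w_sa1 say whether
    the vector detects the corresponding stuck-at fault on W.
    """
    if fault_type == "dominant":
        # A stuck-at fault on W is detected while K carries the opposite
        # value: K = 1 forces W toward 1 (like W sa1), and vice versa.
        return (w_val != k_val) and (detects_w_sa1 if k_val == 1 else detects_w_sa0)
    if fault_type == "or":
        # W sa1 must be detected while K is set to 1.
        return detects_w_sa1 and k_val == 1
    if fault_type == "and":
        # W sa0 must be detected while K is set to 0.
        return detects_w_sa0 and k_val == 0
    raise ValueError(fault_type)

# A vector that detects W sa1 with W = 0 and K = 1 catches both the
# dominant and the OR-type bridge, but not the AND-type one.
print(bridging_detected("dominant", 0, 1, False, True))  # True
print(bridging_detected("or", 0, 1, False, True))        # True
print(bridging_detected("and", 0, 1, False, True))       # False
```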
2.1.3 Transistor-Level Faults
At the transistor-level, defects in a CMOS circuit are modeled as stuck-short and
stuck-open faults on transistors. Like the single stuck-at fault model, this fault model
assumes just one transistor to be stuck-open or stuck-short. A vector pair is needed to
detect a stuck-open fault. First, an initialization vector appropriately sets a logic value at
the output of the gate with the stuck-open transistor, and then another vector, generated
for a corresponding stuck-at fault, is applied to detect the stuck-open fault. A stuck-open
fault of a pMOS transistor can be modeled as a stuck-at-1 fault at the gate input of that
transistor. Similarly, a stuck-open fault of an nMOS transistor is equivalent to a stuck-at-0
fault at the gate input of that transistor. A stuck-short fault of a pMOS or nMOS transistor
can be represented as a stuck-at-0 or stuck-at-1 fault at the corresponding gate input [13].
These faults are therefore internal to logic gates because in a CMOS logic gate every input
internally fans out to multiple transistors. The only way to accurately detect a stuck-short
fault is through IDDQ testing, which is not considered practical due to high leakage and
large transistor densities of the present day digital ICs.
2.1.4 Delay Faults
All paths in a circuit have some nominal delays. But due to manufacturing defects,
some paths may get further delayed. If the delay of a path prevents a transition from
reaching an output within a clock period, it will cause an error. These delay defects are
modeled as path-delay, gate-delay, and transition faults [13, 21, 61]. The simplest of these
are transition faults, defined as slow-to-rise and slow-to-fall conditions for each gate.
2.2 Need for Higher Defect Coverage
The test vectors generated based on the single stuck-at fault model can deliver a product
quality level of around 500 DPM (defective parts per million) [3, 70]. Some applications
require much lower DPM. The defect coverage must be improved either by using better
fault models to generate tests or by generating enhanced tests based on the single stuck-at
fault model. Better fault models like the bridging faults, transistor faults and delay faults
make test generation harder, and sometimes impossible. A practical alternative is to make
use of the single stuck-at fault model to generate better test vectors, which can improve the
defect coverage.
One solution is to reorder and/or reapply the same single-detect test set. This ensures
increased transition fault coverage and stuck-open fault coverage. But the bridging fault
coverage cannot be improved unless those specific test vectors are generated. It was observed
that a test set with greater than 95% stuck-at fault coverage can produce only 33% coverage
of node-to-node bridging faults [60]. So a suggested alternative is to target each single
stuck-at fault multiple times [70, 77].
2.3 N-Detect Tests
N-detect tests are stuck-at fault tests where each stuck-at fault is detected at least
N times, i.e., by N different test vectors. It is observed that the N-detect tests give a
much improved defect coverage (lower DPM) than single-detect stuck-at fault tests [7, 65,
69, 70, 93, 94]. Detecting the same single stuck-at fault under different excitation and
observation conditions improves the chances of detecting different bridging faults. Various
test generation strategies have been developed to generate efficient N-detect tests [7, 9, 45,
64, 72, 78, 79, 94].
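The definition is easy to check mechanically: given per-vector detection data from simulation without fault dropping, the largest N for which a set qualifies as an N-detect test set is the minimum per-fault detection count. A toy sketch (the detection data are invented):

```python
from collections import Counter

# Hypothetical (vector, fault) detection pairs from fault simulation
# without fault dropping.
detected_by = [("v0", "f1"), ("v0", "f2"), ("v1", "f1"),
               ("v2", "f2"), ("v2", "f1")]

counts = Counter(f for _, f in detected_by)
all_faults = {"f1", "f2"}

# Largest N for which this set qualifies as an N-detect test set.
n_detect = min(counts.get(f, 0) for f in all_faults)
print(n_detect)  # f1 is detected 3 times, f2 twice -> a 2-detect set
```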
2.4 Test Metrics - Analyzing the Efficiency of Test Sets
Various test metrics have been developed to analyze the efficiency of the generated test
sets.
2.4.1 Bridging Coverage Estimate (BCE)
If a stuck-at fault on a node is detected once, the probability of detecting a static
bridging fault with another uncorrelated node, which has an equal (50%) probability of
being 1 or 0, is also 50% [38]. Similarly, if a stuck-at fault is detected N times, the chance to
detect its corresponding bridging fault will be 1 - (1 - 0.5)^N [9]. So for a given test set T
and target fault set F, the bridging coverage estimate (BCE) is calculated as follows:

    BCE = sum_{i=1}^{n} (f_i / |F|) * (1 - 2^{-i})        (2.1)

where f_i is the number of stuck-at faults detected i times by T, |F| is the total number of
stuck-at faults in the target fault list F, and n is the maximum number of times that any
fault is detected by T. Generally, n can be set to 10, which yields an upper-bound BCE of
99.9%, sufficient for characterizing the quality of a test set.
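Equation (2.1) is straightforward to compute from a detection-count profile. A small sketch, with an invented profile:

```python
def bce(detect_counts, total_faults):
    """Bridging coverage estimate of Equation (2.1).

    detect_counts[i] = f_i, the number of faults detected exactly i
    times (i >= 1); total_faults = |F|.
    """
    return sum(f_i / total_faults * (1 - 2.0**-i)
               for i, f_i in detect_counts.items())

# Example: 100 faults; 60 detected once, 30 twice, 10 three times.
profile = {1: 60, 2: 30, 3: 10}
print(round(bce(profile, 100), 4))  # 0.6125
```

Each fault detected once contributes a 50% chance of catching its bridge, twice 75%, three times 87.5%, which is how the 0.6125 total arises.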
2.4.2 Neighborhood Node States
The BCE estimates the effectiveness of a test set, based on the number of times each
fault is detected. But the probability of detecting a defect at a fault site will increase
with the number of detections only when the fault at that site is detected under different
excitation and observation conditions [38]. So, neighboring nodes of a fault site are used to
evaluate the diversity of the test. If different states of neighboring nodes are excited during
the multiple detections of a fault, then the probability of detecting different types of defects
at that fault site increases [7, 94]. Neighborhood signals are extracted at the logic level. A
more realistic measure is obtained if we extract the physical neighborhoods of signals from
the layout [10, 72]. But such measures are complex and time consuming.
2.4.3 Gate Exhaustive (GE) Coverage
Several test metrics have been proposed for analyzing the efficiency of test sets. Some
of them include observation count [38], MPG-D defective part level [25, 26, 27, 28, 64], etc.
The most recent one is a test metric called gate exhaustive (GE) coverage [18, 39, 68]. The
GE is computed as:

    GE = sum_{i=1}^{|F|} coverage_credit(f_i) / |F|        (2.2)

where |F| is the total number of gate-output stuck-at faults in the circuit and, for each
stuck-at fault f_i, the coverage credit of f_i is the ratio of the number of distinct gate input
combinations in the test set that detect f_i to the maximum number of gate input
combinations that can detect f_i. The GE coverage estimate, though computationally complex, is
observed to be closely correlated with the actual defect coverage [39]. The only way to
exactly measure the effectiveness of a test set is to observe the actual chip defect coverage.
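Equation (2.2) can be sketched the same way; the per-fault combination counts below are invented for illustration (for a 2-input NAND, the output sa0 is detectable by the three input combinations producing output 1, and sa1 only by the one producing output 0):

```python
def ge_coverage(detecting_combos, possible_combos):
    """Gate exhaustive coverage of Equation (2.2).

    For each gate-output stuck-at fault f, detecting_combos[f] is the
    number of distinct gate input combinations in the test set that
    detect f, and possible_combos[f] is the maximum number of input
    combinations that can detect f. Coverage credit is their ratio.
    """
    credits = (detecting_combos[f] / possible_combos[f]
               for f in possible_combos)
    return sum(credits) / len(possible_combos)

# Invented example: the two stuck-at faults on a 2-input NAND output.
detecting = {"g1/sa0": 2, "g1/sa1": 1}
possible = {"g1/sa0": 3, "g1/sa1": 1}
print(ge_coverage(detecting, possible))  # (2/3 + 1/1) / 2
```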
2.5 Need for Test Minimization
Better test sets are expected to give better defect coverage. An important aspect
which limits our ability to apply better test sets is the test application time. As chip
testing costs are becoming a major portion of the manufacturing costs, any reduction in
test application time directly translates into reduction of the chip cost. An ideal way to
reduce the test application time is by reducing the number of vectors to be applied. Thus,
test set minimization is an important aspect in testing.
Test generation is known to be a computationally complex problem [32, 46]. Even test
minimization is proven to be NP-complete [34, 59]. So, heuristic methods [17, 41, 76] are
used. These heuristic methods are based on a set of iterative test set compaction algorithms,
whose execution time can be reduced by stopping the iteration whenever some prespecified
threshold is reached or after iterating a predetermined number of times. Theoretical lower
bounds, when known, give an estimate of the effectiveness of a compaction procedure.
Estimating the size of, or determining, the minimum test set is given considerable importance
in research [2, 4, 5, 6, 22, 23, 41, 59, 67]. The minimum test set size estimation problem is
NP-hard [59].
Chapter 3
Previous Work
In this chapter, previous work on the estimation of theoretical limits of single-detect
test sets and the previous work on test compaction is presented. The last part of the
chapter reviews the work on linear programming techniques that are relevant to the present
research.
3.1 Previous Work on Theoretical Limits of Test Sets
A vast amount of work has been done on finding the theoretical minimum for single-detect
tests for circuits [2, 4, 5, 6, 22, 23, 41, 59, 67]. One of the most commonly used
heuristics for finding the lower bound is to find the size of the maximal clique of the
independence graph (also known as the incompatibility graph) of the circuit. In an independence
graph, each node represents a fault and an edge between a pair of nodes indicates inde-
pendence of the corresponding faults. Two faults are independent of each other if they
cannot be simultaneously detected by any vector. A clique is a sub-graph in which each node
is connected to every other node [19]. It has been shown that the maximal
clique in the independence graph, called the maximal independent fault set, will be less
than or equal to the minimum test set size [5].
3.2 Previous Work on Test Minimization
Minimizing test sets, simply termed test set compaction, can be broadly classified
as static or dynamic compaction [1, 36].
3.2.1 Static Compaction
The process in which all tests are generated first before carrying out compaction is
referred to as static compaction. Two important methods sum up the idea of static
compaction: vector shuffling and vector merging.
In vector shuffling, the generated vectors are shuffled and then simulated in the new
sequence order. In fault simulation, after the circuit is simulated for a vector, all the faults
detected by that vector are dropped from the target fault list. So, if the fault set is empty,
and still there are some vectors left for simulation, then those vectors can be removed from
the test set. Also, if a simulated vector does not detect any of the remaining faults in the
target fault list, then that vector can be considered redundant and is dropped.
A much simpler "reverse order fault simulation" method [87] has also been used,
where the generated test set is simulated in the reverse order, hoping to eliminate some
of the initially generated vectors. Though this method looks too simple to be efficient, it
is effective up to a certain level of compaction. This is because normal test generation
often starts with a random phase, where random vectors are simulated until no new faults
are being detected, and then a deterministic phase begins, where fault-specific vectors are
generated. Thus, test pattern generation effort is not wasted on the easy-to-detect faults
that have already been detected by random vectors. Reverse order fault simulation
then removes most of the initially generated random vectors, which are made redundant by
the deterministic vectors.
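The reverse-order pass can be sketched in a few lines; the `detects` mapping, from each vector to the set of stuck-at faults it detects, is a hypothetical stand-in for the output of a fault simulator.

```python
def reverse_order_compaction(vectors, detects):
    """Simulate vectors in reverse order with fault dropping; keep only
    vectors that still detect at least one not-yet-detected fault."""
    remaining = set()
    for v in vectors:                # union of all detectable faults
        remaining |= detects[v]
    kept = []
    for v in reversed(vectors):
        hit = detects[v] & remaining
        if hit:                      # vector detects a new fault: keep it
            kept.append(v)
            remaining -= hit
    kept.reverse()                   # restore the original relative order
    return kept

# "Random-phase" vectors v1, v2 are made redundant by the later
# deterministic vectors v3, v4.
detects = {"v1": {"f1"}, "v2": {"f2"}, "v3": {"f1", "f3"}, "v4": {"f2", "f4"}}
print(reverse_order_compaction(["v1", "v2", "v3", "v4"], detects))
# ['v3', 'v4']
```

In the toy data the deterministic vectors v3 and v4 detect everything, so the two early vectors are discarded, exactly as the reverse-order argument above predicts.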
In vector merging, when a vector targeting a fault is generated, the unspecified bits
are left as such. These unspecified bits are then used to merge vectors [1, 13].
An ingenious method, named the "double detection algorithm", was developed by
Kajihara et al. [50, 51], where the generated vectors are simulated in any order without
fault dropping and for each vector the number of faults detected by that vector alone
(exclusively detected) is counted. At the end of the simulation, if a vector has a count of 0,
then all the faults it detects are also detected by other vectors. But this does not
mean that the vector is redundant, because it may belong to a set of vectors, each of which
detects a fault. Now, in a second simulation with fault dropping, all vectors with non-zero
counts are simulated first, followed by the vectors with zero counts in the reverse order.
This time many vectors with 0 counts, i.e., those that do not detect any new fault,
become redundant and can be eliminated.
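Under the same assumption of a precomputed `detects` mapping (in practice produced by fault simulation without fault dropping), the double detection idea can be sketched as:

```python
from collections import Counter

def double_detect_compaction(vectors, detects):
    """Kajihara-style double detection sketch: count, for each vector, the
    faults detected by it alone; then fault-simulate with dropping, the
    essential vectors first and the zero-count vectors in reverse order."""
    # Pass 1: simulation without fault dropping; count detections per fault.
    times_detected = Counter(f for v in vectors for f in detects[v])
    exclusive = {v: sum(1 for f in detects[v] if times_detected[f] == 1)
                 for v in vectors}
    essential = [v for v in vectors if exclusive[v] > 0]
    zero = [v for v in vectors if exclusive[v] == 0]
    # Pass 2: simulation with fault dropping, in the prescribed order.
    remaining = set(times_detected)
    kept = []
    for v in essential + zero[::-1]:
        hit = detects[v] & remaining
        if hit:
            kept.append(v)
            remaining -= hit
    return kept

# Every fault below is detected twice, so no vector is essential,
# yet one of the three vectors is still redundant.
detects = {"t1": {"f1", "f2"}, "t2": {"f2", "f3"}, "t3": {"f1", "f3"}}
print(double_detect_compaction(["t1", "t2", "t3"], detects))  # ['t3', 't2']
```

The example shows the point made above: a zero exclusive count does not by itself make a vector redundant, but the second, fault-dropping pass exposes one vector that is.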
3.2.2 Dynamic Compaction
A compaction procedure that makes use of fault simulation information during
test generation, and/or dynamically modifies and merges the generated vectors based
on that information, is termed dynamic compaction. Basically,
as test generation proceeds, fault targets are dynamically selected and test vectors
for the target faults are chosen based on certain criteria. Besides, a vector previously
generated may be modified or replaced.
Dynamic Compaction During Test Generation
The first dynamic compaction procedure proposed by Goel and Rosales [36] makes use
of the fault simulation information by dynamically adjusting the fault list and dropping the
faults which are detected by the vectors generated earlier.
Most of the latest dynamic compaction techniques are based on the idea of compatibility
of faults. Two faults are said to be compatible, if there is at least one vector that can detect
both faults. Two faults are independent (incompatible), if they cannot be detected by a
single vector. Recent work on independence fault collapsing and concurrent test generation
collapses faults into groups based on an independence relation [23], and then for each group
tries to generate either a single vector, or as few vectors as possible, which can detect all
faults in that group [2, 22].
In COMPACTEST [76], independent fault sets were used to order the fault list for
test generation. The faults were ordered such that faults included in a larger independent
fault set (IFS) appear higher in the fault list and are processed earlier. Thus, if the largest
independent fault set is of size k, then the first k tests are generated for independent faults.
Thus, it is guaranteed that the first k tests are necessary and cannot be replaced by a
smaller number of tests. Later Kajihara et al. [50, 51] proposed dynamic fault ordering,
in which, whenever a test vector is generated, the order of the faults in the fault list is
changed according to the number of yet-undetected faults in each independent fault set.
The largest set at that point is dynamically put at the top of the fault list. The "double
detection algorithm" mentioned earlier in the section on static compaction can be applied
dynamically, with fault simulation and value assignment done for each vector right
after its generation. This makes test compaction far more efficient than the corresponding
static compaction technique.
Ayari and Kaminska [8] proposed test generation approaches that make use of the
compatibility relation between faults to decide which of the faults can be targeted together
for test generation. The idea of concurrent test generation has also been proposed by Doshi
and Agrawal [2, 22].
A redundant vector elimination (RVE) algorithm, which identifies redundant vectors
during test generation and dynamically drops them from the test set, was developed by
Hamzaoglu and Patel [41]. The proposed RVE algorithm works somewhat like the
double detection algorithm [50, 51], but in addition to the count of faults "only detectable
by this vector", all faults detected by each vector are also determined. In addition, the exact
number of times each fault is detected by the vector set is determined. As will be observed,
this information is needed for other advanced dynamic test compaction techniques [17, 41,
76].
Dynamic Compaction After Test Generation
Chang and Lin [17] proposed a forced pair-merging (FPM) algorithm in which a vector,
say t1, is taken and as many 0s and 1s as possible are replaced with Xs such that the vector
still detects its essential faults. The essential faults of a vector are those that cannot be
detected by any other vector in the vector set. Then, for each of the remaining patterns,
say t2, the algorithm tries to change the bits that are incompatible with the new t1 to Xs.
If the process succeeds, the pair is merged into a single pattern. The essential fault pruning
(EFP) method developed by Chang and Lin [17] tries to actively modify the rest of the
test set to detect the essential faults of a vector, making that vector redundant. Pruning
an essential fault of a test vector decreases the number of its essential faults by one. If all
the essential faults of a test vector are pruned, it becomes redundant. A limitation of
this method is its narrow view of the problem; it is not always possible to remove all the
essential faults of a vector. So a newer essential fault reduction (EFR) algorithm [41] is used
instead of the EFP method. EFR uses a multiple target test generation (MTTG) procedure
[17, 51] to generate a test vector that will detect a given set of faults. The EFR algorithm
makes use of the independence (incompatibility) graph to decide which faults in a vector
can be pruned through detection by another vector.
The main problem with all dynamic test compaction approaches is their complexity.
Also, all these methods rely heavily on fault simulation at every step, so applying them
could become impractical for minimizing N-detect tests. The essential fault
reduction (EFR) algorithm, along with the redundant vector elimination (RVE) algorithm,
has produced some of the smallest reported single-detect test sets for the ISCAS85 [12]
and ISCAS89 [11] (scan versions) benchmark circuits.
3.3 Generation of N-Detect Tests
3.3.1 Traditional N-Detect ATPG Method
N-detect tests have been generated in industry by a simple algorithm. The algorithm
proposed by Benware et al. [9] is illustrated below:

1. Perform single-detect fault simulation with the single-detect pattern set T1 for all faults

2. Save all faults detected by single-detect fault simulation with pattern set T1 in list F

3. Set the number of detections to N

4. For k = 1 to (N - 1):

(a) Perform multiple-detect fault simulation with pattern sets T1 to Tk for faults in list F

(b) Save faults detected k times in Fk

(c) Target faults in Fk and perform single-detect ATPG to increase the number of detections by one

(d) Save the patterns to Tk+1

This method has a built-in static compaction procedure and it is perhaps the most
frequently used method in the industry [7, 9, 45, 94].
3.3.2 Defect Oriented N-Detect Greedy ATPG Approach [64]
In ATPG algorithms, efficiency of searching is essential for reducing the run time. Therefore,
an ATPG always chooses what are estimated to be the easiest excitation and propagation
paths. If the same fault is targeted again, the ATPG may produce exactly the same vector,
which will not detect any additional defects. Thus, this efficiency-oriented approach may
not always produce the best defect coverage. So, a new ATPG procedure is proposed that
randomizes the search process for both excitation and propagation. As a result,
different patterns are generated in each run, and it can be expected that multiple patterns
for the same fault site will detect additional defects fortuitously. Such approaches
may also try to achieve compaction by targeting multiple faults during test generation.
3.3.3 N-Detect Generation From One-Detection Test Set [79]
In this procedure, N-detect tests are generated without applying a test generation
procedure to target faults. Starting from a single-detect test set the algorithm applies an
N-detect fault simulation to the test set. Unlike normal fault simulation, where faults are
dropped right when they are detected, in an N-detect fault simulation faults are dropped
only when they have been detected N times. This simulation gives the number of times
each fault still needs to be targeted. Test cubes are generated for the targeted faults from
the one-detection test set. Test cubes are vectors in which 0s and 1s have been converted
to Xs, but they still detect particular targeted faults. Next, those test cubes are merged
in various ways to obtain an N-detect test set. It is observed that the resulting test set is
as e?ective as an N-detect test set generated by a deterministic test generation procedure
in detecting untargeted faults. As the fault simulation of a given test set is faster than
its generation, this approach is reported to perform faster than the traditional N-detect
approach. Fault simulation is required only for extracting test cubes for the targeted faults.
3.4 N-Detect Test Minimization
Although several test generation strategies have been developed, not much work has
been done on minimizing the size of N-detect tests. A previously reported N-detect test
minimization approach is the reverse order N-detect fault simulation [79], where vectors
are simulated in the reverse order and a fault is dropped only after it is detected N times.
If a test vector does not detect any fault that is still in the fault list, then that vector is
removed from the test set. The only other N-detect test minimization approach presented
in the literature makes use of polynomial time reduction techniques followed by a greedy
approach to minimize a pre-generated N-detect test set [45]. That is presented in the next
section.
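The reverse-order N-detect fault simulation of [79] can be sketched as below; again, `detects` is a hypothetical vector-to-faults mapping standing in for a fault simulator.

```python
def reverse_order_n_detect(vectors, detects, n):
    """Reverse-order N-detect compaction sketch: simulate the test set
    backwards, drop a fault only after it has been detected n times, and
    discard any vector that detects no fault still needing detections."""
    need = {f: n for v in vectors for f in detects[v]}   # detections still owed
    kept = []
    for v in reversed(vectors):
        useful = [f for f in detects[v] if need[f] > 0]
        if useful:
            kept.append(v)
            for f in useful:
                need[f] -= 1
    kept.reverse()          # restore the original relative order
    return kept

# With n = 2, only two of the three vectors detecting f1 are needed.
detects = {"v1": {"f1"}, "v2": {"f1"}, "v3": {"f1"}, "v4": {"f2"}}
print(reverse_order_n_detect(["v1", "v2", "v3", "v4"], detects, n=2))
# ['v2', 'v3', 'v4']
```

Note that a vector such as v4, whose fault cannot reach n detections at all, is still retained, since dropping it would lose the only detection of f2.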
3.5 Linear Programming Techniques for Single-Detection
The problem of finding a smallest possible subset of test vectors that tests a given
set of faults is a set covering problem. In set covering problems, the goal is to identify
the smallest collection of sets that cover a given (universal) set of elements. In the test
compaction problem the elements are the faults to be covered by the given collection of test
vectors.
3.5.1 Integer Linear Programming
Suppose a circuit has the detected faults f1, ..., fm and the corresponding test set
consists of the vectors v1, ..., vn. Each test vector vj is associated with a fault subset Sj,
j = 1, ..., n. The problem is to find the smallest collection of fault subsets that covers all
faults 1, ..., m. An m × n detection matrix D of m rows and n columns is generated. Its
element Dij equals 1 if fault fi is detected by vector vj; otherwise Dij equals 0. We also
define a vector of n integer variables x such that xj = 1 implies that vector vj belongs to
the compacted vector set and xj = 0 means that vj is discarded. The set covering problem
is formulated as an integer linear programming (ILP) model:

    Objective function: minimize Σ_{j=1}^{n} xj

    subject to m constraints: Dx ≥ 1    (3.1)

A solution of the ILP model can be obtained from any available program [31]. The minimized
value of the objective function gives the number of vectors in the compacted test set.
The values of the [0, 1] integer variables xj directly determine which vectors should be in the
minimal vector set. Even for moderately large circuits, ILP solutions can be obtained.
However, the worst-case complexity of ILP is exponential and some solutions will require
large run times.
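On a toy detection matrix, the ILP of Equation 3.1 can be solved exactly by trying vector subsets in increasing size order; this brute-force sketch is for illustration only, since realistic instances require an ILP solver [31].

```python
from itertools import combinations

def min_cover(D):
    """Exact solution of: minimize sum(x_j) subject to D x >= 1, x binary.
    D is an m-by-n 0/1 detection matrix (rows = faults, columns = vectors).
    Returns the column indices of a smallest covering vector set."""
    m, n = len(D), len(D[0])
    for size in range(1, n + 1):               # try the smallest subsets first
        for cols in combinations(range(n), size):
            if all(any(D[i][j] for j in cols) for i in range(m)):
                return list(cols)              # every fault covered at least once
    return None                                # some fault is undetectable

# Invented example: 4 faults, 4 candidate vectors; two vectors suffice.
D = [[1, 1, 0, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 0, 1, 1]]
print(min_cover(D))  # [0, 3]
```

Because sizes are tried in increasing order, the first feasible subset found is guaranteed minimal, mirroring the optimality of the ILP at exponential cost.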
3.5.2 Relaxed Linear Programming
A useful method to solve an integer linear programming (ILP) problem is to first solve
a relaxed linear programming (LP) problem, which has the same objective function and
constraints as given in Equation 3.1, but in which the variables xj are regarded as real
continuous variables in the range [0.0, 1.0]. Since LP complexity is polynomial, a solution
is easily obtained from available programs [31].
It is known that the minimized value of the objective function in the relaxed LP solution
is a lower bound on the ILP solution. The value of xj, however, does not directly indicate
whether or not vector vj should be selected. The method discussed in the next section rounds
each xj to 0 or 1 such that the resulting values satisfy all ILP constraints of Equation 3.1. A
solution to the vector compaction problem is thus obtained, although the optimality provided
by the ILP is not guaranteed.
Randomized Rounding
The process of rounding the real values obtained from the relaxed LP is probabilistic.
The variable xj is interpreted as the probability of including the vector vj in the compacted
vector set. However, to find the vector set, we must round each xj to 0 or 1. This
procedure is known as randomized rounding [92]:

1. Generate a real-valued random number r in the range 0.0 to 1.0

2. Compare r and xj:

(a) If r ≤ xj, then xj is set to 1;

(b) otherwise, xj is set to 0.

If the final rounded solution satisfies the ILP constraints in Equation 3.1, it is taken as the
final relaxed-LP solution. If the rounded values do not satisfy the ILP constraints 3.1, and this
can happen quite frequently, then one must repeat the rounding procedure.
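A sketch of this rounding loop follows; the retry limit and the fixed seed are our additions for reproducibility, not part of the published procedure.

```python
import random

def randomized_rounding(x, D, max_tries=1000, rng=None):
    """Round a relaxed-LP solution x (reals in [0.0, 1.0]) to 0/1 by
    treating x[j] as the probability of selecting vector j; retry until
    the rounded selection satisfies all constraints D x >= 1."""
    rng = rng or random.Random(0)   # fixed seed: assumption, for repeatability
    n, m = len(x), len(D)
    for _ in range(max_tries):
        t = [1 if rng.random() <= xj else 0 for xj in x]
        if all(sum(D[i][j] * t[j] for j in range(n)) >= 1 for i in range(m)):
            return t
    return None  # give up; fall back to a heuristic such as ceiling rounding

# Relaxed solution for 2 faults and 3 vectors: vector 1 is always taken,
# and the rounding must also pick vector 0 or 2 to cover the first fault.
x_lp = [0.5, 1.0, 0.5]
D = [[1, 0, 1], [0, 1, 0]]
selected = randomized_rounding(x_lp, D)
print(selected)
```

A variable with xj = 1.0 is always selected and one with xj = 0.0 essentially never, so only the fractional variables are genuinely randomized.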
In some cases, where randomized rounding cannot give an integer solution, a rounding
heuristic proposed by Hochbaum [44] can be used. If the solution of the relaxed LP is x*,
then the output of the rounding heuristic is ⌈x*⌉. In this rounding approach, all the variables
which are assigned real values greater than zero are immediately rounded to 1. Thus,
all ILP constraints 3.1 are guaranteed to be satisfied, although, in general, the solution will
be non-optimal, sometimes by a significant margin.
Motivated by the weaknesses of relaxed-LP solutions obtained using the previously
known methods, we have developed a new procedure, called recursive rounding, in
the present research. That work is discussed in Chapter 6.
A Heuristic Method [45]
As the ILP method cannot be applied to large circuits due to space and performance
limitations, a heuristic method was proposed that first applies an extension of the commonly
used polynomial-time reduction techniques of logic synthesis. The techniques of
essentiality, row dominance and column dominance are redefined for this approach. These
techniques, along with a greedy algorithm, are applied to approximately solve the integer
linear programming problem in polynomial time.
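The greedy step of such heuristics can be sketched as below: repeatedly select the column (vector) of the detection matrix that covers the most still-uncovered faults. This is only an approximation with a logarithmic worst-case guarantee; unlike the ILP, it can miss the true minimum.

```python
def greedy_cover(D):
    """Greedy approximation of the covering ILP: repeatedly pick the
    vector (column) that detects the most still-uncovered faults."""
    m, n = len(D), len(D[0])
    uncovered = set(range(m))
    chosen = []
    while uncovered:
        best = max(range(n),
                   key=lambda j: sum(1 for i in uncovered if D[i][j]))
        if not any(D[i][best] for i in uncovered):
            break                    # remaining faults are undetectable
        chosen.append(best)
        uncovered = {i for i in uncovered if not D[i][best]}
    return chosen

# Invented example: 3 faults, 3 vectors; the greedy pass selects two columns.
D = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1]]
print(greedy_cover(D))  # [0, 1]
```

In a full heuristic flow, essentiality and dominance reductions would shrink D before this greedy loop runs; only the greedy selection itself is shown here.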
3.6 Applications of LP Techniques in Testing
Linear programming techniques are used in various aspects of testing. They have
been used to optimize vector sequences for sequential circuits [24], minimize tests for
reversible circuits [75], compact defect-oriented test patterns [92] and optimize test patterns
generated from RTL descriptions of circuits [95, 96, 97]. Other applications of LP
techniques include test planning [91], test data compression [16] and test resource
optimization [14, 49] for the testing of SoCs. However, the idea of using linear programming
techniques for N-detect tests was first explored in our recent paper [53]. This is one of the
main contributions of the present research and will be discussed in Chapter 5.
Chapter 4
Theoretical Results on Minimal Test Sets
In this chapter, a theoretical minimum for N-detect tests is derived based on the
independence graph. As test minimization is an NP-complete problem [34, 59], various
heuristic-based strategies are used for test minimization. For those methods to work
efficiently, a rough idea of the lower bound is needed to make their search precise. Also in
this chapter, we study the quality of minimized test sets when an exact minimization is
possible. This quality is influenced by the initial test set generated before minimization.
The analysis helps us understand why test sets for some circuits can be compacted better
than those for others.
4.1 Independence Graph
Two faults are independent (also called incompatible) if and only if they cannot be
detected by the same test vector [5, 6]. An independence graph (also known as incompatibility
graph) represents the independence relations between the faults of a circuit. Each fault is
represented by a node in the independence graph and the independence of two faults is
represented by the presence of an undirected edge between the corresponding nodes. So an
edge between two nodes means that the two faults cannot be tested by a common vector.
The independence graph for the 11 collapsed faults of the ISCAS85 benchmark circuit
c17 is shown in Figure 4.1 [22]. The circuit c17 has 17 nets, so there will be a total of
34 single-line stuck-at faults. Applying both the structural and functional dominance and
equivalence collapsing techniques [1, 13, 84, 85] reduced the targeted fault list to 11.

Figure 4.1: Independence graph of c17.
4.2 Independent Fault Set
A clique is a subgraph in which every node is connected to every other node, making the
subgraph fully connected [19]. So in a clique each node is independent of every other node.
The largest clique, i.e., the clique with the largest number of nodes in the independence
graph is called an independent fault set (IFS) of the circuit. Akers et al., in their classical
paper on test minimization [5], gave a theorem about the lower bound on the single-detect
test sets.
Figure 4.2: Independence graph of c17 showing an independent fault set.
4.2.1 Lower Bound on Single-Detect Tests
Theorem 1: The size of the largest clique in the independence graph (the independent
fault set) is a lower bound on the single-detect test set size [5].
From the independence graph of c17, redrawn in Figure 4.2, it is clear that nodes
(faults) 1, 2, 4 and 5 form an independent fault set (IFS). So, in order to detect all faults
in c17, we need at least 4 vectors. Indeed, the minimum possible test set for c17 has
four vectors. In general, however, the size of the IFS is only a theoretical minimum, and a
test set whose size equals the lower bound may not be achievable in practice for many circuits.
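For small graphs, this clique lower bound can be computed by exhaustive search. The edge list below is a hypothetical independence relation chosen to mirror the c17 discussion (faults 1, 2, 4 and 5 pairwise independent), not the actual c17 data.

```python
from itertools import combinations

def ifs_lower_bound(faults, independent_pairs):
    """Return the size of the largest clique in the independence graph,
    i.e. the largest independent fault set (Theorem 1); N times this
    value gives the N-detect bound of Theorem 2.  Exponential search,
    intended for small graphs only."""
    edges = {frozenset(p) for p in independent_pairs}
    for size in range(len(faults), 1, -1):             # try largest size first
        for group in combinations(faults, size):
            if all(frozenset(p) in edges for p in combinations(group, 2)):
                return size                            # fully connected subgraph
    return 1 if faults else 0

faults = [1, 2, 3, 4, 5]
pairs = [(1, 2), (1, 4), (1, 5), (2, 4), (2, 5), (4, 5), (1, 3)]
print(ifs_lower_bound(faults, pairs))  # 4, so an N-detect set needs >= 4N vectors
```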
4.2.2 Lower Bound on N-Detect Tests
Theorem 2: The lower bound on the size of an N-detect test set is N times the size
of the largest clique (independent fault set) in the independence graph.
Proof: Consider the faults that form an independent fault set of the independence
graph. Suppose we generate N test vectors for a fault in the IFS. These vectors cannot
detect any other fault in the IFS. For the second fault we therefore generate N new test
vectors, which will neither detect the first fault nor any other fault in the IFS. Thus, the N-
detect test set for the two faults contains 2N vectors. To detect all faults in the independent
fault set N times we must apply this procedure independently to each fault, thus producing
N distinctly different vectors each time. These N × |IFS| vectors are needed to cover all
faults in the IFS N times. However, they may or may not cover the remaining faults of the
circuit. Hence, this number is only a lower bound on the N-detect test set size.
Figure 4.3 illustrates the application of the arguments of the above proof to the IFS
of c17. If N vectors are generated for the fault-1, those vectors cannot detect any of the
remaining 3 faults in the IFS. So, we need N more vectors to detect fault-2 N times, but
those vectors cannot detect either of the remaining 2 faults. So, in total, 4N vectors are
needed to detect each fault in the independent fault set, which means that at least 4N
vectors are needed to detect all the faults in c17 N times.
The progressive explanation of the independent fault set concept makes the newly
derived Theorem 2 look like a natural outcome of Theorem 1 [5]. The new theorem does
not seem so obvious when we consider the fact that a minimal single-detect test set actually
detects about 70% of all faults at least twice; yet it is Theorem 2 that tells us that
we need another complete single-detect test set to cover the remaining 30% of the faults a
second time. Theorem 2 is general and does imply Theorem 1.

Figure 4.3: Faults in the independent fault set of c17; each of the four faults requires its own N test vectors.
4.2.3 An Example
The 74181 4-bit ALU [43] is used to compare the N-detect lower bounds and the
practically achievable minimums (see Theorem 3 in Chapter 5). The results, given in
Table 4.1, are derived using an N-detect test minimization method [53] we propose in
Chapter 5.
The size of the largest clique (independent fault set) of the 74181 ALU is 12. So, from
Theorem 1 [5], the lower bound on the size of the single-detect test set is 12. As observed,
there exists a minimal single-detect test set of size 12. As pointed out earlier, that may not
be the case always. From the independence graph, it is clear that any fault of the circuit
that is not in the independent fault set will be compatible with one or more of the faults
in the IFS. When each fault in the IFS forms its own set containing only faults that are
Table 4.1: Results for the 4-bit ALU (74181).

   N    Theorem 2 (lower bound)    Theorem 3, Chap. 5 (practical minimum)
   1              12                              12
  10             120                             120
  20             240                             240
  30             360                             360
  40             480                             480
  50             600                             607
  60             720                             742
  70             840                             877
  80             960                            1012
  90            1080                            1147
  96            1152                            1228
compatible with itself, then every fault in the circuit can be grouped into one of those fault
sets [2, 23]. Now if there exists a single vector which detects all faults in a fault set and such
a vector is found for each fault set, then we will have a complete coverage test set whose
size will equal the IFS. If faults in any of the fault sets cannot be covered by a single vector,
then the number of vectors needed for complete coverage will exceed the lower bound.
From Theorem 2, the lower bound on the size of an N-detect test set must be 12N.
In Table 4.1 there are several N-detect tests with 12N vectors. But the N-detect test set
sizes start to diverge from the lower bound around N = 50. This may be because there do
not exist N different vectors that can detect all faults in one of the fault sets. So, two or
more vectors will be needed to detect all faults in that fault set, resulting in a test set
larger than the lower bound.
From the bridging coverage estimate (BCE) test metric discussed in Chapter 2, it is
clear that N-detect tests of orders greater than 10-15 may not further improve bridging
defect coverage. But here N-detect tests of orders much higher than 10 are considered to
practically illustrate the result of the proposed theorem and to demonstrate the divergence
of minimal test sets from their lower bounds.
4.3 Classification of Combinational Circuits
It is observed that some circuits achieve their minimal test sets with greater ease
than others. The ISCAS85 benchmarks were investigated and analyzed to
study the structural properties of the circuits [42] that contribute to these differences.
Based on their structures, combinational circuits can be classified as type-I or type-II.
4.3.1 Type-I Circuits
Type-I circuits can be classified as narrow and deep circuits with output cones having
large overlap. Figure 4.4 gives an illustration of a type-I circuit. In these circuits, the
output cones PO1, PO2 and PO3 have large overlap between them. Any vector detecting a
fault F2 will have a high probability of detecting other faults, say F3 or F1. The benchmarks
c499, c1355 and c1908 can be considered type-I circuits [42].

Figure 4.4: Type-I circuit.
4.3.2 Type-II Circuits
Type-II circuits are classified as wider and shallow circuits with output cones having no
or very small overlap between them. Figure 4.5 illustrates the topology of a type-II circuit.
Here the output cones PO1, PO2, PO3 and PO4 have almost no overlap between them.
So any vector targeted at a particular fault will have a much lower probability
of detecting any other fault. The benchmarks c880, c2670 and c7552 can be considered
type-II circuits [42].

Figure 4.5: Type-II circuit.
4.3.3 Ripple-Carry Adders
In general, a circuit may not be exactly classifiable as type-I or type-II; it is the
dominating property of the circuit that decides its nature. This can be illustrated
using the example of a ripple-carry adder, shown in Figure 4.6.
Figure 4.6: Hierarchical structure of a ripple-carry adder built from 1-bit full-adder cells.
N-detect tests were generated for the ripple-carry adder, constructed as an array of
identical full-adders. The minimum number of single-detect vectors needed to detect all
gate-level single stuck-at faults is 5, irrespective of the size of the adder. Figure 4.7 shows
minimized vector sets obtained by the new N-detect test minimization technique of Chap-
ter 5. An iteration here means that an additional vector set from a random-seed based test
generator has been included in the pre-minimization vector set. The test set size rapidly
Figure 4.7: Minimized test sets of ripple-carry adders: minimized test set size versus iterations (1 to 100) for 1-, 2-, 4-, 8- and 16-bit adders; the minimum test set size is 5 for ripple-carry adders of all sizes.
converges to 5 for the 1-bit adder in 1 iteration, 2-bit adder in 3 iterations and 4-bit adder
in 5 iterations. The 8-bit adder required 55 iterations for the optimal test set. For the
16-bit adder, the convergence became asymptotic, with the test set size remaining 7 even
after 100 iterations.
Figure 4.8 shows the gate-level structure of the full-adder used as the basic cell in the
ripple-carry adders. From the figure it is clear that the two output cones of the full-adder
have reasonable overlap among them. This is a type-I structure. However, there is also
a good amount of disjointness among the cones. When larger adders are built by adding
ripple-carry stages, the number of output cones increases. Gradually more non-overlapping
cones start appearing. As a result, type-II behavior begins to dominate. This should be the
reason why small adders behave as type-I circuits while large ones are type-II in nature.
Figure 4.8: Structure of the full-adder used in the ripple-carry adders, with inputs Ai, Bi, Ci and outputs Si, Ci+1.
Chapter 5
ILP Method for N-Detect Tests
In this chapter, a new N-detect test minimization method is presented. We express the
test minimization problem as a set covering problem and use an integer linear programming
(ILP) technique to find an optimal solution. Results obtained with this method are compared
with those of a previously published N-detect approach [64].
5.1 Test Set Minimization Problem as a Set Covering Problem
In a set covering optimization problem (simply referred to as the set covering problem),
several sets, which may have some elements in common, are given as inputs, and a minimum
number of these sets is to be selected so that the selected sets contain all the elements
contained in the original sets.
An instance (X, F) of the set-covering problem consists of a finite set X and a family
F of subsets of X, such that every element of X belongs to at least one subset in F:

    X = ∪_{S ∈ F} S

We say that a subset S ∈ F covers the elements of X contained in it. The set covering
problem is to find a minimum-size subset C ⊆ F whose members cover all of X:

    Minimize |C| such that X = ∪_{S ∈ C} S    (5.1)

Any C satisfying Condition 5.1 is an optimal selection of subsets covering X [19]. It can
be observed that this set covering problem is an abstraction of commonly arising combinatorial
problems. The set covering problem is proven to be NP-complete [56].
In order to convert the test minimization problem to a set covering problem, the
detected faults are considered as the finite set X and the test vectors, each detecting a subset
of the faults, are considered as the family F of subsets of X. The objective is then to
minimize C, the subset of the family F of subsets of X which covers all of X the required
number of times.
5.2 Realization using Integer Linear Programming
Suppose we have a set of k vectors that detects every fault at least N times. We
use diagnostic fault simulation, i.e., fault simulation without fault dropping, to identify
the vector subset Tj that detects fault j, for all j. We assign an integer variable
ti ∈ {0, 1} to the ith vector such that ti = 1 means that the ith vector should be included
in the minimal vector set and ti = 0 means that the ith vector should be discarded. The
problem of finding the minimal N-detect set then reduces to assigning values to the ti's
such that:

    Objective function: minimize Σ_{i=1}^{k} ti    (5.2)

under the following constraints:

    Σ_{ti ∈ {Tj}} ti ≥ Nj,  for all faults j    (5.3)

    ti ∈ {0, 1}
where Nj is the multiplicity of detection for the jth fault. Nj can be selected for individual
faults based on some criticality criterion, such as the coupling capacitance associated with
each node. Such a procedure has been proposed for defect-oriented test minimization [92].
This, however, may require layout-level data available only after the physical design has
been completed. In practice, Nj can be limited by the capability of the initial vector set. To
target a minimized N-detect test, we have simply assumed all Nj to be equal to N. An
integer linear program (ILP) solver [31, 48] can then find the [0, 1] values of the variables
{ti} that define a minimal N-detect vector set. For N = 1, the ILP produces the minimal
single-detect test set [30, 44, 89].
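For a toy instance, expressions 5.2 and 5.3 can be solved exactly by enumerating all 0/1 assignments; the fault-to-vectors sets T below are invented for illustration, and a real instance would go to an ILP solver instead.

```python
from itertools import product

def min_n_detect(T, n_vectors, N):
    """Exhaustive solution of expressions 5.2-5.3: T[j] is the set of
    vector indices detecting fault j; find a 0/1 assignment t selecting
    the fewest vectors such that every fault is detected >= N times.
    Exponential in n_vectors -- toy instances only."""
    best = None
    for t in product([0, 1], repeat=n_vectors):      # all 0/1 assignments
        if all(sum(t[i] for i in T[j]) >= N for j in T):
            if best is None or sum(t) < sum(best):
                best = t
    return best

# Hypothetical instance: 2 faults, 4 candidate vectors, N = 2.
T = {0: {0, 1, 2}, 1: {1, 2, 3}}
sol = min_n_detect(T, 4, N=2)
print(sol)  # (0, 1, 1, 0): vectors 1 and 2 cover both faults twice
```

Setting N = 1 reduces this to the single-detect covering problem of Equation 3.1, consistent with the remark above.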
Theorem 3: When the minimization is performed over an exhaustive set of vectors of a
combinational circuit, any ILP solution that satisfies expressions 5.2 and 5.3 is a minimum
N-detect test set.
Proof: Expressions 5.2 and 5.3 define a solution space and a set of conditions. Con-
straints 5.3 ensure that a set of vectors covers all faults. It is known that an ILP solution
when feasible is optimal [66, 74]. When the constraints 5.3 are specified for the exhaustive
set of patterns and a minimal set is found from ILP, that set will be the global minimum
because it is taken out of the entire solution space.
5.2.1 Example
Table 5.1 shows the ILP solution for the 74181 four-bit ALU circuit. The circuit has 14
primary inputs and therefore the exhaustive set contains 2^14 = 16,384 vectors. Diagnostic
fault simulation of these vectors by Hope [63] required 14.3 seconds on a Sun Ultra 5 with
256 MB RAM. Also shown in the table are 2,370 vectors generated by Atalanta [62] to detect
each fault at least 10 times. This circuit has a collapsed set of 237 detectable faults and 10
vectors were generated for each fault. Diagnostic simulation of these 2,370 vectors by Hope
required 2.3 seconds.

Table 5.1: N-detect tests for the 74181 ALU.

        16,384 exhaustive vectors    2,370 Atalanta 10-detect vectors
   N    Minimized      ILP           Minimized      ILP
        vectors        CPU s         vectors        CPU s
   1       12          87.47            12          5.19
   2       24          63.09            24          8.23
   3       36          70.56            36          5.53
   4       48          72.12            48          6.53
   5       60          65.06            60          5.69
   8       96          71.01            96          5.46
   9      108          88.01           109          6.24
  10      120          68.82           122          5.32
  20      240          79.56           262          5.93
  40      480          66.08            -            -
Notice that the ILP CPU times remain about the same for all orders of N. Generally,
ILP time depends on the number and size of the constraints in the problem, but here it
does not seem to depend on the value of N. We also observe that the optimized Atalanta
vector set starts to diverge from the lower bound, i.e., 12N, for N ≥ 9. This is because
not all test vectors of each fault are included in the initial set, so the ILP may not find the
right vector. That is why Theorem 3 does not guarantee optimality in this case.
5.3 Derivation of N-Detect Tests
We generate an unoptimized M-detect test set by an ATPG program, where M ≥ N.
In our case, this was done using Atalanta [62], which was originally designed to generate
a single-detect test set. Interestingly, repeated runs of the program produce different test
sets. This is because different random vectors (due to a different seed in the random
number generator) are used each time to initially cover easy-to-detect faults. In most cases,
by combining M single-detect test sets we could get the required set.
A simple analysis of vectors is used to remove any repeated vectors. This is important
for a correct result from the ILP when N > 1. Next, a fault simulator (Hope [63], in our
case) was used for diagnostic fault simulation. In fact, any of the fault simulators [73, 90]
can be used for this purpose. The fault simulator determines the vector set {Tj} for every
fault j. If |{Tj}|