This Is Auburn: Electronic Theses and Dissertations


Toward Zero Backtracks in Test Pattern Search Algorithms with Machine Learning


Metadata field: value (language)

dc.contributor.advisor: Millican, Spencer
dc.contributor.author: Roy, Soham
dc.date.accessioned: 2021-07-19T19:43:28Z
dc.date.available: 2021-07-19T19:43:28Z
dc.date.issued: 2021-07-19
dc.identifier.uri: https://etd.auburn.edu//handle/10415/7836
dc.description.abstract (en_US):

A digital circuit with n primary input lines has N = 2^n possible input vectors, and a test vector that detects a fault in the circuit may be any one of those 2^n n-bit combinations. A purely random test generator can produce these test vectors, but doing so is inefficient because it generates n-bit combinations blindly. Various testing algorithms have been developed and implemented over six decades to overcome this inefficiency. Classic algorithms such as the D algorithm, PODEM, and FAN laid the foundations on which later algorithms were built to improve the search time for test vectors. Because the search for tests for hard-to-detect faults in a circuit has exponential complexity, test generation, whether performed randomly or algorithmically, is computationally expensive. Backtracking is one of the essential activities in ATPG algorithms that directly impacts search time. In colloquial terms, a backtrack means the algorithm made a bad decision when determining which circuit inputs should be set to achieve an objective while searching for a test vector. Contemporary algorithms use various circuit topological information and testability measures as heuristics to reduce backtracks and improve search time. Patel and associates concluded from their experiments that, rather than using a single testability measure with a high backtrack limit, it is more efficient to use multiple testability measures successively with lower backtrack limits. However, using multiple testability measures successively as an ATPG heuristic remains quite expensive in test generation time.

To address unmanageable time complexity, engineers often rely on human "hunches" and heuristics learned through experience. Training machines to adopt these human skills is known as machine learning (ML) or machine intelligence (MI). This dissertation examines MI for its ability to enhance automatic test pattern generation (ATPG) by combining circuit topological information and testability measures into a novel heuristic that reduces backtracks. Instead of a conventional heuristic to guide backtracing directions, this work uses MI algorithms. The guidance can come from unclassified data in which patterns are found (unsupervised learning) or from a database of training problems with desired outcomes (supervised learning). The ML framework applied to ISCAS’85 and ITC’99 benchmark circuits showed significant improvements in ATPG performance in the form of reduced backtracks and computation time. Initial experiments found a significant decrease in computation time and backtracks with basic MI structures and training.

In this research, a PODEM ATPG program is implemented with ML-based guidance for backtraces. Initially, basic trained artificial neural network (ANN) guidance is found to reduce backtracks and CPU time relative to any single-heuristic guidance. Then, an optimally trained ANN guidance further enhances ATPG performance. Next, principal component analysis (PCA) combines several heuristics to train the ANN. The PCA-trained ANN guidance produces the best ATPG performance. (A minimal illustrative sketch of this ANN-plus-PCA guidance idea appears after the metadata fields below.)
dc.subject: Electrical and Computer Engineering (en_US)
dc.title: Toward Zero Backtracks in Test Pattern Search Algorithms with Machine Learning (en_US)
dc.type: PhD Dissertation (en_US)
dc.embargo.status: NOT_EMBARGOED (en_US)
dc.embargo.enddate: 2021-07-19 (en_US)
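
Illustrative sketch. The following Python code is not from the dissertation; it is a minimal sketch, under assumed conditions, of the idea the abstract describes: several correlated testability and topology heuristics are combined by PCA into a few features, and a small neural network trained on those features scores candidate backtrace directions in a PODEM-style search. The feature set, the training labels, and the scikit-learn models are illustrative assumptions; in a real ATPG the training rows would come from recorded backtrace decisions and whether they later caused backtracks.

# Hypothetical ANN-guided backtrace scoring with PCA-combined heuristics.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# Assumed per-candidate features (placeholders): e.g. SCOAP-style
# controllabilities, fan-out count, distance to the fault site.
rng = np.random.default_rng(0)
X_train = rng.random((200, 6))       # 200 recorded backtrace decisions (synthetic here)
y_train = rng.integers(0, 2, 200)    # 1 = decision that avoided a later backtrack

# PCA folds the correlated heuristic measures into a few components
# before they are fed to the ANN (the "PCA-trained ANN" idea).
pca = PCA(n_components=3)
Z_train = pca.fit_transform(X_train)

ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ann.fit(Z_train, y_train)

def choose_backtrace_input(candidate_features):
    """Return the index of the candidate gate input the ANN prefers."""
    z = pca.transform(candidate_features)
    # Estimated probability that each candidate leads to a backtrack-free search.
    scores = ann.predict_proba(z)[:, 1]
    return int(np.argmax(scores))

# Example: score three candidate inputs at one backtrace decision point.
candidates = rng.random((3, 6))
print("backtrace toward input", choose_backtrace_input(candidates))

The random training data above exists only to keep the sketch runnable end to end; the structure (features, PCA, ANN classifier, argmax over candidates) is what mirrors the approach described in the abstract.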
