|dc.description.abstract||In smart manufacturing, data management systems are built with a multi-layer architecture in which the most significant layers are the edge and cloud layers. The edge layer supports data analysis that genuinely demands low latency, while cloud platforms store vast amounts of data and perform extensive computations such as machine learning and big data analysis. A limitation of this type of data management system is that all data must be transferred from the equipment layer to the edge layer in order to perform thorough data analyses; worse, this data transfer adds delays to the computation process in smart manufacturing. In the first part of the dissertation, we investigate an offloading strategy that shifts a selection of computation tasks to the equipment layer. Our computation offloading mechanism selects smart manufacturing tasks that are lightweight and do not require saving or archiving at the edge/cloud. We demonstrate that an edge layer can judiciously offload computing tasks to an equipment layer, thereby curtailing latency and reducing the amount of transferred data during smart manufacturing. Our experimental results confirm that the proposed offloading strategy enables real-time data analysis at the equipment level, where an array of smart devices speeds up the data analysis process in semiconductor manufacturing. Using the collected data, we apply the empirical results as training and testing data to construct a machine learning model that recommends, based on the current system status, whether it is advantageous to offload computation from the edge layer to the equipment layer.
In the second part of the dissertation, we elaborate on a novel scheduler: a scheduling algorithm that allocates edge computing resources with awareness of the workload at the equipment layer. Our edge scheduling algorithm determines the most appropriate scenarios for offloading computing tasks from edge to equipment, thereby maximizing throughput while meeting the priority requirements of the tasks. A limitation of current research on edge scheduling is that available resources at the equipment layer are not used to achieve maximum throughput; the main difference between our scheduling algorithm and other state-of-the-art edge schedulers is that ours exploits these equipment-level resources. By using the additional resources at the equipment level, our experimental results show that the total computation time is shortened by 27.75% and the throughput is increased by 38.45% compared with the Hybrid Computing Solution (HCS), a state-of-the-art scheduling algorithm.
Moreover, to enable offloading computation tasks from the edge layer to the equipment layer, the edge layer must be able to assign specific computation tasks to the equipment. In semiconductor manufacturing, the host computer located at the edge layer communicates with the equipment through the SECS/GEM communication protocol. As the last piece of this dissertation, we design an advanced protocol on the SECS/GEM interface to facilitate the transfer of computational tasks from the edge to the equipment. Current research on equipment-level Fault Detection and Classification (FDC) suggests building a software module at the equipment layer to perform computation. The limitation of this technique is that it requires software modification every time the computation logic changes; furthermore, it does not allow the equipment to perform computation tasks other than FDC. With the new protocol in place, the host can dynamically assign data analysis tasks to the equipment, and the protocol also offers a mechanism for the equipment to report the analysis results back to the host.||en_US