|dc.description.abstract||Since the advent of the modern von Neumann computer in the 1940s,
startling advances have been made in computing technology
through the creation of ever more innovative, reliable, and faster electronic components,
from vacuum tubes to transistors.
Computing power has risen exponentially over relatively brief periods of time,
as Moore's law projected a salient trend in chip performance,
estimating that the capacity of the integrated circuit would double every 18 months.
Although these developments were essential in solving computationally intensive problems,
faster devices were not the sole contributing factor to high performance computing.
Since electronic processing speeds began to approach limitations imposed by the laws of physics,
it became evident that the performance of uniprocessor computers was limited.
This has led to the prominent rise of parallel and distributed computing.
Such systems can be homogeneous or heterogeneous.
In the past decade, homogeneous computing
has solved large, computationally intensive problems
by harnessing a multitude of computers via a high-speed network.
However, as the amount of homogeneous parallelism in applications decreases,
homogeneous systems cannot offer the desired speedups.
Therefore, heterogeneous architectures that exploit the heterogeneity in computations
have become a critical research issue.
In heterogeneous computing (HC) systems
consisting of a multitude of autonomous computers,
a mechanism that can harness these computing resources efficiently
is essential to maximize system performance.
This is especially true in mapping tasks to heterogeneous computers
according to the task computation type,
so as to maximize the benefits from the heterogeneous architecture.
The general problem of mapping tasks onto machines is known to be NP-complete;
as such, many good heuristics have been developed.
However, the performance of most heuristics is sensitive to the dynamic environment
and is affected by various system variables.
This susceptibility makes it difficult to choose an appropriate heuristic.
Furthermore, an adaptable scheduler has been elusive to researchers.
In this research, we show that a support vector machine (SVM) can be used to construct an elegant scheduler that is capable of making heuristic selections dynamically and that adapts to the environment as well.
To the best of our knowledge, this research is the first use of an SVM to perform schedule selections in distributed computing.
We call this novel meta-scheduler the support vector scheduler (SVS).
Once trained, SVS can perform schedule selections in O(n) time, where n is the number of tasks.
Using rigorous simulations, we evaluated the effectiveness of SVS in selecting the best heuristic. We find that the average improvement of SVS over random selection is 29%, and over the worst selection is 49%. Indeed, SVS is only 5% worse than the theoretical best selection.
Since SVS contains a structural generalization of the system,
the heuristic selections are adaptive to the dynamic environment in terms of task heterogeneity and machine heterogeneity.
Furthermore, our simulations show that SVS is highly scalable with both the number of tasks and the number of machines.||en_US