Comparison of Aerial Collision Avoidance Algorithms in a Simulated
Environment
by
James Holt
A thesis submitted to the Graduate Faculty of
Auburn University
in partial fulfillment of the
requirements for the Degree of
Master of Science
Auburn, Alabama
May 6, 2012
Keywords: Unmanned Aerial Vehicle, Collision Avoidance
Copyright 2012 by James Holt
Approved by
Saad Biaz, Chair, Associate Professor of Computer Science and Software Engineering
David Umphress, Associate Professor of Computer Science and Software Engineering
Hari Narayanan, Professor of Computer Science and Software Engineering
Abstract
In the field of unmanned aerial vehicles (UAVs), several control processes must be active to maintain safe, autonomous flight. When flying multiple UAVs simultaneously, these aircraft must be capable of performing mission tasks while maintaining a safe distance from each other and from obstacles in the air. Despite numerous proposed collision avoidance algorithms, there is little research comparing these algorithms in a single environment. This paper outlines a system built on the Robot Operating System (ROS) environment that allows for control of autonomous aircraft from a base station. This base station allows a researcher to test different collision avoidance algorithms in both real-world and simulated environments. Data is then gathered from three prominent collision avoidance algorithms based on safety and efficiency metrics. These simulations use different configurations based on airspace size and the number of UAVs present at the start of the test. The three algorithms tested in this paper are based on mixed-integer linear programming (MILP), the A* algorithm, and artificial potential fields. The results show that MILP excelled with a small number of aircraft on the field but had computation issues with a large number of aircraft. The A* algorithm struggled with small field sizes but performed very well in a larger airspace. Artificial potential fields maintained strong performance across all categories because of the algorithm's handling of many special cases. While no algorithm was perfect, these algorithms demonstrated the ability to handle up to eight aircraft on a 500 meter square field and sixteen aircraft safely on a 1000 meter square field.
Acknowledgments
First, I want to thank my Lord and Savior Jesus Christ for providing all the provisions I've needed to make it this far. He's guided me every step of the way, and without him, this would not have been possible.
I would like to express my deepest gratitude to Dr. Biaz for his guidance, support, and the opportunity to work with him during my graduate studies at Auburn University. I would also like to thank all the members of the Auburn 2011 REU site for their algorithm development that made this study possible: Waseem Ahmad, Travis Cooper, Ben Gardiner, Matthew Haveard, Tyler Young, Andrew Kaizer, Thomas Crescenzi, James Carroll, Jared Dickinson, Jason Ruchti, and Chip Senkbell.
I'd like to thank the National Science Foundation for their continued support of the REU site at Auburn University and for their support of the Aerial and Terrestrial Testbed for Research in Aerospace, Computing, and maThematics (ATTRACT) through grants CNS #0855182 and CNS #0552627. I'd also like to thank everyone who's worked on the ATTRACT project to get it to the point where it is today. Special thanks to Alex Farrell, Jared Harp, and John Harrison for all the hardware assistance with both the UAVs and circuitry.
I'd like to thank my family, who have supported me every step of the way and encouraged me to succeed in computer science. Finally, I'd like to show my utmost gratitude to my wife, Caroline, who has provided unwavering support and love throughout this whole process. Love you, Caroline.
Table of Contents
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
List of Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 General Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Physical Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Analyzing Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Survey of Aerial Collision Avoidance Algorithms . . . . . . . . . . . . . . . . . . 4
2.1 Geometric Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Evolutionary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 Mixed-Integer Linear Programming (MILP) . . . . . . . . . . . . . . . . . . 6
2.4 Grid-based approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.5 Artificial Potential Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3 Mixed Integer Linear Programming . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.1 Mixed-Integer Linear Programming (MILP) Basics . . . . . . . . . . . . . . 10
3.2 Constraints on the Function Solver . . . . . . . . . . . . . . . . . . . . . . . 10
3.3 Problem Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.4 Handling Multiple Aircraft . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.5 Receding Horizon Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4 A* Grid-Based Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.1 A* Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.2 Dynamic Sparse A* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.3 Creating a Heuristic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.3.1 Background Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.3.2 Predicting Plane Locations . . . . . . . . . . . . . . . . . . . . . . . . 19
4.3.3 Estimation of Best Cost to Goal . . . . . . . . . . . . . . . . . . . . . 22
4.4 Search with Dynamic Sparse A* . . . . . . . . . . . . . . . . . . . . . . . . . 23
5 Artificial Potential Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5.1 Simple Artificial Potential Field (APF) Implementation . . . . . . . . . . . . 25
5.2 Calculating Force Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.3 Special Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.3.1 Handling Deadlocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.3.2 Right Hand Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.3.3 Aircraft Priorities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.3.4 Aircraft Looping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
6 Test-bed Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.1 Test-bed Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.2 Pre-existing Hardware Configuration . . . . . . . . . . . . . . . . . . . . . . 36
6.2.1 Base ArduPilot Configuration . . . . . . . . . . . . . . . . . . . . . . 36
6.2.2 AU Proteus modifications . . . . . . . . . . . . . . . . . . . . . . . . 36
6.3 Pre-existing Software Configuration . . . . . . . . . . . . . . . . . . . . . . . 37
6.3.1 ArduPilot Modifications . . . . . . . . . . . . . . . . . . . . . . . . . 37
6.3.2 AU Proteus JAVA Controller . . . . . . . . . . . . . . . . . . . . . . 37
6.4 Robot Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
6.4.1 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.4.2 Messages and Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.4.3 Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.5 Core Control Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.5.1 X-Bee IO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.5.2 Coordinator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.5.3 Control Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6.6 Core Message Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6.6.1 Telemetry Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.6.2 Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.7 Research Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.7.1 Simulator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.7.2 Collision Avoidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
6.7.3 Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.1 Number of Conflicts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.2 Collision Reduction and Survival Rate . . . . . . . . . . . . . . . . . . . . . 48
7.3 Aircraft Life Expectancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
7.4 Efficiencies of the Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 53
7.5 Algorithm Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.5.1 MILP performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.5.2 A* performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.5.3 Artificial Potential Fields performance . . . . . . . . . . . . . . . . . 57
7.5.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
A Links to source code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
List of Figures
2.1 Relative motion of two UAVs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
3.1 Turning Constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.2 RHC versus Full Path Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.1 Turning angle and constant speed constraints on a grid . . . . . . . . . . . . . . 20
4.2 Danger grids through time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.3 A plane's predicted path. The yellow path is a predicted turn. The purple
represents the path to an intermediate waypoint and the green is the final path
taken to the plane's goal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.4 Plane bu er zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.1 Determining the maximum distance of the repulsive force field . . . . . . . . . 27
5.2 Geometry of calculating the repulsive force acting on Ui . . . . . . . . . . . . . 28
5.3 Calculating the total force acting on a UAV . . . . . . . . . . . . . . . . . . . . 29
5.4 Adjusting the repulsive force to handle head-on collisions . . . . . . . . . . . . . 30
5.5 Situation in which the right hand turn rule should not be applied. Ui should
continue towards its destination instead of traveling behind Uk. . . . . . . . . . 31
5.6 Finding the new repulsive force vector in order to travel behind Uk . . . . . . . 32
5.7 Geometry involved to detect a looping condition. . . . . . . . . . . . . . . . . . 34
6.1 The layout and interactions of ROS nodes in this system. Nodes are ovals with
services represented by dashed lines and messages by solid lines [5]. . . . . . . . 40
7.1 Average conflict reduction on a 500x500 meter field . . . . . . . . . . . . . . . 47
7.2 Average conflict reduction on a 1000x1000 meter field . . . . . . . . . . . . . . 48
7.3 Average collision reduction on a 500x500 meter field . . . . . . . . . . . . . . 49
7.4 Average collision reduction on a 1000x1000 meter field . . . . . . . . . . . . . 50
7.5 Average survival rate on a 500x500 meter field . . . . . . . . . . . . . . . . . . 50
7.6 Average survival rate on a 1000x1000 meter field . . . . . . . . . . . . . . . . 51
7.7 Average increase in life expectancy on a 500x500 meter field . . . . . . . . . . 52
7.8 Average increase in life expectancy on a 1000x1000 meter field . . . . . . . . . 53
7.9 Average increase in the number of waypoints achieved on a 500x500 meter field 54
7.10 Average increase in the number of waypoints achieved on a 1000x1000 meter field 54
List of Tables
7.1 Table showing the best performing algorithm for each category . . . . . . . . . 59
List of Abbreviations
APF Artificial Potential Fields
DSAS Dynamic Sparse A* Search
GPS Global Positioning System
MILP Mixed-Integer Linear Programming
PCA Point of Closest Approach
ROS Robot Operating System
SAS Sparse A* Search
UAV Unmanned Aerial Vehicle
Chapter 1
Introduction
One area of technological research that has grown increasingly important over the last decade is the field of unmanned aerial vehicles (UAVs). In particular, UAVs are being used for a large number of surveying applications, including surveying an area of land, points of interest, or other mobile vehicles. However, in order to protect these UAVs, algorithms are needed to route them safely through the airspace. In particular, missions involving more than one UAV need these algorithms for safety.
Many algorithms have been proposed, and even implemented, to help manage this UAV safety problem. Some algorithms work by generating maneuvers on a per-second basis whereas others work by planning a path far into the future. Despite all the variations in collision avoidance, there is almost no work comparing them on a single system. The goal of this research is to take some of these prominent algorithms and evaluate their effectiveness.
1.1 General Problem
Before any algorithms were chosen for experimentation, the problem first had to be defined along with some goals. The base problem is to test and compare collision avoidance algorithms controlling multiple aircraft to determine which algorithms perform best. This meant that there needed to be a standard test bed on which to perform all these calculations and a standard set of rules for the simulations. Since this problem will eventually be tested in a real-world setting, some of these standards were chosen based on real model aircraft while others were made to help frame the problem better.
One of these framing standards was to limit the collision avoidance algorithms to a two dimensional airspace, with the idea of eventually expanding the simulations to three dimensions. The basic idea behind this limitation was that if an algorithm can safely navigate aircraft through a two dimensional space, it should be able to do so through a three dimensional space after some modifications. Additionally, this would save time during algorithm development by limiting the airspace. This would also allow for potential adaptation to ground based vehicles, which typically navigate in two dimensional space.
In this paper, environmental factors are also excluded from the problem. Typically, real aircraft will experience a large number of environmental influences, including air resistance, wind, violent weather, and unresponsive obstacles such as unknown aircraft. Furthermore, it's assumed all aircraft in the simulation can be tracked and that their positions are known. This means the algorithms assume all data received from the aircraft is accurate and that all obstacles are known by the system.
1.2 Physical Constraints
There are also constraints due to the physics of a real aircraft. Specifically, these algorithms are limited to aircraft with a maximum turn rate and constant speed. For this problem, the maximum turning angle was limited to 22.5°/second. This means that for an aircraft to reverse its direction, it would have to spend eight seconds of flight time changing its heading. The aircraft was also limited to an airspeed of 25 miles/hour, which is approximately 11.176 meters/second.
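The two physical constants above can be checked with a small amount of arithmetic. The sketch below is purely illustrative; the helper names are hypothetical and are not part of the thesis test bed.

```python
# Illustrative check of the physical constraints described above
# (hypothetical helper names; not part of the thesis test bed code).

MAX_TURN_RATE_DEG = 22.5          # degrees per second
AIRSPEED_MPH = 25.0               # constant airspeed in miles/hour
METERS_PER_MILE = 1609.344

def airspeed_mps(mph: float) -> float:
    """Convert miles/hour to meters/second."""
    return mph * METERS_PER_MILE / 3600.0

def seconds_to_reverse(turn_rate_deg: float) -> float:
    """Time needed to change heading by 180 degrees at the max turn rate."""
    return 180.0 / turn_rate_deg

print(round(airspeed_mps(AIRSPEED_MPH), 3))   # 11.176 m/s
print(seconds_to_reverse(MAX_TURN_RATE_DEG))  # 8.0 seconds
```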
This problem is also considered a real-time problem. While the algorithms can choose how long they wish to spend on a particular calculation, the simulation will continue running during this calculation time. In particular, the simulation generates a new update from each aircraft at a rate of one update per second, and it will continue generating those updates and following the preset course unless the collision avoidance algorithm alerts the aircraft to a new path or until the aircraft is determined to have a collision.
1.3 Analyzing Results
With the problem limitations defined, the methods for analyzing the algorithms must be defined. First, collision avoidance algorithms are meant to reduce both the number of collisions and the number of conflicts of the aircraft. A collision can be defined as two aircraft occupying the same space. In the real world, this would typically imply the destruction of both aircraft. Because of the granularity of the simulations, a collision is defined as two aircraft being approximately one second of flight time away from each other. Since the airspeed of the aircraft is a constant 11.176 m/s, a circle of radius 12 meters is chosen as the collision zone for these aircraft. Similarly, a conflict is meant to represent an imminent collision between aircraft. For the conflict zone, the radius is doubled to a 24 meter circle. For these simulations, each conflict and collision is tracked, but a collision will remove any aircraft involved from future time steps to simulate the destruction of those aircraft.
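The collision and conflict definitions above amount to two distance thresholds. A minimal sketch of that classification, assuming planar positions in meters (this is an illustration of the definitions, not the simulator's actual code):

```python
import math

COLLISION_RADIUS = 12.0  # meters, ~1 second of flight at 11.176 m/s
CONFLICT_RADIUS = 24.0   # meters, double the collision zone

def classify(p1, p2):
    """Classify a pair of aircraft positions (x, y) in meters
    against the collision and conflict zones defined above."""
    d = math.dist(p1, p2)
    if d < COLLISION_RADIUS:
        return "collision"
    if d < CONFLICT_RADIUS:
        return "conflict"
    return "safe"

print(classify((0, 0), (10, 0)))   # collision
print(classify((0, 0), (20, 0)))   # conflict
print(classify((0, 0), (30, 0)))   # safe
```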
While these two definitions describe the effectiveness of the collision avoidance, there also needs to be a way to describe how efficient the algorithms are at reaching the goal waypoints. In order to do this, the increase in waypoints achieved was used. This metric was used because it accounts for both the effectiveness of collision avoidance and the efficiency of the maneuvers. For example, an algorithm that loses several aircraft will have a lower number of waypoints achieved, but an algorithm that takes overly cautious maneuvers will also have a very low number of waypoints achieved.
In the next chapter, several proposed algorithms are discussed along with some of their known strengths and weaknesses. Then, three of those algorithms are discussed further, implemented, and modified to improve on some of the known or discovered weaknesses. Despite all the differences between these algorithms, they are then implemented on a single common testbed for analysis. Finally, the results from the test bed simulations are presented along with some analysis of the different algorithms.
Chapter 2
Survey of Aerial Collision Avoidance Algorithms
2.1 Geometric Approach
Geometric methods for collision detection and maneuvering are based on a couple of simple geometric principles. For this particular approach, the point of closest approach (PCA) algorithm was examined. In this algorithm, the aircraft involved are typically considered a single point mass with a constant velocity vector denoting speed and direction [8]. By extending these velocity vectors out from the point mass, it's possible to determine how close two aircraft will come to each other, a distance referred to as $\vec{r}_m$, the "miss distance vector" [8]. Referring to Figure 2.1, if $\hat{c}$ is the unit vector representing the difference in the velocities of the two aircraft and $\vec{r}$ is the relative distance between the point masses, then $\vec{r}_m$ is defined:

$$\vec{r}_m = \hat{c} \times (\vec{r} \times \hat{c}) \qquad (2.1)$$

If this miss distance vector is lower than a pre-defined safe threshold, typically the conflict distance, then the algorithm knows that it needs to re-route one or both of the aircraft. Further vector manipulation is done on the miss distance vector in order to determine the maneuvers that need to take place to resolve the conflict [8].
One of the benefits of geometric approaches is that they can typically be implemented in both 2D and 3D spaces by modifying the point mass and velocities to include a z component. In addition, it's already assumed that the aircraft maintain a constant speed, and environmental components are ignored [8]. This algorithm is also very efficient in terms of time. With only two aircraft, it's fairly trivial to determine whether their paths are in conflict and then how to re-route those aircraft.

Figure 2.1: Relative motion of two UAVs

However, this particular algorithm is only shown to work for two aircraft. As the number of aircraft increases, the problem of checking for conflicts and re-routing becomes much more complex because each point mass and vector must be factored in.
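As a concrete illustration, the miss distance of Equation 2.1 can be computed for two constant-velocity point masses. Since $\hat{c}$ is a unit vector, the triple product $\hat{c} \times (\vec{r} \times \hat{c})$ reduces to the component of $\vec{r}$ perpendicular to $\hat{c}$; the sketch below uses that identity. The helper name is hypothetical and assumes a 2D point-mass model.

```python
# Sketch of the point-of-closest-approach (PCA) miss distance (Eq. 2.1).
# Since c_hat is a unit vector, c_hat x (r x c_hat) is the component of
# r perpendicular to c_hat, i.e. r - (r . c_hat) c_hat.
import math

def miss_distance(p1, v1, p2, v2):
    """Magnitude of the miss distance vector r_m for two
    constant-velocity point-mass aircraft in 2D."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]   # relative position r
    cx, cy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    norm = math.hypot(cx, cy)
    if norm == 0.0:                          # identical velocities: separation never changes
        return math.hypot(rx, ry)
    cx, cy = cx / norm, cy / norm            # unit vector c_hat
    dot = rx * cx + ry * cy
    mx, my = rx - dot * cx, ry - dot * cy    # r_m = r - (r . c_hat) c_hat
    return math.hypot(mx, my)

# Head-on aircraft offset 10 m laterally: they miss by 10 m.
print(miss_distance((0, 0), (10, 0), (100, 10), (-10, 0)))  # 10.0
```

If this value falls below the conflict distance, the pair would be flagged for re-routing.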
2.2 Evolutionary
The evolutionary approach is a method of handling collision avoidance through adaptation of a known path. To start, the algorithm creates a random set of solutions from the start point to the end [10]. From this point, the algorithm enters a cyclical series of steps. First, each path is mutated based on one of the following four techniques, and that new mutated path is added to the pool [10]:
1. Mutate and propagate - change one segment of the path and modify the rest of the path to reach the endpoint
2. Crossover - uses the start of one path and the end of another path and combines them using a "point-to-point-join function"
3. Go to goal - takes a point near the end of a path and performs the point-to-point-join function from that point to the end
4. Mutate and match - mutate one or more of the segments in the path, then attempt to
connect the end of the mutated segments to the end portion of the path
This effectively doubles the pool size by creating a new, randomly-mutated path for each old path. Then, this pool goes through an evaluation based on a variety of criteria such as feasibility, constraints, and goals [10]. The most fit paths are kept within the pool and the others are pruned away until the pool is back to its original size [10]. This is one generation of the evolutionary approach, but in most scenarios, multiple generations would be calculated before returning anything to the aircraft.
This approach has a couple of benefits because of its system of execution. First, it's easily adaptable to a real-time environment. With the evolutionary approach, the algorithm can continuously create new random generations and can return a path (not necessarily the best path) at any point. Additionally, the iterative nature allows it to receive updated GPS data and modify its paths based on this new data [10]. This makes the algorithm extremely easy to adapt to a real-time system. One of the downfalls of this process is that the evolutionary algorithm is not guaranteed to ever find the optimal solution to a problem. However, it is still very effective at finding a solution and continuously adapting that solution
as time passes.
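One generation of the cycle described above can be sketched in a few lines. This is a deliberately minimal sketch with hypothetical helper names and a single mutation operator (a jittered interior waypoint standing in for "mutate and propagate"); the approach in [10] uses all four operators and richer fitness criteria.

```python
# Minimal sketch of one generation of the evolutionary approach:
# mutate every path, pool old and new, then prune back to the original
# pool size by fitness (here, simply total path length).
import math
import random

def path_length(path):
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def mutate(path, scale=20.0):
    """Jitter one interior waypoint, keeping the endpoints fixed."""
    new = list(path)
    if len(new) > 2:
        i = random.randrange(1, len(new) - 1)
        x, y = new[i]
        new[i] = (x + random.uniform(-scale, scale),
                  y + random.uniform(-scale, scale))
    return new

def one_generation(pool):
    doubled = pool + [mutate(p) for p in pool]   # double the pool
    doubled.sort(key=path_length)                # evaluate fitness
    return doubled[:len(pool)]                   # prune to original size

random.seed(0)
pool = [[(0, 0), (random.uniform(0, 500), random.uniform(0, 500)), (500, 500)]
        for _ in range(10)]
for _ in range(50):                              # several generations
    pool = one_generation(pool)
print(path_length(pool[0]) >= math.dist((0, 0), (500, 500)))  # True
```

Because the best path always survives pruning, fitness never gets worse between generations, though, as noted above, optimality is never guaranteed.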
2.3 Mixed-Integer Linear Programming (MILP)
While some of these approaches directly address the problem of collision avoidance, the MILP method is actually an adaptation of the problem into a set of constraints [11]. These constraints are then used as inputs to an MILP solver, a program that takes a set of constraints and attempts to find an optimal solution [11]. Today, many MILP solvers are commercially available.
With one of these solvers, the only things left to create are the input constraints. In order to adapt the problem for MILP, a few basic constraints need to be considered. First, each UAV is mapped into the program with its mechanical state: position, velocity, and
acceleration [11]. This is necessary because the solver needs to know where the UAVs are positioned, and it must ensure that the UAVs maintain a safe distance while solving. In turn, the constraints on safe distance must also be given to the problem solver [11]. Finally, the goal of minimizing traveling time must be entered as a constraint on the system [11]. This gives the problem a goal and a method of evaluating the success of each of the solutions calculated by the solver.
One of the problems with the MILP method is that it can be extremely time-consuming to calculate an optimal solution due to the infinite number of possibilities. Even with relatively quick MILP solvers, these problems grow exponentially with each UAV added to the equation. This means that for MILP to be a valid solution to collision avoidance, the problem needs to be adapted for real-time use and heavily constrained so that the time complexity goes down.
2.4 Grid-based approaches
Previous algorithms have used the aircraft as the focal points for collision avoidance.
However, there are several algorithms that instead focus on a grid representing the airspace.
These algorithms work by first taking the airspace and turning it into a discrete grid of squares (or cubes in a three dimensional environment). After generating this grid, each square is assigned values based on the algorithm. Typically, these values represent occupied grid space, either through static or dynamic obstacles.
After generating this grid, the algorithm uses the grid to generate a path for the aircraft. One of these specific algorithms is known as A* (spoken as "A-star"). In this algorithm, the grid is used as a graph that starts at the aircraft's initial position. From there, the idea is to consider nodes that the aircraft could travel to. Each of these nodes is assigned a rating that represents the best possible path to the destination. As each node is considered, it's added to a heap with the lowest cost element at the top of the heap. The process of rating these nodes is called "branching" [17]. After branching once, the algorithm repeats the process by
looking at the least cost node in the heap and considering where it can branch to from there. This recursive process is known as "bounding" the future search to only best-cost estimates [17]. Because of this, A* is known as a branch-and-bound method of handling these complex problems.
Grid-based approaches do have some drawbacks caused by the grid. One of these is converting the three dimensional airspace into a three dimensional grid. At planetary scale, this means that the curvature of the earth has to be specially handled by the grid. Additionally, whenever a space is discretized, it can sometimes be difficult to determine which spaces are occupied by objects. For example, two aircraft in adjacent grid squares could be almost directly beside each other or on the far sides of their grid spaces. This can be accounted for by discretizing the airspace further, but that can lead to complexity problems as the grid grows larger.
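The branch-and-bound process described above can be sketched as a toy A* search on a discretized airspace: nodes are grid squares, the heap orders nodes by the best-cost estimate f = g + h, and blocked squares model occupied grid space. This is a generic 4-neighbour sketch, not the Dynamic Sparse A* used later in this thesis.

```python
# Toy branch-and-bound A* on a discretized airspace.
import heapq

def a_star(start, goal, blocked, size):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    heap = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    seen = set()
    while heap:
        f, g, node, path = heapq.heappop(heap)  # "bound": expand best estimate first
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):  # "branch"
            if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in blocked:
                heapq.heappush(heap, (g + 1 + h((nx, ny)), g + 1,
                                      (nx, ny), path + [(nx, ny)]))
    return None  # no path exists

# A wall at x=2 with one opening at y=4 forces a detour.
path = a_star((0, 0), (4, 4), blocked={(2, 0), (2, 1), (2, 2), (2, 3)}, size=5)
print(len(path) - 1)  # 8 moves: the shortest route through the opening
```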
2.5 Artificial Potential Fields
Artificial potential fields offer a relatively simple method of handling collision avoidance. In physics, particles have both positive and negative charges, where particles of similar charge repel each other and particles of opposite charge attract. Researchers discovered a way to apply this idea to collision avoidance.
In this approach, each aircraft is considered a negative charge in order to make the aircraft feel an artificial repulsive force from each other [6]. Additionally, each aircraft has a goal waypoint associated with a positive charge [6]. In order to determine where an aircraft should travel, the repulsive forces and attractive forces are added together into one vector. Then, this vector is used to determine what direction the aircraft should travel to remain as safe as possible [6]. This process is repeated for each aircraft in the scenario until each has a direction vector.
Artificial potential fields are generally a significantly faster method of handling collision avoidance compared to other algorithms. This is primarily due to the simplicity of the
calculation. However, the simple design makes this algorithm purely reactive in nature. This means that each time the algorithm is executed, it makes a greedy choice based purely on the current state of the aircraft. Because of this greedy choice, there isn't any long-term planning for future time steps, which may result in some special cases that can't be handled by the artificial potential fields alone.
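A single potential-field step as described above can be sketched as a vector sum: each other aircraft contributes a repulsive force and the goal waypoint an attractive one. The gain constants and function name below are hypothetical illustrations; the implementation used in this study layers several special cases on top of this basic calculation.

```python
# Sketch of one artificial-potential-field step: the vector sum of an
# attractive force toward the goal and repulsive forces from other
# aircraft gives a travel heading (degrees, counter-clockwise from +x).
import math

def apf_direction(pos, goal, others, k_att=1.0, k_rep=5000.0):
    fx = k_att * (goal[0] - pos[0])        # attractive force toward the goal
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in others:                  # repulsive force from each aircraft
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if d > 0:
            fx += k_rep * dx / d**3        # magnitude falls off as 1/d^2
            fy += k_rep * dy / d**3
    return math.degrees(math.atan2(fy, fx))

# Goal straight ahead, intruder ahead and slightly left: the summed
# vector steers slightly right of the goal (negative heading).
print(apf_direction((0, 0), (100, 0), [(50, 5)]) < 0)  # True
```

The purely reactive nature is visible here: the heading depends only on the current positions, with no memory of past steps.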
2.6 Conclusion
These five algorithms were the ones originally considered for use in this collision avoidance test. Of those five, three algorithms were chosen to participate in this comparison: MILP, A*, and artificial potential fields. The geometric approach was found to have problems working with a large number of aircraft, one of the problem requirements. While the evolutionary approach doesn't have that problem, it did suffer from problems with random mutations, so more predictable algorithms were chosen for testing. MILP and A* were both algorithms that could be potentially time consuming but were found to be adaptable to real-time situations with the proper problem constraints. Finally, artificial potential fields are a well-known solution to collision avoidance with some special situations that can be handled through proper planning. The following chapters address these algorithms in greater detail along with some of the changes that were made to fit this specific problem.
Chapter 3
Mixed Integer Linear Programming
3.1 Mixed-Integer Linear Programming (MILP) Basics
MILP is one way to adapt the collision avoidance problem to a known problem solver. MILP is useful because it allows researchers to take a set of constraints and place them on the input values in order to achieve a goal [11]. In the case of collision avoidance, the goal is to minimize the time spent traveling to the various waypoints while also minimizing collisions and conflicts [11]. Unfortunately, MILP solvers suffer from the infinite number of possibilities for these UAVs. With each aircraft added to the problem, the time to obtain a solution grows exponentially. Fortunately, since the UAVs reside in the physical world, there are several constraints that we can intuitively place on the solver to help reduce the problem. Additionally, the computation time can be further reduced using some techniques to reduce the problem size.
3.2 Constraints on the Function Solver
As mentioned earlier, the overall goal is to minimize the time needed to safely traverse several waypoints. Since the speed of the aircraft is constant, this is the same as minimizing the distance travelled to reach all of the waypoints. In order to do this, the objective function must be defined to reflect this goal. For this scenario, the objective function J is defined in Equation 3.1, where N is the number of aircraft in the problem and $T_{F_v}$ is the time it takes for aircraft v to reach its final waypoint.

$$\min J = \sum_{v=1}^{N} T_{F_v} \qquad (3.1)$$
With this goal defined, the solver now needs constraints on the problem. To begin, the state of each aircraft is defined in Equation 3.2 based on the three primary elements of motion: position, velocity, and force (from which acceleration can be calculated).

$$state = \begin{bmatrix} x \\ y \\ v_x \\ v_y \end{bmatrix} \qquad force = \begin{bmatrix} f_x \\ f_y \end{bmatrix} \qquad (3.2)$$
Each aircraft will have a starting state based on its latitude, longitude, and original bearing, which have to be converted into Cartesian x and y coordinates and velocity components. For an aircraft flying straight, the force component of the state would be zero, denoting that the aircraft is not feeling force from a turn. With this starting state defined, it's now necessary to relate each state to the previous state. Using well known physics equations, this relationship is easily defined using Equations 3.3, 3.4, and 3.5.
statet+1 = A statet +B acceleration (3.3)
A = \begin{bmatrix} 1 & 0 & dt & 0 \\ 0 & 1 & 0 & dt \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} \frac{1}{2}(dt)^2 & 0 \\ 0 & \frac{1}{2}(dt)^2 \\ dt & 0 \\ 0 & dt \end{bmatrix} \qquad (3.4)
state_{t+1} = \begin{bmatrix} x_t + v_x dt + \frac{1}{2}a_x(dt)^2 \\ y_t + v_y dt + \frac{1}{2}a_y(dt)^2 \\ v_x + a_x dt \\ v_y + a_y dt \end{bmatrix} \qquad (3.5)
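The discrete-time update in Equations 3.3 through 3.5 can be sketched directly as matrix arithmetic. The following function is an illustrative sketch, not the thesis implementation; the name `step_state` and the NumPy formulation are our own:

```python
import numpy as np

def step_state(state, accel, dt):
    """Advance a [x, y, vx, vy] state by one time step of length dt
    under a constant acceleration [ax, ay] (Equations 3.3-3.5)."""
    A = np.array([[1.0, 0.0, dt, 0.0],
                  [0.0, 1.0, 0.0, dt],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    B = np.array([[0.5 * dt**2, 0.0],
                  [0.0, 0.5 * dt**2],
                  [dt, 0.0],
                  [0.0, dt]])
    return A @ np.asarray(state, float) + B @ np.asarray(accel, float)
```

Iterating this update with solver-chosen accelerations produces the multi-step paths described in the following sections.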
With this relationship established, the fundamental components of solving the problem are in place. However, additional physical constraints must be added to the system in order to maximize the accuracy of the solution.
3.3 Problem Constraints
The problem definition included both a constant speed and a maximum turning angle for the aircraft; therefore, the MILP solver must also account for both the speed constraint and the maximum turning angle, or infeasible solutions might be generated [11]. To account for these constraints, two things must happen. First, the distance of movement is converted into two polygons, a minimum and a maximum velocity polygon. Polygons are easily understood by the MILP solver and can be used as an approximation of a circle around the aircraft. These two polygons are visually represented by Figure 3.1.
(a) Added Turning Constraint (b) Force
Figure 3.1: Turning Constraint
In addition to this distance constraint, the turn constraint can be represented by this polygon. Given the maximum turning angle, θ, and the constant speed from the problem statement, we can calculate the maximum force, f_max, that can safely be exerted on the aircraft at any given moment. At this point, the solver has everything it needs to calculate the path of one single aircraft. By modifying the constrained force exerted at each state, the MILP solver changes the turn angle, which dictates the next state's location and velocity. By doing this repeatedly, the solver can calculate several steps to form a complete path.
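As a rough sketch of where a force bound like f_max comes from, a per-step turning angle at constant speed implies a maximum turn rate and hence a maximum centripetal acceleration. The function below is illustrative only; the coordinated-turn model and all names are our assumptions, not the thesis solver's formulation:

```python
import math

def max_turn_force(mass, speed, max_turn_deg, dt):
    """Bound on lateral force for a constant-speed, coordinated turn.

    max_turn_deg is the largest heading change allowed per time step dt;
    the implied turn rate omega gives centripetal acceleration a = v * omega.
    """
    omega_max = math.radians(max_turn_deg) / dt  # rad/s
    return mass * speed * omega_max
```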
3.4 Handling Multiple Aircraft
In previous work, MILP solvers have been used to calculate paths for terrestrial vehicles performing various tasks [11]. To perform these tasks, the vehicles often had to avoid obstacles such as walls, boulders, or unsafe terrain. To do so, each obstacle was mapped into the coordinate space as a polygon that cannot be entered by the vehicle [11]. With these additional constraints taken into account, the solution space is reduced and the problem of avoiding these obstacles is handled.
This previous implementation was used for static obstacles, but it can easily be adapted for dynamic use. To implement dynamic avoidance, Richards and How suggested treating the vehicles as rectangles and constraining the system to ensure that no vehicle entered another vehicle's rectangle [11]. While the rectangle idea wasn't used in this implementation, the idea of a safety distance, and of dynamic constraints built on that safety distance, was borrowed. Instead of creating rectangles around each aircraft, the solver uses the distance formula to calculate the distance between vehicles at each state. Each generated state must enforce that the distance between each pair of aircraft is greater than the safety distance. This constraint is the main component of MILP collision avoidance.
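The pairwise separation condition can be sketched as a simple check over candidate states. This helper is illustrative; in the actual MILP the same condition is encoded as linear constraints on the solver, not checked after the fact:

```python
import math

def separation_ok(positions, d_safe):
    """Return True if every pair of (x, y) aircraft positions is more
    than d_safe apart, the condition the MILP enforces at each state."""
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) <= d_safe:
                return False
    return True
```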
3.5 Receding Horizon Control
One of the major problems associated with MILP is the problem of size. Typically, MILP is used to generate a one-time solution for the entire problem. Unfortunately, this type of solution doesn't work well in a real-time environment such as collision avoidance for several reasons. The time to compute one master solution is infeasible for real-time problems, and that time only grows exponentially with each aircraft added to the equation [14].

(a) RHC Flight Map (b) MILP Flight Map
Figure 3.2: RHC versus Full Path Planning
Fortunately, receding horizon control is one solution to this problem. Receding horizon control works by limiting the number of time steps into the future that the algorithm calculates [14]. For example, instead of calculating one master solution over 10 minutes of time, the algorithm may calculate 60 solutions that each span only 10 seconds. One of the downsides of this approach is a loss of optimality. Because each solution covers only a small portion of time, each solution is only optimal for that period, which may lead to suboptimal solutions overall [13]. However, this suboptimal solution is still significantly better computationally and doesn't vary too far from the overall optimal solution. Referring to Figure 3.2, the receding horizon computation time totaled a little under 11 seconds, whereas the full-path computation took 865.9 seconds. Additionally, the receding horizon completion time was a total of 13 seconds slower for the three aircraft. The most important thing to note is the mean computation time of 0.0676 seconds, which is fast enough to function in this real-time system. Because of these positive results, receding horizon control is implemented in the MILP algorithm used for testing.
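The control loop behind receding horizon planning can be sketched as follows. Here `solve_window` stands in for any short-horizon MILP solve; it is a hypothetical interface of our own, not the thesis code:

```python
def receding_horizon(start, solve_window, horizon, total_steps):
    """Plan over a short horizon, execute only the first step, replan.

    solve_window(state, horizon) is assumed to return the next `horizon`
    planned states; only plan[0] is ever flown before replanning.
    """
    state = start
    flown = [state]
    for _ in range(total_steps):
        plan = solve_window(state, horizon)
        state = plan[0]  # execute only the first planned step
        flown.append(state)
    return flown
```

Each short solve is cheap, so the per-step cost stays bounded even as the mission length grows, at the price of the global optimality discussed above.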
Chapter 4
A* Grid-Based Algorithm
4.1 A* Basics
As described earlier, A* is considered a grid-based algorithm. This means that the airspace must first be converted to a grid, and that grid is then used as the basis for the collision avoidance algorithm. A* specifically works through the "branch-and-bound" method of handling the problem. The algorithm starts at the node containing the aircraft and "branches" out to all nodes to which the aircraft can possibly move. These nodes are then organized into a heap structure with the least-cost node at the top. Next, the algorithm "bounds" the search by only considering the best known option, which is at the top of the heap. The algorithm does this recursively until the goal is reached.
This recursive structure means that the method of estimating cost is extremely important to the algorithm. This estimation, known as the algorithm's heuristic, is intended to calculate the best path from the initial node to the goal node. The cost is calculated from two values: g(n), the known cost from the start node to node n, and h(n), the estimated cost of the best path from n to the goal node. The estimate, f(n), is calculated using Equation 4.1 [17].

f(n) = g(n) + h(n) \qquad (4.1)
One of the benefits of this algorithm is that A* guarantees finding the optimum path as long as two requirements are met [17]. The first requirement is that h(n) should not overestimate the cost from n to the goal. Second, h(n) needs to be estimated consistently. As long as these conditions hold, A* has the advantage of generating optimum solutions. Unfortunately, the complexity of A* leads to high computation times that grow exponentially with the input [17]. Fortunately, good heuristics can help reduce this complexity while still maintaining a high level of optimality.
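The branch-and-bound loop described above can be sketched generically. The `neighbors` and `h` callables below are illustrative placeholders for the grid expansion and heuristic defined later in this chapter:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Generic A*: repeatedly expand the least-f node from a heap until
    the goal is reached. neighbors(n) yields (next_node, step_cost)
    pairs; h(n) must be an admissible (non-overestimating) estimate."""
    frontier = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None  # goal unreachable
```

The heap keeps the least-cost candidate at the top ("bound"), and the neighbor expansion is the "branch" step; DSAS later restricts which neighbors are generated.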
4.2 Dynamic Sparse A*
The algorithm specifically used in this paper is called Dynamic Sparse A*, a derivative of Sparse A* Search (SAS). SAS was originally designed for military flight planning [16]. This algorithm helped reduce the complexity issues described earlier through some assumptions about flight. One such restriction is the turning angle constraint, which limits the grid squares considered to only those in front of the aircraft [16]. Second, the minimum travel distance limits the grid space even further, to only those squares a certain distance in front of the aircraft [16]. Both of these constraints are shown in Figure 4.1.
SAS alone provides a large number of benefits for the collision avoidance problem. First, it provides globally optimal paths based on the A* algorithm. Second, time complexity scales only linearly with the number of aircraft in this kind of algorithm. Additionally, the algorithm can be parallelized such that each aircraft can use its own processor, allowing for faster calculations. Finally, with a strong heuristic, the A* algorithm can plan paths for multiple aircraft in a very short amount of time.
To improve further on this algorithm, Dynamic Sparse A* Search (DSAS) was created to strengthen the heuristic used by the algorithm. One of these improvements is to first check whether danger is present between an aircraft and its goal. In the case of no danger, A* searches are unnecessary and the aircraft can simply fly a normal path to the goal. This helps reduce unnecessary computations on the system. The major improvement, and where DSAS gets its name, is the way it can dynamically plan around moving obstacles such as other aircraft. To handle this, DSAS uses grids created through time. Finally, the algorithm is capable of handling changes to the conditions of the airspace and can handle path planning for a high number of aircraft.
These improvements lead to two main components of the algorithm: heuristic generation and the search. The heuristic calculation is created by rating the danger in each grid space across several time steps into the future. This danger grid combines all the aircraft danger zones into one grid, which is then used to generate a path from the aircraft's location to its goal.
4.3 Creating a Heuristic
The main way to reduce complexity in A* is by modifying the heuristic function described by Equation 4.1. As mentioned before, g(n) is the known component of the calculation generated by the search, whereas h(n) is only an estimate. Since h(n) is the only unknown, a good heuristic is defined by this estimate. To perform this algorithm accurately and efficiently, h(n) must not overestimate the cost while still remaining a close estimate [17].
4.3.1 Background Concepts
The first step in performing this algorithm is to create a grid representing the real-world airspace. Originally, this danger grid was used by Szczerba et al. as a best cost grid [16]. This grid was designed to represent occupied airspace as having an extremely high cost so that the algorithm wouldn't attempt to traverse it. In addition to creating this x × y best cost grid, DSAS adds the time component, t, which creates an x × y × t matrix of best cost values throughout time. Note that for this problem set, the altitude (third physical dimension) is not included, but it could be included to create an x × y × z × t matrix.
One of the problems described earlier was defining this grid space dimensionally. To do this, the actual attributes of the physical aircraft come into play. First, the tests would be conducted on both 500 and 1000 meter square fields. Because of this, the resolution chosen was 10 meters per grid square. Second, the aircraft sends GPS updates once per second, so the time step chosen was one second, with a prediction of 20 seconds (a total of 21 steps, including the present state t = 0) into the future. This means that the total resolution for the larger field is 100 by 100 by 21 grid squares. This is best shown by Figure 4.2, which shows the grid from time 0 to 5.
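Under the dimensions chosen above, the danger grid can be sketched as a simple three-dimensional array. The variable names are illustrative, not from the thesis implementation:

```python
import numpy as np

FIELD_METERS = 1000   # the larger 1000 m square test field
CELL_METERS = 10      # 10 m per grid square
TIME_STEPS = 21       # t = 0 through t = 20, one step per second

cells = FIELD_METERS // CELL_METERS
# x * y * t matrix of danger ratings, initially all zero (no danger)
danger_grid = np.zeros((cells, cells, TIME_STEPS))
```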
Inside each square is the danger rating, which represents the likelihood of encountering an obstacle in that square. Note that for this experiment the only obstacles included are other aircraft, but static obstacles could be included by simply giving the squares occupied by a static obstacle a probability of 100%. This danger rating is related to the best cost values described earlier and used by Szczerba [16]. To calculate this danger rating, the algorithm predicts the location of each aircraft throughout time and fills in the danger grid with the appropriate values. However, an aircraft shouldn't try to avoid itself, so each aircraft is said to "own" a danger grid that doesn't contain its own danger ratings.
4.3.2 Predicting Plane Locations
With the grid structure defined, the next step is determining how to accurately predict plane locations and how to use those predictions to fill in the danger grid. Predicting the linear path from one point to another is fairly simple. As mentioned earlier, though, using a discretized airspace leads to problems in determining the aircraft's path. A straight line in grid space will typically bisect several grid squares in an uneven manner. Fortunately, this grid-bisecting line can be used as a method of calculating the probability that an aircraft will occupy a grid square in the future. The aircraft's bearing to the goal is compared to the closest angle that would perfectly bisect a neighboring square. The offset between these two allows a prediction of how much time the neighboring square will be occupied by the aircraft. The remaining percentage is used as a prediction for the next closest square. These two probabilities are then inserted into the danger grid as shown by
Figure 4.1: Turning angle and constant speed constraints on a grid
Figure 4.2: Danger grids through time
Figure 4.3: A plane's predicted path. The yellow path is a predicted turn. The purple represents the path to an intermediate waypoint, and the green is the final path taken to the plane's goal.
Figure 4.3. This process is repeated until the aircraft is within the grid square containing the goal.

This system works well for straight lines but not for curves or turns in the aircraft's path. To compensate for this, the straight-line method isn't used until the goal is within the maximum turning angle of the aircraft. Once this condition is met, the straight-line algorithm is used to predict the future occupied airspace.
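The bearing-offset idea behind the probability split can be sketched as follows. This is a simplified model for illustration; the 45-degree neighbor span and the linear split are our assumptions, not values stated in the thesis:

```python
def split_occupancy(bearing_deg, bisect_deg, span_deg=45.0):
    """Divide occupancy probability between the square that a bearing of
    bisect_deg would perfectly bisect and its nearest neighbor. A zero
    offset puts all probability in the bisected square; the probability
    shifts linearly to the neighbor as the offset grows."""
    offset = min(abs(bearing_deg - bisect_deg), span_deg)
    p_primary = 1.0 - offset / span_deg
    return p_primary, 1.0 - p_primary
```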
The final step for filling the danger grid is to create buffer zones around the aircraft. First, the danger rating of occupied squares is the highest value allowed by the algorithm, representing that under no circumstance should another aircraft attempt to fly through that space. In addition to this central location, buffer zones are added around the path with lower danger ratings. In front of the aircraft, the danger rating is higher because there is a higher likelihood that the aircraft may end up in that grid square, but squares on the edge of the maximum turning angle may have a lower danger rating.

Figure 4.4: Plane buffer zones

Referring to Figure 4.4, the green square is the predicted location of an aircraft in the danger grid at time t and therefore has the maximum danger rating. Additionally, the purple squares have danger ratings, with the darker purple representing a higher rating because the aircraft is likely to move into those squares at time t + 1 or even occupy those squares at time t.
4.3.3 Estimation of Best Cost to Goal
With the danger grid defined, the last step before the A* search is to define the best cost grid used in the A* search algorithm. Typically, the estimate h(n) is determined based on the distance to the goal. However, in this case, the estimate is actually a function of both the danger rating and the distance to the goal. Since any airspace using this algorithm would be reasonably sparse and there are no static obstacles, the entire calculation of h(n) is done by Equation 4.2. Within this equation, the distance from the goal and a scaled danger rating are added together to create the final best cost estimate.

h(n) = (\text{distance from } n \text{ to the goal}) + (\text{danger rating of } n) \times (\text{scaling factor}) \qquad (4.2)
While this heuristic may seem fairly simple, it works well for a couple of reasons. First, in a relatively sparse airspace without obstacles, it will always be preferable to perform a simple navigation around another aircraft rather than to risk a conflict or collision. Second, by using this method, straight-line paths are preferred as well. Note that if there are no other aircraft in the airspace, the straight-line solution will always be chosen by DSAS because it is the shortest physical distance. Using this heuristic also helps handle the problem of overestimation by making it impossible to overestimate the distance portion of the heuristic. This helps the algorithm adhere to the A* rule of not overestimating on the best cost grid. The final reason to use this heuristic is time complexity. With this heuristic, the algorithm complexity is O(xytp), where x and y are the length and width of the airspace, t is the number of time steps, and p is the number of aircraft in the airspace. Since x, y, and t are typically chosen beforehand, the best cost grid calculation is linear with respect to the number of aircraft in the airspace.
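Equation 4.2 can be sketched directly. The scaling factor below is a placeholder value of our own; the thesis does not state the tuned value here:

```python
import math

def best_cost_estimate(node, goal, danger_rating, scale=10.0):
    """h(n) from Equation 4.2: straight-line distance from the node to
    the goal plus a scaled danger rating for the node's grid square.

    node and goal are (x, y) grid coordinates; danger_rating is the
    value stored in the danger grid for this square and time step."""
    distance = math.dist(node, goal)
    return distance + danger_rating * scale
```

Because the distance term is the true straight-line distance, it can never overestimate the remaining path length, which preserves the admissibility discussed above.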
4.4 Search with Dynamic Sparse A*
Once the danger grid has been created, DSAS creates a best cost grid that can finally be used by the A* algorithm to create the path to the goal. To create the path, A* uses the branch-and-bound methodology described earlier [17]. First, the algorithm branches out to all adjacent nodes from the start node and orders them into a best cost heap with the minimum cost at the top. Then, the algorithm bounds the search by looking at all nodes branching off of the minimum cost node. The algorithm recursively applies this branch-and-bound technique until it reaches the goal. Note that if the algorithm detects a better solution, it will be found at the top of the heap, so at any point during this recursion the bounding may determine that a different path should be pursued [17].
Unfortunately, this basic A* implementation can create infeasible solutions in certain scenarios. Consider a situation where an aircraft's current bearing is due north but the goal is located south of the aircraft. Given an obstacle-free airspace, the traditional A* would choose a straight-line path south, which is an impossible maneuver for the aircraft. The aircraft would need to spend time turning around and then head south to the goal.
To handle this type of situation, the branching in DSAS is limited to nodes within the maximum turning angle and within the distance that can be travelled in one time step. This constraint was previously shown in Figure 4.1. Having these two constraints actually improves the performance of the algorithm by limiting the number of options that DSAS has to search through when finding the best cost path [16]. The algorithm is further optimized by checking for the base case of a danger-free path to the goal. If this is the case, the aircraft is simply allowed to fly to its goal without interruption from DSAS.
In short, DSAS works by first creating a danger grid representing the existence of hazards in the real world. This danger grid is then converted into a best cost grid for each aircraft, composed of both the danger rating and the distance to goal of each node. Finally, DSAS uses the limited branch-and-bound technique until the minimum node on the heap is the goal or until the maximum time step value is reached. Once the search is complete, the waypoints are transmitted to keep the aircraft both safe and on track to the destination.
Chapter 5
Artificial Potential Fields
5.1 Simple Artificial Potential Field (APF) Implementation
The basic idea behind this algorithm is to create artificial fields of positive and negative energy around objects in the airspace. Specifically, all aircraft in the airspace have a negative charge associated with their locations, and the goal of each aircraft carries a positive charge [6]. Following the same rules as magnets, the aircraft (negative charges) feel repulsive forces from each other and an attractive force from their respective goals [6]. Each of these forces carries a weight in the final calculation of the force vector for that aircraft. That vector is then used to model the direction the aircraft should travel to remain safe.
These weights were tested in order to define the attraction constant. This attraction constant, α, serves as a method of determining the best weights for the attractive and repulsive forces [15]. In practice, α is used as the weight of the attractive force felt by the aircraft and is a value such that 0 < α < 1. For the full force equation, refer to Equation 5.1 [15].

\vec{F}_{tot} = \alpha \vec{F}_{attr} + (1 - \alpha) \vec{F}_{rep} \qquad (5.1)

In previous research, Sigurd and How used this equation to test this collision avoidance algorithm as well [15]. They found that the best results were achieved when using α = 0.66 for the attractive weight and 0.34 for the repulsive weight [15]. This implementation follows their research in using both the force equation and those numerical weight values. This formula forms the basis of this artificial potential field implementation.
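Equation 5.1 reduces to a weighted vector sum; a minimal sketch, with variable names of our own:

```python
import numpy as np

ALPHA = 0.66  # attractive weight reported by Sigurd and How [15]

def total_force(f_attr, f_rep, alpha=ALPHA):
    """Combine attractive and repulsive force vectors per Equation 5.1:
    F_tot = alpha * F_attr + (1 - alpha) * F_rep."""
    return alpha * np.asarray(f_attr, float) + (1.0 - alpha) * np.asarray(f_rep, float)
```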
5.2 Calculating Force Vectors
With the basic equation for combining attractive and repulsive forces de ned, the algo-
rithm now needs a way to calculate what those force vectors are. First, in the real world,
magnetic forces are based primarily on the distance between the two objects. This force can
be felt at any distance but gets weaker as the distance grows. However, at a certain point,
this force becomes negligible, so this algorithm needs to de ne dfmax, the maximum distance
at which one aircraft?s force can be felt.
In order to de ne this, the constant donesec is used to represent the distance that the
aircraft can travel in one second. In addition to this constant, a scale factor is used to de ne
the ratio of the collision zone to the size of the potential eld. Through experimentation, this
implementation chose to use = 5 in the calculation. Finally, the eld is modi ed to be an
elliptical shape with the force eld extending farther in front of the aircraft to represent where
that aircraft is traveling [7]. To de ne this elliptical shape, two constants, scalef and scaleb,
are used to determine the amount of the elliptical extending in front and behind the aircraft.
These values were experimentally chosen to be scalef = 2 and scaleb = 1:25. Finally,
represents the angle from the bearing of one aircraft, Uk, to the position of the current
aircraft, Ui. These values are then used in Equation 5.2 to solve for the maximum distance
that would allow an aircraft to e ect the current aircraft?s repulsive force calculation. Figure
5.1 represents the force eld around an aircraft that may be included in this calculation.
d_{fmax} = d_{onesec} \, \rho \left[ \left( scale_f - \frac{scale_f - scale_b}{2} \right) + \frac{scale_f - scale_b}{2} \cos(\theta) \right] \qquad (5.2)
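Equation 5.2 interpolates between the forward and backward extents of the elliptical field. The sketch below assumes ρ multiplies the whole bracketed term, which is our reading of the formula; all names are illustrative:

```python
import math

def field_max_distance(d_onesec, rho, scale_f, scale_b, theta):
    """Maximum influence distance of the elliptical potential field.

    Yields rho * d_onesec * scale_f when theta = 0 (directly ahead)
    and rho * d_onesec * scale_b when theta = pi (directly behind)."""
    mid = scale_f - (scale_f - scale_b) / 2.0
    amp = (scale_f - scale_b) / 2.0
    return d_onesec * rho * (mid + amp * math.cos(theta))
```

With the experimentally chosen scale_f = 2 and scale_b = 1.25, the field reaches twice as far ahead of the aircraft as the base radius and 1.25 times as far behind it.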
Similar to the maximum distance described above, there also needs to be a minimum distance, d_failsafe, that modifies the force value to represent an impending collision [7]. If the distance between two aircraft becomes less than this minimum distance, both will feel a strong force, F_failsafe, that overpowers all other forces in the calculation.
Figure 5.1: Determining the maximum distance of the repulsive force field
Finally, the last range to be covered is when d_failsafe < d < d_fmax. Over this range, the emitted charge is defined piecewise:

q(d) = \begin{cases} 0 & \text{if } d > d_{fmax} \\ q_{UAV} \left[ \left( k_{emitf} - \frac{k_{emitf} - k_{emitb}}{2} \right) + \frac{k_{emitf} - k_{emitb}}{2} \cos(\theta) \right] (d_{fmax} - d) & \text{if } d_{failsafe} < d \le d_{fmax} \end{cases}