Time Dependent Queuing Systems
Except where reference is made to the work of others, the work described in this thesis is
my own or was done in collaboration with my advisory committee. This thesis does not
include proprietary or classified information.
Allen E. Flick
Certificate of Approval:
Bertram Zinner
Associate Professor
Mathematics & Statistics
Ming Liao, Chair
Professor
Mathematics & Statistics
Amnon J. Meir
Professor
Mathematics & Statistics
Geraldo S. DeSouza
Professor
Mathematics & Statistics
George T. Flowers
Graduate School Dean
Time Dependent Queuing Systems
Allen E. Flick
A Thesis
Submitted to
the Graduate Faculty of
Auburn University
in Partial Fulfillment of the
Requirements for the
Degree of
Master of Science
Auburn, Alabama
December 19, 2008
Time Dependent Queuing Systems
Allen E. Flick
Permission is granted to Auburn University to make copies of this thesis at its
discretion, upon the request of individuals or institutions and at
their expense. The author reserves all publication rights.
Signature of Author
Date of Graduation
Vita
Allen E. Flick was born to William and Susan Flick on March 19th, 1981 in Cleveland,
Ohio. Shortly thereafter, his family moved to Birmingham, Alabama, where he graduated
from Vestavia Hills High School in 1999. After working for a while, he decided to try college
and took a few classes at the University of Alabama at Birmingham. He then opted to go to
school full time and moved to Auburn, Alabama, where he graduated Summa Cum Laude
from Auburn University with a Bachelor of Science in Applied Mathematics in August,
2006. He began the Graduate program in Mathematics at Auburn University two weeks
later.
Thesis Abstract
Time Dependent Queuing Systems
Allen E. Flick
Master of Science, December 19, 2008
(B.S., Auburn University, 2006)
69 Typed Pages
Directed by Ming Liao
Using elementary probability theory, we establish limiting probabilities for the queue
length of a queuing system whose arrivals follow a nonhomogeneous Poisson process and
are served by a single server whose services also follow a nonhomogeneous Poisson process.
We uniformly accelerate the process and conclude, under a special stability condition, that
the queue length distribution is the same as a queue with constant rates. Extensions are
provided for queues with multiple homogeneous servers and those with a finite capacity.
Also included is a simulation of such a queuing system using the data from an Auburn
University web server to model the arrival process.
Acknowledgments
The author would like to first thank his adviser, Professor Liao, for his unlimited time
and patience this last year. He would also like to thank the rest of his committee for their
advice and support. He thanks all his professors at Auburn University, who have opened
his eyes to the amazing world of mathematics. And last, but not least, the author would
like to thank his family and friends for their unending support.
Style manual or journal used Journal of Approximation Theory (together with the style
known as "aums"). Bibliography follows van Leunen's A Handbook for Scholars.
Computer software used The document preparation package TeX (specifically LaTeX)
together with the departmental style-file aums.sty.
Table of Contents
List of Figures ix
1 Introduction 1
2 Preliminary Information 3
2.1 Basic Probability Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Poisson Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 Basic Queueing Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3 M(t)/M(t)/1 Queues 9
3.1 Definition and ρ* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.2 The Principle of Stochastic Dominance . . . . . . . . . . . . . . . . . . . . . 10
3.3 Queue Length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4 Extensions 22
4.1 M(t)/M(t)/k(t) Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.2 Finite Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5 Simulation 25
6 Conclusion 34
Bibliography 35
Appendices 36
A Approximating Rates with Step Functions 37
B Simulation Code 39
List of Figures
3.1 An Example of Delaying Arrivals . . . . . . . . . . . . . . . . . . . . . . . . 17
5.1 Arrival and Service Rates and ρ*(t) . . . . . . . . . . . . . . . . . . . . . 26
5.2 ρ*(t) < 1 without ε-acceleration . . . . . . . . . . . . . . . . . . . . . . . 27
5.3 Calculated Probabilities without ε-acceleration . . . . . . . . . . . . . . . 28
5.4 ε-accelerated with ρ*(t) < 1 . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.5 Calculated Probabilities with ε-acceleration . . . . . . . . . . . . . . . . . 30
5.6 ρ*(t) > 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.7 ρ*(t) = 1 without ε-acceleration . . . . . . . . . . . . . . . . . . . . . . . 32
5.8 ε-accelerated with ρ*(t) = 1 . . . . . . . . . . . . . . . . . . . . . . . . . 33
Chapter 1
Introduction
The purpose of this thesis is to provide an intuitive probabilistic approach to time
dependent queuing systems. It is inspired by William A. Massey's paper, "Asymptotic
Analysis of the Time Dependent M/M/1 Queue" [3]. In his work, Massey takes an analytic
approach to such processes, which, while very precise and rigorous, is difficult for someone
not intimately familiar with higher mathematics to understand. But the main result is the
same: if we ε-accelerate the process, that is, make the arrival and service rates arbitrarily
large, then the time dependent M(t)/M(t)/1 queue acts just like a constant rate M/M/1
queue as time goes to infinity, although with a different requirement for stability. Instead
of needing a larger service rate than arrival rate, the stability requirement for any time t is
ρ*(t) < 1. This means, as we will discuss in Chapter 3, that the server, on average, has time
to catch up at time t. So we don't necessarily always need the service rate larger than the
arrival rate; we just need the amount of possible service before time t to be able to handle
the possible arrivals before time t.
Since this paper was written for a broader audience, the next chapter is a summary of
some basic ideas in probability and queuing theory. It starts with a discussion of some of
the main ideas in probability, then moves on to discuss Poisson processes and constant rate
queuing systems. In Chapter 3, we present our main results about time-varying queuing
systems, Theorems 3 and 4. They show that the time dependent queue is very similar to
its constant rate counterpart. The chapter will begin with a definition of an M(t)/M(t)/1
queue and a description of the very special quantity, ρ*, before moving on to the main proofs.
Chapter 4 includes two extensions of the main result: time dependent queuing systems
with multiple servers and systems with a finite capacity. The last chapter is a simulation
of a time dependent queuing system, a model of an Auburn University web server.
Chapter 2
Preliminary Information
This chapter is a collection of preliminary information necessary for the understanding
of the proofs in the later chapters. It is provided so that those with only very basic knowledge
of probability and stochastic processes can understand this paper. The bulk of this chapter
can be found in the works of Billingsley [1], Ross [4], and Gross and Harris [2].
2.1 Basic Probability Theory
Some basic notation and definitions:
P[A] is the probability that some event A will occur;
0 ≤ P[A] ≤ 1 for any event A.
Ac is the event that is everything in the sample space except for whatever is in A;
P[Ac] = 1 − P[A].
P[A|B] is the probability that A will occur given B has occurred.
The Law of Total Probability: For any event A in some sample space Ω,
P[A] = Σn P[A|Bn] P[Bn],
where {Bn} is a countable partition of Ω.
E[X] is the expected value of a random variable X; it is the average value of X.
Convergence in Distribution [1]: A sequence of random variables Xn is said to
converge in distribution to a random variable X if
P[Xn > x] → P[X > x] as n → ∞, whenever P[X = x] = 0.
We say Xn → X, in distribution.
Almost Sure Convergence [1]: A sequence of random variables Xn is said to converge
almost surely to a random variable X if
P[Xn → X] = 1.
The Strong Law of Large Numbers (SLLN) [1]: If Xn is an independently and
identically distributed (iid) sequence of random variables, each with a finite mean, then
(Σ_{i=1}^n Xi)/n → E[X1], almost surely, as n → ∞.
Poisson Distribution [1]: If X is a Poisson distributed random variable with mean
λ > 0, then
P[X = k] = e^{−λ} λ^k / k! for k = 0, 1, 2, ..., and E[X] = λ.
Exponential Distribution [4]: If X is an exponentially distributed random variable
with rate λ > 0, then
P[X > x] = ∫_x^∞ λ e^{−λt} dt = e^{−λx} for x ≥ 0, and E[X] = 1/λ.
Lack of Memory Property [4]: X is an exponentially distributed random variable if
and only if it does not assume negative values and
P[X > t + s | X > t] = P[X > s].
2.2 Poisson Processes
A stochastic process is a collection of random variables indexed by time [4]. With a
stochastic process X, for every time index t, X(t) is a random variable that represents the
state of the process at time t.
A stochastic process N(t) is a counting process [4] if it represents the total number of
events that have occurred up to time t, and therefore must satisfy
(i) N(t) ≥ 0
(ii) N(t) is integer valued
(iii) If s < t, then N(s) ≤ N(t), and N(t) − N(s) is the number of events that have
occurred in the interval (s, t].
The counting process N(t) is a Poisson process [4] with rate λ > 0 if
(i) N(0) = 0
(ii) The process has stationary and independent increments
(iii) The number of events in any interval of length t is Poisson distributed with mean λt.
So for all s, t ≥ 0,
P[N(t + s) − N(s) = n] = e^{−λt} (λt)^n / n! for n = 0, 1, ...
The interarrival times, or the times between successive arrivals, of a Poisson process N(t)
with rate λ, are iid exponential random variables of rate λ [4].
The counting process N(t) is a nonhomogeneous Poisson process [4] with rate function
λ(t) if
(i) N(0) = 0
(ii) The process has independent increments
(iii) For s ≤ t, the number of events in the interval (s, t] is Poisson distributed with mean
∫_s^t λ(u) du.
2.3 Basic Queueing Systems
An M/M/1 queue is a queuing system in which arrivals follow a Poisson process of rate λ
and are served by a single server whose service times are iid exponential random variables
of rate μ. Let X(t) be the queue length at time t, and let ρ = λ/μ be the traffic intensity.
If ρ < 1, then as t → ∞,
P[X(t) = n] → pn = ρ^n (1 − ρ),
while if ρ ≥ 1, then the queue length tends to infinity as t → ∞ [2]. This probability is also
equal to the long run fraction of time that X is in state n, or that the queue is n customers
long, almost surely. The set of all these probabilities for n = 0, 1, ... is called the stationary
distribution of X [2].
Note that since p0 > 0, then the queue length will be zero at some time as t → ∞,
almost surely. So if τ is the first time s that X(s) = 0, then P[τ < ∞] = 1.
An M/M/k queue is a queuing system with Poisson arrivals of rate λ and k servers, each
with iid exponential service times of rate μ. If λ < kμ, then as t → ∞,
P[X(t) = n] → pn =
(1/n!) (λ/μ)^n p0 for 1 ≤ n ≤ k
(1/(k^{n−k} k!)) (λ/μ)^n p0 for n ≥ k,
where
p0 = [ (1/k!) (λ/μ)^k (kμ/(kμ − λ)) + Σ_{n=0}^{k−1} (1/n!) (λ/μ)^n ]^{−1}.
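The formulas above are easy to check numerically. Below is a short sketch in Python of the M/M/k stationary distribution (the rates and truncation level are illustrative); with k = 1 it reduces to the geometric law pn = ρ^n (1 − ρ):

```python
import math

def mmk_stationary(lam, mu, k, n_max=50):
    """Stationary distribution p_0..p_n_max of an M/M/k queue.

    p_n = (lam/mu)^n / n! * p0            for 0 <= n <= k
    p_n = (lam/mu)^n / (k^(n-k) k!) * p0  for n >= k
    """
    assert lam < k * mu, "stability requires lam < k*mu"
    r = lam / mu
    # normalizing constant p0 from the closed-form sum of the two cases
    p0 = 1.0 / (r**k / math.factorial(k) * (k * mu / (k * mu - lam))
                + sum(r**n / math.factorial(n) for n in range(k)))
    probs = []
    for n in range(n_max + 1):
        if n <= k:
            probs.append(r**n / math.factorial(n) * p0)
        else:
            probs.append(r**n / (k**(n - k) * math.factorial(k)) * p0)
    return probs

# With k = 1 this reduces to the M/M/1 geometric law p_n = rho^n (1 - rho).
p = mmk_stationary(lam=3.0, mu=4.0, k=1)
rho = 3.0 / 4.0
assert abs(p[0] - (1 - rho)) < 1e-12
assert abs(p[5] - rho**5 * (1 - rho)) < 1e-12
```

Truncating at n_max only drops the geometric tail, so the returned probabilities sum to 1 up to that tail.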
An M/M/k/N queue is an M/M/k queue with a finite capacity N. Let X(t) be an
M/M/k/N queue with arrival rate λ, k servers each with service rate μ, and capacity N.
If X(t) ≥ N, then the system is full and can't accept any more arrivals, so the effective
arrival rate at that point is zero. Since we have a finite number of states, there is a steady
state distribution regardless of the arrival and service rates. So as t → ∞ we have
P[X(t) = n] → pn =
(1/n!) (λ/μ)^n p0 for 0 ≤ n ≤ k
(1/(k^{n−k} k!)) (λ/μ)^n p0 for k ≤ n ≤ N,
where
p0 = [ (1/k!) (λ/μ)^k (1 − (λ/(kμ))^{N−k+1})/(1 − λ/(kμ)) + Σ_{n=0}^{k−1} (1/n!) (λ/μ)^n ]^{−1} for λ/(kμ) ≠ 1
p0 = [ (1/k!) (λ/μ)^k (N − k + 1) + Σ_{n=0}^{k−1} (1/n!) (λ/μ)^n ]^{−1} for λ/(kμ) = 1,
from [2].
Chapter 3
M(t)/M(t)/1 Queues
3.1 Definition and ρ*
Let X(t) be the queue length of an M(t)/M(t)/1 queue with initial state X(t0) = X0.
Arrivals follow a non-homogeneous Poisson process of rate λ(t), and requests by each arrival
are i.i.d. exponential random variables of mean 1. The single server works at a rate of μ(t),
so ∫_a^b μ(t) dt is the amount of work the server can do from time a to time b, or the expected
number of services the server can handle in (a, b). The traffic intensity is ρ(t) = λ(t)/μ(t).
And define
ρ*(t) := ρ*(t; λ, μ) = sup_{t* ∈ (t0, t)} [ ∫_{t*}^t λ(s) ds / ∫_{t*}^t μ(s) ds ].
ρ* represents whether the server, on average, can handle the arrivals or not. If ρ*(t) < 1,
then for any time s < t, the server can, on average, clear the arrivals in (s, t) by time t. If
ρ*(t) > 1, then
at some point the arrivals are so frequent that the server can't catch up by time t. The case
where ρ*(t) = 1 is beyond the scope of this paper, but is covered thoroughly by Massey [3].
Throughout this paper, we use the concept of uniform acceleration, as did Massey [3].
The idea of this acceleration, which we call ε-accelerating a process, is to divide both the
arrival rate and the service rate by some small ε > 0. Then as ε → 0, the number of arrivals
and completed services in any time interval is arbitrarily large. This has the same effect
on a constant M/M/1 queue as letting time approach infinity. In our case, it allows us to
consider the stability at time t, since we can see how the system reacts with an arbitrarily
large number of arrivals and services. If X(t) is an M(t)/M(t)/1 queue with arrival rate
λ(t) and service rate μ(t), then the ε-accelerated process Xε(t) is an M(t)/M(t)/1 queue
with arrival rate λ(t)/ε and service rate μ(t)/ε. Since this doesn't change ρ*(t), we can use
ρ*(t) to make judgments about the system's stability.
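For piecewise-constant rates, ρ*(t) can be computed directly, and its invariance under ε-acceleration is easy to verify. A minimal sketch in Python (the rate values are illustrative; the supremum is evaluated only at interval endpoints, which suffices for step functions since the ratio is monotone within each piece):

```python
def rho_star(lam_vals, mu_vals, widths):
    """rho*(t) for piecewise-constant rates on consecutive intervals:
    the sup over suffixes (t*, t) of (integral of lam) / (integral of mu)."""
    best = 0.0
    arr = srv = 0.0
    # walk backward from t, so each partial sum is an integral over (t*, t)
    for lam, mu, h in zip(reversed(lam_vals), reversed(mu_vals), reversed(widths)):
        arr += lam * h
        srv += mu * h
        best = max(best, arr / srv)
    return best

lam = [7.5, 3.0]   # illustrative step rates on two unit intervals
mu = [5.0, 6.0]
h = [1.0, 1.0]
r = rho_star(lam, mu, h)
# epsilon-acceleration divides both rates by epsilon, leaving rho* unchanged
eps = 0.01
r_acc = rho_star([x / eps for x in lam], [x / eps for x in mu], h)
assert abs(r - r_acc) < 1e-12
```

Here ρ* ≈ 0.955 even though the first-interval arrival rate exceeds the service rate, matching the discussion above: only the cumulative balance up to t matters.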
3.2 The Principle of Stochastic Dominance
For M(t)/M(t)/1 queues X(t) and Y(t) with arrival rates λX(t) and λY(t) and service
rates μX(t) and μY(t), if λX(t) ≤ λY(t) and μX(t) ≥ μY(t), then
P[Xε(t) ≥ n] ≤ P[Yε(t) ≥ n].
A rigorous proof of this fact by a purely analytic method is included in Section 10 of
Massey's paper [3]. Here, we provide an informal probabilistic argument.
Assume λX(t) ≤ λY(t) and μX(t) = μY(t). Then we can consider the arrival process
of Y(t) to have rate function λY(t) = λX(t) + λa(t), where λa(t) = λY(t) − λX(t). We are
essentially adding an arrival stream to X(t) with a mean number of arrivals ∫_{t0}^t λa(s) ds;
while the server is working at the same rate, it has more requests to handle. We may allow
the server at Y(t) to handle arrivals to X(t) in the queue before additional arrivals due to
λa(t). This will not change the queue length, so the queue whose length is represented by
Y(t) is at least as congested at any time t as the queue whose length is represented by X(t).
Thus P[Xε(t) ≥ n] ≤ P[Yε(t) ≥ n].
Likewise, if μX(t) ≥ μY(t) and λX(t) = λY(t), we can consider the rate of the process
that represents the possible service of X(t) to be μX(t) = μY(t) + μb(t), where
μb(t) = μX(t) − μY(t). Therefore the server whose queue length is represented by X(t) can
work at a faster rate. It can handle a mean number of requests ∫_{t0}^t μb(s) ds more than the
queue whose length is represented by Y(t). So the server in X(t) is at most as congested
as the server in Y(t) and we have P[Xε(t) ≥ n] ≤ P[Yε(t) ≥ n].
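The dominance argument can be made concrete with a coupled simulation: drive both queues from one shared stream of uniform random numbers, route the extra arrivals only to Y and the extra service capacity only to X, and the path ordering X(t) ≤ Y(t) never breaks. This is a hedged sketch in Python (a uniformized discrete jump chain with illustrative rates, not the analytic proof in [3]):

```python
import random

def coupled_step(x, y, lamX, lamY, muX, muY, u, Lam):
    """One jump of the coupled chain; the shared uniform u decides
    the event for both queues simultaneously (the coupling)."""
    v = u * Lam
    if v < lamX:              # arrival to both queues
        x += 1; y += 1
    elif v < lamY:            # extra arrival only to the busier queue Y
        y += 1
    elif v < lamY + muY:      # service completion at both (if nonempty)
        x = max(x - 1, 0); y = max(y - 1, 0)
    elif v < lamY + muX:      # extra service only at the faster server X
        x = max(x - 1, 0)
    return x, y

random.seed(0)
lamX, lamY, muX, muY = 2.0, 3.0, 5.0, 4.0   # lamX <= lamY, muX >= muY
Lam = lamY + muX                            # total event rate of the coupling
x = y = 0
for _ in range(10000):
    x, y = coupled_step(x, y, lamX, lamY, muX, muY, random.random(), Lam)
    assert x <= y   # dominance holds along the entire coupled path
```

By induction each branch preserves x ≤ y, which is exactly the pathwise statement behind P[Xε(t) ≥ n] ≤ P[Yε(t) ≥ n].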
3.3 Queue Length
In the following proofs, we will assume the functions are left-continuous. This is not as
much a requirement for correctness as it is an ease of notation. Without this assumption,
the proofs still hold; we would simply need to consider the left-hand limit of the function
or process wherever we consider the function or process at some time.
Theorem 1 Fix t > h > 0. If λ and μ are left-continuous piecewise-constant positive real
functions of time, with λ(s) < μ(s) for all s ∈ [t0, t], then if we consider the uniformly
accelerated M(t)/M(t)/1 queue Xε(t) with initial state Xε(t0) = X0, arrival rate λ(t)/ε,
and service rate μ(t)/ε, we have
P[Xε(t) = n] → (ρ(t))^n (1 − ρ(t)) as ε → 0
and P[τε] → 1 as ε → 0,
where τε is the event that Xε(s) = 0 for some s ∈ (t − h, t), h > 0.
There exists a partition of [t0, t] with t0 < t1 < · · · < tN−1 = t − h < tN = t such that
λ and μ are constant on each subinterval (tj−1, tj] of width hj, where h > 0 is small
enough so that ρ*(tN−1) < 1. Since ρ*(tN−1) < 1, then λN−1 < μN−1. Then there is some
δ > 0 such that
ρ*(tN−1) < 1 − δ
⟺ ∫_v^{tN−1} λ(s) ds < (1 − δ) ∫_v^{tN−1} μ(s) ds, for all v ∈ [t0, tN−1)
⟺ Σ_{j=i}^{N−1} λj hj < (1 − δ) Σ_{j=i}^{N−1} μj hj, for all 1 ≤ i ≤ N − 1.
Theorem 4 If ρ*(t) > 1, then Xε(t) → ∞ as ε → 0, in distribution.
Assume ρ*(t) > 1; then there exists v ∈ [t0, t) such that
∫_v^t λ(s) ds > ∫_v^t μ(s) ds.
Consider the ε-accelerated system Xε(t) with arrival rate λ(t)/ε and service rate μ(t)/ε,
and let ε = 1/n.
Then we have a nonhomogeneous Poisson arrival process on the interval (v, t], say
Aε(t), of rate λ(t)/ε = nλ(t). And the number of services the server can complete is a
nonhomogeneous Poisson process, say Sε(t), of rate μ(t)/ε = nμ(t). Note that this is not
necessarily the actual amount of service done, just the amount possible; the queue may be
empty at some point, in which case the server isn't actually doing the work that it could
be. This means that in the sense of distribution we can think of these processes at time t
as the sum of n i.i.d. Poisson random variables, Aε(t) = A1 + A2 + · · · + An−1 + An and
Sε(t) = S1 + S2 + · · · + Sn−1 + Sn, where Ai(t) has a mean of ∫_v^t λ(s) ds and Si(t) has
mean ∫_v^t μ(s) ds for 1 ≤ i ≤ n. Since Sε(t) is the number of services that can be
completed, the queue length is at least Aε(t) − Sε(t). So we have
Xε(t) ≥ Σ_{i=1}^n Ai(t) − Σ_{i=1}^n Si(t)
⟹ Xε(t)/n ≥ (Σ_{i=1}^n Ai(t))/n − (Σ_{i=1}^n Si(t))/n.
By the Strong Law of Large Numbers,
(1/n) Σ_{i=1}^n Ai(t) → E[A1(t)] and (1/n) Σ_{i=1}^n Si(t) → E[S1(t)] as n → ∞.
Therefore
lim_{n→∞} Xε(t)/n ≥ E[A1(t)] − E[S1(t)] = ∫_v^t λ(s) ds − ∫_v^t μ(s) ds > 0
and
Xε(t) → ∞ as ε = 1/n → 0.
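The Strong Law step in this proof can be illustrated numerically: when the arrival integral a exceeds the service integral s over (v, t], the lower bound (Aε(t) − Sε(t))/n concentrates near a − s > 0. A sketch in Python (the values a, s, and n are illustrative):

```python
import math
import random

random.seed(1)

def poisson(mean):
    """Sample a Poisson variate by Knuth's product-of-uniforms method."""
    L, k, p = math.exp(-mean), 0, 1.0
    while p > L:
        p *= random.random()
        k += 1
    return k - 1

# illustrative integrals of lambda and mu over (v, t], with a > s (rho* > 1)
a, s = 3.0, 2.0
n = 2000                                  # epsilon = 1/n
A = sum(poisson(a) for _ in range(n))     # A_eps(t): sum of n iid Poisson(a)
S = sum(poisson(s) for _ in range(n))     # S_eps(t): possible services
lower_bound = (A - S) / n                 # X_eps(t)/n >= (A - S)/n -> a - s
assert 0.5 < lower_bound < 1.5            # concentrates near a - s = 1
```

So the queue-length lower bound grows linearly in n = 1/ε, which is why Xε(t) → ∞ in distribution.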
Chapter 4
Extensions
4.1 M(t)/M(t)/k(t) Queues
Now consider a queuing system with multiple homogeneous servers, where the number
of available servers at any time t is given by a positive integer-valued step function k(t). Let
X(t) be the queue length process with arrival rate λ(t) and with k(t) servers each working
at a rate of μ(t), where λ(t) and μ(t) are piecewise continuous functions of time. The
traffic intensity is now ρ(t) = λ(t)/(k(t) μ(t)). For the multiple server case, we redefine our
stability function as
ρ*(t) := ρ*(t; λ, μ, k) = sup_{t* ∈ (t0, t)} [ ∫_{t*}^t λ(s) ds / ∫_{t*}^t k(s) μ(s) ds ].
If ρ*(t) > 1, we again have Xε(t) → ∞ as ε → 0, in distribution. The proof of this is
exactly like that of Theorem 4, but using our new ρ*.
If ρ*(t) < 1, then the limiting probability in the multiple server case is once again the
same as its constant counterpart; the proof only requires a few simple changes. First, we
can still approximate λ and μ with step functions to make new processes X+(t) and X−(t),
each with ρ*(t) < 1, such that P[X−ε(t) = n] ≤ P[Xε(t) = n] ≤ P[X+ε(t) = n] as in
Theorem 3. Then for each of these processes, as in Theorem 2, we partition the interval
[t0, t] making λ, μ, and k constant on each subinterval Ii. So we have
∫_v^{tN−1} λ(s) ds < (1 − δ) ∫_v^{tN−1} k(s) μ(s) ds, for all v ∈ [t0, tN−1)
⟺ Σ_{j=i}^{N−1} λj hj < (1 − δ) Σ_{j=i}^{N−1} kj μj hj, for all 1 ≤ i ≤ N − 1.
So as ε → 0,
P[Xε(t) = n] → pn =
(1/n!) (λ(t)/μ(t))^n p0 for 1 ≤ n ≤ k(t)
(1/(k(t)^{n−k(t)} k(t)!)) (λ(t)/μ(t))^n p0 for n ≥ k(t),
where
p0 = [ (1/k(t)!) (λ(t)/μ(t))^{k(t)} (k(t)μ(t)/(k(t)μ(t) − λ(t))) + Σ_{n=0}^{k(t)−1} (1/n!) (λ(t)/μ(t))^n ]^{−1}.
4.2 Finite Capacity
For the finite capacity case, there is always a limiting probability, since there are only
finitely many states. We don't even need a concept of stability, though ρ* is still a helpful
quantity for the analysis of the queue; if ρ* > 1, then the queue will spend much more time
at capacity or near it. Once again, the limiting probabilities are the same as in the constant
rate finite capacity case, and the proof consists of simple changes to the previous analysis.
Let X(t) be the length of a queue with a maximum capacity of N, arrival rate λ(t),
and k(t) servers each working at a rate of μ(t), where λ(t) and μ(t) are piecewise continuous
functions of time and k(t) is a positive integer-valued step function. Then we call X(t) an
M(t)/M(t)/k(t)/N queue. To find the steady state distribution of X(t), we once again
ε-accelerate the process. While it may seem problematic to let the arrival rate approach
infinity when we have a finite capacity, it is not, since we also let the service rate approach
infinity at the same pace. As in Theorem 3, approximate λ and μ with step functions to make
the processes X+(t) and X−(t) such that P[X−ε(t) = n] ≤ P[Xε(t) = n] ≤ P[X+ε(t) = n].
Then on each interval Ii these processes are a constant M/M/k/N queue with arrival rate
λi, service rate μi, ki servers, and capacity N. As we ε-accelerate these processes, the
constant M/M/k/N queues approach steady state. We then limit our approximations of
λ and μ and squeeze their probabilities to the M/M/k/N queue on the last interval, with
arrival rate λ(t)/ε, k(t) servers with rate μ(t)/ε, and capacity N. So as ε → 0,
P[Xε(t) = n] → pn =
(1/n!) (λ(t)/μ(t))^n p0 for 1 ≤ n ≤ k(t)
(1/(k(t)^{n−k(t)} k(t)!)) (λ(t)/μ(t))^n p0 for k(t) ≤ n ≤ N,
where
p0 = [ (1/k(t)!) (λ(t)/μ(t))^{k(t)} (1 − (λ(t)/(k(t)μ(t)))^{N−k(t)+1})/(1 − λ(t)/(k(t)μ(t))) + Σ_{n=0}^{k(t)−1} (1/n!) (λ(t)/μ(t))^n ]^{−1} for λ(t)/(k(t)μ(t)) ≠ 1
p0 = [ (1/k(t)!) (λ(t)/μ(t))^{k(t)} (N − k(t) + 1) + Σ_{n=0}^{k(t)−1} (1/n!) (λ(t)/μ(t))^n ]^{−1} for λ(t)/(k(t)μ(t)) = 1.
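Because the state space is finite, the limiting probabilities always sum to 1, which gives a quick numerical check of both p0 cases. A sketch in Python (the rates are illustrative; λ = kμ exercises the second case):

```python
import math

def mmkN_stationary(lam, mu, k, N):
    """Steady-state p_0..p_N of an M/M/k/N queue, covering both p0 cases."""
    r = lam / mu
    a = lam / (k * mu)
    head = sum(r**n / math.factorial(n) for n in range(k))
    if abs(a - 1.0) > 1e-12:
        tail = r**k / math.factorial(k) * (1 - a**(N - k + 1)) / (1 - a)
    else:  # the lam/(k*mu) = 1 case: the geometric sum degenerates to N-k+1
        tail = r**k / math.factorial(k) * (N - k + 1)
    p0 = 1.0 / (head + tail)
    return [r**n / math.factorial(n) * p0 if n <= k
            else r**n / (k**(n - k) * math.factorial(k)) * p0
            for n in range(N + 1)]

p = mmkN_stationary(lam=4.0, mu=2.0, k=2, N=10)   # lam = k*mu: second case
assert abs(sum(p) - 1.0) < 1e-12   # finitely many states, so p is a distribution
```

No stability condition is needed here; the normalization works for any positive rates, exactly as the text states.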
Chapter 5
Simulation
In this final chapter, we model a time dependent queuing system using data from an
Auburn University web server. The arrival process for our model is taken from the server's
logs. The number of requests per hour from five consecutive Wednesdays was averaged
and used to make an arrival process for our queue. For an M/M/1 queue in steady state,
the distribution of the departure process is the same as the arrival process, so we assume
this approximation will be a valid model of the arrival process whether the logs record the
request at the time of arrival or after the request has been completed. The service rate in
our model is kept constant for a more controlled analysis of the data, which is reasonable
since a computer can usually work at a close to steady rate. The service rate of the actual
web server will most likely be much higher, and therefore avoid huge backups, but then
we could not show what happens when ρ* ≥ 1. The reason this approach to studying an
M(t)/M(t)/1 queue is so applicable to a web server is that such a server already has a
very high arrival and service rate, and can essentially be considered ε-accelerated without
actually changing the rates. In this model, we assume that the interarrival times at the
server are independent exponential random variables, as is each request made by those
arrivals. We also assume the server has some way of keeping arrivals in a queue, that it
does not simply reject another arrival when it is busy. This analysis will show the
importance of ρ* as a stability condition. Figure 5.1 illustrates the service and arrival rates
of this system and ρ*(t) at every time t.
[Figure: two panels, "Web Server Simulation Arrival and Service Rates" (arrival and service rate vs. time) and "Web Server Simulation Stability Condition" (rho* vs. time).]
Figure 5.1: Arrival and Service Rates and ρ*(t)
In order to simulate the queue, we use step functions for the arrival and service rate
functions. Then on each interval, we simulate an M/M/1 queue by simulating exponential
random variables for the interarrival times and required services. One can simulate any
continuous random variable X with distribution function F(x) by generating a random
number U, which is uniformly distributed on (0, 1), and setting X = F^{−1}(U), as shown in
Chapter 7 of Solomon [5]. Thus, we simulate our exponential random variables by letting
X = −(1/λ) ln U, where λ is the rate parameter of the random variable and U is a random
number uniformly distributed on (0, 1).
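The inverse-transform step just described can be sketched as follows (Python; the rate and sample size are illustrative):

```python
import math
import random

def exp_sample(rate, u=None):
    """Inverse-transform sample of an Exponential(rate) variate.

    F(x) = 1 - e^(-rate*x), so F^{-1}(U) = -ln(1 - U)/rate; since U and
    1 - U are both uniform on (0, 1), X = -ln(U)/rate works as well.
    """
    if u is None:
        u = random.random()
    return -math.log(u) / rate

random.seed(2)
rate = 4.0
n = 200000
mean = sum(exp_sample(rate) for _ in range(n)) / n
assert abs(mean - 1.0 / rate) < 0.01   # sample mean approaches E[X] = 1/rate
```

The same one-liner, with rate λ(t)/ε or μ(t)/ε on the current interval, is what the simulation code in Appendix B uses for interarrival and service times.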
First consider Figure 5.2, the case where ρ*(t) < 1 and the queue is stable. Notice
where the queue length climbs to a point that makes the length of the queue at earlier
times seem insignificant. This is when ρ(t) > 1; but when ρ(t) < 1, the server catches up
and brings the queue back down to manageable size. You can even notice that the queue
seems to jump around quite a bit right before time 10. This is where ρ(t) = 1, or close
enough so that we can consider it 1.
[Figure: stability condition panel (rho* vs. time) and a queue length panel titled "Queue Length with rho* = 0.9708, epsilon = 1.000" (# of requests in server X(t) vs. time t).]
Figure 5.2: ρ*(t) < 1 without ε-acceleration
With epsilon = 1.0000000, the system cleared out at 21.00155 after having a maximum
queue length of 43555. Ran 200 times without acceleration.
Calculated Long Run Fractions of Time / Probabilities. That is, P[X = n] for each
state below.
n Theoretical Accelerated Multiple Runs
0 0.20213661 0.19759187 0.24000000
1 0.16127740 0.15603091 0.15500000
2 0.12867733 0.12492567 0.10000000
3 0.10266693 0.10043291 0.10500000
4 0.08191419 0.08225265 0.08500000
5 0.06535633 0.06666348 0.07500000
6 0.05214542 0.05503601 0.04500000
7 0.04160493 0.04583023 0.03500000
8 0.03319505 0.03707380 0.00500000
9 0.02648511 0.02673086 0.02500000
10 0.02113150 0.01998091 0.03000000
11 0.01686005 0.01477285 0.01000000
12 0.01345202 0.01334929 0.01500000
13 0.01073287 0.01170998 0.02500000
14 0.00856337 0.00924480 0.01000000
15 0.00683240 0.00708840 0.01000000
16 0.00545132 0.00567585 0.01000000
17 0.00434941 0.00510566 0.00000000
18 0.00347023 0.00446735 0.00000000
19 0.00276877 0.00395070 0.00000000
Figure 5.3: Calculated Probabilities without ε-acceleration
Figure 5.3 shows the theoretical probability lim_{ε→0} P[Xε(t) = n] for n = 0 to n = 19, the
fraction of time the simulated queue is in these first 20 states after clearing itself out on
the last interval, and the fraction of times these states occurred when the system was run
multiple times.
We can now see in Figures 5.4 and 5.5 that the ε-acceleration does not drastically
change the calculated probabilities nor the shape of the graph.
[Figure: stability condition panel and a queue length panel titled "Queue Length with rho* = 0.9708, epsilon = 0.052".]
Figure 5.4: ε-accelerated with ρ*(t) < 1
With epsilon = 0.0077808, the system cleared out at 21.00002 after having a maximum
queue length of 5617345. Ran 200 times without acceleration.
Calculated Long Run Fractions of Time / Probabilities. That is, P[X = n] for each
state below.
n Theoretical Accelerated Multiple Runs
0 0.20213661 0.20254598 0.21500000
1 0.16127740 0.16204405 0.15000000
2 0.12867733 0.12932871 0.11500000
3 0.10266693 0.10298326 0.13500000
4 0.08191419 0.08213674 0.04500000
5 0.06535633 0.06534474 0.07500000
6 0.05214542 0.05210977 0.07000000
7 0.04160493 0.04149426 0.03500000
8 0.03319505 0.03288973 0.03000000
9 0.02648511 0.02620844 0.02500000
10 0.02113150 0.02083914 0.02000000
11 0.01686005 0.01665043 0.01500000
12 0.01345202 0.01322316 0.01500000
13 0.01073287 0.01064279 0.01500000
14 0.00856337 0.00850770 0.00000000
15 0.00683240 0.00668275 0.01000000
16 0.00545132 0.00532815 0.00500000
17 0.00434941 0.00431330 0.00000000
18 0.00347023 0.00340835 0.00500000
19 0.00276877 0.00270998 0.00000000
Figure 5.5: Calculated Probabilities with ε-acceleration
[Figure: stability condition panel and two queue length panels, "Queue Length with rho* = 1.1424, epsilon = 1.000" and "Queue Length with rho* = 1.1424, epsilon = 0.009".]
Figure 5.6: ρ*(t) > 1
For ρ*(t) > 1 we can see that the queue length is getting arbitrarily large. In Figure
5.6, once again the ε-accelerated version looks the same as the original system, due to its
high arrival and service rates. You can also see that on the interval from 9 to 10, where
ρ(t) = 1, the queue is very erratic.
When ρ*(t) = 1, or as close as we can get it numerically, the queue length varies wildly.
It is very erratic and unpredictable, as evidenced by the four graphs in Figure 5.7. In Figure
5.8, we see the same phenomenon for the ε-accelerated case.
[Figure: stability condition panel and four queue length panels, each titled "Queue Length with rho* = 1.0000000000000, epsilon = 1.00000000".]
Figure 5.7: ρ*(t) = 1 without ε-acceleration
[Figure: stability condition panel and two queue length panels titled "Queue Length with rho* = 1.0000000000000, epsilon = 0.00600000".]
Figure 5.8: ε-accelerated with ρ*(t) = 1
Chapter 6
Conclusion
We have shown that under the special stability function ρ*(t), the distribution of the
queue length of an ε-accelerated M(t)/M(t)/1 queue is the same as the queue length
distribution of a constant rate M/M/1 queue in steady state. That is, when ρ*(t) < 1,
P[Xε(t) = n] → (ρ(t))^n (1 − ρ(t)) as ε → 0,
and when ρ*(t) > 1,
Xε(t) → ∞ as ε → 0.
We did not consider the case when ρ*(t) = 1, though an analysis of it can be found in
Massey's work [3]. We were able to extend our results to time dependent queueing systems
with multiple homogeneous servers and finite capacities, which is not presented by Massey.
Our simulation clearly supports our theory, shows an applied use of this paper, and even
gives some empirical insight into the case with ρ*(t) = 1.
Bibliography
[1] Patrick Billingsley, "Probability and Measure," Second Edition, Wiley, 1986.
[2] Donald Gross and Carl M. Harris, "Fundamentals of Queueing Theory," Second Edition, Wiley, 1985.
[3] William Massey, "Asymptotic Analysis of the Time Dependent M/M/1 Queue," Mathematics of Operations Research, Vol. 10, No. 2, 1985.
[4] Sheldon M. Ross, "Stochastic Processes," Wiley, 1983.
[5] Frederick Solomon, "Probability and Stochastic Processes," Prentice-Hall, 1987.
Appendices
Appendix A
Approximating Rates with Step Functions
Let λ(t) and μ(t) be left-continuous positive piecewise continuous functions of time on the
interval [t0, t] with ρ*(t; λ, μ) < 1. Then there exist positive step functions λ+, λ−, μ+,
and μ− such that λ−(s) ≤ λ(s) ≤ λ+(s) and μ−(s) ≤ μ(s) ≤ μ+(s) for all s ∈ [t0, t], and
ρ*(t; λ−, μ+) < 1 and ρ*(t; λ+, μ−) < 1.
This is obviously true for λ− and μ+, since then ρ*(t; λ−, μ+) ≤ ρ*(t; λ, μ) < 1.
The other case requires a bit more work. Since μ is positive, then its average value is
positive. Hence there is some δ0 > 0 such that
(1/(t − t*)) ∫_{t*}^t μ(s) ds > δ0, for all t* ∈ [t0, t).
Note that
ρ*(t; λ, μ) = sup_{t* ∈ (t0, t)} [ ∫_{t*}^t λ(s) ds / ∫_{t*}^t μ(s) ds ]
= sup_{t* ∈ (t0, t)} [ (1/(t − t*)) ∫_{t*}^t λ(s) ds ] / [ (1/(t − t*)) ∫_{t*}^t μ(s) ds ].
Since ρ*(t; λ, μ) < 1, we can choose 0 < δ1 < δ0 such that
ρ*(t; λ, μ) + δ1/δ0 < 1,
and pick λ+ such that
sup_{t* ∈ (t0, t)} (λ+(t*) − λ(t*)) < δ1.
Then
ρ*(t; λ+, μ) = sup_{t* ∈ (t0, t)} [ (1/(t − t*)) ∫_{t*}^t λ+(s) ds ] / [ (1/(t − t*)) ∫_{t*}^t μ(s) ds ]
≤ sup_{t* ∈ (t0, t)} [ (1/(t − t*)) ∫_{t*}^t λ(s) ds + δ1 ] / [ (1/(t − t*)) ∫_{t*}^t μ(s) ds ]
≤ sup_{t* ∈ (t0, t)} { [ (1/(t − t*)) ∫_{t*}^t λ(s) ds ] / [ (1/(t − t*)) ∫_{t*}^t μ(s) ds ]
+ δ1 / [ (1/(t − t*)) ∫_{t*}^t μ(s) ds ] }
≤ sup_{t* ∈ (t0, t)} [ (1/(t − t*)) ∫_{t*}^t λ(s) ds ] / [ (1/(t − t*)) ∫_{t*}^t μ(s) ds ] + δ1/δ0
≤ ρ*(t; λ, μ) + δ1/δ0 < 1.
Now choose δ2 > 0 so that
ρ*(t; λ+, μ) < 1 − δ2/δ0;
then
ρ*(t; λ+, μ) (1/(1 − δ2/δ0)) < 1;
so choose μ− so that
sup_{t* ∈ (t0, t)} (μ(t*) − μ−(t*)) < δ2.
Now
ρ*(t; λ+, μ−) = sup_{t* ∈ (t0, t)} [ (1/(t − t*)) ∫_{t*}^t λ+(s) ds ] / [ (1/(t − t*)) ∫_{t*}^t μ−(s) ds ]
≤ sup_{t* ∈ (t0, t)} [ (1/(t − t*)) ∫_{t*}^t λ+(s) ds ] / [ (1/(t − t*)) ∫_{t*}^t μ(s) ds − δ2 ]
= sup_{t* ∈ (t0, t)} [ (1/(t − t*)) ∫_{t*}^t λ+(s) ds ] / ( [ (1/(t − t*)) ∫_{t*}^t μ(s) ds ]
[ 1 − δ2 / ((1/(t − t*)) ∫_{t*}^t μ(s) ds) ] )
≤ sup_{t* ∈ (t0, t)} [ (1/(t − t*)) ∫_{t*}^t λ+(s) ds ] / [ (1/(t − t*)) ∫_{t*}^t μ(s) ds ] · (1/(1 − δ2/δ0))
= ρ*(t; λ+, μ) (1/(1 − δ2/δ0)) < 1.
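The construction above can be imitated numerically: on a fine enough partition, take the maximum of the rate over each piece to get λ+ and the minimum to get μ−. A hedged sketch in Python (the rate function and tolerances are illustrative, and the envelopes are built from sampled values, so the bounds are verified at sample points rather than proved for all s):

```python
def step_bounds(f, t0, t1, pieces, samples=50):
    """Piecewise-constant upper/lower envelopes of a rate function f:
    on each subinterval take the max (for f+) or min (for f-) over a
    sample grid, so f-(s) <= f(s) <= f+(s) at every sampled point."""
    h = (t1 - t0) / pieces
    upper, lower = [], []
    for i in range(pieces):
        a = t0 + i * h
        vals = [f(a + j * h / (samples - 1)) for j in range(samples)]
        upper.append(max(vals))
        lower.append(min(vals))
    return upper, lower

# a smoothly varying hypothetical arrival rate on [0, 1]
lam = lambda s: 2.0 + s * (1.0 - s)
up, lo = step_bounds(lam, 0.0, 1.0, pieces=100)
gap = max(u - l for u, l in zip(up, lo))
assert gap < 0.05   # sup(lam+ - lam-) shrinks as the partition refines
```

Refining the partition makes sup(λ+ − λ) and sup(μ − μ−) as small as the δ1 and δ2 chosen in the proof, which is what keeps ρ*(t; λ+, μ−) below 1.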
Appendix B
Simulation Code
% queuingsystemgraph .m
% Allen Flick Master ? s Thesis Queuing System Simulation
% Simulates and Plots a Queuing system as follows :
% Customers arrive according to a non homogeneous poisson process
% of rate ?arate ? , and are served with rate ? srate ? , both of which
% are piecewise constant functions of time .
function queuingsystemgraph ( i n i t i a l S i z e , arate , srate , timeint , . . .
startTime , endTime , e p s i l o n )
%default values
i f nargin < 1
arate = [ 7 . 5 , 3 ] ;
s r a t e = [ 5 , 6 ] ;
timeint = [ 0 , 1 , 2 ] ;
startTime=timeint ( 1 ) ;
endTime=timeint (end ) ;
i n i t i a l S i z e =5;
e p s i l o n =1;
end
e t o l =.0000001;
j =1;
39
while ( timeint ( j )etol , j=j 1; end
lastInt = length(timeint) - 1;
while (timeint(lastInt) > endTime + etol)
    lastInt = lastInt - 1;
end
firstepsilon = epsilon;
% calculate rhostar
expArrivals = 0;
expService = 0;
rhostar = 0;
expArrivals = arate(lastInt)*(endTime - timeint(lastInt));
expService = srate(lastInt)*(endTime - timeint(lastInt));
if expService > 0, rhostar = expArrivals/expService; end
for m = lastInt-1 : -1 : j+1
    expArrivals = expArrivals + arate(m)*(timeint(m+1) - timeint(m));
    expService = expService + srate(m)*(timeint(m+1) - timeint(m));
    if expArrivals/expService > rhostar
        rhostar = expArrivals/expService;
    end
end
expArrivals = expArrivals + arate(j)*(timeint(j+1) - startTime);
expService = expService + srate(j)*(timeint(j+1) - startTime);
if expArrivals/expService > rhostar
    rhostar = expArrivals/expService;
end
% simple memory checking
syscheck = memory;
first = syscheck.MaxPossibleArrayBytes/8 - 10000;
maxmem = syscheck.MemAvailableAllArrays/8 - 1000000;
if 8*first > maxmem, first = maxmem/8; end
if first < (expArrivals + expService)/epsilon
    epsilon = (expArrivals + expService)/first;
end
timeData = zeros(1, ceil((expArrivals + expService)/epsilon));
sizeData = zeros(1, ceil((expArrivals + expService)/epsilon));
if epsilon > firstepsilon
    sprintf('Memory limits require epsilon to change to %d', epsilon);
end
top = initialSize;
time = startTime;
fulltime = 0;
timeData(1) = time;
sizeData(1) = initialSize;
nextservice = -epsilon/srate(j)*log(rand);
% if the service doesn't finish before the interval
k = j + 1;
while (timeint(k) < endTime - etol) && (nextservice > timeint(k) - time)
    nextservice = timeint(k) - time + ...
        ((timeint(k) - time)/nextservice - 1)*epsilon/srate(k)* ...
        log(rand);
    k = k + 1;
end
nextarrival = -epsilon/arate(j)*log(rand);
i = 2;
% the simulation runs until the maximum time specified
while (time < endTime - etol)
    % case: the current service finishes before the next arrival
    if nextservice < nextarrival
        % if no customers are in the queue, no service can happen
        if sizeData(i-1) < 1 - etol
            % so instead, calculate the arrival
            time = time + nextarrival;
            timeData(i) = time;
            sizeData(i) = 1;
            nextarrival = -epsilon/arate(j)*log(rand);
            % calculate a new service time
            nextservice = -epsilon/srate(j)*log(rand);
            % if the service doesn't finish before the interval,
            % let the server run at the rate of the interval, then
            % recalculate the rate for any after that
            k = j + 1;
            while (timeint(k) < endTime - etol) && ...
                    (nextservice > timeint(k) - time)
                nextservice = timeint(k) - time + ...
                    ((timeint(k) - time)/nextservice - 1)* ...
                    epsilon/srate(k)*log(rand);
                k = k + 1;
            end
        else
            % calculate the service
            % increase the time
            time = time + nextservice;
            % set the data to be (time service finished, queue - 1)
            timeData(i) = time;
            sizeData(i) = sizeData(i-1) - 1;
            % recalculate the next arrival and service times
            % the arrival time is the difference between what it
            % last was and the time it took the current service
            nextarrival = nextarrival - nextservice;
            % the next service time is exponentially distributed
            nextservice = -epsilon/srate(j)*log(rand);
            % if the service doesn't finish before the interval,
            % let the server run at the rate of the interval, then
            % recalculate the rate for any after that
            k = j + 1;
            while (timeint(k) < endTime - etol) && ...
                    (nextservice > timeint(k) - time)
                nextservice = timeint(k) - time + ...
                    ((timeint(k) - time)/nextservice - 1)* ...
                    epsilon/srate(k)*log(rand);
                k = k + 1;
            end
        end
    % other case: the next arrival happens before the current
    % service finishes
    else
        % set the time
        time = time + nextarrival;
        % calculate empty time
        timeData(i) = time;
        sizeData(i) = sizeData(i-1) + 1;
        % find the top of the graph
        if sizeData(i) > top + etol, top = top + 1; end
        % recalculate the next service and arrival times
        nextservice = nextservice - nextarrival;
        nextarrival = -epsilon/arate(j)*log(rand);
    end
    % move to the next interval if necessary
    while (timeint(j) < endTime - etol) && (timeint(j+1)