Random Time Change with Some Applications

by

Amy Peterson

A thesis submitted to the Graduate Faculty of Auburn University in partial fulfillment of the requirements for the Degree of Master of Science

Auburn, Alabama
May 4, 2014

Approved by

Olav Kallenberg, Chair, Professor of Mathematics
Ming Liao, Professor of Mathematics
Erkan Nane, Professor of Mathematics
Jerzy Szulga, Professor of Mathematics

Abstract

This thesis is a survey of known results concerning random time change and its applications. It covers basic probabilistic concepts and then gives a detailed look at major results, all concerning random time change, from several branches of probability. The first of these major results is a theorem on how an increasing process adapted to a filtration can be used to transform the time scale and the filtration. Next we show how an arbitrary continuous local martingale can be changed into a Brownian motion. We then show that a simple point process can be changed into a Poisson process using a random time change. Lastly, we look at an application of random time change to the construction of solutions of stochastic differential equations.

Acknowledgments

I would like to thank my advisor, Dr. Olav Kallenberg, for his advice and encouragement. I would also like to thank my committee members for their support. Furthermore I am grateful to everyone at the Auburn University Mathematics Department, my family, and friends.

Table of Contents

Abstract . . . ii
Acknowledgments . . . iii
1 Introduction . . . 1
1.1 Summary . . . 1
1.2 Definitions and Primary Concepts . . . 2
1.3 Martingales and Brownian Motion . . . 5
2 Time Change of Filtrations . . . 8
2.1 Time Change of Filtrations . . . 8
3 Time Change of Continuous Martingales . . . 11
3.1 Quadratic Variation . . . 11
3.2 Stochastic Integration . . . 13
3.3 Brownian Motion as a Martingale . . . 15
3.4 Time Change of Continuous Martingales . . . 17
3.5 Time Change of Continuous Martingales in Higher Dimensions . . . 21
4 Time Change of Point Processes . . . 23
4.1 Random Measures and Point Processes . . . 23
4.2 Doob-Meyer Decomposition . . . 24
4.3 Time Change of Point Processes . . . 24
5 Application of Time Change to Stochastic Differential Equations . . . 30
5.1 Stochastic Differential Equations . . . 30
5.2 Brownian Local Time . . . 31
5.3 Application of Time Change to SDEs . . . 33
Bibliography . . . 41

Chapter 1
Introduction

1.1 Summary

This thesis discusses the subject of random time change by looking at several known results in various areas of probability theory.
In the first chapter, we give several basic definitions and theorems of probability theory, including a section discussing martingales and Brownian motion. These definitions and theorems will be used throughout the thesis and can be found in most basic probability texts.

In the second chapter we begin with our results on random time change. The main result of that chapter shows how an increasing process adapted to a filtration can be used to create a process of optional times that transforms the time scale and the filtration. This theorem will reappear in the following chapters, particularly in connection with the construction of processes of optional times.

In the third chapter we begin with a discussion of the quadratic variation process and stochastic integration. These topics are also fundamental in probability theory and will be important for all further results in the thesis. We omit some of the proofs of the more technical results but include references. We then use these concepts to prove Lévy's characterization of Brownian motion. This theorem shows that a Brownian motion is a martingale and gives conditions for a continuous local martingale to be a Brownian motion. We then use Lévy's characterization of Brownian motion to prove the main result of chapter three: using a process of optional times, we can change an arbitrary continuous local martingale into a Brownian motion. Our process of optional times depends on the quadratic variation of the local martingale, and so we break the proof of the main result into two cases, depending on whether the limit of the quadratic variation is finite or infinite. Lastly in chapter three, we discuss two different ways to extend our main result to higher dimensions.

To start our fourth chapter we introduce random measures, point processes, and Poisson processes. Following that we introduce the Doob-Meyer decomposition of a submartingale and explain its relation to random measures. Since the Doob-Meyer decomposition is an in-depth topic in probability theory, we only mention it and give references for further study. Lastly, we prove the main result of the chapter, namely that an arbitrary simple point process can be changed into a Poisson process by a random time change.

In our last chapter, we look at an application of some of our previous results to stochastic differential equations (SDEs). We begin the chapter by discussing what stochastic differential equations are and the type of stochastic differential equations we are interested in. We then discuss the challenging topics of Brownian local time and continuous additive functionals. Lastly we prove necessary and sufficient conditions for the existence of solutions to certain stochastic differential equations by constructing solutions to these SDEs using random time change.

1.2 Definitions and Primary Concepts

Let $(\Omega, \mathcal{A}, P)$ be a probability space and let $T$ be a subset of $\overline{\mathbb{R}} = [-\infty, \infty]$. A non-decreasing family $\mathcal{F} = (\mathcal{F}_t)$ of $\sigma$-fields with $\mathcal{F}_t \subset \mathcal{A}$ for $t \in T$ is called a filtration on $T$. A process $X$ is said to be adapted to a filtration $\mathcal{F} = (\mathcal{F}_t)$ if $X_t$ is $\mathcal{F}_t$-measurable for every $t \in T$. Given a process $X$, the smallest filtration $\mathcal{F}$ such that $X$ is adapted to $\mathcal{F}$ is the filtration generated by $X$, that is, $\mathcal{F}_t = \sigma\{X_s;\ s \le t\}$. We also define $\mathcal{F}_\infty = \sigma(\bigcup_{t \ge 0} \mathcal{F}_t)$. If $\mathcal{F}$ is a filtration on $T = \mathbb{R}_+$, we can define another filtration $\mathcal{F}^+$ by $\mathcal{F}^+_t = \bigcap_{h > 0} \mathcal{F}_{t+h}$. We call a filtration $\mathcal{F}$ on $\mathbb{R}_+$ right-continuous if $\mathcal{F} = \mathcal{F}^+$. Note that $\mathcal{F}^+ = (\mathcal{F}^+)^+$, so $\mathcal{F}^+$ is itself right-continuous. Unless stated otherwise, filtrations on $\mathbb{R}_+$ are assumed to be right-continuous.
Let $\mathcal{F}_t = \sigma\{X_s;\ s \le t\}$ for some process $X$, and let $\mathcal{N}_t = \{F;\ F \subset G \text{ for some } G \in \mathcal{F}_t \text{ with } P(G) = 0\}$, so that $\mathcal{N}_\infty$ is the collection of all null sets. Then the filtration $\mathcal{H}$ defined by $\mathcal{H}_t = \sigma(\mathcal{F}_t \cup \mathcal{N}_\infty)$ for all $t \ge 0$ is called the completion of the filtration. Any filtration with these properties is called a complete filtration.

A random time is a measurable mapping $\tau : \Omega \to \overline{T}$, where $\overline{T}$ is the closure of $T$. Given a filtration $\mathcal{F}$ on $T$, a random time $\tau$ is called an optional time if $\{\omega;\ \tau(\omega) \le t\} \in \mathcal{F}_t$ for every $t \in T$. Further, we call a random time $\tau$ weakly optional if $\{\omega;\ \tau(\omega) < t\} \in \mathcal{F}_t$ for every $t \in T$. We define the $\sigma$-field $\mathcal{F}_\tau$ associated with an optional time $\tau$ by

$$\mathcal{F}_\tau = \{A \in \mathcal{A};\ A \cap \{\tau \le t\} \in \mathcal{F}_t,\ t \in T\}.$$

The first lemma shows that the weakly optional and the optional times are the same when the filtration is right-continuous.

Lemma 1.2.1. If $\mathcal{F}$ is any filtration and $\tau$ is an $\mathcal{F}$-optional time, then $\tau$ is $\mathcal{F}$-weakly optional. If $\mathcal{F}$ is a right-continuous filtration and $\tau$ is an $\mathcal{F}$-weakly optional time, then $\tau$ is $\mathcal{F}$-optional.

Proof. Let $\tau$ be an $\mathcal{F}$-optional time. Then $\{\tau < t\} = \bigcup_{r \in \mathbb{Q},\, r < t} \{\tau \le r\} \in \mathcal{F}_t$ for every $t$, so $\tau$ is weakly optional. Conversely, let $\mathcal{F}$ be right-continuous and let $\tau$ be weakly optional. Then $\{\tau \le t\} = \bigcap_{h > 0} \{\tau < t + h\} \in \mathcal{F}^+_t = \mathcal{F}_t$ for every $t$, so $\tau$ is optional. $\Box$

1.3 Martingales and Brownian Motion

A process $M$ adapted to a filtration $\mathcal{F}$ is called an $\mathcal{F}$-martingale if $E|M_t| < \infty$ for every $t$ and $E[M_t \mid \mathcal{F}_s] = M_s$ a.s. for all $s \le t$; if instead $E[M_t \mid \mathcal{F}_s] \ge M_s$ a.s. for all $s \le t$, then $M$ is called a submartingale. A family of random variables $(X_t)$ is said to be uniformly integrable if

$$\lim_{r \to \infty} \sup_t E\big[|X_t|;\ |X_t| > r\big] = 0.$$

First we prove a general result about uniformly integrable processes.

Lemma 1.3.1. For $p > 1$, every $L^p$-bounded process is uniformly integrable.

Proof. Assume $X$ is bounded in $L^p$, so that $\sup_t E|X_t|^p < \infty$. Let $p$ and $q$ be such that $\frac1p + \frac1q = 1$. Then, by Hölder's inequality, we get for $u \ge 0$

$$E\big[|X_t|\, 1\{|X_t| > u\}\big] \le \big(E|X_t|^p\big)^{1/p} \big(E\, 1\{|X_t| > u\}^q\big)^{1/q} = \big(E|X_t|^p\big)^{1/p}\, P(|X_t| > u)^{1/q}.$$

By Chebyshev's inequality, $P(|X_t| > u) \le u^{-p} E|X_t|^p$, which tends to $0$ as $u \to \infty$ uniformly in $t$. Thus $X$ is uniformly integrable. $\Box$

A process $M$ is called a local martingale if it is adapted to a filtration $\mathcal{F}$ and there exist optional times $\tau_n$ with $\tau_n \uparrow \infty$ such that the process $M'_t = M_{\tau_n \wedge t} - M_0$ is a martingale for every $n$.

A Brownian motion is a continuous process $B$ in $\mathbb{R}$ with independent increments, $B_0 = 0$, and, for all $t \ge 0$, $EB_t = 0$ and $\mathrm{Var}(B_t) = t$. This definition implies that $B_t$ is normally distributed with mean $0$ and variance $t$. A process $B$ in $\mathbb{R}^d$ is called a Brownian motion if its components are independent Brownian motions in $\mathbb{R}$. A Brownian motion $B$ adapted to a general filtration $\mathcal{F}$ on $\mathbb{R}_+$ such that the process $B_{s+t} - B_s$ is independent of $\mathcal{F}_s$ for all $s \ge 0$ is said to be an $\mathcal{F}$-Brownian motion.

A process $X$ on $\mathbb{R}_+$ is said to be right-continuous if $X_t = X_{t+}$ for all $t \ge 0$, and $X$ has left limits if the left limits $X_{t-}$ exist and are finite for all $t > 0$. The regularization theorem for martingales allows us to assume all martingales to be right-continuous with left limits, here abbreviated as rcll. We state, without proof, the more general version of this theorem for submartingales. We follow with a result relating uniform integrability to the convergence of a martingale to a random variable. These are classical results in the study of martingales; we refer to [3] for the proofs and a more detailed discussion.

Theorem 1.3.1. Let $X$ be an $\mathcal{F}$-submartingale. Then $X$ has an rcll version if and only if the mean function $t \mapsto EX_t$ is right-continuous.

Theorem 1.3.2. Let $M$ be a right-continuous $\mathcal{F}$-martingale on an unbounded index set $T$, and define $u = \sup T$. Then the following conditions are equivalent:
i) $M$ is uniformly integrable;
ii) $M_t$ converges in $L^1$ to some $M_u$ as $t \to \infty$;
iii) $M$ can be extended to a martingale on $T \cup \{u\}$.

The next result, the optional sampling theorem, shows that, under certain conditions, the martingale property is preserved under a random time change.

Theorem 1.3.3. Let $M$ be a right-continuous $\mathcal{F}$-martingale on $\mathbb{R}_+$, and consider two optional times $\sigma$ and $\tau$, where $\tau$ is bounded. Then $M_\tau$ is integrable, and

$$M_{\sigma \wedge \tau} = E[M_\tau \mid \mathcal{F}_\sigma] \quad \text{a.s.}$$

The statement extends to unbounded $\tau$ if and only if $M$ is uniformly integrable.
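As a purely illustrative aside, the conclusion of Theorem 1.3.3 is easy to check by simulation in a simple discrete setting. The following minimal sketch (Python with NumPy; the walk, the barrier, and the time horizon are arbitrary choices made only for this illustration) stops a symmetric random walk $M$ at the bounded optional time $\tau = \min\{n;\ |M_n| \ge 5\} \wedge 200$ and confirms that $EM_\tau \approx EM_0 = 0$, as optional sampling with $\sigma = 0$ predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

def stopped_value(n_steps=200, barrier=5):
    """Run a symmetric random walk M and evaluate it at the bounded optional
    time tau = min(first time |M| >= barrier, n_steps)."""
    steps = rng.choice([-1, 1], size=n_steps)
    M = np.concatenate(([0], np.cumsum(steps)))
    hits = np.nonzero(np.abs(M) >= barrier)[0]
    tau = hits[0] if hits.size else n_steps
    return M[tau]

samples = np.array([stopped_value() for _ in range(20000)])
print("E[M_tau] ~", samples.mean())   # close to E[M_0] = 0
```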
Chapter 2
Time Change of Filtrations

In this chapter, we begin by showing how we can use an increasing process $X$ adapted to a filtration $\mathcal{F}$ to transform the time scale and the filtration. We will then apply this result in Chapter 3 to the case where $X$, the increasing process, is the quadratic variation process of a continuous local martingale, and in Chapter 4 to the case where the increasing process is the compensator of an increasing process related to a point process.

2.1 Time Change of Filtrations

We now state our main result on how an increasing process $X$ adapted to a filtration $\mathcal{F}$ can be used to transform the time scale and the filtration.

Theorem 2.1.1. Let $X \ge 0$ be a non-decreasing right-continuous process adapted to some right-continuous filtration $\mathcal{F}$, and define

$$\tau_s = \inf\{t > 0;\ X_t > s\}, \quad s \ge 0.$$

Then
i) $(\tau_s)$ is a right-continuous process of optional times, generating a right-continuous filtration $\mathcal{G}$ defined by $\mathcal{G}_s = \mathcal{F}_{\tau_s}$ for $s \ge 0$;
ii) if $X$ is also continuous and $\tau$ is $\mathcal{F}$-optional, then $X_\tau$ is $\mathcal{G}$-optional and $\mathcal{F}_\tau \subset \mathcal{G}_{X_\tau}$.

Note that, when composing the process $X$ with an optional time $\tau$, we get a random variable $X_\tau$; thus it makes sense to ask whether $X_\tau$ is an optional time.

Proof. (i) Since $X$ is right-continuous, the process $(\tau_s)$ is right-continuous as well. We want to show that $\tau_s$ is an optional time for every $s$. By the definition of $\tau_s$,

$$\{\tau_s < t\} \supset \bigcup_{r \in \mathbb{Q},\, 0 < r < t} \{X_r > s\}, \quad t > 0.$$

To prove the inclusion in the opposite direction, fix an $\omega \in \{\tau_s < t\}$. Then there exists $t_0 < t$ with $X_{t_0}(\omega) > s$. Since $(s, \infty)$ is an open set containing $X_{t_0}(\omega)$, there exists a neighborhood of $X_{t_0}(\omega)$ that remains in the set $(s, \infty)$. If $t_0$ is rational, we have proved the inclusion. If not, then, since $X$ is right-continuous and $\mathbb{Q}$ is dense in $\mathbb{R}$, there exists an $r \in \mathbb{Q}$ with $t_0 < r < t$ and $X_r(\omega) \in (s, \infty)$. So $\omega \in \{X_r > s\}$, and

$$\{\tau_s < t\} \subset \bigcup_{r \in \mathbb{Q},\, 0 < r < t} \{X_r > s\}, \quad t > 0.$$

Therefore,

$$\{\tau_s < t\} = \bigcup_{r \in \mathbb{Q},\, 0 < r < t} \{X_r > s\} \in \mathcal{F}_t, \quad t > 0,$$

which means that $\tau_s$ is weakly optional, and hence $\tau_s$ is optional.

Since $(\tau_s)$ is a process of optional times, $\mathcal{G}_s = \mathcal{F}_{\tau_s}$ is a filtration, and we need to show that it is right-continuous. Now

$$\mathcal{G}^+_s = \bigcap_{u > s} \mathcal{G}_u = \bigcap_{u > s} \mathcal{F}_{\tau_u} = \bigcap_{u > s} \mathcal{F}^+_{\tau_u} = \mathcal{F}^+_{\tau_s} = \mathcal{F}_{\tau_s} = \mathcal{G}_s,$$

where the second and last equalities come from the definition $\mathcal{G}_s = \mathcal{F}_{\tau_s}$, the third and fifth equalities hold because $\mathcal{F}$ is right-continuous, and the fourth equality holds since $\tau_u \downarrow \tau_s$ as $u \downarrow s$.

(ii) Let $X$ be continuous and let $\tau > 0$ be an $\mathcal{F}$-optional time. By the definition of $\tau_s$ and the fact that $X$ is non-decreasing and continuous, we see that $\{X_\tau \le s\} = \{\tau \le \tau_s\}$. Since $\tau$ and $\tau_s \wedge t$ are both optional times and $\tau_s \wedge t \le t$, the elementary properties of optional times give $\{\tau \le \tau_s \wedge t\} \in \mathcal{F}_{\tau_s \wedge t} \subset \mathcal{F}_t$, and hence

$$\{X_\tau \le s\} \cap \{\tau_s \le t\} = \{\tau \le \tau_s \wedge t\} \cap \{\tau_s \le t\} \in \mathcal{F}_t, \quad t \ge 0.$$

So $\{X_\tau \le s\} \in \mathcal{F}_{\tau_s}$ by the definition of $\mathcal{F}_{\tau_s}$. Thus $X_\tau$ is a $\mathcal{G}$-optional time. We can extend this to any $\tau \ge 0$ by Lemma 1.2.2. Since $X_\tau$ is an optional time, $\mathcal{G}_{X_\tau}$ is a $\sigma$-field. If we let $A \in \mathcal{F}_\tau$ be arbitrary, then $A \cap \{\tau \le t\} \in \mathcal{F}_t$, and the above argument gives

$$A \cap \{X_\tau \le s\} = A \cap \{\tau \le \tau_s\} \in \mathcal{F}_{\tau_s} = \mathcal{G}_s.$$

This shows that, for any $A \in \mathcal{F}_\tau$, we also have $A \in \mathcal{G}_{X_\tau}$, and so $\mathcal{F}_\tau \subset \mathcal{G}_{X_\tau}$. $\Box$
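As an illustration of the inverse process appearing in Theorem 2.1.1, the following small numerical sketch (Python with NumPy; the particular path $X$ is an arbitrary example chosen for this illustration) computes $\tau_s = \inf\{t;\ X_t > s\}$ on a grid and shows the two typical effects of the time change: a flat stretch of $X$ produces a jump of $s \mapsto \tau_s$, while a jump of $X$ produces a flat stretch of $\tau$.

```python
import numpy as np

# A non-decreasing, right-continuous path X on a grid:
# X_t = t on [0, 3), X_t = 3 on [3, 6) (a plateau), X_t = t - 1 on [6, 10] (a jump at t = 6)
t = np.linspace(0.0, 10.0, 2001)
X = np.where(t < 3.0, t, np.where(t < 6.0, 3.0, t - 1.0))

def tau(s):
    """Right-continuous inverse tau_s = inf{t; X_t > s} (inf of the empty set is +inf)."""
    mask = X > s
    return t[np.argmax(mask)] if mask.any() else np.inf

for s in [1.0, 2.9, 3.0, 4.0, 5.0, 6.0]:
    print(f"tau_{s} = {tau(s):.3f}")
# Levels s in [3, 5) all give tau_s = 6 (the jump of X flattens tau),
# while s -> tau_s jumps from 3 to 6 at s = 3 (the plateau of X).
```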
Chapter 3
Time Change of Continuous Martingales

In order to change an arbitrary continuous local martingale into a Brownian motion, we will use a process of optional times as in Theorem 2.1.1, except that our non-decreasing process will be the quadratic variation process of the continuous local martingale. Before getting to this result, we define the quadratic variation process and state some lemmas pertaining to it. Then we prove Lévy's theorem, which characterizes Brownian motion as a martingale; this will be used in our proof of the main result.

3.1 Quadratic Variation

For local martingales $M$ and $N$, the process $[M,N]$ is called the covariation of $M$ and $N$, and the process $[M,M]$ is called the quadratic variation; it is often denoted by $[M]$. The quadratic variation process can be constructed as a limit of sums of squared increments of the original process; however, we will define the process through a martingale characterization. We state, without proof, the existence theorem for the process $[M,N]$ for continuous local martingales $M$ and $N$.

Theorem 3.1.1. For continuous local martingales $M$ and $N$ there exists an a.s. unique continuous process $[M,N]$ of locally finite variation, with $[M,N]_0 = 0$, such that $MN - [M,N]$ is a local martingale.

The next result lists, without proof, several properties of the covariation process.

Theorem 3.1.2. Let $M$ and $N$ be continuous local martingales, and let $[M,N]$ be the covariation process defined in Theorem 3.1.1. Then $[M,N]$ is a.s. bilinear and symmetric, and it satisfies $[M,N] = [M - M_0, N - N_0]$. Further, $[M]$ is a.s. non-decreasing and, for any optional time $\tau$,

$$[M^\tau, N] = [M^\tau, N^\tau] = [M,N]^\tau \quad \text{a.s.}$$

The next result shows that a local martingale has the same intervals of constancy as its quadratic variation process.

Lemma 3.1.1. Let $M$ be a continuous local $\mathcal{F}$-martingale, and fix any $s < t$. Then, a.s., $M$ is constant on $[s,t]$ if and only if $[M]$ is constant on $[s,t]$.

Proof. First assume that $[M]_u = [M]_s$ for all $u \in [s,t]$. The random time $\tau = \inf\{w > s;\ [M]_w > [M]_s\}$ is an $\mathcal{F}$-optional time, and $\tau \ge t$ under our assumption. Also,

$$N_r = M_{\tau \wedge (s+r)} - M_s, \quad r \ge 0,$$

is a continuous local martingale with respect to the filtration $\hat{\mathcal{F}}_r = \mathcal{F}_{\tau \wedge (s+r)}$, $r \ge 0$. By the definition of $\tau$,

$$[N]_r = [M]_{\tau \wedge (s+r)} - [M]_s = 0, \quad r \ge 0.$$

Since $N$ is a local martingale, there exists a sequence of optional times $\sigma_n$ such that $\sigma_n \uparrow \infty$ a.s. and each process $N_{\sigma_n \wedge r}$ is a true martingale. Now $[N]_{\sigma_n \wedge r} = 0$ a.s., and since $N^2_{\sigma_n \wedge r} - [N]_{\sigma_n \wedge r}$ is a martingale starting at $0$,

$$E(N^2_{\sigma_n \wedge r}) = E(N^2_{\sigma_n \wedge r} - [N]_{\sigma_n \wedge r}) = 0.$$

So $N_{\sigma_n \wedge r} = 0$ a.s. Letting $n \uparrow \infty$, we get $N = 0$ a.s. Thus $M_{\tau \wedge (s+r)} = M_s$ for all $r \ge 0$, and so $M_u = M_s$ for every $u \in [s,t]$.

To prove the converse, assume $M_u = M_s$ for all $u \in [s,t]$. Then $\sigma = \inf\{w > s;\ M_w \ne M_s\}$ is an optional time with $\sigma \ge t$ under our assumption. Now $N_r = M_{\sigma \wedge (s+r)} - M_s$ is a continuous local martingale with respect to $\hat{\mathcal{F}}$, defined by $\hat{\mathcal{F}}_r = \mathcal{F}_{s+r}$. By the definition of $\sigma$,

$$N_r = M_{\sigma \wedge (s+r)} - M_s = 0, \quad r \ge 0.$$

Let $\sigma_n$ be a sequence of optional times such that $\sigma_n \uparrow \infty$ and $N_{r \wedge \sigma_n}$ is a martingale. Then $N^2_{r \wedge \sigma_n} - [N]_{r \wedge \sigma_n}$ is a martingale and $E[N^2_{r \wedge \sigma_n} - [N]_{r \wedge \sigma_n}] = 0$. Since $N_r = 0$, we have $E[N]_{r \wedge \sigma_n} = 0$. Letting $n \to \infty$, we have $E[N]_r = 0$, which gives $[M]_{\sigma \wedge (s+r)} - [M]_s = 0$ a.s. And so we have $[M]_s = [M]_t$. $\Box$
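The remark that $[M]$ arises as a limit of sums of squared increments is easy to visualize for Brownian motion, for which $[B]_t = t$ (see Theorem 3.3.1 below). The following small numerical sketch (Python with NumPy; the grid sizes are arbitrary choices for this illustration) simulates one Brownian path on $[0,1]$ and evaluates $\sum_k (B_{t_{k+1}} - B_{t_k})^2$ over finer and finer partitions; the sums settle near $[B]_1 = 1$.

```python
import numpy as np

rng = np.random.default_rng(1)

# One Brownian path on [0, 1], sampled on a fine grid of 2**16 steps
n = 2**16
dB = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
B = np.concatenate(([0.0], np.cumsum(dB)))

# Sums of squared increments over coarser sub-partitions of the same path
for step in [2**10, 2**6, 2**2, 1]:
    incr = np.diff(B[::step])
    print(f"mesh {step / n:.1e}: sum of squared increments = {np.sum(incr**2):.4f}")
# The sums concentrate around [B]_1 = 1 as the mesh tends to zero.
```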
3.2 Stochastic Integration

We now introduce the concept of stochastic integration. We start by defining an elementary stochastic integral as a sum of random variables. Let $\tau_1, \ldots, \tau_n$ be optional times and let $\eta_k$ be bounded $\mathcal{F}_{\tau_k}$-measurable random variables, and define

$$V_t = \sum_{k \le n} \eta_k 1\{t > \tau_k\}, \quad t \ge 0.$$

Then, for any process $X$, we may define the integral process $V \cdot X$ by

$$(V \cdot X)_t = \int_0^t V_s\, dX_s = \sum_{k \le n} \eta_k (X_t - X_{t \wedge \tau_k}).$$

We call the process $V \cdot X$ an elementary stochastic integral.

A process $V$ on $\mathbb{R}_+$ is said to be progressively measurable, or simply progressive, if its restriction to $\Omega \times [0,t]$ is $\mathcal{F}_t \otimes \mathcal{B}[0,t]$-measurable for every $t \ge 0$. Originally, stochastic integrals were extended to progressive processes using an approximation by the elementary stochastic integrals defined above. In the following theorem, however, we extend the notion of stochastic integral by a martingale characterization.

Theorem 3.2.1. Let $M$ be a continuous local martingale and $V$ a progressive process such that $(V^2 \cdot [M])_t < \infty$ a.s. for every $t > 0$. Then there exists an a.s. unique continuous local martingale $V \cdot M$ with $(V \cdot M)_0 = 0$ and such that

$$[V \cdot M, N] = V \cdot [M,N] \quad \text{a.s.}$$

for every continuous local martingale $N$.

Since the covariation has locally finite variation, the integral $V \cdot [M,N]$ is a Lebesgue-Stieltjes integral. This allows us to characterize the stochastic integral uniquely in terms of a Lebesgue-Stieltjes integral. We omit the proof of this theorem but refer to [3] for the proof and a more detailed discussion of stochastic integrals.

A continuous process $X$ is said to be a semimartingale if it can be represented as a sum $M + A$, where $M$ is a continuous local martingale and $A$ is a continuous, adapted process of locally finite variation with $A_0 = 0$. If $X$ is a semimartingale and $f$ is a sufficiently smooth function, then $f(X)$ is also a semimartingale. The following result gives a useful representation of semimartingales that are images of smooth functions. We state, without proof, Itô's formula for continuous semimartingales. Here $f'_i$ and $f''_{ij}$ denote the first and second partial derivatives of $f$.

Theorem 3.2.2. Let $X = (X^1, \ldots, X^d)$ be a continuous semimartingale in $\mathbb{R}^d$ and let $f$ be a function that is twice continuously differentiable on $\mathbb{R}^d$. Then

$$f(X) = f(X_0) + \sum_i f'_i(X) \cdot X^i + \tfrac12 \sum_i \sum_j f''_{ij}(X) \cdot [X^i, X^j] \quad \text{a.s.}$$

We can extend Itô's formula to analytic functions.

Theorem 3.2.3. If $f$ is an analytic function on a domain $D \subset \mathbb{C}$, then

$$f(Z) = f(Z_0) + f'(Z) \cdot Z + \tfrac12 f''(Z) \cdot [Z, Z] \quad \text{a.s.}$$

holds for any $D$-valued semimartingale $Z$.

3.3 Brownian Motion as a Martingale

In this section we prove the following result, due to Lévy, which characterizes Brownian motion as a martingale.

Theorem 3.3.1. Let $B$ be a continuous process in $\mathbb{R}$ with $B_0 = 0$. Then $B$ is a local $\mathcal{F}$-martingale with $[B]_t = t$ a.s. if and only if $B$ is an $\mathcal{F}$-Brownian motion.

Before we begin the proof of the theorem, we prove a needed result.

Lemma 3.3.1. Let $M$ be a continuous local martingale starting at $0$ with $[M]_t = t$ a.s. Then $M$ is a square-integrable martingale.

Proof. Let $\tau_n$ be optional times such that $\tau_n \uparrow \infty$ and $M_{\tau_n \wedge t}$ is a true martingale for every $n$. Then $N_t = M^2_{\tau_n \wedge t} - [M]_{\tau_n \wedge t}$ is a martingale for every $n$, and

$$E M^2_{\tau_n \wedge t} = E [M]_{\tau_n \wedge t} = E(\tau_n \wedge t).$$

Using dominated and monotone convergence, we can let $n \to \infty$ to get $E M^2_t = t$. Thus $M^2_t - [M]_t$ is a true martingale and $M$ is a square-integrable martingale. $\Box$

Now we move on to the proof of Theorem 3.3.1.

Proof. First assume that $B$ is a continuous local $\mathcal{F}$-martingale with $[B]_t = t$ a.s. and $B_0 = 0$. Recalling the definition of Brownian motion and the characteristic function of a normally distributed random variable, it is enough to prove that, for a fixed set $A \in \mathcal{F}_s$,

$$E\big[e^{iv(B_t - B_s)} 1_A\big] = e^{-v^2(t-s)/2} P(A)$$

for $v \in \mathbb{R}$ and $t > s \ge 0$. Let $f(x) = e^{ivx}$; then, applying Theorem 3.2.3, we get

$$e^{ivB_t} - e^{ivB_s} = \int_s^t i v\, e^{ivB_u}\, dB_u - \tfrac12 \int_s^t v^2 e^{ivB_u}\, du. \tag{3.1}$$

Now $[B]_t = t$ implies that $B$ is a true martingale by Lemma 3.3.1, and so

$$E\Big[\int_s^t e^{ivB_u}\, dB_u \,\Big|\, \mathcal{F}_s\Big] = 0 \quad \text{a.s.} \tag{3.2}$$

Let $A \in \mathcal{F}_s$ and multiply equation (3.1) by $e^{-ivB_s} 1_A$ on both sides to obtain

$$1_A e^{iv(B_t - B_s)} - 1_A = \int_s^t i v\, 1_A e^{iv(B_u - B_s)}\, dB_u - \tfrac12 \int_s^t v^2 1_A e^{iv(B_u - B_s)}\, du.$$

Taking the expectation of both sides and recalling (3.2), we have

$$E\big(1_A e^{iv(B_t - B_s)}\big) - P(A) = -\tfrac12 v^2 \int_s^t E\big(1_A e^{iv(B_u - B_s)}\big)\, du.$$

This is a Volterra integral equation of the second kind for the deterministic function $t \mapsto E(1_A e^{iv(B_t - B_s)})$. Solving this integral equation, we get

$$E\big(1_A e^{iv(B_t - B_s)}\big) = P(A)\, e^{-v^2(t-s)/2}.$$

Thus $B_t - B_s$ is independent of $\mathcal{F}_s$ and normally distributed with mean $0$ and variance $t - s$, so $B$ is an $\mathcal{F}$-Brownian motion.

To prove the converse, we assume that $B$ is an $\mathcal{F}$-Brownian motion. To show that $B$ is a martingale, let $s \le t$; then

$$E[B_t \mid \mathcal{F}_s] = E[B_s + (B_t - B_s) \mid \mathcal{F}_s] = B_s + E[B_t - B_s] = B_s,$$

since $B_t - B_s$ is independent of $\mathcal{F}_s$ with mean $0$. Similarly, $E[(B_t - B_s)^2 \mid \mathcal{F}_s] = t - s$ shows that $B^2_t - t$ is an $\mathcal{F}$-martingale, and so $[B]_t = t$ a.s. by the uniqueness in Theorem 3.1.1. $\Box$
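A classical illustration of Theorem 3.3.1 is the process $M_t = \int_0^t \mathrm{sgn}(B_s)\, dB_s$: since $\mathrm{sgn}(B_s)^2 = 1$, its quadratic variation is $[M]_t = t$, so $M$ is itself a Brownian motion even though it is built from $B$. The following small numerical sketch (Python with NumPy; the step sizes and path counts are arbitrary choices for this illustration) approximates $M_1$ by an Euler sum over many simulated paths and checks that it has mean near $0$ and variance near $[M]_1 = 1$.

```python
import numpy as np

rng = np.random.default_rng(2)

n_paths, n_steps, T = 10000, 500, 1.0
dt = T / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])   # left endpoints B_{t_k}

# Euler approximation of M_1 = int_0^1 sgn(B_s) dB_s (with sgn(0) taken as +1 here)
sgn = np.where(B_left >= 0.0, 1.0, -1.0)
M1 = np.sum(sgn * dB, axis=1)

print("mean of M_1 ~", M1.mean())   # close to 0
print("var  of M_1 ~", M1.var())    # close to [M]_1 = 1
```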
3.4 Time Change of Continuous Martingales

We now show how we can use a process of optional times to change an arbitrary continuous local martingale into a Brownian motion. To do this in the general case, we consider extensions of the probability space and extensions of filtrations. Let $X$ be a process adapted to the filtration $\mathcal{F}$ on a probability space $(\Omega, \mathcal{A}, P)$, and suppose we wish to find a Brownian motion $B$ independent of $X$. In order to guarantee that the processes in question are independent, and still retain any original adaptedness properties, we extend the probability space to a new probability space. Let $\hat\Omega = \Omega \times [0,1]$, $\hat{\mathcal{A}} = \mathcal{A} \otimes \mathcal{B}[0,1]$, and $\hat P = P \otimes \lambda$, where $\lambda$ is Lebesgue measure on $[0,1]$; then $(\hat\Omega, \hat{\mathcal{A}}, \hat P)$ is an extension of the probability space. On this extension we may regard $X$ and $B$ as processes depending only on the first and on the second coordinate, respectively, say $\hat X((\omega, u), t) = X(\omega, t)$ and $\hat B((\omega, u), t) = B(u, t)$; then $\hat X$ and $\hat B$ are trivially independent.

A subtler way to achieve the same goal is to take a standard extension of a filtration. We call the filtration $\mathcal{G}$ a standard extension of $\mathcal{F}$ if $\mathcal{F}_t \subset \mathcal{G}_t$ for all $t \ge 0$ and if $\mathcal{G}_t$ and $\mathcal{F}_\infty$ are conditionally independent given $\mathcal{F}_t$ for all $t \ge 0$. Now we state the main theorem.

Theorem 3.4.1. Let $M$ be a continuous local $\mathcal{F}$-martingale in $\mathbb{R}$ with $M_0 = 0$, and define

$$\tau_s = \inf\{t \ge 0;\ [M]_t > s\}, \quad \mathcal{G}_s = \mathcal{F}_{\tau_s}, \quad s \ge 0.$$

Then there exists in $\mathbb{R}$ a Brownian motion $B$ with respect to a standard extension of $\mathcal{G}$ such that a.s. $B = M \circ \tau$ on $[0, [M]_\infty)$ and $M = B \circ [M]$.

We break the proof into two cases: first the case $[M]_\infty = \infty$, and then the case where $[M]_\infty$ is finite. If $[M]_\infty = \infty$, we do not require a standard extension of the filtration for $M \circ \tau$ to be a Brownian motion.

Proof. First assume that $[M]_\infty = \infty$. By Theorem 2.1.1, $(\tau_s)$ is a right-continuous process of optional times and $\mathcal{G}_s = \mathcal{F}_{\tau_s}$ is a right-continuous filtration. To prove that $B = M \circ \tau$ is a Brownian motion, we will use Lévy's characterization of Brownian motion, Theorem 3.3.1. Thus we need to show that $B$ is a continuous square-integrable martingale with $[B]_s = s$ a.s.

First we prove that $B$ is a square-integrable martingale. For fixed $s \ge 0$, the stopped process $\hat M_t = M_{\tau_s \wedge t}$ is a true martingale (its quadratic variation being bounded), and

$$[\hat M]_t \le [M]_{\tau_s} = s, \quad t \ge 0,$$

by the definition of $\tau_s$. Because $E \hat M^2_t = E[\hat M]_t \le s$, we can apply Lemma 1.3.1 to conclude that $\hat M$ and $\hat M^2 - [\hat M]$ are uniformly integrable. This allows us to use the optional sampling theorem, Theorem 1.3.3. Fix $0 \le r \le s$. Then

$$E\big(M_{\tau_s} - M_{\tau_r} \mid \mathcal{F}_{\tau_r}\big) = E\big(\hat M_{\tau_s} - \hat M_{\tau_r} \mid \mathcal{F}_{\tau_r}\big) = \hat M_{\tau_r} - \hat M_{\tau_r} = 0.$$

Recall that $\hat M$ is a true martingale starting at zero, so that $E(\hat M^2_t - [\hat M]_t) = 0$, which gives $E \hat M^2_t = E[\hat M]_t$. Now

$$E\big((M_{\tau_s} - M_{\tau_r})^2 \mid \mathcal{F}_{\tau_r}\big) = E\big(\hat M^2_{\tau_s} - \hat M^2_{\tau_r} \mid \mathcal{F}_{\tau_r}\big) = E\big([\hat M]_{\tau_s} - [\hat M]_{\tau_r} \mid \mathcal{F}_{\tau_r}\big) = s - r,$$

since $[M]_{\tau_u} = u$ for every $u \ge 0$ when $[M]_\infty = \infty$. Thus $B$ is a square-integrable $\mathcal{G}$-martingale with $[B]_s = s$.

Next we want to prove that $B$ is continuous. Referring to Lemma 3.1.1, we see that $M$ is a.s. constant on every interval $[\tau_{s-}, \tau_s]$, since $[M]$ is constant there with common value $s$. Hence the left limits of $B$ satisfy $B_{s-} = M_{\tau_{s-}} = M_{\tau_s} = B_s$, and $B$ is a.s. continuous. By Theorem 3.3.1, $B = M \circ \tau$ is a Brownian motion. Finally, since $[M]$ is constant on $[t, \tau_{[M]_t}]$ for every $t \ge 0$, Lemma 3.1.1 also gives $B_{[M]_t} = M_{\tau_{[M]_t}} = M_t$, that is, $M = B \circ [M]$ a.s.

Now assume that $Q = [M]_\infty$ is finite with positive probability. In this case we pass to an extension of the probability space, as described above, carrying a Brownian motion $X$ independent of $\mathcal{F}_\infty$, and we let $\hat{\mathcal{G}}$ be the corresponding standard extension of $\mathcal{G}$. Note that $\{Q \le s\} = \{\tau_s = \infty\} \in \mathcal{F}_{\tau_s} = \mathcal{G}_s$, so $Q$ is a $\mathcal{G}$-optional time, and $M_{\tau_s} = M_\infty$ on $\{s \ge Q\}$, where $M_\infty = \lim_t M_t$ exists a.s. on $\{Q < \infty\}$. Define

$$B_s = M_{\tau_s} + N_s, \quad \text{where } N_s = X_s - X_{s \wedge Q}, \quad s \ge 0.$$

For $s < Q$ we have $N_s = 0$, so $B = M \circ \tau$ on $[0, [M]_\infty)$. Exactly as in the first case, $M \circ \tau$ is a continuous $\hat{\mathcal{G}}$-martingale with $[M \circ \tau]_s = [M]_{\tau_s} = s \wedge Q$. Moreover, for any $s \ge Q$ we have $1\{\tau_r = \infty\} = 1$ for all $r \in [Q, s]$, and for every $s > Q$,

$$[N]_s = [X]_s - [X]_Q = s - Q = s - [M]_\infty.$$

Since $X$ is independent of $\mathcal{F}_\infty$, the covariation of $M \circ \tau$ and $N$ vanishes, and hence

$$[B]_s = (s \wedge Q) + (s - Q)^+ = s, \quad s \ge 0.$$

By Lévy's characterization, Theorem 3.3.1, $B$ is a Brownian motion with respect to the standard extension $\hat{\mathcal{G}}$, and as before $M = B \circ [M]$ a.s. $\Box$

3.5 Time Change of Continuous Martingales in Higher Dimensions

There are two natural ways to extend Theorem 3.4.1 to continuous local martingales in $\mathbb{R}^d$. The first applies to isotropic local martingales, that is, continuous local martingales $M = (M^1, \ldots, M^d)$ such that $[M^i] = [M^j]$ and $[M^i, M^j] = 0$ a.s. for all $i \ne j$. In this case a single time change suffices.

Theorem 3.5.1. Let $M = (M^1, \ldots, M^d)$ be an isotropic continuous local $\mathcal{F}$-martingale in $\mathbb{R}^d$ with $M_0 = 0$, and define

$$\tau_s = \inf\{t \ge 0;\ [M^1]_t > s\}, \quad \mathcal{G}_s = \mathcal{F}_{\tau_s}, \quad s \ge 0.$$

Then there exists a Brownian motion $B$ in $\mathbb{R}^d$ with respect to a standard extension of $\mathcal{G}$ such that $B = M \circ \tau$ a.s. on $[0, [M^1]_\infty)$ and $M = B \circ [M^1]$ a.s.

We omit the proof; the isotropic condition leads to a proof very similar to that of the one-dimensional case. It is important to note that in this case we only needed a single time-change process to transform our local martingale. Our next result uses a weaker assumption but also has a weaker assertion, and it gives another way to extend the result of Theorem 3.4.1 to higher dimensions. We define continuous local martingales $M^1, \ldots, M^d$ to be strongly orthogonal if $[M^i, M^j] = 0$ a.s. for all $i, j \in \{1, \ldots, d\}$ with $i \ne j$. Under the weaker assumption of strong orthogonality, we must use an individual process of optional times to transform each component of the local martingale into a Brownian motion.

Theorem 3.5.2. Let $M^1, \ldots, M^d$ be strongly orthogonal continuous local martingales starting at zero, and define

$$\tau^i_s = \inf\{t \ge 0;\ [M^i]_t > s\}, \quad s \ge 0,\ 1 \le i \le d,$$

so that $\tau^i_s$ is an optional time for each $i$ and $s$. Then the processes

$$B^i_s = M^i_{\tau^i_s}, \quad s \ge 0,\ 1 \le i \le d,$$

are independent one-dimensional Brownian motions.

Obviously, the individual components are transformed into Brownian motions by our proof of the one-dimensional case. However, we need these one-dimensional Brownian motions to be independent in order to combine them into a Brownian motion in $\mathbb{R}^d$. This can be achieved by working with the filtrations induced by the Brownian motions themselves rather than the filtrations $\mathcal{F}_{\tau^i_s}$. We will omit the proof of Theorem 3.5.2; a full proof can be found in [4].
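To see Theorem 3.4.1 in action numerically, one can start from a concrete martingale, time-change it by the inverse of its quadratic variation, and check that the result behaves like a Brownian motion. The following sketch (Python with NumPy; the coefficient $\sigma(x) = 1 + x^2$ and all grid sizes are arbitrary choices made for this illustration) uses $M_t = \int_0^t \sigma(B_u)\, dB_u$, for which $[M]_t = \int_0^t \sigma(B_u)^2\, du \ge t$, and verifies that the variance of $M_{\tau_s}$ is approximately $s$.

```python
import numpy as np

rng = np.random.default_rng(3)

n_paths, n_steps, T = 2000, 2000, 1.0
dt = T / n_steps
sigma = lambda x: 1.0 + x**2          # example volatility; sigma >= 1 keeps [M]_t >= t

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)])

# M_t = int_0^t sigma(B_u) dB_u and its quadratic variation [M]_t = int_0^t sigma(B_u)^2 du
sig = sigma(B[:, :-1])
M = np.hstack([np.zeros((n_paths, 1)), np.cumsum(sig * dB, axis=1)])
QV = np.hstack([np.zeros((n_paths, 1)), np.cumsum(sig**2 * dt, axis=1)])

for s in [0.25, 0.5, 0.75]:
    idx = np.argmax(QV > s, axis=1)      # grid index of tau_s = inf{t; [M]_t > s}
    B_s = M[np.arange(n_paths), idx]     # time-changed value M_{tau_s}
    print(f"s = {s}:  var(M_tau_s) ~ {B_s.var():.3f}  (target {s})")
```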
Chapter 4
Time Change of Point Processes

The main result of this chapter, similar to Theorem 3.4.1, shows that a random time change can be used to transform a point process into a Poisson process. To do this, we introduce some more notation and definitions.

4.1 Random Measures and Point Processes

Let $(\Omega, \mathcal{A})$ be a probability space and $(S, \mathcal{S})$ a measurable space. A random measure $\xi$ on $S$ is defined as a mapping $\xi : \Omega \times \mathcal{S} \to \overline{\mathbb{R}}_+$ such that $\xi(\cdot, B)$ is an $\mathcal{A}$-measurable random variable for fixed $B \in \mathcal{S}$, and $\xi(\omega, \cdot)$ is a locally finite measure for fixed $\omega \in \Omega$. We define a point process as a random measure $\xi$ on $\mathbb{R}^d$ such that $\xi B$ is integer-valued for every bounded Borel set $B$. For a stationary random measure $\xi$ on $\mathbb{R}$ we have $E\xi = c\lambda$, where $0 \le c \le \infty$ and $\lambda$ is Lebesgue measure; $E\xi$ is called the intensity measure of $\xi$ and $c$ the rate. Define $\mathcal{M}(S)$ to be the space of all $\sigma$-finite measures on a measurable space $S$. A Poisson process with intensity $\mu \in \mathcal{M}(\mathbb{R}^d)$ is defined to be a point process $\xi$ with independent increments such that $\xi B$ is a Poisson random variable with mean $\mu B$ whenever $\mu B < \infty$. A point process $\xi$ with $\xi\{s\} \le 1$ for all $s \in \mathbb{R}^d$, outside a fixed $P$-null set, is called simple. A Poisson process is of unit rate if its rate equals one.

We now assume that the underlying probability space carries a filtration that is not only right-continuous but also complete, and we let $(S, \mathcal{S})$ be a Borel space. The predictable $\sigma$-field $\mathcal{P}$ in the product space $\mathbb{R}_+ \times \Omega$ is defined as the $\sigma$-field generated by all continuous, adapted processes on $\mathbb{R}_+$. A process $V$ on $\mathbb{R}_+ \times S$ is predictable if it is $\mathcal{P} \otimes \mathcal{S}$-measurable, where $\mathcal{P}$ denotes the predictable $\sigma$-field in $\mathbb{R}_+ \times \Omega$. We mention, without proof, the facts that the predictable $\sigma$-field is also generated by all left-continuous adapted processes and that every predictable process is progressive.

4.2 Doob-Meyer Decomposition

Another concept of this section needed for our main result is the compensator process. First we define compensators in relation to the Doob-Meyer decomposition of submartingales, and then we extend the notion to random measures.

Theorem 4.2.1 (Doob-Meyer decomposition). Any local submartingale $X$ has an a.s. unique decomposition $X = M + A$, where $M$ is a local martingale and $A$ is a locally integrable, non-decreasing, predictable process starting at $0$.

The proof is omitted, since it is very involved and would distract from the main topic of time change; we refer to [3] for a detailed proof. The process $A$ in the above theorem is called the compensator of the submartingale $X$.

We want to extend compensators to random measures. Let $\xi$ be a random measure on $\mathbb{R}_+$ and introduce the associated cumulative process $N_t(\omega) = \xi((0,t], \omega)$. The process $N$ has right-continuous, a.s. non-decreasing paths, so $N$ is a submartingale. We can therefore apply the Doob-Meyer decomposition to $N$ to obtain its compensator $A$, which is again the cumulative process of a random measure. We will use compensators similarly to the way the quadratic variation process was used in Theorem 3.4.1, namely to define our process of optional times.
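For a concrete instance of a compensator, consider a Poisson process on $\mathbb{R}_+$ whose intensity measure has a density $\lambda(t)$: its cumulative process $N$ has the deterministic compensator $A_t = \int_0^t \lambda(s)\, ds$, and $N_t - A_t$ is a martingale. The following small numerical sketch (Python with NumPy; the intensity $\lambda(t) = 2 + \cos t$ and all parameters are arbitrary choices for this illustration) simulates such a process by thinning and checks that $E[N_t - A_t] \approx 0$ at several times.

```python
import numpy as np

rng = np.random.default_rng(4)

lam = lambda t: 2.0 + np.cos(t)       # example intensity, bounded by lam_max
A = lambda t: 2.0 * t + np.sin(t)     # compensator A_t = int_0^t lam(s) ds
lam_max, T, n_paths = 3.0, 10.0, 20000

def counts_at(check_times):
    """One inhomogeneous Poisson path on [0, T] by thinning; return N_t at check_times."""
    n = rng.poisson(lam_max * T)
    pts = np.sort(rng.uniform(0.0, T, size=n))
    kept = pts[rng.uniform(0.0, lam_max, size=n) < lam(pts)]
    return np.searchsorted(kept, check_times, side="right")

ts = np.array([2.0, 5.0, 10.0])
N = np.array([counts_at(ts) for _ in range(n_paths)])
print("E[N_t - A_t] ~", (N - A(ts)).mean(axis=0))   # each entry close to 0
```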
4.3 Time Change of Point Processes

We now move on to prove our main result, namely that a process of optional times can be used to transform a point process into a Poisson process. Before stating the main result, we need several important theorems; this approach is from [1]. The first of these uses only some basic analysis; however, we will soon relate it to probability.

Theorem 4.3.1. Let $f$ be an increasing, right-continuous function on $\mathbb{R}_+$ with $f(0) = 0$, and let $u$ be a measurable function with $\int_0^t |u(s)|\, df(s) < \infty$ for each $t > 0$. Let $\Delta f(t) = f(t) - f(t-)$ and

$$f_c(t) = f(t) - \sum_{s \le t} \Delta f(s).$$

Then the integral equation

$$h(t) = h(0) + \int_0^t h(s-)\, u(s)\, df(s)$$

has the unique solution

$$h(t) = h(0) \prod_{0 < s \le t} \big(1 + u(s)\, \Delta f(s)\big)\, \exp\Big(\int_0^t u(s)\, df_c(s)\Big).$$

Applied to conditional characteristic functions of the increments of a point process, Theorem 4.3.1 leads to the following characterization of the Poisson process in terms of its compensator.

Theorem 4.3.2. Let $N$ be the cumulative process of a simple point process on $\mathbb{R}_+$, adapted to a filtration $\mathcal{F}$, and suppose that its compensator is $A_t = t$. Then $N$ is the cumulative process of a unit-rate Poisson process, and the increments $N_t - N_s$ are independent of $\mathcal{F}_s$.

We can now state and prove the main result of this chapter.

Theorem 4.3.3. Let $N$ be the cumulative process of a simple point process on $\mathbb{R}_+$, adapted to a filtration $\mathcal{F}$, with a continuous compensator $A$ such that $A_\infty = \infty$ a.s., and define

$$\tau_s = \inf\{t \ge 0;\ A_t > s\}, \quad s \ge 0.$$

Then the re-scaled process satisfies $N_{\tau_s} = \xi(0, s]$, $s \ge 0$, for some unit-rate Poisson process $\xi$.

Proof. Referring back to Theorem 2.1.1, we see that $(\tau_s)$ is right-continuous and $N_{\tau_s}$ is $\mathcal{F}_{\tau_s}$-adapted. Further, being non-decreasing and right-continuous with left limits, $\tau$ can have jumps at only countably many $s \ge 0$, and, by the continuity of $A$, the jumps of $\tau$ occur exactly where $A$ has an interval of constancy. By the definitions of $N$ and $A$, the process $N \circ \tau$ can increase only by integer-valued jumps. Assume that $A$ is constant over an interval $(a, b]$; by the martingale property of the compensated process $N - A$,

$$E[N_b - N_a \mid \mathcal{F}_a] = E[A_b - A_a \mid \mathcal{F}_a] = 0.$$

So, given $\mathcal{F}_a$, $N_b - N_a = 0$ a.s. Thus $N \circ \tau$ has no jumps at times $s$ where $\tau$ is discontinuous or has a jump. Since $N$ is simple, where $\tau$ is continuous the process $N \circ \tau$ can increase only by unit jumps. Therefore $N \circ \tau$ is the cumulative process of a simple point process.

Referring to Theorem 4.3.2, we only need to show that $N \circ \tau$ has compensator $s$. By definition, $A_{\tau_s} = s$ for all $s \ge 0$. Recalling that $\tau_s$ is an optional time for each $s \ge 0$, we can apply the optional sampling theorem: for $0 \le s \le t$,

$$E[N_{\tau_t} - t \mid \mathcal{F}_{\tau_s}] = E[N_{\tau_t} - A_{\tau_t} \mid \mathcal{F}_{\tau_s}] = N_{\tau_s} - A_{\tau_s} = N_{\tau_s} - s.$$

So $N_{\tau_s} - s$ is an $\mathcal{F}_{\tau_s}$-martingale, and by the uniqueness of the compensator, $s$ is the compensator of $N_{\tau_s}$. The claim now follows from Theorem 4.3.2. $\Box$
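When the compensator $A$ is deterministic and strictly increasing, the time change simply relabels a point at time $t$ as a point at time $A_t$. The following numerical sketch (Python with NumPy; it reuses the example intensity $\lambda(t) = 2 + \cos t$ with compensator $A_t = 2t + \sin t$ from the earlier sketch in Section 4.2, all of which are arbitrary illustrative choices) maps the points of an inhomogeneous Poisson process through $A$ and checks that the resulting inter-arrival gaps look like standard exponentials, as they must for a unit-rate Poisson process.

```python
import numpy as np

rng = np.random.default_rng(5)

lam = lambda t: 2.0 + np.cos(t)     # intensity of the original point process
A = lambda t: 2.0 * t + np.sin(t)   # its (deterministic) compensator
lam_max, T = 3.0, 200.0

# One long inhomogeneous Poisson path on [0, T], simulated by thinning
n = rng.poisson(lam_max * T)
pts = np.sort(rng.uniform(0.0, T, size=n))
pts = pts[rng.uniform(0.0, lam_max, size=n) < lam(pts)]

# Random time change: a point at t becomes a point at A(t)
gaps = np.diff(A(pts))
print("number of points:", pts.size)
print("mean / std of gaps:", gaps.mean(), gaps.std())       # both close to 1 for Exp(1)
print("P(gap > 1) ~", (gaps > 1.0).mean(), "  (e**-1 ~ 0.368)")
```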
Chapter 5
Application of Time Change to Stochastic Differential Equations

In this last chapter we discuss an application of the previous ideas on random time change to the area of stochastic differential equations (SDEs). First we define stochastic differential equations and some basic related concepts. Then we discuss the concept of Brownian local time. Lastly, we prove Engelbert and Schmidt's necessary and sufficient conditions for the existence of solutions to certain stochastic differential equations by constructing solutions to these SDEs using optional times.

5.1 Stochastic Differential Equations

Our theorems involving stochastic differential equations, abbreviated SDEs, concern equations of the basic form

$$dX_t = \sigma(X_t)\, dB_t + b(X_t)\, dt, \tag{5.1}$$

where $B$ is a one-dimensional Brownian motion and $\sigma$ and $b$ are measurable functions on $\mathbb{R}$. For our purposes we only define stochastic differential equations in the one-dimensional case, but the concept extends to higher dimensions; we refer to [2] for more information on general SDEs. We define a weak solution of the stochastic differential equation with initial distribution $\mu$ to consist of a process $X$, a probability space $(\Omega, \mathcal{F}, P)$, a Brownian motion $B$, and a random variable $\xi$ with $\mathcal{L}(\xi) = \mu$, such that $X$ satisfies (5.1) for $(\Omega, \mathcal{F}, P)$ and $B$, with $X_0 = \xi$. Weak existence holds for a stochastic differential equation provided there is a weak solution to the SDE. Uniqueness in law means that any two weak solutions with the same initial distribution $\mu$ have the same distribution.

It is often possible to remove the drift term from the above SDE, either by a change of the underlying probability measure or by a change of the state space. In this way we can reduce our SDE to

$$dX_t = \sigma(X_t)\, dB_t.$$

Using this SDE without a drift term, it is possible to construct weak solutions by a random time change; we will do this after introducing Brownian local time. For further discussion of, and proofs on, removing the drift term we refer to [2].

5.2 Brownian Local Time

Let $B$ be a Brownian motion and $x \in \mathbb{R}$. To gain information about the time a path of $B$ spends near $x$, one might look at the set $\{t \ge 0;\ B_t(\omega) = x\}$; however, this set has Lebesgue measure zero. So, in order to measure the time a Brownian path spends around a point $x$, we introduce the process $L$.

Theorem 5.2.1. Let $B$ be a Brownian motion. Then there exists an a.s. jointly continuous process $L^x_t$ on $\mathbb{R}_+ \times \mathbb{R}$ such that, for every Borel set $A \subset \mathbb{R}$ and $t \ge 0$,

$$\int_0^t 1\{B_s \in A\}\, ds = \int_A L^x_t\, dx.$$

The process $L$ defined in the theorem above is called the local time of the Brownian motion $B$. We can also represent the local time at a point $x \in \mathbb{R}$ of any continuous semimartingale $X$ by the following formula, due to Tanaka:

$$L^x_t = |X_t - x| - |X_0 - x| - \int_0^t \mathrm{sgn}(X_s - x)\, dX_s, \quad t \ge 0,$$

where

$$\mathrm{sgn}(x) = \begin{cases} 1, & x > 0, \\ -1, & x \le 0. \end{cases}$$

Next we define a non-decreasing, continuous, adapted process $A$ to be a continuous additive functional of the Brownian motion $X$ if

$$A_{t+s} = A_s + A_t \circ \theta_s \quad \text{a.s.}, \quad s, t \ge 0,$$

where $\theta_s$ denotes the shift operator, $(\theta_s X)_t = X_{s+t}$.

Now we state, without proof, the relationship between continuous additive functionals of Brownian motion and the local time of Brownian motion.

Theorem 5.2.2. For a Brownian motion $X$ in $\mathbb{R}$ with local time $L$, a process $A$ is a continuous additive functional of $X$ if and only if it has the a.s. representation

$$A_t = \int_{-\infty}^{\infty} L^x_t\, \nu(dx), \quad t \ge 0,$$

for some locally finite measure $\nu$ on $\mathbb{R}$.

We refer to [3] for more information about continuous additive functionals and the local time of Brownian motion.
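The occupation density property in Theorem 5.2.1 suggests a simple numerical approximation of local time: for small $\varepsilon > 0$, $L^x_t \approx \frac{1}{2\varepsilon} \int_0^t 1\{|B_s - x| < \varepsilon\}\, ds$. The following sketch (Python with NumPy; the value $\varepsilon = 0.05$ and the grid sizes are arbitrary illustrative choices) compares this approximation of $L^0_1$ with the value obtained from Tanaka's formula, $|B_1| - \int_0^1 \mathrm{sgn}(B_s)\, dB_s$, on the same simulated paths.

```python
import numpy as np

rng = np.random.default_rng(6)

n_paths, n_steps, T, eps = 1000, 4000, 1.0, 0.05
dt = T / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)])

# Occupation-time approximation of L^0_1
L_occ = np.sum(np.abs(B[:, :-1]) < eps, axis=1) * dt / (2.0 * eps)

# Tanaka's formula: L^0_1 = |B_1| - |B_0| - int_0^1 sgn(B_s) dB_s, with B_0 = 0
sgn = np.where(B[:, :-1] > 0.0, 1.0, -1.0)
L_tanaka = np.abs(B[:, -1]) - np.sum(sgn * dB, axis=1)

print("mean of occupation estimate ~", L_occ.mean())      # E L^0_1 = E|B_1| ~ 0.798
print("mean of Tanaka estimate     ~", L_tanaka.mean())
print("mean absolute difference    ~", np.abs(L_occ - L_tanaka).mean())
```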
5.3 Application of Time Change to SDEs

In this section we use random time change to create weak solutions of SDEs in the one-dimensional case,

$$dX_t = \sigma(X_t)\, dB_t, \tag{5.2}$$

with initial distribution $\mu$. We first give an informal construction of weak solutions to (5.2). Let $Y$ be a Brownian motion with respect to some filtration $\mathcal{F}$, and let $X_0$ be an $\mathcal{F}_0$-measurable random variable with distribution $\mu$. Consider the continuous process $Z_t = X_0 + Y_t$ for $t \ge 0$. Using $Z$, we create the non-decreasing process

$$\zeta_t = \int_0^t \sigma^{-2}(Z_s)\, ds, \quad t \ge 0,$$

and its inverse process

$$\tau_s = \inf\{t \ge 0;\ \zeta_t > s\}, \quad s \ge 0.$$

Referring to Theorem 2.1.1, we see that $(\tau_s)$ is a process of optional times. Now, for $X_s = Z_{\tau_s}$ with the filtration $\mathcal{G}_s = \mathcal{F}_{\tau_s}$, we can find a Brownian motion $B$ with respect to $\mathcal{G}$ such that $X$ and $B$ form a weak solution to $dX_t = \sigma(X_t)\, dB_t$ with initial distribution $\mu$.

Problems with this construction can occur, depending on the measurable function $\sigma$. Also, we have not described how the Brownian motion $B$ is found. To answer these questions, we use this construction formally to prove Engelbert and Schmidt's theorem, which gives the exact conditions $\sigma$ must satisfy for a weak solution to exist. In the following proofs we remove the condition that a Brownian motion $B$ must have $B_0 = 0$. This allows us to let our Brownian motion have initial distribution $\mu$ and removes the need for the random variable $X_0$ in the above construction.

Theorem 5.3.1. The SDE $dX_t = \sigma(X_t)\, dB_t$ has a weak solution for every initial distribution $\mu$ if and only if $I(\sigma) \subset Z(\sigma)$, where

$$I(\sigma) = \Big\{x \in \mathbb{R};\ \lim_{\varepsilon \to 0} \int_{x-\varepsilon}^{x+\varepsilon} \frac{dy}{\sigma^2(y)} = \infty\Big\}, \tag{5.3}$$

and

$$Z(\sigma) = \{x \in \mathbb{R};\ \sigma(x) = 0\}. \tag{5.4}$$

First we prove a lemma that identifies, for a continuous additive functional of Brownian motion given by a measure $\nu$ as in Theorem 5.2.2, the time at which the functional becomes infinite.

Lemma 5.3.1. Let $L$ be the local time of a Brownian motion $B$ with arbitrary initial distribution, and let $\nu$ be some measure on $\mathbb{R}$. Define

$$A_t = \int L^x_t\, \nu(dx), \quad t \ge 0,$$

and

$$S_\nu = \Big\{x \in \mathbb{R};\ \lim_{\varepsilon \to 0} \nu(x - \varepsilon, x + \varepsilon) = \infty\Big\}.$$

Then, a.s.,

$$\inf\{s \ge 0;\ A_s = \infty\} = \inf\{s \ge 0;\ B_s \in S_\nu\}.$$

Proof. Let $t > 0$, and let $R$ be the event where $B_s \notin S_\nu$ on $[0, t]$. Now $L^x_t = 0$ a.s. outside of the range of $B$ on $[0, t]$. On $R$ we then get, a.s.,

$$A_t = \int_{-\infty}^{\infty} L^x_t\, \nu(dx) \le \nu(B[0,t]) \sup_x L^x_t < \infty,$$

since the range of $B$ on $[0,t]$ is compact, as the continuous image of a compact set (so that, every point of it having a neighborhood of finite $\nu$-measure, a finite covering gives $\nu(B[0,t]) < \infty$), and $L^x_t$ is a.s. continuous in $x$ and hence bounded on a compact interval. Conversely, assume that $B_s = a \in S_\nu$ for some $s < t$. By Tanaka's formula we have $L^a_t > 0$ a.s., so, by the continuity of $L$ with respect to $x$, we get for some $\varepsilon > 0$

$$A_t = \int_{-\infty}^{\infty} L^x_t\, \nu(dx) \ge \nu(a - \varepsilon, a + \varepsilon) \inf_{|x - a| < \varepsilon} L^x_t = \infty.$$

Combining the two parts gives the asserted identity. $\Box$

We also need the following lemma, which shows that every continuous local martingale $M$ with absolutely continuous quadratic variation can be represented as a stochastic integral with respect to a Brownian motion $B$.

Lemma 5.3.2. Let $M$ be a continuous local $\mathcal{F}$-martingale with $M_0 = 0$ and $[M]_t = \int_0^t V_s^2\, ds$ a.s. for some $\mathcal{F}$-progressive process $V$. Then there exists a Brownian motion $B$ with respect to a standard extension of $\mathcal{F}$ such that $M = V \cdot B$ a.s.

Proof. Define $B = V^{-1} \cdot M$, where $V^{-1} = 1/V$ and $V^{-1} = 0$ when $V = 0$. As a stochastic integral with respect to a continuous local martingale, $B$ is a continuous local martingale, and if $V$ never vanishes,

$$[B]_t = [V^{-1} \cdot M]_t = \int_0^t (V^{-1}_s)^2\, d[M]_s = \int_0^t (V^{-1}_s)^2 V^2_s\, ds = t.$$

So $B$ is a Brownian motion by Theorem 3.3.1 and $M = V \cdot B$ a.s. However, this only works if $V$ does not become zero. If $V$ may vanish, let $Z$ be a Brownian motion independent of $\mathcal{F}$ with induced filtration $\mathcal{Z}$; then $\mathcal{G}_t = \sigma(\mathcal{F}_t \cup \mathcal{Z}_t)$ is a standard extension of both $\mathcal{F}$ and $\mathcal{Z}$. Therefore $V$ is $\mathcal{G}$-progressive, $M$ is a $\mathcal{G}$-local martingale, and $Z$ is a $\mathcal{G}$-Brownian motion. Let

$$B = V^{-1} \cdot M + U \cdot Z, \quad \text{where } U = 1\{V = 0\}.$$

Since $Z$ is independent of $M$, we get $[B]_t = \int_0^t \big((V^{-1}_s V_s)^2 + U^2_s\big)\, ds = t$, so $B$ is a Brownian motion. To see that $M = V \cdot B$, we note that $VU = 0$ and that $d[M]_s = V^2_s\, ds$ vanishes on $\{V_s = 0\}$, so

$$(V \cdot B)_t = \int_0^t V_s V^{-1}_s\, dM_s + \int_0^t V_s U_s\, dZ_s = M_t + 0 = M_t. \quad \Box$$

We proceed to prove Theorem 5.3.1.

Proof. Assume $I(\sigma) \subset Z(\sigma)$. Let $Y$ be a Brownian motion with respect to a filtration $\mathcal{G}$ and with initial distribution $\mu$. Define

$$A_s = \int_0^s \sigma^{-2}(Y_u)\, du, \quad s \ge 0.$$

Also define

$$\tau_t = \inf\{s \ge 0;\ A_s > t\}, \quad t \ge 0, \quad \text{and} \quad \tau_\infty = \inf\{s \ge 0;\ A_s = \infty\}.$$

Now let $R = \inf\{s \ge 0;\ Y_s \in I(\sigma)\}$. By Lemma 5.3.1, $R = \tau_\infty$. The process $A$ is continuous and strictly increasing for $s < R$, and hence $\tau$ is continuous, with $A_{\tau_t} = t$ for $t < A_R$ and $\tau_{A_s} = s$ for $s < R$ a.s. By the optional sampling theorem we have, for $0 \le t_1 \le t_2 < \infty$ and $n \in \mathbb{N}$,

$$E[Y_{\tau_{t_2} \wedge n} \mid \mathcal{G}_{\tau_{t_1}}] = Y_{\tau_{t_1} \wedge n},$$

that is, $E[X_{t_2 \wedge A_n} \mid \mathcal{G}_{\tau_{t_1}}] = X_{t_1 \wedge A_n}$ for the process $X_t = Y_{\tau_t}$. Since $A_n \to \infty$ as $n \to \infty$, $Y \circ \tau$ is a continuous local martingale. Also $(Y \circ \tau)^2_t - \tau_t$ is a continuous local martingale, and by the uniqueness of the quadratic variation we have $[Y \circ \tau]_t = \tau_t$ for $t \ge 0$; thus $[X]_t = \tau_t$. For $t \le A_R$,

$$\tau_t = \int_0^{\tau_t} \sigma^2(Y_u)\, \sigma^{-2}(Y_u)\, du = \int_0^{\tau_t} \sigma^2(Y_u)\, dA_u.$$

Then, by a change of variables,

$$\int_0^{\tau_t} \sigma^2(Y_u)\, dA_u = \int_0^t \sigma^2(Y_{\tau_v})\, dv = \int_0^t \sigma^2(X_v)\, dv.$$

Thus we get

$$\tau_t = \int_0^t \sigma^2(X_v)\, dv, \quad t \le A_R.$$

To show that this equality holds for all $t \ge 0$, first we note that $A_t = \infty$ for all $t \ge R$ by Lemma 5.3.1. Now,

$$\tau_t = \tau_\infty = R, \quad t \ge A_R.$$

To see that $\int_0^t \sigma^2(X_v)\, dv$ is also equal to $R$ for $t \ge A_R$, we first note that

$$X_t = X_{A_R} = Y_{\tau_{A_R}} = Y_R, \quad t \ge A_R.$$

Recalling that $R = \inf\{s \ge 0;\ Y_s \in I(\sigma)\}$ and the original assumption $I(\sigma) \subset Z(\sigma)$, we see that

$$\sigma(X_t) = \sigma(Y_R) = 0, \quad t \ge A_R.$$

Thus $\int_0^t \sigma^2(X_v)\, dv = \tau_t = R$ for $t \ge A_R$, which means that

$$\tau_t = [X]_t = \int_0^t \sigma^2(X_v)\, dv \quad \text{for all } t \ge 0.$$

By Lemma 5.3.2, there exists a Brownian motion $B$ such that $X_t = \int_0^t \sigma(X_v)\, dB_v$. So $X$ is a weak solution to the stochastic differential equation $dX_t = \sigma(X_t)\, dB_t$ with initial distribution $\mu$.

To prove the converse, let $x \in I(\sigma)$ and let $X$ be a solution to the stochastic differential equation $dX_t = \sigma(X_t)\, dB_t$ with $X_0 = x$. By the definition of stochastic integrals, $X$ is a continuous local martingale, and by Theorem 3.4.1 we have $X_t = Y_{[X]_t}$ for some Brownian motion $Y$ starting at $x$. Also,

$$[X]_t = [\sigma(X) \cdot B]_t = \int_0^t \sigma^2(X_u)\, d[B]_u = \int_0^t \sigma^2(X_u)\, du.$$

Let $\zeta_t = [X]_t$. For $s \ge 0$, define $A_s = \int_0^s \sigma^{-2}(Y_r)\, dr$. Then, for $t \ge 0$,

$$A_{\zeta_t} = \int_0^{\zeta_t} \sigma^{-2}(Y_r)\, dr = \int_0^t \sigma^{-2}(X_s)\, d\zeta_s \tag{5.5}$$
$$= \int_0^t \sigma^{-2}(X_s)\, d\Big(\int_0^s \sigma^2(X_u)\, du\Big) \tag{5.6}$$
$$= \int_0^t 1\{\sigma^2(X_s) > 0\}\, ds \le t. \tag{5.7}$$

Since $X_0 = x \in I(\sigma)$, Lemma 5.3.1 gives $A_s = \infty$ for $s > 0$, so $\zeta_t = 0$ a.s., which implies $X_t = x$ a.s. Further, $\zeta_t = \int_0^t \sigma^2(X_s)\, ds = 0$ a.s., and so $x \in Z(\sigma)$. $\Box$
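The construction in the first half of the proof is easy to carry out numerically when $\sigma$ is bounded away from zero, so that $I(\sigma)$ is empty. The following sketch (Python with NumPy; the coefficient $\sigma(x) = 1 + \tfrac12 \sin x$, the starting point $0$, and all grid sizes are arbitrary illustrative choices) builds $X_s = Y_{\tau_s}$ from a Brownian motion $Y$, with $\tau$ the inverse of $A_s = \int_0^s \sigma^{-2}(Y_u)\, du$, and compares the resulting sample of $X_1$ with an Euler-Maruyama simulation of $dX_t = \sigma(X_t)\, dB_t$; the two empirical distributions should roughly agree, since for this $\sigma$ the solution is unique in law.

```python
import numpy as np

rng = np.random.default_rng(7)

sigma = lambda x: 1.0 + 0.5 * np.sin(x)   # example coefficient; sigma >= 1/2, so I(sigma) is empty
n_paths = 2000

# --- Weak solution via time change: X_s = Y_{tau_s}, tau = inverse of A_s = int_0^s sigma^{-2}(Y_u) du
n_steps, T_y = 3000, 3.0                  # sigma^{-2} >= 4/9, so A_3 >= 4/3 > 1 on every path
dt = T_y / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
Y = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])
A = np.hstack([np.zeros((n_paths, 1)), np.cumsum(sigma(Y[:, :-1]) ** -2 * dt, axis=1)])
idx = np.argmax(A > 1.0, axis=1)          # grid index of tau_1 = inf{s; A_s > 1}
X_tc = Y[np.arange(n_paths), idx]         # time-changed value X_1 = Y_{tau_1}

# --- Reference: Euler-Maruyama applied directly to dX = sigma(X) dB on [0, 1]
m_steps = 2000
X_em = np.zeros(n_paths)
for _ in range(m_steps):
    X_em += sigma(X_em) * rng.normal(0.0, np.sqrt(1.0 / m_steps), size=n_paths)

print("time change    : mean %.3f  std %.3f" % (X_tc.mean(), X_tc.std()))
print("Euler-Maruyama : mean %.3f  std %.3f" % (X_em.mean(), X_em.std()))
```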
In Theorem 5.3.1 we proved a necessary and sufficient condition for weak existence for the stochastic differential equation $dX_t = \sigma(X_t)\, dB_t$. We now prove a necessary and sufficient condition for uniqueness in law.

Theorem 5.3.2. For every initial distribution $\mu$, the stochastic differential equation $dX_t = \sigma(X_t)\, dB_t$ has a solution which is unique in law if and only if $I(\sigma) = Z(\sigma)$, where $I(\sigma)$ is given by (5.3) and $Z(\sigma)$ by (5.4) in Theorem 5.3.1.

Proof. By Theorem 5.3.1, the condition $I(\sigma) \subset Z(\sigma)$ is needed for a solution to exist, so we assume $I(\sigma) \subset Z(\sigma)$ throughout.

To show that $I(\sigma) = Z(\sigma)$ is necessary for uniqueness in law, we prove the contrapositive: if $I(\sigma)$ is a proper subset of $Z(\sigma)$, we can create solutions that are not unique in law. To this end, let $I(\sigma) \subsetneq Z(\sigma)$ and fix $x \in Z(\sigma) \setminus I(\sigma)$. We can create a solution, as we did in Theorem 5.3.1, $X = Y \circ \tau$, where $Y$ is a Brownian motion starting at $x$, $\tau_t = \inf\{s \ge 0;\ A_s > t\}$ for $t \ge 0$, and $A_s = \int_0^s \sigma^{-2}(Y_r)\, dr$ for $s \ge 0$. To create another solution to the SDE, we let $\hat X_t \equiv x$, which is a solution since $x \in Z(\sigma)$. Both solutions $X$ and $\hat X$ have the same initial distribution $\delta_x$. However, they are not equal in distribution: the solution $\hat X$ is constant, while for the solution $X$, since $x \notin I(\sigma)$, Lemma 5.3.1 gives $A_s < \infty$ a.s. for all sufficiently small $s > 0$, so that $\tau_t > 0$ a.s. for $t > 0$, and $X$, as a time-changed Brownian motion, is a.s. not constant. So $X$ and $\hat X$ are not equal in law, and uniqueness in law fails.

Now we show that $I(\sigma) = Z(\sigma)$ is a sufficient condition for uniqueness in law. Since Theorem 5.3.1 already requires $I(\sigma) \subset Z(\sigma)$ for the existence of a solution, we only need to show that the additional inclusion $Z(\sigma) \subset I(\sigma)$ yields uniqueness in law. So let $Z(\sigma) \subset I(\sigma)$ and let $X$ be a solution to the SDE with initial distribution $\mu$. Again $X_t = Y_{\zeta_t}$, where $Y$ is a Brownian motion with initial distribution $\mu$ and $\zeta_t = \int_0^t \sigma^2(X_s)\, ds$ for $t \ge 0$. Define again $A_s = \int_0^s \sigma^{-2}(Y_r)\, dr$ for $s \ge 0$, and let $S = \inf\{t \ge 0;\ X_t \in I(\sigma)\}$. Then $\zeta_S = R := \inf\{r \ge 0;\ Y_r \in I(\sigma)\}$. Since $S$ is the first time $X$ is in $I(\sigma)$, and since $Z(\sigma) \subset I(\sigma)$, before time $S$ the process $X$ is in neither set; so, referring back to the argument in (5.5), we have for $t \le S$

$$A_{\zeta_t} = \int_0^{\zeta_t} \sigma^{-2}(Y_r)\, dr = \int_0^t 1\{\sigma^2(X_s) > 0\}\, ds = t.$$

We also know that $A_s = \infty$ for $s > R$ by Lemma 5.3.1, and so the argument in (5.5) implies $\zeta_t \le R$ a.s. for all $t$. So $\zeta$ is constant after time $S$. Now we can once again define $\tau_t = \inf\{s \ge 0;\ A_s > t\}$ for $t \ge 0$; then $\zeta_t = \tau_t$ for $t < S$, while $\zeta_t = R$ for $t \ge S$. This shows that $\zeta$ is a measurable function of $Y$. Furthermore, since $X_t = Y_{\zeta_t}$, $X$ is a measurable function of $Y$. Since $Y$ is a Brownian motion with initial distribution $\mu$, we know the distribution of $Y$. Since we can argue in the same way for any solution $X$, all solutions must have distributions determined by $\mu$ and $\sigma$. This proves uniqueness in law. $\Box$

Bibliography

[1] Daley, D.J. and Vere-Jones, D. (2008). An Introduction to the Theory of Point Processes, Vol. I & II. Springer, New York.
[2] Ikeda, N. and Watanabe, S. (1989). Stochastic Differential Equations and Diffusion Processes, 2nd ed. North-Holland, Amsterdam.
[3] Kallenberg, O. (2002). Foundations of Modern Probability, 2nd ed. Springer, New York.
[4] Karatzas, I. and Shreve, S.E. (1991). Brownian Motion and Stochastic Calculus, 2nd ed. Springer, New York.