s}, F_{τ_s} = G_s, s ≥ 0. Then there exists a Brownian motion B such that B = M∘τ a.s. on [0, [M^1]_∞) with respect to a standard extension of G, and M = B∘[M^1] a.s.

We omit the proof. However, the isotropic condition leads to a proof very similar to that of the one-dimensional case. It is important to note that in this case we needed only a single time-change process to transform our local martingale.

Our next result uses a weaker assumption but also has a weaker assertion; it gives another way to extend Theorem 3.4.1 to higher dimensions. We define continuous local martingales M^1, ..., M^d to be strongly orthogonal if [M^i, M^j] = 0 a.s. for all i, j ∈ {1, ..., d} with i ≠ j. Under the weaker assumption of strong orthogonality, we must use an individual process of optional times to transform each component of the local martingale into a Brownian motion.

Theorem 3.5.2. Let M^1, ..., M^d be strongly orthogonal continuous local martingales starting at zero, and define

    τ^i_s = inf{t ≥ 0; [M^i]_t > s},  s ≥ 0,  1 ≤ i ≤ d,

where τ^i_s is an optional time for each i and s. Then the processes

    B^i_s = M^i∘τ^i_s,  s ≥ 0,  1 ≤ i ≤ d,

are independent one-dimensional Brownian motions.

Clearly the individual components are transformed into Brownian motions by our proof of the one-dimensional case. However, we need these one-dimensional Brownian motions to be independent in order to combine them into a Brownian motion in R^d. This can be achieved by looking at the filtrations induced by the Brownian motions rather than the filtrations F_{τ^i_s}. We omit the proof of Theorem 3.5.2; a full proof can be found in [4].

Chapter 4

Time Change of Point Processes

The main result of this chapter, similar to Theorem 3.4.1, shows that a random time change can be used to transform a point process into a Poisson process. To do this, we introduce some more notation and definitions.

4.1 Random Measures and Point Processes

Let (Ω, A) be a probability space and (S, S) a measurable space.
A random measure ξ on S is defined as a mapping ξ: Ω × S → [0, ∞] such that ξ(ω, B) is an A-measurable random variable for fixed B ∈ S and a locally finite measure for fixed ω ∈ Ω. We define a point process as a random measure ξ on R^d such that ξB is integer-valued for every bounded Borel set B. For a stationary random measure ξ on R we have Eξ = cλ, where 0 ≤ c ≤ ∞ and λ is Lebesgue measure; Eξ is called the intensity measure of ξ and c the rate. Define M(S) to be the space of all σ-finite measures on a measurable space S. A Poisson process with intensity μ ∈ M(R^d) is defined to be a point process ξ with independent increments such that ξB is a Poisson random variable with mean μB whenever μB < ∞. A point process ξ with ξ{s} ≤ 1 for all s ∈ R^d outside a fixed P-null set is called simple, and a Poisson process has unit rate if its rate equals one.

We now assume that the underlying probability space has a filtration that is not only right-continuous but also complete, and let (S, S) be a Borel space. The predictable σ-field P in the product space Ω × R_+ is defined as the σ-field generated by all continuous, adapted processes on R_+. A process V on Ω × R_+ × S is predictable if it is P ⊗ S-measurable, where P denotes the predictable σ-field in Ω × R_+. We mention without proof the facts that the predictable σ-field is also generated by all left-continuous adapted processes and that every predictable process is progressive.

4.2 Doob-Meyer Decomposition

Another concept needed for our main result is the compensator process. First we define compensators in relation to the Doob-Meyer decomposition of submartingales, and then we extend the notion to random measures.

Theorem 4.2.1 (Doob-Meyer Decomposition). Any local submartingale X has an a.s. unique decomposition X = M + A, where M is a local martingale and A is a locally integrable, nondecreasing, predictable process starting at 0.

The proof is omitted, since it is very involved and would distract from the main topic of time change; we refer to [3] for a detailed proof.
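To make the decomposition concrete, recall the standard fact that a rate-λ Poisson process N is a submartingale whose compensator is the deterministic process A_t = λt, so that N_t − λt is a martingale and in particular has mean zero. The following minimal Python sketch checks the zero-mean property numerically; the simulation routine and all parameter values are illustrative assumptions, not part of the development above.

```python
import random

def poisson_count(lam, t, rng):
    """Number of arrivals on (0, t] of a rate-lam Poisson process,
    built from independent Exp(lam) inter-arrival times."""
    total, n = 0.0, 0
    while True:
        total += rng.expovariate(lam)
        if total > t:
            return n
        n += 1

rng = random.Random(0)
lam, t, paths = 2.0, 5.0, 20_000
# Monte Carlo estimate of E[N_t - lam*t]; should be close to zero.
mean_diff = sum(poisson_count(lam, t, rng) - lam * t for _ in range(paths)) / paths
```

With 20,000 paths the estimate should come out close to zero; the Monte Carlo error is of order sqrt(λt / paths) ≈ 0.02 here.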
The process A in the above theorem is called the compensator of the submartingale X. We want to extend compensators to random measures. Let ξ be a random measure on R_+ and introduce the associated cumulative process N_t(ω) = ξ((0, t], ω). The process N has right-continuous, a.s. nondecreasing paths, so N is a submartingale. We can therefore apply the Doob-Meyer decomposition to N to get its compensator A, which is also the cumulative process of a random measure. We will use compensators similarly to the way the quadratic variation process was used in Theorem 3.4.1, namely to define our process of optional times.

4.3 Time Change of Point Processes

We now move on to prove our main result: a process of optional times can be used to transform a point process into a Poisson process. Before stating the main result, we need several important theorems. This approach is from [1]. The first of these uses only some basic analysis; however, we will soon relate it to probability.

Theorem 4.3.1. Let f be an increasing, right-continuous function on R_+ with f(0) = 0, and let u be a measurable function with ∫_0^t |u(x)| df(x) < ∞ for each t > 0. Let

    Δf(t) = f(t) − f(t−),    f_c(t) = f(t) − Σ_{s≤t} Δf(s).

Then the integral equation

    h(t) = h(0) + ∫_0^t h(s−) u(s) df(s)

has the unique solution

    h(t) = h(0) ∏_{0<s≤t} (1 + u(s) Δf(s)) exp(∫_0^t u(s) df_c(s)).

We also need the following characterization, which we state without proof (Theorem 4.3.2): a simple point process on R_+ is a unit-rate Poisson process if and only if its compensator A satisfies A_t = t for all t ≥ 0 a.s. We can now state the main result of this chapter.

Theorem 4.3.3. Let N be the cumulative process of a simple point process ξ on R_+ whose compensator A is a.s. continuous with A_∞ = ∞, and define τ_s = inf{t ≥ 0; A_t > s}, s ≥ 0. Then the re-scaled process N∘τ_s = ξ(0, τ_s], s ≥ 0, is a unit-rate Poisson process.
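Before the proof, here is a numerical sanity check of this rescaling. Take a hypothetical intensity λ(t) = 1 + t, so the compensator is the deterministic function A_t = t + t²/2; mapping the arrival times of the corresponding inhomogeneous Poisson process through A should yield arrivals with unit mean spacing. Everything below (the thinning generator, the chosen intensity, the parameters) is an illustrative assumption.

```python
import random

def inhom_poisson(T, lam, lam_max, rng):
    """Arrival times on (0, T] of a Poisson process with intensity lam(t),
    generated by thinning a homogeneous process of rate lam_max."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)
        if t > T:
            return times
        if rng.random() < lam(t) / lam_max:
            times.append(t)

rng = random.Random(1)
T = 40.0
lam = lambda t: 1.0 + t          # assumed intensity
A = lambda t: t + t * t / 2.0    # its compensator A_t = integral of lam
arrivals = inhom_poisson(T, lam, 1.0 + T, rng)
rescaled = [A(s) for s in arrivals]               # time-changed arrival times
gaps = [b - a for a, b in zip([0.0] + rescaled[:-1], rescaled)]
mean_gap = sum(gaps) / len(gaps)                  # should be close to 1
```

The mean spacing of the rescaled arrivals estimates the reciprocal rate of the time-changed process, and should come out near 1, as the theorem predicts.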
Proof. Referring back to Theorem 2.1.1, we see that τ_s is right-continuous and N∘τ_s is F_{τ_s}-adapted. Further, by continuity of A, τ can have jumps at only countably many s ≥ 0. By the definitions of N and A, N∘τ can increase only by integer-valued jumps. Since τ is right-continuous with left limits, the only jumps in τ occur where A is constant. Assume A is constant over an interval (a, b]; by the martingale property of compensators,

    E[N_b − N_a | F_a] = A_b − A_a = 0.

So, given F_a, we have N_b − N_a = 0 a.s. Thus N∘τ has no jumps at those times s where τ is discontinuous. Since ξ is simple, where τ is continuous N∘τ can increase only by unit jumps. Therefore N∘τ is simple.

Referring to Theorem 4.3.2, we only need to show that N∘τ_s has compensator s. By definition, A∘τ_s = s for all s ≥ 0. Recalling that τ_s is an optional time for each s ≥ 0, we can apply the optional sampling theorem: for 0 ≤ s ≤ t,

    E[N∘τ_t − t | F_{τ_s}] = E[N∘τ_t − A∘τ_t | F_{τ_s}] = N∘τ_s − A∘τ_s = N∘τ_s − s.

So N∘τ_s − s is an F_{τ_s}-martingale, and by the uniqueness of the compensator, s is the compensator of N∘τ_s. □

Chapter 5

Application of Time Change to Stochastic Differential Equations

In this last chapter we discuss an application of the previous ideas on random time change to the area of stochastic differential equations (SDEs). First we define stochastic differential equations and some basic related concepts. Then we discuss the concept of Brownian local time. Lastly, we construct solutions to certain SDEs using optional times, proving Engelbert and Schmidt's necessary and sufficient conditions for solutions to certain SDEs.

5.1 Stochastic Differential Equations

Our theorems involve stochastic differential equations, abbreviated SDEs, of the basic form

    dX_t = σ(X_t) dB_t + b(X_t) dt,    (5.1)

where B is a one-dimensional Brownian motion and σ and b are measurable functions on R. For our purposes we define stochastic differential equations only in the one-dimensional case, but the concept extends to higher dimensions; we refer to [2] for more information on general SDEs. We define a weak solution of the stochastic differential equation with initial distribution μ to be a process X, a probability space (Ω, F, P), a Brownian motion B, and a random variable ξ with L(ξ) = μ, such that X satisfies (5.1) for (Ω, F, P), B, and X_0 = ξ. Further, weak existence holds for a stochastic differential equation provided there is a weak solution to the SDE. Uniqueness in law means that any two weak solutions with initial distribution μ have the same distribution. It is also often possible to remove the drift term from the above SDE by either a change of the underlying probability measure or a change of the state space.
In this way we can reduce our SDE to

    dX_t = σ(X_t) dB_t.

Using this SDE without a drift term, it is possible to construct weak solutions by random time change; we will discuss this after we introduce Brownian local time. For further discussion and proofs on removing the drift term we refer to [2].

5.2 Brownian Local Time

Let B be a Brownian motion and x ∈ R. To gain information about the time a path of B spends near x, we might look at the set {t ≥ 0; B_t(ω) = x}; however, this set has Lebesgue measure zero. So, in order to measure the time a Brownian path spends around a point x, we introduce the process L.

Theorem 5.2.1. Let B be a Brownian motion. Then there exists an a.s. jointly continuous process L^x_t on R_+ × R such that, for every Borel set A of R and t ≥ 0,

    ∫_0^t 1{B_s ∈ A} ds = ∫_A L^x_t dx.

The process L defined in the theorem above is called the local time of the Brownian motion B. We can also represent the local time at a point x ∈ R of any semimartingale X by the following formula, due to Tanaka:

    L^x_t = |X_t − x| − |X_0 − x| − ∫_0^t sgn(X_s − x) dX_s,  t ≥ 0,

where sgn(x) = 1 for x > 0 and sgn(x) = −1 for x ≤ 0.

Next we define a nondecreasing, measurable, adapted process A to be a continuous additive functional if

    A_{s+t} = A_s + A_t∘θ_s  a.s.,  s, t ≥ 0,

where θ_s denotes the shift operator, s ≥ 0. Now we state without proof the relationship between continuous additive functionals of Brownian motion and local time of Brownian motion.

Theorem 5.2.2. For a Brownian motion X in R with local time L, a process A is a continuous additive functional of X iff it has the a.s. representation

    A_t = ∫_{−∞}^{∞} L^x_t ν(dx),  t ≥ 0,

for some locally finite measure ν on R.

Refer to [3] for more information about continuous additive functionals and local time of Brownian motion.

5.3 Application of Time Change to SDEs

In this section we use random time change to create weak solutions to SDEs in the one-dimensional case,

    dX_t = σ(X_t) dB_t,    (5.2)

with initial distribution μ. We now give an informal construction of weak solutions to (5.2).
To do this, first let Y be a Brownian motion with respect to some filtration F, and let X_0 be an F_0-measurable random variable with distribution μ. Consider the continuous process Z_t = X_0 + Y_t, t ≥ 0. Using Z we define the increasing process

    A_t = ∫_0^t σ^{−2}(Z_s) ds,  t ≥ 0.

Now we create the inverse process

    τ_s = inf{t ≥ 0; A_t > s},  s ≥ 0.

Referring to Theorem 2.1.1, we see that τ is a process of optional times. Now, for X_s = Z∘τ_s with filtration G_s = F_{τ_s}, we can find a Brownian motion B with respect to G such that X and B form a weak solution to dX_t = σ(X_t) dB_t with initial distribution μ.

Problems with this construction could occur depending on the measurable function σ. Also, we have not described how the Brownian motion B is found. To answer these questions, we will use this construction formally to prove Engelbert and Schmidt's theorem, which gives the exact conditions σ must satisfy for a weak solution to exist. In the following proofs we remove the condition that a Brownian motion B must have B_0 = 0. This allows our Brownian motion to have initial distribution μ and removes the need for the random variable X_0 in the above construction.

Theorem 5.3.1. The SDE dX_t = σ(X_t) dB_t has a weak solution for every initial distribution if and only if I(σ) ⊆ Z(σ), where

    I(σ) = {x ∈ R; lim_{ε→0} ∫_{x−ε}^{x+ε} dy / σ²(y) = ∞},    (5.3)

and

    Z(σ) = {x ∈ R; σ(x) = 0}.    (5.4)

First we prove a lemma relating additive functionals of Brownian local time to the measure of intervals around points visited by the Brownian path.

Lemma 5.3.1. Let L be the local time of a Brownian motion B with arbitrary initial distribution, and let ν be a measure on R. Define

    A_t = ∫ L^x_t ν(dx),  t ≥ 0,

and

    S_ν = {x ∈ R; lim_{ε→0} ν(x − ε, x + ε) = ∞}.

Then a.s.

    inf{s ≥ 0; A_s = ∞} = inf{s ≥ 0; B_s ∈ S_ν}.

Proof. Let t > 0, and let R be the event where B_s ∉ S_ν on [0, t]. Now L^x_t = 0 a.s. for x outside the range of B on [0, t]. Then we get, a.s.
on R,

    A_t = ∫_{−∞}^{∞} L^x_t ν(dx) ≤ ν(B[0, t]) sup_x L^x_t < ∞,

since the range of B on [0, t] is compact, being the continuous image of a compact set, and so has finite ν-measure on the event R, while L^x_t is a.s. continuous and hence bounded on a closed interval. Conversely, assume that B_s ∈ S_ν for some s ≤ t, say B_s = a. Then L^a_t > 0 a.s. by Tanaka's formula, so by the continuity of L with respect to x we get, for some ε > 0,

    A_t = ∫_{−∞}^{∞} L^x_t ν(dx) ≥ ν(a − ε, a + ε) inf_{|x−a|<ε} L^x_t = ∞.  □

We also need the following lemma, which shows that every continuous local martingale M with absolutely continuous quadratic variation can be represented as a stochastic integral with respect to a Brownian motion B.

Lemma 5.3.2. Let M be a continuous local F-martingale with M_0 = 0 and [M]_t = ∫_0^t V_s² ds a.s. for some F-progressive process V. Then there exists a Brownian motion B with respect to a standard extension of F such that M = V·B a.s.

Proof. Define B = V^{−1}·M, where V^{−1} = 1/V and V^{−1} = 0 if V = 0. A stochastic integral with respect to a continuous local martingale is itself a continuous local martingale, and

    [B]_t = [V^{−1}·M]_t = ∫_0^t (V^{−1}_s)² d[M]_s = ∫_0^t (V^{−1}_s)² V_s² ds = t.

So B is a Brownian motion by Theorem 3.3.1, and M = V·B a.s. However, this only works if V does not vanish. If V should vanish, define Z to be a Brownian motion independent of F, with induced filtration H; then G = {F, H} is a standard extension of both F and H. Therefore V is G-progressive, M is a G-local martingale, and Z is a G-Brownian motion. Let B = V^{−1}·M + U·Z, where U = 1{V = 0}. Now B is a Brownian motion. To see that M = V·B, we note that VU = 0, so

    (V·B)_t = ∫_0^t V_s V^{−1}_s dM_s + ∫_0^t V_s U_s dZ_s = M_t + 0 = M_t.  □

We proceed to prove Theorem 5.3.1.

Proof. Assume I(σ) ⊆ Z(σ). Let Y be a Brownian motion with respect to a filtration G and with initial distribution μ. Define

    A_s = ∫_0^s σ^{−2}(Y_u) du,  s ≥ 0.

Also define

    τ_t = inf{s ≥ 0; A_s > t},  t ≥ 0.

Now let R = inf{s ≥ 0; Y_s ∈ I(σ)}. By Lemma 5.3.1, applied with ν(dy) = σ^{−2}(y) dy, we have R = inf{s ≥ 0; A_s = ∞} a.s. The process A is continuous and strictly increasing for s < R, and hence τ∘A_s = s a.s. for s < R.
By the optional sampling theorem, we have for 0 ≤ t_1 ≤ t_2 < ∞,

    E[Y∘τ_{t_2∧A_n} | G_{τ_{t_1}}] = E[Y_{τ_{t_2}∧n} | G_{τ_{t_1}}] = Y_{τ_{t_1}∧n} = Y∘τ_{t_1∧A_n}.

Since A_n → ∞ as n → ∞, Y∘τ is a continuous local martingale. Also (Y∘τ_t)² − τ_t is a continuous local martingale, and by the uniqueness of quadratic variation we have [Y∘τ]_t = τ_t for t ≥ 0. Define X_t = Y∘τ_t; then [X]_t = τ_t. For t ≤ A_R,

    τ_t = ∫_0^{τ_t} σ²(Y_u) σ^{−2}(Y_u) du = ∫_0^{τ_t} σ²(Y_u) dA_u.

Then, by a change of variables,

    ∫_0^{τ_t} σ²(Y_u) dA_u = ∫_0^t σ²(Y∘τ_u) dA∘τ_u = ∫_0^t σ²(X_u) du.

Thus we get

    τ_t = ∫_0^t σ²(X_u) du,  t ≤ A_R.

To show that equality holds for all t ≥ 0, first note that A_t = ∞ for all t ≥ R by Lemma 5.3.1. Hence

    τ_t = R,  t ≥ A_R.

To see that ∫_0^t σ²(X_u) du is also equal to R for t ≥ A_R, we first note that

    X_t = X_{A_R} = Y∘τ_{A_R} = Y_R,  t ≥ A_R.

Recalling that R = inf{s ≥ 0; Y_s ∈ I(σ)} and the original assumption I(σ) ⊆ Z(σ), we see that

    σ(X_t) = σ(Y_R) = 0,  t ≥ A_R.

Thus ∫_0^t σ²(X_u) du = τ_t = R for t ≥ A_R, which means that τ_t = [X]_t = ∫_0^t σ²(X_u) du for all t ≥ 0. By Lemma 5.3.2, there exists a Brownian motion B such that X_t = ∫_0^t σ(X_u) dB_u. So X is a weak solution to the stochastic differential equation dX_t = σ(X_t) dB_t with initial distribution μ.

To prove the converse, let x ∈ I(σ) and let X be a solution to the stochastic differential equation dX_t = σ(X_t) dB_t with X_0 = x. By the definition of stochastic integrals, X is a continuous local martingale, and by Theorem 3.4.1 we have X_t = Y∘[X]_t for some Brownian motion Y. Also,

    [X]_t = [σ(X)·B]_t = ∫_0^t σ²(X_u) d[B]_u = ∫_0^t σ²(X_u) du.

Let τ_t = [X]_t. For s ≥ 0, define A_s = ∫_0^s σ^{−2}(Y_r) dr. Then, for t ≥ 0,

    A∘τ_t = ∫_0^{τ_t} σ^{−2}(Y_r) dr = ∫_0^t σ^{−2}(X_s) dτ_s    (5.5)
          = ∫_0^t σ^{−2}(X_s) d(∫_0^s σ²(X_u) du)    (5.6)
          = ∫_0^t 1{σ²(X_s) > 0} ds ≤ t.    (5.7)

Since X_0 = x ∈ I(σ), Lemma 5.3.1 gives A_s = ∞ for s > 0, so τ_t = 0 a.s., which implies X_t = x a.s. Further, τ_t = ∫_0^t σ²(X_s) ds = 0 a.s., and so x ∈ Z(σ). □

In Theorem 5.3.1 we have just proved a necessary and sufficient condition for weak existence of the stochastic differential equation dX_t = σ(X_t) dB_t. We now prove a necessary and sufficient condition for uniqueness in law.
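Before turning to uniqueness, the time-change construction in the existence proof can be illustrated numerically. The sketch below uses a hypothetical coefficient σ(x) = √(1 + x²) (for which I(σ) = Z(σ) = ∅), builds A, its inverse τ, and X = Y∘τ on a crude Euler grid, and checks the identity [X]_t = ∫_0^t σ²(X_u) du from the proof. The step sizes and the discrete read-off of τ are rough illustrative choices, not part of the argument.

```python
import random

rng = random.Random(2)
dt, n = 1e-4, 200_000
sigma = lambda x: (1.0 + x * x) ** 0.5   # hypothetical sigma, nowhere zero

# Brownian path Y from 0, together with A_s = integral of sigma^{-2}(Y_u) du
path, y, acc = [], 0.0, 0.0
for _ in range(n):
    path.append((acc, y))                # record (A_s, Y_s)
    y += rng.gauss(0.0, dt ** 0.5)
    acc += dt / sigma(y) ** 2

# X_t = Y∘tau_t with tau_t = inf{s; A_s > t}, read off on a t-grid
X, i, t, t_max = [], 0, 0.0, 0.5 * path[-1][0]
while t < t_max:
    while path[i][0] <= t:               # advance to the first s with A_s > t
        i += 1
    X.append(path[i][1])
    t += dt

# compare both sides of [X]_t = integral of sigma^2(X_u) du
qv = sum((b - a) ** 2 for a, b in zip(X, X[1:]))
integral = dt * sum(sigma(x) ** 2 for x in X[:-1])
```

On this grid the realized quadratic variation qv and the integral should agree approximately, which is the identity τ_t = ∫_0^t σ²(X_u) du in discrete form.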
Theorem 5.3.2. For every initial distribution μ, the stochastic differential equation dX_t = σ(X_t) dB_t has a solution that is unique in law iff I(σ) = Z(σ), where I(σ) is given by (5.3) and Z(σ) by (5.4) in Theorem 5.3.1.

Proof. By Theorem 5.3.1, I(σ) ⊆ Z(σ) is the necessary and sufficient condition for a solution to exist, so we must assume I(σ) ⊆ Z(σ) in order to have a solution at all. To show that I(σ) = Z(σ) is necessary for uniqueness in law, we prove the contrapositive: if I(σ) is a proper subset of Z(σ), we can create solutions that are not unique in law. To this end, let I(σ) ⊊ Z(σ) and pick x ∈ Z(σ)\I(σ). We can create a solution as in Theorem 5.3.1: X = Y∘τ, where Y is a Brownian motion starting at x, τ_t = inf{s > 0; A_s > t} for t ≥ 0, and A_s = ∫_0^s σ^{−2}(Y_r) dr for s ≥ 0. To create another solution to the SDE, let X̂_t ≡ x, which is a solution since x ∈ Z(σ). Both solutions X and X̂ have the same initial distribution. However, they are not equal in distribution: the solution X̂ is constant, while for the solution X, since x ∉ I(σ), we have A_s < ∞ a.s. for s > 0 by Lemma 5.3.1, so by definition τ_t > 0 a.s. for t > 0, and X, as a time-changed Brownian motion, is a.s. not constant. So X and X̂ are not unique in law.

Now we show that I(σ) = Z(σ) is a sufficient condition for uniqueness in law. Since Theorem 5.3.1 already requires I(σ) ⊆ Z(σ) for the existence of a solution, the additional content of the assumption is the inclusion Z(σ) ⊆ I(σ). So assume I(σ) = Z(σ), and let X be a solution to the SDE with initial distribution μ. Again X_t = Y∘τ_t, where Y is a Brownian motion with initial distribution μ and τ_t = ∫_0^t σ²(X_s) ds for t ≥ 0. Define again A_s = ∫_0^s σ^{−2}(Y_r) dr for s ≥ 0, and S = inf{t ≥ 0; X_t ∈ I(σ)}. Now τ_S = R = inf{r ≥ 0; Y_r ∈ I(σ)}.
Since S is the first time X is in I(σ), and since I(σ) = Z(σ), before time S the process X is in neither set, and so, referring back to the argument in (5.5)-(5.7), we have for t ≤ S

    A∘τ_t = ∫_0^{τ_t} σ^{−2}(Y_r) dr = ∫_0^t 1{σ²(X_s) > 0} ds = t.

We also know that A_s = ∞ for s ≥ R by Lemma 5.3.1, and so the argument in (5.5)-(5.7) implies τ_t ≤ R a.s. for all t. So τ is constant after time S. Now we can once again write τ_t = inf{s > 0; A_s > t} for t ≥ 0. This shows that τ is a measurable function of Y. Furthermore, since X_t = Y∘τ_t, X is a measurable function of Y. Since Y is a Brownian motion with initial distribution μ, we know the distribution of Y. Since the same argument applies to any solution X, all solutions have distributions determined by μ. This proves uniqueness in law. □

Bibliography

[1] Daley, D.J. and Vere-Jones, D. (2008). An Introduction to the Theory of Point Processes, Vol. I & II. Springer, New York.

[2] Ikeda, N. and Watanabe, S. (1989). Stochastic Differential Equations and Diffusion Processes, 2nd ed. North-Holland, Amsterdam.

[3] Kallenberg, O. (2002). Foundations of Modern Probability, 2nd ed. Springer, New York.

[4] Karatzas, I. and Shreve, S.E. (1991). Brownian Motion and Stochastic Calculus, 2nd ed. Springer, New York.