A family of non-Gaussian martingales with Gaussian marginals (Q2478414)

From MaRDI portal. Scientific article (English), 28 March 2008.
1. Introduction. In an attempt to uphold two desirable properties of Brownian motion (having the Markov property as well as Gaussian marginals), the authors propose a rich family of Markov processes which are martingales and have Gaussian marginals. They also exhibit some properties of this family, which has the potential to find applications in many fields, including finance. After giving a brief survey of related results and ideas in the literature, the authors note that their approach differs from all ``of the above'' and produces a rich family of processes rather than a single process. They also note that the richness of the family has the potential to allow for the imposition of specifications other than that of prescribed marginal distributions, and that although their method can be extended to include other types of marginal distributions, they choose to focus solely on the Gaussian case. Finally, they comment that all existing approaches yield discontinuous processes (barring Brownian motion itself), and that the question of the existence of a non-Gaussian continuous martingale with Gaussian marginals remains open.

The starting point of their construction is the observation that for any triple \((R,Y,\xi)\) of independent random variables such that \(R\) takes values in \((0,1]\), \(\xi\) is standard Gaussian \(N(0,1)\) and \(Y\sim N(0,\alpha^2)\), the random variable \[ Z=\sigma(\sqrt{R}\,Y+\alpha\sqrt{1-R}\,\xi)\sim N(0,\sigma^2\alpha^2).\eqno(1) \] Moreover, 1) the joint distribution of \((Y,Z)\) is bivariate Gaussian if and only if \(R\) is nonrandom, 2) the martingale property (\(Y=E(Z\,|\,Y)\)) of the two-period process \((Y,Z)\) holds if and only if \(E\sqrt{R}=1/\sigma\), and 3) the conditional distribution of \(Z\) given \(Y=y\) is \[ F_{Z|\,Y=y}(dz)=\mathbb{P}[R=1]\,\varepsilon_{\sigma y}(dz)+ \mathbb{E}[\phi(\sigma \sqrt{R}\,y,\alpha^2\sigma^2(1-R),z)1_{R<1}]\,dz,\eqno(2) \] where \(\varepsilon_x\) is the Dirac measure at \(x\) and \(\phi(\mu,\sigma^2,\cdot)\) denotes the density of the Gaussian distribution with mean \(\mu\) and variance \(\sigma^2\). (A numerical illustration of (1) and of the martingale condition is sketched below.)
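As a quick numerical illustration of observation (1) and of the martingale condition \(E\sqrt{R}=1/\sigma\), the following sketch (not from the paper) uses an arbitrary two-point law for \(R\) chosen so that \(E\sqrt{R}=1/\sigma\); any such choice with random \(R\) yields a Gaussian \(Z\) satisfying \(E(Z\,|\,Y)=Y\) while \((Y,Z)\) fails to be bivariate Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
alpha, sigma = 1.0, 1.5

# Illustrative two-point law for R on (0,1] with E[sqrt(R)] = 1/sigma:
# R takes the values {1, r0} with P[R = 1] = p, p + (1 - p)*sqrt(r0) = 1/sigma.
r0 = 0.1
p = (1.0 / sigma - np.sqrt(r0)) / (1.0 - np.sqrt(r0))
R = np.where(rng.random(n) < p, 1.0, r0)

Y = alpha * rng.standard_normal(n)   # Y ~ N(0, alpha^2)
xi = rng.standard_normal(n)          # xi ~ N(0, 1), independent of (R, Y)
Z = sigma * (np.sqrt(R) * Y + alpha * np.sqrt(1.0 - R) * xi)

print(Z.mean(), Z.var())             # ~ 0 and ~ sigma^2 * alpha^2, as in (1)

# Martingale check E[Z | Y] = Y: the mean of Z - Y within bins of Y is ~ 0.
bins = np.digitize(Y, np.linspace(-3.0, 3.0, 25))
print(max(abs((Z - Y)[bins == k].mean()) for k in range(26) if np.any(bins == k)))
```

Here \(Z\) is exactly Gaussian by (1); only the Monte Carlo moments are approximate.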
2. The construction of the family. Theoretical aspects. Here the above construction of a two-step process is extended to that of a continuous-time Markov process, or rather a family of Markov martingales \(X_t\), the marginals of which are Gaussian with mean zero and variance \(t\). The process \(X_t\) is constructed as an inhomogeneous Markov process with transition function given by (2) and admits the following almost sure representation \[ X_t=\sqrt{\frac ts}\left(\sqrt{R_{s,t}}\,X_s+ \sqrt{s}\sqrt{1-R_{s,t}}\,\xi_{s,t}\right)\eqno(3) \] (compare with (1)), where \(X_s\), \(R_{s,t}\) and \(\xi_{s,t}\) are assumed to be independent, and \(R_{s,t}\) is assumed to take values in \((0,1]\) and to have a distribution that depends on \((s,t)\) only through \(\sqrt{t/s}\) and for which \(E[\sqrt{R_{s,t}}]=\sqrt{s/t}\). Finally, \(\xi_{s,t}\) is assumed to be standard Gaussian \(N(0,1)\). The main result of the paper is stated as Theorem 2.5, the proof of which is broken into several propositions; to make its formulation understandable, one first has to become familiar with some concepts and assertions. First of all, for a family of transition functions given by (2) to define a (Markov) process, the authors require that the distribution of \(R_{s,t}\) generate a so-called log-convolution semigroup.

Definition 2.1. The family of distributions \((G_{\sigma})_{\sigma\geq 1}\) on \((0,\infty)\) is a log-convolution semigroup if \(G_1=\varepsilon_1\) and the distribution of the product of any two independent random variables with distributions \(G_{\sigma}\) and \(G_{\tau}\) is \(G_{\sigma\tau}\).

The authors then state a result (whose straightforward proof is left to the reader) showing the relationship between log-convolution and convolution semigroups, recalling the notion of a convolution semigroup \(K=(K_p)_{p\geq 0}\) (\(K_0=\varepsilon_0\), \(K_p * K_q=K_{p+q}\)).

Proposition 2.2. Let \((G_{\sigma})_{\sigma\geq 1}\) be a log-convolution semigroup on \((0,1]\) and, for \(\sigma\geq 1\), let \(R_{\sigma}\) be a random variable with distribution \(G_{\sigma}\). If \(K_p\), \(p\geq 0\), denotes the distribution of \(V_p=-\ln R_{e^p}\), then \((K_p)_{p\geq 0}\) is a convolution semigroup. Conversely, let \(K_p\), \(p\geq 0\), be a convolution semigroup and, for \(p\geq 0\), let \(V_p\) be a random variable with distribution \(K_p\). If \(G_{\sigma}\) denotes the distribution of \(R_{\sigma}=e^{-V_{\ln \sigma}}\), then \((G_{\sigma})_{\sigma\geq 1}\) is a log-convolution semigroup.

In the next proposition, the authors check that the Chapman-Kolmogorov equation is satisfied, thus guaranteeing the existence of the process \(X_t\).

Proposition 2.3. Define, for \(x\in \mathbb{R}\), \(s>0\) and \(t=\sigma^2 s\geq s\), \(P_{s,t}(x,dy)\) as \[ P_{0,t}(x,dy)=\frac 1{\sqrt{2\pi t}}\exp\left(-\frac{(y-x)^2}{2t}\right)dy, \] \[ P_{s,t}(x,dy)=\gamma(\sigma)\,\varepsilon_{\sigma x}(dy)+\left[\int_{(0,1)} \frac 1{\sqrt{2\pi t(1-r)}}\exp\left(-\frac{(y-\sigma \sqrt{r}\,x)^2}{2t(1-r)}\right) G_{\sigma}(dr)\right]dy,\eqno(4) \] where \(\gamma(\sigma)=G_{\sigma}(\{1\})\). If \((G_{\sigma})_{\sigma\geq 1}\) is a log-convolution semigroup on \((0,1]\), then the Chapman-Kolmogorov equation holds, i.e. for any \(u>t>s>0\) and any \(x\), \[ \int P_{s,t}(x,dy)P_{t,u}(y,dz)=P_{s,u}(x,dz). \]

The convolution semigroup \(K\) in Proposition 2.2 defines a subordinator (a process with positive, independent and stationary increments, i.e. an increasing Lévy process). The next proposition introduces the last needed notion, the so-called Laplace exponent \(\psi\) of the log-convolution semigroup \((G_{\sigma})_{\sigma\geq 1}\): \[ \psi(\lambda)=\beta\lambda+\int_0^{\infty}(1-e^{-\lambda x})\,\nu(dx);\eqno(5) \] as observed earlier, the requirement that \(X\) be a martingale translates into the condition \(\mathbb{E}[\sqrt{R}]=1/\sigma\), which in turn, taking \(\lambda=1/2\) in (6) below, reduces to \(\psi(1/2)=1\). The proof of the next proposition uses the above observation and is a straightforward application of the classical Lévy-Khinchin theorem on subordinators. (A small simulation of the correspondence in Proposition 2.2 is sketched below.)
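Proposition 2.2 is easy to probe numerically. The following sketch (an illustration, not code from the paper) builds \(R_{\sigma}=e^{-V_{\ln \sigma}}\) from a gamma convolution semigroup and checks the defining property of Definition 2.1, namely that the product of independent draws from \(G_{\sigma}\) and \(G_{\tau}\) is distributed as \(G_{\sigma\tau}\). The parameters \(a,b\) are arbitrary here and do not enforce \(\psi(1/2)=1\); the gamma case of Section 4.4 below pins down \(a=1/\ln(1+1/(2b))\):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10**6
a, b = 2.0, 1.0   # arbitrary illustrative parameters

def sample_R(sig, size):
    """R_sigma = exp(-V_{ln sigma}), V_p ~ Gamma(shape=a*p, rate=b):
    gamma laws with a common rate form a convolution semigroup in the shape."""
    V = rng.gamma(shape=a * np.log(sig), scale=1.0 / b, size=size)
    return np.exp(-V)

sigma, tau = 2.0, 3.0
prod = sample_R(sigma, n) * sample_R(tau, n)   # product of independent draws
direct = sample_R(sigma * tau, n)              # direct draw from G_{sigma*tau}

# The two samples should agree in distribution (compare a few quantiles):
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(q, np.quantile(prod, q), np.quantile(direct, q))
```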
Proposition 2.4. Let \((G_{\sigma})_{\sigma\geq 1}\) be a log-convolution semigroup on \((0,1]\). Define, for \(R_{\sigma}\) with distribution \(G_{\sigma}\), \(U_{\sigma}=-\ln R_{\sigma}\), and let \(L_{\sigma}(\lambda)=\mathbb{E}[e^{-\lambda U_{\sigma}}]\) \((=\mathbb{E}[e^{\lambda \ln R_{\sigma}}]=\mathbb{E}[(R_{\sigma})^{\lambda}])\) be the Laplace transform of the (positive) random variable \(U_{\sigma}\). Then for any \(\sigma\geq 1\), \(U_{\sigma}\) is infinitely divisible. Moreover, \[ \ln L_{\sigma}(\lambda)=-\left[\beta\lambda+\int_0^{\infty}(1-e^{-\lambda x})\,\nu(dx) \right]\ln \sigma,\eqno(6) \] where the Lévy measure \(\nu(dx)\) satisfies \(\nu(\{0\})=0\) and \(\int_0^{\infty}(1\wedge x)\,\nu(dx)<\infty\). Conversely, any function \(L_{\sigma}\) of the form (6) gives the \(\lambda\)-moments of a log-convolution semigroup \((G_{\sigma})_{\sigma\geq 1}\).

Now one may finalize the description of the construction of the process \(X_t\): starting from a function \(\psi\) of the form (5) which satisfies \(\psi(1/2)=1\), the authors construct the family \(G_{\sigma}\) and the transition probability function \(P_{s,t}(x,dy)\) given in (4).

Theorem 2.5. Let the family \((G_{\sigma})_{\sigma\geq 1}\) form a log-convolution semigroup with Laplace exponent \(\psi(\lambda)\) from (5). Assume that \(\psi(1/2)=1\). Then the coordinate process starting at zero, here denoted \((X_t)_{t\geq 0}\), is a Markov martingale with respect to its natural filtration \((\mathcal{F}_t)_{t\geq 0}\) and with transition probabilities \(P_{s,t}(x,dy)\) given in (4). Furthermore, the marginal distributions of \(X_t\) are Gaussian with mean zero and variance \(t\) and, for \(0<s<t\), the process \(X_t\) admits the representation (3), where \(R_{s,t}\) and \(\xi_{s,t}\) are independent of each other and of \(\mathcal{F}_s\), \(R_{s,t}\) has distribution \(G_{\sqrt{t/s}}\) and \(\xi_{s,t}\) is standard Gaussian.

3. Path properties. As a martingale, \(X_t\) admits a càdlàg version; in the sequel it is assumed that \(X_t\) itself is càdlàg. Three properties are proved.

Theorem 3.1. The process \(X_t\) is continuous in probability: \[ \forall c>0,\quad \lim_{s\to t} \mathbb{P}[|X_t-X_s|>c]=0. \]

Theorem 3.3. The (predictable) quadratic variation of \(X_t\) is \(\langle X,X\rangle_t=\delta t+(1-\delta)\int_0^t(X_s^2/s)\,ds\), where \(\delta=\psi(1)/2\). Furthermore, it can be obtained as a limit, \[ \langle X,X\rangle_t=\lim_{n\to \infty}\sum_{k=0}^{n-1} \mathbb{E}[(X_{t_{k+1}}-X_{t_k})^2\,|\,X_{t_k}] \] in \(L^2\), where \(t_0<t_1<\cdots<t_n\) is a subdivision of \([0,t]\).

The next result states that the only continuous process that can be constructed in the way described in Section 2 is Brownian motion.

Theorem 3.4. The process \(X_t\) is quasi-left-continuous. It is continuous if and only if \(G_{\sigma}\equiv \varepsilon_{\sigma^{-2}}\) (i.e., \(R_{s,t}\equiv s/t\)), in which case \(X_t\) is a standard Brownian motion.

Being quasi-left-continuous, \(X_t\) in particular has no fixed times of discontinuity. One of the aims of the constructions given in the following section is to describe the jumps of the process \(X_t\). (A sketch simulating the process via representation (3) is given below.)
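Once a concrete log-convolution semigroup is fixed, representation (3) yields an exact simulation of \(X\) on any time grid. The sketch below (illustrative, not from the paper) uses the Poisson case of Section 4.2, \(R_{\sigma}=e^{-N}\) with \(N\sim\mathrm{Poisson}(c\ln\sigma)\), for which \(\psi(\lambda)=c(1-e^{-\lambda})\), so that \(\psi(1/2)=1\) forces \(c=1/(1-e^{-1/2})\approx 2.54\):

```python
import numpy as np

rng = np.random.default_rng(2)
c = 1.0 / (1.0 - np.exp(-0.5))   # psi(1/2) = c * (1 - e^{-1/2}) = 1

def simulate_path(times):
    """Exact simulation on a grid via (3); Poisson case R_{s,t} = exp(-N),
    N ~ Poisson(c * ln(sqrt(t/s))), so that E[sqrt(R_{s,t})] = sqrt(s/t)."""
    X = [np.sqrt(times[0]) * rng.standard_normal()]   # X_{t_0} ~ N(0, t_0)
    for s, t in zip(times[:-1], times[1:]):
        N = rng.poisson(c * np.log(np.sqrt(t / s)))
        R = np.exp(-N)                                # R = 1 <=> no jump of N
        xi = rng.standard_normal()
        X.append(np.sqrt(t / s) * (np.sqrt(R) * X[-1]
                                   + np.sqrt(s * (1.0 - R)) * xi))
    return np.array(X)

times = np.linspace(0.01, 1.0, 200)
last = np.array([simulate_path(times)[-1] for _ in range(2000)])
print(last.mean(), last.var())   # marginal X_1 should be ~ N(0, 1)
```

On stretches where \(N=0\) (so \(R_{s,t}=1\)) the simulated path follows the deterministic curve \(u\mapsto\sqrt{u/s}\,X_s\), matching the piecewise deterministic behaviour described in Section 4 below.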
4. Explicit constructions. Before engaging in the explicit construction of the processes outlined in the previous sections, the authors explain that these fall into one of two classes according to whether or not \(\gamma(\sigma)=G_{\sigma}(\{1\})\) vanishes uniformly in \(\sigma >1\). They then consider four different situations.

4.1. The case \(\gamma(\sigma)>0\). The processes thus obtained are piecewise deterministic pure jump processes, in the sense that between any two consecutive jumps the process behaves according to a deterministic function. Examples of such processes include the case where \(G_{\sigma}\) is an inverse log-Poisson distribution. This interpretation may be drawn from the form of the infinitesimal generator \(A_s\) of \(X_t\) on the set of \(C_0^2\)-functions \(f(x)\) given in Proposition 4.1. Thus the process \(X\) starts off as a Brownian motion (\(A_0f(x)=(1/2)f''(x)\)) and, when in \(x\) at time \(s\), drifts at the rate of \(x/2s\) and jumps at the rate of \(-\gamma'(1)/2s\). The density of the jump size from \(x\) is also given, which allows one to say that while in positive territory \(X_t\) continuously drifts upwards and has jumps that tend to be negative, while in negative territory the reverse occurs. The domain of \(A_s\) can be extended to include functions that do not vanish at infinity, such as \(f(x)=x^2\); by Theorem 3.3, \(g_s(x)=\delta+(1-\delta)x^2/s\) solves the martingale problem for \(f(x)=x^2\). The next proposition follows immediately from the observation that the process \(X\) does not jump between times \(s\) and \(t\) if and only if \(X_u=\sqrt{u/s}\,X_s\) for all \(u\in[s,t]\).

Proposition 4.2. Let \(T_s\) denote the first jump time after \(s\). Then \(\mathbb{P}[T_s>t]=\gamma(\sigma)\) for any \(t>s\), where \(\sigma=\sqrt{t/s}\).

4.2. The Poisson case \(\gamma(\sigma)=\sigma^{-c}\). In this particular case the assumptions of Proposition 4.1 are clearly satisfied, and the explicit form of the infinitesimal generator of \(X_t\) shows that the process jumps at the rate of \(c/2s\) with a jump size distributed as a Gaussian random variable with mean \(-x/c\) and variance \(s(1-e^{-1})\). Figure 4.1 shows a simulation of a path of such a process. Furthermore, the law of the first jump time after \(s\) is given by \[ \mathbb{P}[T_s>t]=\gamma (\sqrt{t/s})=s^{c/2}t^{-c/2}. \] In other words, \(T_s\) is Pareto distributed with location parameter \(s\) and shape parameter \(c/2\approx 1.27\). In particular, \(\mathbb{E}[T_s]=\frac{cs}{c-2}\) and \(\mathbb{E}[T_s^2]=\infty\).

4.3. The case \(\gamma(\sigma)=0\). This time the infinitesimal generator is given for functions of a specific type, which include polynomials. There are three assertions (Propositions 4.3 and 4.5 and Lemma 4.4). For specific cases, however, such as the gamma case, the generator is given for a much wider class of functions.

4.4. The gamma case (\(\gamma(\sigma)=0\)). Here \(\beta=0\), \(\nu(dx)=ax^{-1}e^{-bx}\,dx\) with \(a=1/\ln(1+1/(2b))\) and \(\psi(\lambda)=a\ln(1+\lambda/b)\), that is, \(U_{\sigma}\) has a gamma distribution with density \[ h_{\sigma}(u)=\frac{b^{a\ln \sigma}}{\Gamma(a\ln \sigma)} u^{a\ln \sigma-1}e^{-bu},\quad u>0, \] and \(R_{\sigma}\) has an inverse log-gamma distribution with density \[ g_{\sigma}(r)=\frac{b^{a\ln \sigma}}{\Gamma(a\ln \sigma)} (-\ln r)^{a\ln \sigma-1}r^{b-1},\quad 0<r<1. \] Figure 4.2 shows a simulation of a path of such a process. In this case it is possible to compute the generator for a much wider class of functions; this class is defined and the corresponding generator given in Proposition 4.6. (A sketch sampling \(R_{\sigma}\) in the gamma case is given below.)
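In the gamma case, \(R_{\sigma}\) can be sampled directly and the martingale normalization checked: since \(U_{\sigma}\) is Gamma\((a\ln\sigma,b)\), one has \(\mathbb{E}[(R_{\sigma})^{\lambda}]=(1+\lambda/b)^{-a\ln\sigma}=\sigma^{-\psi(\lambda)}\), which equals \(1/\sigma\) at \(\lambda=1/2\). A minimal sketch (the value of \(b\) is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)
b = 1.0
a = 1.0 / np.log(1.0 + 1.0 / (2.0 * b))   # enforces psi(1/2) = a*ln(1 + 1/(2b)) = 1

def sample_R(sig, size):
    """Gamma case: U_sigma ~ Gamma(shape=a*ln(sigma), rate=b), R_sigma = exp(-U)."""
    U = rng.gamma(shape=a * np.log(sig), scale=1.0 / b, size=size)
    return np.exp(-U)

sig = 2.0
print(np.sqrt(sample_R(sig, 10**6)).mean(), 1.0 / sig)   # E[sqrt(R_sigma)] = 1/sigma

# One step of representation (3): X_t stays N(0, t) in distribution.
s, t, n = 1.0, 4.0, 10**6
Xs = np.sqrt(s) * rng.standard_normal(n)                 # X_s ~ N(0, s)
R = sample_R(np.sqrt(t / s), n)                          # R_{s,t} ~ G_{sqrt(t/s)}
Xt = np.sqrt(t / s) * (np.sqrt(R) * Xs
                       + np.sqrt(s * (1.0 - R)) * rng.standard_normal(n))
print(Xt.mean(), Xt.var())                               # ~ 0 and ~ t
```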