Lipschitzian norms and functional inequalities for birth-death processes (Q6192463)

From MaRDI portal
scientific article; zbMATH DE number 7815400


    Lipschitzian norms and functional inequalities for birth-death processes (English)
    11 March 2024
This paper treats a birth-death process with generator \({\mathcal L}\) and reversible invariant probability measure \(\pi\). The author identifies explicitly the Lipschitzian norm of the solution of the Poisson equation \(- {\mathcal L} G = g - \pi(g)\) for \(\vert g \vert \leqslant \varphi\). This leads to some transportation-information inequalities, concentration inequalities and Cheeger-type isoperimetric inequalities. Lastly, several examples are provided to illustrate the results.

More precisely, let \(( X_t )_{ t \geq 0 }\) be a birth-death process on \({\mathbb N} = \{ 0, 1, 2, \dots \}\) with birth rates \( ( b_i )_{ i \in {\mathbb N} }\) and death rates \(( a_i )_{ i \in {\mathbb N} }\), i.e., its generator \({\mathcal L}\) is given for any real function \(G\) on \({\mathbb N}\) by \[ {\mathcal L} G(i) = b_i ( G( i+1) - G(i) ) + a_i ( G( i-1) - G(i)), \tag{1}\] where \(b_i\) and \(a_i\) are positive for any \(i \geq 1\), with furthermore \(b_0 > 0\) and \(a_0 = 0\). For any real function \(G\), \(G( -1)\) is identified with \(G(0)\) by convention. Throughout, the author assumes that the process is positive recurrent, i.e., \[ \sum_{ n \geq 0} \mu_n \sum_{ i \geq n} ( \mu_i b_i )^{-1} = \infty \qquad \text{and} \qquad C := \sum_{n=0}^{+ \infty} \mu_n < + \infty, \tag{2}\] where \(\mu\), given by \[ \mu_0 = 1, \qquad \mu_n = \frac{ b_0 b_1 \cdots b_{n-1} }{ a_1 a_2 \cdots a_n}, \qquad n \geq 1, \tag{3}\] is an invariant measure of the process. Define the normalized probability measure \(\pi\) of \(\mu\) by \(\pi_n = \mu_n / C\) for any \(n \geq 0\); it is in fact the reversible invariant probability measure of the process. Given an increasing function \(\rho : {\mathbb N} \to {\mathbb R}\), define the metric \(d_{\rho} (i,j) := \vert \rho(i) - \rho(j) \vert\) on \({\mathbb N}\) associated with \(\rho\).
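As a small numerical companion to (2)–(3), the following sketch builds \(\mu\) and \(\pi\) directly from the product formula. The truncation level \(N\) and the constant rates \(b_i = 0.5\), \(a_i = 1\) are illustrative assumptions made here, not examples taken from the paper.

```python
# Invariant measure of a birth-death chain via the product formula (3),
# truncated at level N purely for numerical illustration (an assumption;
# the paper works on all of N = {0, 1, 2, ...}).

def invariant_measure(b, a, N):
    """mu_0 = 1 and mu_n = (b_0 ... b_{n-1}) / (a_1 ... a_n)."""
    mu = [1.0]
    for n in range(1, N + 1):
        mu.append(mu[-1] * b(n - 1) / a(n))
    return mu

# Example rates (not from the paper): b_i = 0.5, a_i = 1, giving
# mu_n = 2^{-n}, hence C = sum_n mu_n = 2 and pi_n = 2^{-(n+1)}.
b, a = (lambda i: 0.5), (lambda i: 1.0)
mu = invariant_measure(b, a, 50)
C = sum(mu)                      # finite, as required by (2)
pi = [m / C for m in mu]         # reversible invariant probability measure

# Detailed balance pi_k b_k = pi_{k+1} a_{k+1} characterizes reversibility:
assert all(abs(pi[k] * b(k) - pi[k + 1] * a(k + 1)) < 1e-12 for k in range(49))
```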
We say that a function \(G\) on \({\mathbb N}\) is Lipschitz with respect to \(\rho\) (or \(\rho\)-Lipschitz) if \[ \begin{aligned} \Vert G \Vert_{ Lip ( \rho) } &:= \sup_{ i \not= j} \frac{ \vert G(j) - G(i) \vert}{ \vert \rho(j) - \rho(i) \vert} \\ & = \sup_{ i \geq 0} \frac{ \vert G(i+1) - G(i) \vert}{ \rho( i+1) - \rho(i) } < + \infty. \end{aligned} \tag{4} \] The space of all \(\rho\)-Lipschitz functions is denoted by \(C_{ Lip( \rho) }\). Throughout the paper it is assumed that \(\rho \in L^1(\pi)\), and \(( C_{ Lip( \rho)}^0, \Vert \cdot \Vert_{ Lip( \rho)} )\) denotes the space of all \(\rho\)-Lipschitz functions \(G\) with \(\pi(G) := \int G \, d \pi = 0\). Consider the Poisson equation \(- {\mathcal L} G = g\). Recall the usual Lipschitzian norm of \(( - {\mathcal L} )^{-1}\) on \(C_{ Lip( \rho)}^0\): \[ \Vert ( - {\mathcal L} )^{-1} \Vert_{ Lip( \rho )} := \sup \{ \Vert ( - {\mathcal L} )^{-1} g \Vert_{ Lip( \rho)} : \quad g \in C_{ Lip(\rho)}^0, \quad \Vert g \Vert_{ Lip( \rho)} \leqslant 1 \}. \tag{5}\] By definition, \({\mathcal L}\) has a spectral gap in \(C_{ Lip( \rho)}^0\) if \(0\) is an isolated eigenvalue of \(- {\mathcal L}\) in \(C_{ Lip( \rho)}^0\), or equivalently if \(( - {\mathcal L} )^{-1} : C_{ Lip( \rho)}^0 \to C_{ Lip( \rho)}^0\) is bounded. Let \(\lambda_1\) be the spectral gap of \(- {\mathcal L}\) in \(L^2( \pi)\), i.e., the bottom of the spectrum of \(- { \mathcal L}\) restricted to the orthogonal complement of the constants in \(L^2(\pi)\). In this paper, the author focuses on observables \(g\) that are bounded by some nonnegative function \(\varphi\) but not \(\rho\)-Lipschitz continuous. The aim of the paper is to obtain concentration inequalities for the empirical mean \(\frac{1}{t} \int_0^t g(X_s) \, ds\) for such non-Lipschitz observables. Indeed, these concentration inequalities are immediate consequences of the estimate on \(\Vert G \Vert_{ Lip( \rho)}\), via the martingale decomposition or transportation-information inequalities.
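For a birth-death chain the Poisson equation \(- {\mathcal L} G = g - \pi(g)\) can be solved in closed form: multiplying \(b_i \Delta_i - a_i \Delta_{i-1} = -(g - \pi(g))(i)\) by \(\mu_i\) and telescoping yields \(\Delta_i := G(i+1) - G(i) = -(\mu_i b_i)^{-1} \sum_{j \leq i} \mu_j (g(j) - \pi(g))\). The sketch below implements this and evaluates the Lipschitz norm (4); the truncation at \(N\) and the rates \(b_i = 0.5\), \(a_i = 1\) are illustrative assumptions, not the paper's examples.

```python
# Hedged sketch: explicit solution of -L G = g - pi(g) for a birth-death
# chain on a truncated state space, plus the Lipschitz norm (4).

def solve_poisson(b, a, g, N):
    # mu from the product formula (3), then pi(g) and the telescoped
    # increments D_i = G(i+1) - G(i) = -(1/(mu_i b_i)) sum_{j<=i} mu_j (g - pi(g))(j).
    mu = [1.0]
    for n in range(1, N + 1):
        mu.append(mu[-1] * b(n - 1) / a(n))
    pig = sum(m * g(i) for i, m in enumerate(mu)) / sum(mu)
    G, partial = [0.0], 0.0                      # normalize G(0) = 0
    for i in range(N):
        partial += mu[i] * (g(i) - pig)
        G.append(G[-1] - partial / (mu[i] * b(i)))
    return G, pig

def lip_norm(G, rho, N):
    # For increasing rho, the sup in (4) reduces to consecutive states.
    return max(abs(G[i + 1] - G[i]) / (rho(i + 1) - rho(i)) for i in range(N))

b, a = (lambda i: 0.5), (lambda i: 1.0)
g = lambda i: float(i <= 3)      # bounded (phi = 1) but not smooth in i
G, pig = solve_poisson(b, a, g, 60)

# Sanity check that -L G = g - pi(g) holds at an interior state:
i = 5
LG = b(i) * (G[i + 1] - G[i]) + a(i) * (G[i - 1] - G[i])
assert abs(-LG - (g(i) - pig)) < 1e-9

print(lip_norm(G, lambda i: float(i), 59))
```

The recursion reproduces the generator identity exactly (up to floating-point error), which is why the sanity check holds state by state.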
For this purpose the author first calculates directly \[ \sup_{ \vert g \vert \leqslant \varphi} \Vert ( - {\mathcal L} )^{-1} ( g - \pi(g) ) \Vert_{ Lip( \rho)}, \tag{6}\] since the Poisson equation \(- {\mathcal L} G = g\) for a birth-death process can be solved explicitly. The author then obtains transportation-information inequalities via the boundedness of \(\Gamma( \rho)\) or the Lyapunov test function approach. In particular, taking \[ d_{\varphi} (i,j) := ( \varphi(i) + \varphi(j) ) \mathbf{1}_{ i \not= j}, \tag{7}\] the Wasserstein distance \(W_{1, d_{\varphi} } (\nu, \mu)\) becomes the weighted total variation distance \(\Vert \varphi ( \nu - \mu ) \Vert_{TV}\). The author also obtains some Cheeger-type isoperimetric inequalities, which lead to transportation-information inequalities and concentration inequalities under some additional assumptions. Let us enumerate the main results of the present article. The carré-du-champ operator \(\Gamma\) of the birth-death process associated with \({\mathcal L}\) is defined as follows: \[ \Gamma(f,g) := \frac{1}{2} [ {\mathcal L} (f g) - g \, {\mathcal L}f - f \, {\mathcal L}g ], \tag{8}\] for any functions \(f\) and \(g\) on \({\mathbb N}\), with the shorthand \(\Gamma(f) := \Gamma (f,f)\). The Dirichlet form associated with \({\mathcal L}\) is given for any functions \(f\) and \(g\) on \({\mathbb N}\) by \[ {\mathcal E} (f,g) := \langle - {\mathcal L} f, g \rangle_{ L^2( \pi)} = \sum_{k=0}^{ + \infty} \pi_k \Gamma(f,g) (k), \tag{9}\] with the shorthand \({\mathcal E} (f) := {\mathcal E}(f,f)\). Let \({\mathcal M}_1\) be the space of probability measures on \({\mathbb N}\). For all \(\mu, \nu \in {\mathcal M}_1\), the Fisher-Donsker-Varadhan information of \(\nu\) with respect to \(\mu\) is defined by \[ I( \nu \vert \mu ) := \begin{cases} {\mathcal E} ( \sqrt{f}, \sqrt{f} ), &\text{if} \quad \nu = f \mu, \\ + \infty, &\quad \text{otherwise}. 
\end{cases} \tag{10}\] The Wasserstein distance between \(\nu\) and \(\mu\) with respect to a given metric \(d\) on \({\mathbb N}\) is defined by \[ W_{1, d}(\nu, \mu ) = \inf_{\gamma} \sum_{i=0}^{+ \infty} \sum_{j=0}^{ + \infty} \gamma_{ij} \cdot d(i, j), \tag{11}\] where \(\gamma\) runs over all couplings of \(\nu\) and \(\mu\), i.e., all probability measures \(\gamma\) on \({\mathbb N}^2\) with marginals \(\nu\) and \(\mu\). Theorem 1. Let \({\mathcal A}\) be the set of all real increasing functions \(\rho\) on \({\mathbb N}\) such that \(\rho \in L^1( \pi)\). Let \(\rho \in {\mathcal A}\) and let \(\varphi\) be a nonnegative function in \(L^1( \pi)\) such that \(c(\varphi, \rho) < + \infty\), where \(c(\varphi, \rho)\) is an explicit constant introduced in the paper. If there exists a positive constant \(M\) such that \(\Gamma(\rho) (k) \leqslant M\) for all \(k \in {\mathbb N}\), then for all \(\nu \in {\mathcal M}_1\) we have the transportation-information inequality \[ W_{1, d_{\varphi} } (\nu, \pi) \leqslant 2 c(\varphi, \rho) \sqrt{ M \cdot I(\nu \vert \pi) }, \tag{12}\] where \(d_{\varphi}\) is the metric on \({\mathbb N}\) defined by (7). Instead of using the boundedness of \(\Gamma\), one also has the following generalized transportation-information inequality under a Lyapunov test function condition. Theorem 2. Let \(\rho\) and \(\varphi\) be as in Theorem 1. Assume that the following Lyapunov condition holds: for some \(\delta > 0\) there exists a function \(V : {\mathbb N} \to [1, \infty)\), \(V \in L^1( \pi)\), such that for any \(k \in {\mathbb N}\) \[ ( 1 + \delta ) a_k ( \rho (k-1) - \rho (k) )^2 + \left( 1 + \frac{1}{\delta} \right) b_k ( \rho(k+1) - \rho(k) )^2 \leqslant - \alpha \frac{ {\mathcal L} V }{V} (k) + \beta, \tag{13}\] where \(\alpha\) and \(\beta\) are two positive constants. 
Then for all \(\nu \in {\mathcal M}_1\), we have \[ W_{1, d_{\varphi} } ( \nu, \pi) = \Vert \varphi ( \nu - \pi) \Vert_{TV} \leqslant c( \varphi, \rho) \sqrt{ \alpha I^2(\nu \vert \pi) + \beta I( \nu \vert \pi) }. \tag{14}\] Let \(c_{ch}\) be the best constant in the following Cheeger-type isoperimetric inequality for the birth-death process: for any \(f \in L^1( \pi)\), \[ \sum_{ k=0}^{+ \infty} \pi_k \vert f(k) - \pi(f) \vert \leqslant c_{ch} \sum_{ k=0}^{+ \infty} \pi_k b_k \vert f( k+1) - f(k) \vert. \tag{15}\] Then we have \(c_{ch} < c( 1, \rho_0)\). Theorem 3. If \(c_{ch} < + \infty\) and there exists a positive constant \(M\) such that \(a_k + b_k \leqslant M\) for all \(k \in {\mathbb N}\), then for all \(\nu \in {\mathcal M}_1\) we have the following transportation-information inequality \[ W_{1, d_1} ( \nu, \pi) = \Vert \nu - \pi \Vert_{TV} \leqslant c_{ch} \sqrt{ 2 M} \sqrt{ I( \nu \vert \pi ) }. \tag{16}\] Equivalently, for every function \(g\) on \({\mathbb N}\) such that \(\vert g \vert \leqslant 1\), for any initial measure \(\nu \ll \pi\) and \(t, r > 0\), \[ {\mathbb P}_{\nu} \left( \frac{1}{t} \int_0^t g(X_s) ds - \pi(g) > r \right) \leqslant \left\Vert \frac{ d \nu}{ d \pi} \right\Vert_{ L^2(\pi)} \exp \left\{ - \frac{ t r^2}{ 2 M c_{ch}^2 } \right\}. \tag{17}\] For other related works, see e.g. [\textit{A. Guillin} et al., Probab. Theory Relat. Fields 144, No. 3--4, 669--695 (2009; Zbl 1169.60304)] for transportation-information inequalities for Markov processes, [\textit{W. Liu} and \textit{Y. Ma}, Ann. Inst. Henri Poincaré, Probab. Stat. 45, No. 1, 58--69 (2009; Zbl 1172.60023)] for the spectral gap and convex concentration inequalities for birth-death processes, and [\textit{Y. Ma} et al., Electron. Commun. Probab. 16, 600--613 (2011; Zbl 1254.60027)] for transportation-information inequalities for continuous Gibbs measures.
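The concentration phenomenon behind (17) can be probed by simulation. The following Monte Carlo sketch (with illustrative rates \(b_i = 0.5\), \(a_i = 1\) and a bounded observable \(\vert g \vert \leqslant 1\); none of these choices come from the paper) runs the chain Gillespie-style and observes the empirical mean \(\frac{1}{t}\int_0^t g(X_s)\,ds\) concentrating around \(\pi(g)\):

```python
# Monte Carlo sketch of the concentration behind (17), under assumed
# rates b_i = 0.5, a_i = 1 (so pi_n = 2^{-(n+1)}) and the bounded
# observable g = 1_{x=0} - 1_{x>0}, for which pi(g) = 2 pi_0 - 1 = 0.

import random

def empirical_mean(g, t_end, seed):
    """Simulate the chain from X_0 = 0 and return (1/t) int_0^t g(X_s) ds."""
    rng = random.Random(seed)
    x, t, integral = 0, 0.0, 0.0
    while t < t_end:
        b, a = 0.5, (1.0 if x > 0 else 0.0)   # a_0 = 0, as in (1)
        rate = b + a
        dt = min(rng.expovariate(rate), t_end - t)
        integral += g(x) * dt                  # X is constant between jumps
        t += dt
        if t < t_end:                          # jump up w.p. b/(a+b), else down
            x = x + 1 if rng.random() < b / rate else x - 1
    return integral / t_end

g = lambda i: 1.0 if i == 0 else -1.0          # |g| <= 1, pi(g) = 0
means = [empirical_mean(g, 500.0, s) for s in range(20)]
```

Over a horizon of \(t = 500\) the empirical means cluster near \(\pi(g) = 0\), with deviations decaying as suggested by the Gaussian tail in (17).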
    birth-death process
    Poisson equation
    transportation-information inequality
    concentration inequality
    Cheeger-type isoperimetric inequality