Hypercube percolation (Q520736)

From MaRDI portal
scientific article

    Statements

    Hypercube percolation (English)
    5 April 2017
    The authors' objective is to show that large percolation clusters behave in some sense like uniform random sets, and to resolve a conjecture of \textit{C. Borgs} et al. [Combinatorica 26, No. 4, 395--410 (2006; Zbl 1121.05108)]. \textit{P. Erdős} and \textit{J. Spencer} [Comput. Math. Appl. 5, 33--39 (1979; Zbl 0399.05041)] proposed a combinatorial model for percolation on the Hamming hypercube \(\{0,1\}^m\). The non-trivial geometry makes the combinatorial ``subgraph count'' technique unavailable, and the critical probability for the phase transition is significantly larger than \(\frac{1}{m-1}\), which severely limits the method of stochastic domination. Borgs et al. [loc. cit.] suggested that the precise location \(p_c\) of the phase transition for the connected component \({\mathcal C}(0)\) containing the origin is, for fixed \(\lambda \in (0, \infty)\), the unique solution of the equation \[ E_{p_c}|\mathcal C (0)| = \lambda 2^{m/3}. \] The main result of this paper is Theorem 1.1: for \(p=p_c(\lambda) (1+\varepsilon)\) with \(\lambda \in (0, \infty)\) and \(\varepsilon = \varepsilon (m) = o(1)\) satisfying \(\varepsilon \gg 2^{-m/3}\), {\parindent=0.7cm\begin{itemize}\item[1.] \(E| \mathcal C (0)| = (4+ o(1))\varepsilon^2 2^m\); \item[2.] \(\frac{| \mathcal C_1| }{2\varepsilon 2^m} \to 1\) in probability; \item[3.] \(\frac{| \mathcal C_2| }{\varepsilon 2^m} \to 0\) in probability; \end{itemize}} where \(\mathcal C_i\) is the \(i\)-th largest component. The main idea is to reduce the analysis to two large clusters, and to replace the appeal to the hypercube's isoperimetric inequality by a precise understanding of the behavior of the non-backtracking random walk on the hypercube. The proof of Theorem 1.1 combines this analysis with ideas from statistical physics.

    \textit{M. Ajtai} et al. [Combinatorica 2, 1--7 (1982; Zbl 0489.05053)] first studied the phase transition for components of size of order \(2^m\): with the retention probability of an edge scaled as \(p=c/m\) for a fixed constant \(c>0\), the largest component has size of order \(m\) if \(c<1\), and with high probability has size linear in \(2^m\) if \(c>1\). \textit{B. Bollobás} et al. [Random Struct. Algorithms 3, No. 1, 55--90 (1992; Zbl 0779.05045)] made the first improvement, showing that \(| \mathcal C_1| = (2+o(1))\varepsilon (m)2^m\) for \(p=\frac{1+\varepsilon (m)}{m-1}\) with \(\varepsilon (m)=o(1)\) and \(\varepsilon (m) \geq \frac{60 \log^3 m}{m}\). Recall that \(G(n, p)\) is obtained from the complete graph on \(n\) vertices by retaining each edge independently with probability \(p\) and erasing it otherwise. \textit{P. Erdős} and \textit{A. Rényi} [Publ. Math. Inst. Hung. Acad. Sci., Ser. A 5, 17--61 (1960; Zbl 0103.16301)] discovered that this model exhibits a phase transition when \(p\) is scaled like \(p=c/n\): \(|{\mathcal C}_1| =\Theta (\log n)\) if \(c<1\), while \(| \mathcal C_1| =\Theta (n)\) if \(c>1\). \textit{B. Bollobás} [Trans. Am. Math. Soc. 286, 257--274 (1984; Zbl 0579.05046)] and \textit{T. Łuczak} [Random Struct. Algorithms 1, No. 3, 287--310 (1990; Zbl 0745.05048)] revealed an intricate picture of the phase transition for \(c \sim 1\). Theorem 1.1 establishes the emergence of the giant component just above the critical scaling window and the concentration of its size in this supercritical regime; the earlier works had settled the subcritical phase and the critical window but left the supercritical phase open. Section 1 collects the main results of the paper and recalls random subgraphs of transitive graphs and the sprinkling technique.
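    To make the setup concrete, here is a minimal Monte Carlo sketch (purely illustrative and not taken from the paper; the function names, the small dimension \(m=10\), and the use of \(\frac{1+\varepsilon}{m-1}\) as a rough stand-in for \(p_c(\lambda)(1+\varepsilon)\) are assumptions made only for this example). It samples bond percolation on \(\{0,1\}^m\), computes the open clusters by breadth-first search, and compares the empirical analogue of \(E_p|\mathcal C (0)|\) and the size of \(\mathcal C_1\) with the quantities appearing in Theorem 1.1; for such a small \(m\) the asymptotic predictions can only be matched loosely.

```python
# Illustrative sketch (not from the paper): bond percolation on the hypercube
# {0,1}^m, where each of the m*2^(m-1) edges is retained independently with
# probability p.  We look at the empirical E_p|C(0)| and at |C_1| vs 2*eps*2^m.
import random
from collections import deque

def percolate_hypercube(m, p, rng):
    """Return the open-neighbour lists of every vertex of {0,1}^m."""
    n = 1 << m
    adj = [[] for _ in range(n)]
    for v in range(n):
        for i in range(m):
            u = v ^ (1 << i)                  # flip coordinate i
            if v < u and rng.random() < p:    # decide each edge exactly once
                adj[v].append(u)
                adj[u].append(v)
    return adj

def cluster_sizes(adj):
    """Sizes of all open clusters, found by breadth-first search."""
    n = len(adj)
    seen = [False] * n
    sizes = []
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        size, queue = 1, deque([s])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if not seen[u]:
                    seen[u] = True
                    size += 1
                    queue.append(u)
        sizes.append(size)
    return sorted(sizes, reverse=True)

if __name__ == "__main__":
    rng = random.Random(0)
    m, eps = 10, 0.1                  # small m only; the theorems are asymptotic in m
    p = (1 + eps) / (m - 1)           # heuristic stand-in for p_c(lambda)*(1 + eps)
    sizes = cluster_sizes(percolate_hypercube(m, p, rng))
    # cluster size of a uniformly chosen vertex in this single sample:
    chi = sum(s * s for s in sizes) / (1 << m)
    print("largest clusters:", sizes[:3])
    print("sampled E_p|C(0)| ~", chi, "  prediction 4*eps^2*2^m =", 4 * eps**2 * (1 << m))
    print("|C_1| / (2*eps*2^m) =", sizes[0] / (2 * eps * (1 << m)))
```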
Section 2 gives an overview of the proof, stating the main intermediate results whose proofs are given in later sections. A result of \textit{G. Kozma} and the second author [Invent. Math. 178, No. 3, 635--654 (2009; Zbl 1180.82094)] provides, in Theorem 2.1, the survival probabilities and expected ball sizes at \(p_c(\lambda)\), which makes it possible to compare percolation performed at different values of \(p\). Assuming only the finite triangle condition, Theorem 2.2 provides bounds on the cluster tail. Theorem 2.2 is reminiscent of the fact that a branching process with Poisson progeny distribution of mean \(1+\varepsilon\) has survival probability \(2 \varepsilon (1+ O(\varepsilon))\) (a short derivation of this standard fact is recalled below). The proof of Theorem 2.2 rests on the upper and lower bounds on the cluster tail given in Propositions A.1 and A.2 of the Appendix, which use differential inequalities for the magnetization, as in proofs of sharpness of the percolation phase transition, in both the subcritical and supercritical cases. Subsection 2.3 concerns the existence of many boundary edges between large clusters; Theorem 2.4, the key result of the paper towards the conjecture, states that most pairs of large clusters share many boundary edges between them. These edges are used in the sprinkling argument in the proof of Theorem 1.3 in Subsection 7.1. A sketch of the proof of Theorem 2.4 is outlined in Subsection 2.5. Conditioned on survival, the random variable \[ S_{2r+r_0}(x, y) = \Big|\Big\{ (u, u')\in E(G): d_{G_p}(x, u)\leq 2r+r_0,\ d_{G_p}(y, u')\leq 2r+r_0,\ | B_u(2r+r_0)|\cdot | B_{u'}(2r+r_0)| \leq e^{40M}\varepsilon^{-2} \big(E| B(r_0)| \big)^2 \Big\}\Big|, \] defined for vertices \(x\) and \(y\), is not concentrated, hence it is hard to prove directly that it is large with high probability. Intuitively, this non-concentration occurs because the first generations of the process have a strong and lasting effect on the future of the population. The proof instead shows that, after conditioning, \(S_{2r+r_0}(x, y)\) concentrates around the value \(| \partial B_x(r)| \cdot| \partial B_y (r)| V^{-1} m (E| B(r_0)|)^2\); the remaining estimates rely crucially on the uniform connection bounds assumed in Theorem 1.3. Section 3 presents the ``off'' method and the BK-Reimer inequality in Subsection 3.1; expectations and survival probabilities related to the random variable \(| \partial B(r)|\) for different values of \(p\), together with the supercritical survival probability, in Subsection 3.2; and the number of pairs of vertices surviving disjointly, together with the fact that most pairs have almost independent disjoint survival probabilities, in Subsection 3.3. The non-backtracking random walk on an undirected graph is a Markov chain whose transition matrix acts on the state space of directed edges; the corresponding part of Section 3 includes bounds on long and short connection probabilities and on various triangle and square diagrams, making crucial use of the geometry of the graph and of the behavior of the random walk on it (the assumptions of Theorem 1.3). Subsection 3.5 shows that long percolation paths are asymptotically equally likely to end at any vertex of the graph \(G\), and establishes uniform connection bounds off sets. Theorem 1.3(a) is proved in Subsection 3.6, and Section 3 ends with short and long supercritical triangles.
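    As a side remark on the branching-process comparison invoked for Theorem 2.2 (the following computation is standard and not taken from the paper): if \(\zeta=\zeta(\varepsilon)\) denotes the survival probability of a branching process with Poisson progeny of mean \(1+\varepsilon\), then the extinction probability \(1-\zeta\) satisfies \[ 1-\zeta = e^{-(1+\varepsilon)\zeta}. \] Expanding the exponential for small \(\zeta\) gives \[ \zeta = (1+\varepsilon)\zeta - \tfrac12 (1+\varepsilon)^2 \zeta^2 + O(\zeta^3), \] and dividing by \(\zeta\) and solving yields \(\zeta = \frac{2\varepsilon}{(1+\varepsilon)^2} + O(\varepsilon^2) = 2\varepsilon\,(1+O(\varepsilon))\), which is exactly the asymptotics that Theorem 2.2 mirrors for the cluster tail.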
Section 4 is devoted to the expected volume of intrinsic balls and their boundaries at various radii, in both the critical and supercritical phases. Theorem 4.1, on the expected boundary size (\(G(r)\leq C\) for any integer \(r\)), and the reverse inequality to Theorem 4.1 are proved in Subsection 4.1. Subsection 4.2 extends the volume estimates to the supercritical regime in Theorem 4.5 and its proof. Section 5 gives the intrinsic metric regularity result, Theorem 5.1, used in Subsection 2.5. A vertex \(a\) is pivotal for an increasing event \(E\) whenever \(E\) occurs but does not occur in the modified configuration in which all the edges touching \(a\) are closed. The proof of Theorem 5.1 follows from the lower estimates for events involving pivotals touching balls in Lemmas 5.2--5.4, and from a second moment calculation together with Chebyshev's and Markov's inequalities. Section 6 is devoted to the proof of Theorem 2.4, namely that many closed edges exist between most large clusters. Section 7 gives the proofs of the main theorems. The proof of Theorem 1.3 is given in Subsection 7.1, using Theorem 2.2 and Lemma 2.3 for the largest component, and controlling the number of partitions as well as the number of edges for each partition through the analysis of \(S_{2r+r_0}(x, y)\) from Section 2. The proof of Theorem 1.1 is given in Subsection 7.2, first by showing in Lemma 7.1 that \(t_{\mathrm{mix}}=O(m\log m)\) for the mixing time appearing in Theorem 1.3, and then by splitting into the two cases \(\varepsilon (m) \leq 1/m^2\), handled by Theorem 1.3, and \(\varepsilon (m) \geq 1/m^2\), handled as in [Borgs et al., loc. cit.] by performing the classical sprinkling argument. The proof of Theorem 1.4 is presented in Subsection 7.3. Section 8 poses several interesting open problems for further analysis; for instance, the first asks for a central limit theorem for \(| \mathcal C_1|\) above the critical window for percolation on the hypercube, since the paper under review proves a law of large numbers for \(| \mathcal C_1|\) in this regime.
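    Since the proof of Theorem 1.1 hinges on the mixing behavior of the non-backtracking random walk (\(t_{\mathrm{mix}}=O(m\log m)\) in Lemma 7.1), the following toy sketch may help visualise this walk; it is again not taken from the paper, and the function names, the choice \(m=8\), and the crude empirical total-variation check are illustrative assumptions only. The walk is run as a Markov chain on the directed edges of \(\{0,1\}^m\): each step flips a uniformly chosen coordinate different from the one flipped last, and, because the hypercube is bipartite, the endpoint law is compared with the uniform law on the parity class reachable in the given number of steps.

```python
# Illustrative sketch (not from the paper): the non-backtracking random walk on
# the hypercube {0,1}^m as a Markov chain on directed edges.  The state is the
# current vertex together with the coordinate just flipped; each step flips a
# uniformly chosen different coordinate.  The crude experiment below only checks
# that the endpoint distribution flattens as the number of steps grows.
import math
import random
from collections import Counter

def nbw_endpoint(m, steps, rng):
    """Run a non-backtracking walk from the origin and return its endpoint."""
    v, last = 0, None                 # current vertex, last flipped coordinate
    for _ in range(steps):
        choices = [i for i in range(m) if i != last]
        i = rng.choice(choices)       # never undo the previous step
        v ^= 1 << i
        last = i
    return v

def empirical_tv(m, steps, samples, rng):
    """TV distance between the empirical endpoint law and the uniform law on the
    parity class reachable in `steps` steps (the hypercube is bipartite, so the
    walk alternates between even- and odd-weight vertices)."""
    counts = Counter(nbw_endpoint(m, steps, rng) for _ in range(samples))
    target = [v for v in range(1 << m) if bin(v).count("1") % 2 == steps % 2]
    return 0.5 * sum(abs(counts.get(v, 0) / samples - 1.0 / len(target)) for v in target)

if __name__ == "__main__":
    rng = random.Random(0)
    m = 8
    for steps in (m, int(m * math.log(m)), 4 * int(m * math.log(m))):
        d = empirical_tv(m, steps, samples=20000, rng=rng)
        print(f"steps = {steps:3d}   empirical TV distance ~ {d:.3f}")
```

    The printed distances should shrink towards the sampling floor as the number of steps grows past order \(m\log m\), which is consistent with, though of course far weaker than, the \(O(m\log m)\) mixing bound used in the paper.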
    hypercube
    percolation
    random graph
    critical behavior
    subcritical phase
    supercritical phase
    mean-field results
    scaling window
    birth of the giant component
    cluster size
    BK-Reimer inequality
    survival probability
    short and long connection probabilities
    short and long supercritical triangles
    intrinsic metric regularity
    non-backtracking random walk
    Markov inequality
    mixing time