On the hardness of module learning with errors with short distributions (Q2677644)

From MaRDI portal
scientific article

    Statements

    On the hardness of module learning with errors with short distributions (English)
    5 January 2023
    The paper provides new results about the Learning With Errors (LWE) problem. Given two positive integers \(d\) and \(q\), and a secret vector \(s\in \mathbb{Z}_q^d\), an LWE\(_{d,q,\psi}\) sample is defined as \((a, b = q^{-1}\langle a, s\rangle + e\bmod \mathbb{Z})\), where \(a\) is sampled from the uniform distribution over \(\mathbb{Z}_q^d\), and \(e\) is an error term sampled from a distribution \(\psi\) over \(\mathbb{R}\). The search version of LWE asks to recover the secret \(s\) given arbitrarily many samples from the LWE distribution. Its decision counterpart asks to distinguish between LWE samples and the same number of samples drawn from the uniform distribution over \(\mathbb{Z}_q^d\times\mathbb{T}\), where the torus is \(\mathbb{T} = \mathbb{R}/\mathbb{Z}\).
    The authors focus in particular on the Module Learning With Errors (M-LWE) problem, a variant of LWE in which the integers \(\mathbb{Z}\) are replaced by the ring of algebraic integers \(R\) of a number field \(K\). The integer \(n\) denotes the degree of the number field, \(d\) the module rank, and \(q\) the modulus. Further, let \(\psi\) be a distribution on the field tensor product \(K_\mathbb{R} = K\otimes_\mathbb{Q}\mathbb{R}\), and let \(s\in R_q^d\) be a secret vector, where \(R_q = R/qR\). An M-LWE\(_{n,d,q,\psi}\) sample is given by \((a, q^{-1}\langle a,s\rangle + e\bmod R)\), where \(a\) is uniform over \(R_q^d\) and \(e\) is sampled from \(\psi\). The search version asks to find \(s\) given arbitrarily many samples, while the decision version asks to distinguish such samples from uniformly random ones over \(R_q^d\times\mathbb{T}\), where the torus is now \(\mathbb{T} = K_\mathbb{R}/R\).
    The authors provide three main contributions to the hardness of M-LWE with small secret and/or error, i.e., with coefficients bounded by some positive integer \(\eta\ll q\). The first is the computational hardness of \(\eta\)-M-LWE. The second is the pseudorandomness of \(\eta\)-M-LWE: the authors give a more involved hardness proof for the decision version of \(\eta\)-M-LWE through a reduction from M-LWE to \(\eta\)-M-LWE. This reduction applies to the decision versions, and it also slightly improves the noise rate, which no longer depends on the number of samples \(m\), in contrast to the first contribution. The third contribution is the one-wayness of M-LWE with small error: it addresses the hardness of the search variant M-SLWE when the error distribution is uniform over \(\eta\)-bounded elements instead of Gaussian.
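    To make the sample definitions above concrete, the following toy Python sketch (not taken from the paper) generates LWE and M-LWE samples of the stated form. All parameter values are illustrative and far too small for security, the ring is the power-of-two cyclotomic \(\mathbb{Z}[x]/(x^n+1)\) chosen only as an example, and the error is added coefficient-wise as a simplification of the canonical-embedding formulation.

import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (illustrative only).
q = 257        # modulus
d = 3          # dimension / module rank
n = 8          # ring degree for the M-LWE example
alpha = 0.005  # relative width of the Gaussian error

# ---- Plain LWE: a sample is (a, b = <a, s>/q + e mod 1) on Z_q^d x T ----
s = rng.integers(0, q, size=d)                    # secret s in Z_q^d

def lwe_sample(s):
    a = rng.integers(0, q, size=d)                # uniform a in Z_q^d
    e = rng.normal(0.0, alpha)                    # error from psi (continuous Gaussian)
    b = (a @ s / q + e) % 1.0                     # element of the torus T = R/Z
    return a, b

# ---- M-LWE over R_q = Z_q[x]/(x^n + 1), module rank d (coefficient-wise) ----
def ring_mul(f, g):
    # Multiply two polynomials of degree < n modulo x^n + 1 and modulo q.
    full = np.convolve(f, g)                      # product has length 2n - 1
    res = full[:n].copy()
    res[: n - 1] -= full[n:]                      # reduce using x^n = -1
    return res % q

s_mod = rng.integers(0, q, size=(d, n))           # secret in R_q^d

def mlwe_sample(s_mod):
    a = rng.integers(0, q, size=(d, n))           # uniform a in R_q^d
    e = rng.normal(0.0, alpha, size=n)            # error added per coefficient (simplification)
    inner = np.zeros(n, dtype=np.int64)
    for a_i, s_i in zip(a, s_mod):
        inner = (inner + ring_mul(a_i, s_i)) % q  # inner product <a, s> in R_q
    b = (inner / q + e) % 1.0                     # element of K_R / R, coordinate-wise
    return a, b

if __name__ == "__main__":
    a, b = lwe_sample(s)
    print("LWE sample:   a =", a, " b =", round(b, 4))
    a_m, b_m = mlwe_sample(s_mod)
    print("M-LWE sample: b =", np.round(b_m, 4))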
    lattice-based cryptography
    module learning with errors
    short distributions
    bounded secret
    bounded error