Sustained space complexity
From MaRDI portal
Publication:1648824
Abstract: Memory-hard functions (MHF) are functions whose evaluation cost is dominated by memory cost. MHFs are egalitarian, in the sense that evaluating them on dedicated hardware (like FPGAs or ASICs) is not much cheaper than on off-the-shelf hardware (like x86 CPUs). MHFs have interesting cryptographic applications, most notably to password hashing and securing blockchains. Alwen and Serbinenko [STOC'15] define the cumulative memory complexity (cmc) of a function as the sum (over all time-steps) of the amount of memory required to compute the function. They advocate that a good MHF must have high cmc. Unlike previous notions, cmc takes into account that dedicated hardware might exploit amortization and parallelism. Still, cmc has been criticized as insufficient: it fails to capture possible time-memory trade-offs, and since memory cost doesn't scale linearly, functions with the same cmc could still have very different actual hardware cost. In this work we address this problem and introduce the notion of sustained-memory complexity, which requires that any algorithm evaluating the function must use a large amount of memory for many steps. We construct functions (in the parallel random oracle model) whose sustained-memory complexity is almost optimal: our function can be evaluated using $n$ steps and $O(n/\log n)$ memory, in each step making one query to the (fixed-input-length) random oracle, while any algorithm that can make arbitrarily many parallel queries to the random oracle still needs $\Omega(n/\log n)$ memory for $\Omega(n)$ steps. Our main technical contribution is the construction of a family of DAGs on $n$ nodes with constant indegree and high ``sustained-space complexity'', meaning that any parallel black-pebbling strategy requires $\Omega(n/\log n)$ pebbles for at least $\Omega(n)$ steps.
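The two complexity measures the abstract contrasts can be made concrete on a small graph. The sketch below (an illustration, not code from the paper) represents a parallel black-pebbling strategy as a sequence of pebble configurations on a DAG: cmc is the sum of pebble counts over all steps, while sustained-space complexity at level `s` counts the steps that use at least `s` pebbles. The DAG, strategy, and function names here are illustrative choices.

```python
# Minimal sketch of the pebbling measures behind cmc and sustained-space
# complexity. A parallel black pebbling is a sequence of configurations
# (sets of pebbled nodes); a pebble may be newly placed on a node only if
# all of its parents were pebbled in the previous configuration.

def is_legal_parallel_pebbling(parents, configs):
    """Check the parallel black-pebbling rule for each transition."""
    prev = set()
    for conf in configs:
        for v in conf - prev:                       # newly placed pebbles
            if not set(parents.get(v, ())) <= prev:
                return False
        prev = conf
    return True

def cumulative_complexity(configs):
    """cmc: sum over all time-steps of the memory (pebbles) in use."""
    return sum(len(conf) for conf in configs)

def sustained_space(configs, s):
    """Number of steps during which at least s pebbles are in use."""
    return sum(1 for conf in configs if len(conf) >= s)

# Toy example: a path 0 -> 1 -> 2, pebbled left to right.
parents = {1: [0], 2: [1]}
strategy = [{0}, {0, 1}, {1, 2}]
assert is_legal_parallel_pebbling(parents, strategy)
print(cumulative_complexity(strategy))   # 1 + 2 + 2 = 5
print(sustained_space(strategy, 2))      # 2 steps use >= 2 pebbles
```

Two strategies with the same cmc can differ sharply in sustained space (e.g. one brief memory spike versus a long plateau), which is exactly the distinction the paper's notion is designed to capture.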
Recommendations
- Sustained space and cumulative complexity trade-offs for data-dependent memory-hard functions
- Data-independent memory hard functions: new attacks and stronger constructions
- Efficiently computing data-independent memory-hard functions
- Depth-robust graphs and their cumulative memory complexity
- High Parallel Complexity Graphs and Memory-Hard Functions
Cited in (19)
- Parallelizable delegation from LWE
- Efficiently computing data-independent memory-hard functions
- On the complexity of \textsf{scrypt} and proofs of space in the parallel random oracle model
- PURED: a unified framework for resource-hard functions
- Memory-hard functions from cryptographic primitives
- Depth-robust graphs and their cumulative memory complexity
- Scrypt is maximally memory-hard
- Verifiable capacity-bound functions: a new primitive from Kolmogorov complexity. (Revisiting space-based security in the adaptive setting)
- On the computational complexity of minimal cumulative cost graph pebbling
- Static-memory-hard functions, and modeling the cost of space vs. time
- SPARKs: succinct parallelizable arguments of knowledge
- Bandwidth-Hard Functions: Reductions and Lower Bounds
- Trapdoor memory-hard functions
- Memory-hard puzzles in the standard model with applications to memory-hard functions and resource-bounded locally decodable codes
- Proofs of Catalytic Space
- scientific article; zbMATH DE number 7376033 (no title available)
- The parallel reversible pebbling game: analyzing the post-quantum security of iMHFs
- Sustained space and cumulative complexity trade-offs for data-dependent memory-hard functions
- Disproving the conjectures from ``On the complexity of \textsf{scrypt} and proofs of space in the parallel random oracle model''