On the Hardness of Compressing Weights

From MaRDI portal
Publication:6168482

DOI: 10.4230/LIPICS.MFCS.2021.64
arXiv: 2107.02554
OpenAlex: W3193618752
MaRDI QID: Q6168482
FDO: Q6168482


Authors: Bart M. P. Jansen, Shivesh Kumar Roy, Michał Włodarczyk


Publication date: 8 August 2023

Abstract: We investigate computational problems involving large weights through the lens of kernelization, which is a framework of polynomial-time preprocessing aimed at compressing the instance size. Our main focus is the weighted Clique problem, where we are given an edge-weighted graph and the goal is to detect a clique of total weight equal to a prescribed value. We show that the weighted variant, parameterized by the number of vertices n, is significantly harder than the unweighted problem by presenting an O(n^{3−ε}) lower bound on the size of the kernel, under the assumption that NP ⊄ coNP/poly. This lower bound is essentially tight: we show that we can reduce the problem to the case with weights bounded by 2^{O(n)}, which yields a randomized kernel of O(n^3) bits. We generalize these results to the weighted d-Uniform Hyperclique problem, Subset Sum, and weighted variants of Boolean Constraint Satisfaction Problems (CSPs). We also study weighted minimization problems and show that weight compression is easier when we only want to preserve the collection of optimal solutions. Namely, we show that for node-weighted Vertex Cover on bipartite graphs it is possible to maintain the set of optimal solutions using integer weights from the range [1, n], but if we want to maintain the ordering of the weights of all inclusion-minimal solutions, then weights as large as 2^{Ω(n)} are necessary.
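The randomized weight reduction mentioned in the abstract can be illustrated by the classic modular-hashing idea for Subset Sum: reducing all weights modulo a random prime shrinks their bit length while preserving every yes-instance exactly, at the cost of a small probability of false positives. The sketch below is an illustration of this general technique only, not the paper's construction; all function names and parameter choices here are for demonstration.

```python
import random
from itertools import combinations

def is_prime(n):
    """Trial-division primality test (adequate for small moduli)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def random_prime(lo, hi):
    """Sample a uniformly random prime from [lo, hi)."""
    while True:
        c = random.randrange(lo, hi)
        if is_prime(c):
            return c

def has_subset_sum(weights, target, modulus=None):
    """Brute-force Subset Sum check over nonempty subsets.
    If `modulus` is given, sums are compared modulo it
    (i.e. we solve the compressed instance)."""
    for r in range(1, len(weights) + 1):
        for comb in combinations(weights, r):
            s = sum(comb)
            if modulus is None:
                if s == target:
                    return True
            elif s % modulus == target % modulus:
                return True
    return False

# Build a yes-instance with 40-bit weights, then compress to ~17-bit weights.
random.seed(0)
weights = [random.randrange(1, 2**40) for _ in range(8)]
target = sum(random.sample(weights, 4))
p = random_prime(2**16, 2**17)          # random ~17-bit prime modulus
compressed = [w % p for w in weights]   # each weight now fits in ~17 bits

# Completeness: a yes-instance stays yes, because every subset sum is
# preserved modulo p. (No-instances may become yes with small probability;
# that is the randomized part of the compression.)
assert has_subset_sum(weights, target)
assert has_subset_sum(compressed, target % p, modulus=p)
```

Each of the n weights drops from Θ(n) bits to O(log p) bits, which is the shape of savings a randomized kernel exploits; controlling the false-positive probability requires choosing p from a suitably large range.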


Full work available at URL: https://arxiv.org/abs/2107.02554

