Partial fairness in secure two-party computation (Q421046)

From MaRDI portal
scientific article

    Statements

    Partial fairness in secure two-party computation (English)
    23 May 2012
    This paper considers the goal of fairness in secure two-party computation. In this setting, fairness means that either both parties learn the output or neither does. The strongest formalization of this requirement is known to be unattainable in general, so the authors consider a relaxation to partial fairness. The notion of partial fairness they present retains the real/ideal-world paradigm of the full definition and merely relaxes the notion of simulation, allowing the real and ideal worlds to be distinguished with probability at most \(1/p\), where \(p\) is a polynomial. The work provides a broad understanding of how to achieve this definition for a wide class of functionalities (those with polynomial-size domains or ranges), together with impossibility results demonstrating the tightness of these feasibility results.

    More precisely, the authors consider randomized functionalities \(\mathcal{F} = \{ f_n : X_n \times Y_n \rightarrow Z_n\}\). Their first result establishes that, as long as one of \(X_n, Y_n\) has polynomial size, the functionality can be computed securely (with abort) and with partial fairness, assuming the existence of enhanced trapdoor permutations. They then establish the tightness of this result by exhibiting a functionality in which \(X_n, Y_n\) have super-polynomial size and \(Z_n\) has constant size, yet no protocol can simultaneously achieve security with abort and partial fairness, even at the level of \(1/p = 1/5\). However, the authors prove that privacy can be achieved alongside partial fairness whenever \(Z_n\) has polynomial size, again assuming the existence of enhanced trapdoor permutations. This result is also shown to be tight: when the sizes of \(X_n, Y_n, Z_n\) are all super-polynomial, there exist functionalities that cannot be computed with partial fairness even for the modest value of \(1/p = 1/3\).

    The main technique used to prove the feasibility results is to design protocols that proceed in polynomially many rounds, where the values learned by the parties in each round eventually shift from being randomly distributed to being the true outputs. The challenge is then to prevent the party who learns the true value first from recognizing it immediately and halting participation before the other party has learned it. Intuitively, polynomial-size domains and ranges help here: if there are only polynomially many values, there is a noticeable chance that the true value also occurs in some of the randomly sampled rounds, so an early abort cannot reliably be timed to the switch-over.

    The paper concludes by proposing some intriguing further questions. It is very natural to wonder whether the tradeoff between round complexity and the partial fairness parameter can be improved, and how these results could be extended to more than two parties. The authors also ask to what extent their negative examples can be circumvented by restricting attention to a more structured class of functionalities with large domains and ranges.
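    To make the relaxation concrete, the simulation requirement can be sketched as follows (the notation \(\mathrm{REAL}_{\Pi,\mathcal{A}}\), \(\mathrm{IDEAL}_{f,\mathcal{S}}\) and the distinguisher \(D\) are the reviewer's shorthand rather than a verbatim quotation of the paper's definition): for every efficient real-world adversary \(\mathcal{A}\) there must exist an efficient ideal-world simulator \(\mathcal{S}\) such that, for every efficient distinguisher \(D\) and all inputs \(x, y\),
    \[ \bigl|\Pr[D(\mathrm{REAL}_{\Pi,\mathcal{A}}(x,y,n))=1] - \Pr[D(\mathrm{IDEAL}_{f,\mathcal{S}}(x,y,n))=1]\bigr| \le \frac{1}{p(n)} + \mathrm{negl}(n), \]
    whereas the standard definition demands a negligible bound on the left-hand side.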
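    The round structure behind the feasibility results can be illustrated with a small sketch. The code below simulates only the schedule of per-round values as an idealized trusted dealer might prepare it; in the paper the dealer is replaced by a secure-with-abort computation combined with secret sharing, and the function names, the parameters rounds and alpha, and the geometric choice of the switch-over round are the reviewer's illustrative assumptions rather than the paper's exact construction.

        import random

        def sample_threshold(rounds, alpha):
            """Sample the switch-over round i_star: geometric with success
            probability alpha, capped at the final round (illustrative)."""
            i_star = 1
            while i_star < rounds and random.random() >= alpha:
                i_star += 1
            return i_star

        def dealer_values(f, x, y, x_domain, y_domain, rounds, alpha):
            """Per-round values an idealized trusted dealer would hand out.

            Before round i_star, party 1 sees f(x, y_hat) for a fresh random
            y_hat and party 2 sees f(x_hat, y) for a fresh random x_hat; from
            round i_star onward both see the true output f(x, y).  A party
            whose counterpart aborts outputs the last value it received.
            """
            i_star = sample_threshold(rounds, alpha)
            vals_p1, vals_p2 = [], []
            for i in range(1, rounds + 1):
                if i < i_star:
                    vals_p1.append(f(x, random.choice(y_domain)))  # dummy for P1
                    vals_p2.append(f(random.choice(x_domain), y))  # dummy for P2
                else:
                    out = f(x, y)
                    vals_p1.append(out)
                    vals_p2.append(out)
            return i_star, vals_p1, vals_p2

        # Toy usage: two-party AND over single bits.
        if __name__ == "__main__":
            f = lambda a, b: a & b
            print(dealer_values(f, 1, 0, [0, 1], [0, 1], rounds=10, alpha=0.2))

    Because the dummy values are drawn from the same polynomial-size domain or range as the true output, a party that aborts after seeing a plausible value cannot tell whether it has seen \(f(x,y)\) or a dummy, which is exactly the intuition described above.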
    fairness
    secure two-party computation
    randomized functionalities
    privacy