An efficient primal dual prox method for non-smooth optimization


DOI: 10.1007/s10994-014-5436-1
zbMATH Open: 1311.90188
arXiv: 1201.5283
OpenAlex: W2034410976
MaRDI QID: Q2339936
FDO: Q2339936


Authors: Tianbao Yang, Mehrdad Mahdavi, Rong Jin, Shenghuo Zhu


Publication date: 14 April 2015

Published in: Machine Learning

Abstract: We study non-smooth optimization problems in machine learning, where both the loss function and the regularizer are non-smooth. Previous studies of efficient empirical loss minimization assume either a smooth loss function or a strongly convex regularizer, which makes them unsuitable for non-smooth optimization. We develop a simple yet efficient method for a family of non-smooth optimization problems in which the dual form of the loss function is bilinear in the primal and dual variables. We cast the non-smooth optimization problem as a minimax optimization problem and develop a primal dual prox method that solves it at a rate of O(1/T), assuming the proximal step can be solved efficiently, which is significantly faster than the O(1/√T) convergence rate of a standard subgradient descent method. Our empirical study verifies the efficiency of the proposed method on various non-smooth optimization problems that arise ubiquitously in machine learning, comparing it against state-of-the-art first-order methods.
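The abstract's key idea, rewriting a non-smooth loss via its bilinear dual form and then alternating proximal updates on the primal and dual variables, can be illustrated with a small sketch. The NumPy code below applies a generic Chambolle-Pock-style primal-dual prox update to min_x ||Ax − b||_1 + λ||x||_1, a problem that fits the paper's setting since ||Ax − b||_1 = max over ||u||_∞ ≤ 1 of uᵀ(Ax − b). This is not the authors' exact algorithm; the problem instance, step sizes, extrapolation step, and iterate averaging are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, thresh):
    # Prox operator of thresh * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def primal_dual_prox(A, b, lam, T=1000):
    """Generic primal-dual prox sketch for
        min_x ||A x - b||_1 + lam * ||x||_1,
    using the bilinear dual form
        ||A x - b||_1 = max_{||u||_inf <= 1} u^T (A x - b).
    Step sizes follow the usual tau * gamma * ||A||^2 <= 1 rule
    (an assumption, not the paper's schedule).
    """
    m, n = A.shape
    L = np.linalg.norm(A, 2)       # spectral norm of A
    tau, gamma = 1.0 / L, 1.0 / L
    x, u = np.zeros(n), np.zeros(m)
    x_bar, x_avg = x.copy(), np.zeros(n)
    for _ in range(T):
        # Dual prox step: gradient ascent on u, then projection
        # onto the unit inf-norm ball.
        u = np.clip(u + gamma * (A @ x_bar - b), -1.0, 1.0)
        # Primal prox step: gradient descent on x, then prox of
        # the l1 regularizer (soft-thresholding).
        x_new = soft_threshold(x - tau * (A.T @ u), tau * lam)
        # Extrapolation, as in Chambolle-Pock-type schemes.
        x_bar = 2.0 * x_new - x
        x = x_new
        x_avg += x
    return x_avg / T               # averaged iterate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20)
    x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true
    x_hat = primal_dual_prox(A, b, lam=0.1, T=5000)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-2)[0])
```

Returning the averaged iterate reflects that O(1/T) guarantees for primal-dual schemes of this kind are typically stated for an averaged solution rather than the last iterate.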


Full work available at URL: https://arxiv.org/abs/1201.5283




Cited in: 13 publications
