Convex optimization on Banach spaces (Q285434)

From MaRDI portal
Cites work:
- Convex analysis and nonlinear optimization. Theory and examples.
- The convex geometry of linear inverse problems
- Coresets, sparse greedy approximation, and the Frank-Wolfe algorithm
- Trading Accuracy for Sparsity in Optimization Problems with Sparsity Constraints
- Greedy algorithms and \(M\)-term approximation with regard to redundant dictionaries
- Greedy-type approximation in Banach spaces and applications
- Q3998421
- Introductory lectures on convex optimization. A basic course.
- Greedy strategies for convex optimization
- Greedy Approximation
- Greedy expansions in convex optimization
- Sequential greedy approximation for certain convex optimization problems

Latest revision as of 00:02, 12 July 2024

scientific article

Language: English
Label: Convex optimization on Banach spaces
Description: scientific article

    Statements

    Convex optimization on Banach spaces (English)
    19 May 2016
    Optimization, and convex optimization in particular, is an important subject in applied mathematics because of its practical relevance, especially when a function of very many variables must be adjusted to seek an optimum. Routine methods for computing such minima (normally only local minima) use first or even second derivatives, that is, gradients or Hessian matrices (for a Newton method), respectively; in practice, however, the former are often unavailable and the latter almost never. Algorithms that require no derivative evaluations at all (even when the functions to be minimized are theoretically sufficiently smooth) therefore become increasingly interesting. In this paper, such methods are studied in the general setting of convex functions on Banach spaces, without evaluating even gradients. The (greedy) algorithms are presented and analysed with respect to their convergence rates.
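To illustrate the flavour of the methods discussed (not the paper's exact algorithms), here is a minimal, hypothetical sketch of a derivative-free greedy minimization scheme in finite dimensions: at each step every dictionary element and every step size on a fixed grid is tried, and the iterate moves to the trial point with the smallest function value, using function evaluations only. The function `greedy_minimize` and all its parameters are illustrative assumptions, not from the source.

```python
import numpy as np

def greedy_minimize(f, dictionary, steps=50, grid=np.linspace(-1.0, 1.0, 41)):
    """Derivative-free greedy minimization sketch (illustrative only).

    At each iteration, try every dictionary element g (rows of `dictionary`)
    and every step size t on a fixed grid, then move to the trial point
    x + t * g with the smallest function value. Only evaluations of f are
    used -- no gradients or Hessians.
    """
    x = np.zeros(dictionary.shape[1])
    for _ in range(steps):
        best_val, best_x = f(x), x
        for g in dictionary:
            for t in grid:
                cand = x + t * g
                val = f(cand)
                if val < best_val:
                    best_val, best_x = val, cand
        x = best_x  # keep x unchanged if no trial point improves on it
    return x
```

For example, minimizing the smooth convex function \(f(x) = \|x - b\|_2^2\) over moves along the coordinate directions drives the iterate to \(b\) in a few steps, since each greedy step corrects one coordinate exactly when \(b\)'s entries lie on the step-size grid. The exhaustive grid search is of course far costlier than a gradient step; the point is only that no derivative information is ever required.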
    Keywords: sparse; optimization; greedy; Banach space; convergence rate; approximate evaluation
