An optimal adaptive algorithm for the approximation of concave functions (Q2492698)

From MaRDI portal
scientific article

    Statements

    An optimal adaptive algorithm for the approximation of concave functions (English)
    14 June 2006
    Consider a proper concave function \(f:[0,1]\to\mathbb R\), normalized so that \(f(0)=0\) and \(f(1)=1\). Denote by \(f^{\prime}(\overline{x})\) an arbitrary supergradient of \(f\) at \(\overline{x}\), i.e., a value \(\xi\) satisfying \[ f(x)\leq f(\overline{x})+\xi(x-\overline{x})\text{ for all }x\in[0,1]. \] Let \(x_{1},\dots,x_{n}\) denote a finite sequence of \(n\) distinct points of evaluation, called knots, in \((0,1)\), and let \(\sigma\) be a permutation that reorders the knots and the two endpoints from left to right, i.e., \[ 0=x_{\sigma(0)}<x_{\sigma(1)}<\dots<x_{\sigma(n)}<x_{\sigma(n+1)}=1. \] Denote the under- and over-approximations of \(f\) on \([0,1]\), respectively, by \[ \begin{aligned} L(t)&=\min_{i=1,\dots,n+1}\left\{ f(x_{\sigma(i-1)})\frac{t-x_{\sigma(i)}}{x_{\sigma(i-1)}-x_{\sigma(i)}}+f(x_{\sigma(i)})\frac{t-x_{\sigma(i-1)}}{x_{\sigma(i)}-x_{\sigma(i-1)}}\right\},\\ U(t)&=\min_{i=0,\dots,n+1}\left\{ f(x_{\sigma(i)})+f^{\prime}(x_{\sigma(i)})(t-x_{\sigma(i)})\right\}, \end{aligned} \] so that \(L(t)\leq f(t)\leq U(t)\), which yields the approximation error term \(\int_{0}^{1}(U(t)-L(t))\,dt\). Using dynamic programming, the paper derives a sequential knot-selection procedure that is proved optimal in the sense of minimizing this error term in the worst case.
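The two envelopes and the error term can be made concrete in a short numerical sketch (illustrative only, not taken from the paper; the sample function, knot placement, and helper names are assumptions). For a concave \(f\), \(L\) is the minimum of the chord lines through consecutive knots and \(U\) is the minimum of the tangent (supergradient) lines at the knots:

```python
# Illustrative sketch (not from the paper): evaluate the under-approximation
# L (min of chord lines), the over-approximation U (min of tangent lines),
# and the error area \int_0^1 (U - L) dt for a sample concave f on [0, 1].

def make_L_U(f, df, knots):
    xs = sorted({0.0, 1.0, *knots})              # knots plus both endpoints
    fx = [f(x) for x in xs]

    def L(t):
        # minimum over the chord lines through consecutive knots
        return min(fx[i - 1] * (t - xs[i]) / (xs[i - 1] - xs[i])
                   + fx[i] * (t - xs[i - 1]) / (xs[i] - xs[i - 1])
                   for i in range(1, len(xs)))

    def U(t):
        # minimum over the tangent (supergradient) lines at every knot
        return min(fx[i] + df(xs[i]) * (t - xs[i]) for i in range(len(xs)))

    return L, U

f = lambda x: 2 * x - x * x                      # concave, f(0)=0, f(1)=1
df = lambda x: 2 - 2 * x                         # its derivative (a supergradient)
L, U = make_L_U(f, df, [0.25, 0.5])

n = 2000                                         # trapezoid rule for the error
ts = [i / n for i in range(n + 1)]
gap = [U(t) - L(t) for t in ts]
err = sum(0.5 * (gap[i] + gap[i + 1]) / n for i in range(n))
```

The sandwich \(L(t)\leq f(t)\leq U(t)\) holds at every \(t\), and adding knots shrinks `err` toward zero; the algorithm in the paper is about choosing where those knots go.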
The procedure is a deterministic adaptive algorithm in the sense that the selection of the current knot depends on the locations of the previous knots, i.e., the choice of \(x_{i}\) may depend on the previous information \(x_{1},f(x_{1}),f^{\prime}(x_{1}),\dots,x_{i-1},f(x_{i-1}),f^{\prime}(x_{i-1})\). The analysis improves upon previous works by (1) obtaining both the optimal convergence rate and optimal selection rules for each \(n\), and (2) being parameter-free, i.e., requiring no a priori information about the function to be approximated.
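The flavor of such adaptivity can be sketched with a simple greedy variant (an illustration under assumptions, NOT the paper's dynamic-programming-optimal rule): each new knot bisects the interval currently contributing most to \(\int_{0}^{1}(U-L)\,dt\), using only previously observed values \(f(x_i), f^{\prime}(x_i)\). For concave \(f\) the gap \(U-L\) over one interval is a triangle whose apex is the intersection of the two tangents, so the per-interval error has a closed form:

```python
# Greedy adaptive knot selection (illustrative variant, not the paper's rule).

def interval_error(a, fa, ga, b, fb, gb):
    """Area between the chord and the two tangent lines on [a, b].

    For concave f this gap is a triangle whose apex is the intersection
    of the tangents at a and b.
    """
    if abs(ga - gb) < 1e-15:                 # parallel tangents: no gap
        return 0.0
    x = (fb - gb * b - fa + ga * a) / (ga - gb)   # apex of the tangent envelope
    chord = fa + (fb - fa) * (x - a) / (b - a)
    tangent = fa + ga * (x - a)
    return 0.5 * (b - a) * (tangent - chord)

def adaptive_knots(f, df, n):
    # Start from the endpoints; each step uses only past evaluations.
    pts = {0.0: (f(0.0), df(0.0)), 1.0: (f(1.0), df(1.0))}
    for _ in range(n):
        xs = sorted(pts)
        # pick the interval with the largest current error area ...
        a, b = max(zip(xs, xs[1:]),
                   key=lambda ab: interval_error(ab[0], *pts[ab[0]],
                                                 ab[1], *pts[ab[1]]))
        x = 0.5 * (a + b)                    # ... and bisect it
        pts[x] = (f(x), df(x))
    return sorted(pts)

knots = adaptive_knots(lambda x: 2 * x - x * x, lambda x: 2 - 2 * x, 3)
```

The paper's algorithm refines exactly this kind of sequential decision, but places each knot optimally (via dynamic programming) rather than at the midpoint.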
    concave functions
    piecewise linear
    adaptive algorithm
    dynamic programming