Regularized least square regression with unbounded and dependent sampling (Q369717)

From MaRDI portal
Property / Wikidata QID: Q58915348
Property / MaRDI profile type: MaRDI publication profile
Property / full work available at URL: https://doi.org/10.1155/2013/139318
Property / OpenAlex ID: W1994904367
Property / cites work: Regularization networks and support vector machines
Property / cites work: Shannon sampling. II: Connections to learning theory
Property / cites work: Learning theory estimates via integral operators and their approximations
Property / cites work: Learning rates of least-square regularized regression
Property / cites work: Minimum complexity regression estimation with weakly dependent observations
Property / cites work: Online learning with Markov sampling
Property / cites work: A note on application of integral operator in learning theory
Property / cites work: Regularized least square regression with dependent samples
Property / cites work: Mixing properties of Harris chains and autoregressive processes
Property / cites work: Optimal rates for the regularized least-squares algorithm
Property / cites work: Concentration estimates for learning with unbounded sampling
Property / cites work: Integral operator approach to learning theory with unbounded sampling
Property / cites work: Optimal learning rates for least squares regularized regression with unbounded sampling
Property / cites work: ERM learning with unbounded sampling
Property / cites work: Half supervised coefficient regularization for regression learning with unbounded sampling
Property / cites work: Shannon sampling and function reconstruction from point values
Property / cites work: Application of integral operator for regularized least-square regression
Property / cites work: On regularization algorithms in learning theory
Property / cites work: Spectral Algorithms for Supervised Learning
Property / cites work: Least square regression with indefinite kernels and coefficient regularization
Property / cites work: Q5560061
 

Latest revision as of 20:55, 6 July 2024

scientific article
Language: English
Label: Regularized least square regression with unbounded and dependent sampling

    Statements

    Regularized least square regression with unbounded and dependent sampling (English)
    19 September 2013
    Summary: This paper focuses on the least squares regression problem for \(\alpha\)-mixing and \(\phi\)-mixing processes. The standard boundedness assumption on the output data is dropped, and the learning algorithm is analyzed with samples drawn from a dependent sampling process under a more general condition on the output data. Capacity-independent error bounds and learning rates are derived by means of the integral operator technique.
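The regularized least squares scheme the summary refers to can be illustrated by its standard kernel form (a minimal sketch of kernel ridge regression, not the paper's own analysis; the Gaussian kernel, the i.i.d. toy data, and all parameter values are illustrative assumptions — the paper's setting additionally allows dependent, unbounded samples):

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel between rows of X and Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rls_fit(X, y, lam=1e-3, sigma=1.0):
    # Regularized least squares in an RKHS: by the representer theorem the
    # minimizer of (1/n) sum (f(x_i) - y_i)^2 + lam * ||f||_K^2 solves
    # (K + lam * n * I) alpha = y, with f = sum_i alpha_i K(x_i, .)
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def rls_predict(X_train, alpha, X_new, sigma=1.0):
    # Evaluate f(x) = sum_i alpha_i K(x_i, x) at the new points
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Toy usage: noisy samples of sin on [0, 2*pi]
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = rls_fit(X, y, lam=1e-3, sigma=0.5)
pred = rls_predict(X, alpha, X, sigma=0.5)
```

The regularization parameter `lam` plays the role of the \(\lambda\) whose decay with the sample size governs the learning rates studied in the paper.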

    Identifiers