Learning general sparse additive models from point queries in high dimensions (Q2007617)
Reviewed by: Martin D. Buhmann
Language | Label | Description | Also known as
---|---|---|---
English | Learning general sparse additive models from point queries in high dimensions | scientific article |

Statements
Learning general sparse additive models from point queries in high dimensions (English)
22 November 2019
In learning methods, an unknown function \(f\) of several variables is to be recovered from a small number of samples taken on a compact set \(G\), say, on which the function is defined. In this paper, a probabilistic recovery of the unknown function \(f\) is proposed for the compact set \(G\) being the multivariate unit cube. The setting is such that the unknown function is additive (a so-called SPAM, sparse additive model) with respect to its interactions: the \(r\)-wise sets of (\(r\)-variate) interactions between the variables form the summands whose sum yields the unknown function \(f\). The goal is then to recover the unknown interaction sets, and thereby \(f\), where the recovery is to be understood in a probabilistic sense. The function \(f\) underlying this approach need \textit{not} be continuously differentiable (only Hölder continuous), as is required in other approaches. The algorithms are allowed to take samples at any points of \(G\) chosen at the discretion of the user and to use the function values there (point queries). The order \(r\) of the interactions is allowed to be \textit{arbitrary}, which is also new in this paper compared with the known literature. The paper is organised such that the cases \(r=1\) and \(r=2\), i.e., univariate and bivariate interactions, are always described first, and the new method is then generalised to interactions of arbitrary order \(r\). An extensive discussion of the success of the new algorithms is provided at the end of the article.
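For concreteness, the model class described above can be sketched as follows; the notation \(\mathcal{S}_1,\dots,\mathcal{S}_r\) for the unknown interaction sets, \(d\) for the ambient dimension, and \(\phi\) for the component functions is illustrative rather than taken from the paper:
\[
f(x_1,\dots,x_d) \;=\; \sum_{j\in\mathcal{S}_1}\phi_j(x_j) \;+\; \sum_{(j,k)\in\mathcal{S}_2}\phi_{(j,k)}(x_j,x_k) \;+\;\cdots\;+\; \sum_{\mathbf{j}\in\mathcal{S}_r}\phi_{\mathbf{j}}(x_{j_1},\dots,x_{j_r}),
\]
with each \(\mathcal{S}_\rho\) small and unknown, and each component \(\phi\) merely Hölder continuous on the unit cube.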
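The point-query access model can also be illustrated with a small toy example. The sketch below is an assumption-laden illustration only: the ambient dimension, active sets, component functions, and the crude coordinate-screening heuristic at the end are inventions of this sketch, not the (hash-based) procedure of the paper.

```python
import numpy as np

# Toy SPAM in ambient dimension d = 10 with one univariate and one
# bivariate interaction. Active sets and components are illustrative.
d = 10
rng = np.random.default_rng(0)

def phi_2(t):
    # Hölder continuous (exponent 1/2), not differentiable at t = 0.5
    return np.sqrt(np.abs(t - 0.5))

def phi_47(s, t):
    # again merely Hölder continuous
    return np.abs(s - t) ** 0.7

def f(x):
    # x lies in the unit cube [0, 1]^d; only coordinates 2, 4, 7 matter
    return phi_2(x[2]) + phi_47(x[4], x[7])

# Point queries: the learner evaluates f at points of its own choosing.
# A naive screening heuristic (NOT the paper's algorithm): perturb one
# coordinate at random base points and flag coordinates where f varies.
def looks_active(j, trials=20, h=0.25):
    for _ in range(trials):
        x = rng.uniform(0.0, 1.0, size=d)
        y = x.copy()
        y[j] = (x[j] + h) % 1.0
        if abs(f(y) - f(x)) > 1e-12:
            return True
    return False

print([j for j in range(d) if looks_active(j)])  # expect [2, 4, 7]
```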
sparse additive models
sampling
hash functions
sparse recovery