Dimensionality reduction with unsupervised nearest neighbors (Q357134)

From MaRDI portal
Language: English
Label: Dimensionality reduction with unsupervised nearest neighbors
Description: scientific article

    Statements

    Dimensionality reduction with unsupervised nearest neighbors (English)
    29 July 2013
    The book provides an overview of the author's work on dimensionality reduction using unsupervised nearest neighbors. It opens with a (somewhat oddly structured) overview of seminal algorithms in machine learning, including nearest-neighbor classification, empirical risk minimization, ensembles, support vector machines, and traditional dimensionality reduction techniques such as principal component analysis, self-organizing maps, locally linear embedding, and Isomap.

    The second part of the book covers the author's work on unsupervised nearest neighbors, which is essentially a greedy algorithm that builds up a low-dimensional representation of the data point by point, approximately minimizing a nonlinear least-squares error criterion as it goes. More advanced versions of the algorithm use evolutionary algorithms to escape poor local optima of the objective. The book also presents an extension that allows unsupervised nearest neighbors to deal with missing data.

    The book reports experiments on a number of data sets, comparing the method mostly to traditional techniques such as Isomap and locally linear embedding. Whilst these experiments are clearly described, it is somewhat unfortunate that the book makes no comparisons with dimensionality reduction techniques based on stochastic neighbor embedding that have become popular more recently (such as elastic embedding, t-SNE, and NeRV). The book could also have offered more analysis of the differences between unsupervised nearest neighbors and alternatives (for instance, how do the evolutionary approaches compare to relaxing the objective function and solving the relaxation with standard techniques for nonlinear least-squares optimization, such as Gauss-Newton?). Taken together, this book is primarily of interest to scholars who want to learn more about Prof. Kramer's research on dimensionality reduction.
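    To illustrate the greedy, point-by-point embedding idea described above, the following is a minimal Python sketch of a UNN-style procedure: latent positions on a one-dimensional integer grid are assigned one pattern at a time so as to minimize a K-nearest-neighbor reconstruction error measured in data space. The function names, the discretized candidate-position scheme, and the specific error criterion (reconstructing each pattern as the mean of its K nearest latent neighbors' patterns) are assumptions made for illustration, not the book's exact formulation.

    import numpy as np


    def knn_reconstruction_error(latent, data, k=2):
        """Mean squared error when each pattern is reconstructed as the average
        of the patterns of its K nearest neighbors in latent space
        (one plausible reading of the nonlinear least-squares criterion)."""
        n = len(latent)
        if n <= k:
            return 0.0  # not enough neighbors yet; treat the error as zero
        err = 0.0
        for i in range(n):
            dist = np.abs(latent - latent[i])
            dist[i] = np.inf                      # exclude the point itself
            neighbors = np.argsort(dist)[:k]      # K nearest latent neighbors
            reconstruction = data[neighbors].mean(axis=0)
            err += np.sum((data[i] - reconstruction) ** 2)
        return err / n


    def greedy_unn_embedding(data, k=2):
        """Embed patterns one by one onto a 1-D integer grid, greedily picking
        the latent slot that minimizes the current reconstruction error.
        (The discretized latent space and candidate scheme are assumptions.)"""
        latent, embedded = [], []                 # latent slots / pattern indices
        for i in range(len(data)):
            candidates = [p for p in range(-len(latent) - 1, len(latent) + 2)
                          if p not in latent]
            best_pos, best_err = None, np.inf
            for pos in candidates:
                trial_latent = np.array(latent + [pos], dtype=float)
                trial_data = data[embedded + [i]]
                e = knn_reconstruction_error(trial_latent, trial_data, k)
                if e < best_err:                  # ties broken by first candidate
                    best_pos, best_err = pos, e
            latent.append(best_pos)
            embedded.append(i)
        return np.array(latent, dtype=float)


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0.0, 3.0, 40))    # toy 1-D manifold in 3-D space
        Y = np.c_[np.cos(t), np.sin(t), t] + 0.05 * rng.normal(size=(40, 3))
        print(greedy_unn_embedding(Y, k=2))

    The evolutionary variants discussed in the book would replace the inner greedy candidate search with a population-based search over latent assignments; that refinement is not sketched here.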
    dimensionality reduction
    manifold learning

    Identifiers