Tensorisation of vectors and their efficient convolution (Q647362)

From MaRDI portal

MaRDI profile type: MaRDI publication profile
Full work available at URL: https://doi.org/10.1007/s00211-011-0393-0
OpenAlex ID: W2022917013
Cites work: On the efficient computation of high-dimensional integrals and the approximation by exponential sums
Cites work: Convolution of hp-functions on locally refined grids
Cites work: Tensor Spaces and Numerical Tensor Calculus
Cites work: A new scheme for the tensor representation
Cites work: \(O(d \log N)\)-quantics approximation of \(N\)-\(d\) tensors in high-dimensional numerical modeling
Cites work: Approximation of $2^d\times2^d$ Matrices Using Tensor Decomposition
Cites work: Breaking the Curse of Dimensionality, Or How to Use SVD in Many Dimensions
 


Language: English
Label: Tensorisation of vectors and their efficient convolution
Description: scientific article

    Statements

    Tensorisation of vectors and their efficient convolution (English)
    Publication date: 23 November 2011
    In recent papers, the tensorisation of vectors has been discussed. Tensorisation is the interpretation of an ordinary vector in \(\mathbb{R}^n\) as a tensor. For this purpose, the author introduces a tensor space \(V\) and an isomorphism \(\Phi: V\to \mathbb{R}^n\). Certain tensor representations are also introduced; they allow a simple truncation procedure. Black-box tensor approximation methods can be used to reduce the data size of the tensor representation. In particular, if the vector corresponds to a grid function, the resulting data size can become much smaller than \(n\), e.g., \(O(\log n)\ll n\). The author then considers operations between vectors, a first example being the scalar product. The crucial point is that the computational work of an operation should be related to the data size of its operands; assuming a data size \(\ll n\), the cost should also be much smaller than that of the operation in the standard \(\mathbb{R}^n\) vector format. The main interest of this article is the convolution operation \(u:= v*w\) with \(u_i= \sum_k v_k w_{i-k}\). The author discusses the convolution of two vectors given in a sparse tensor representation, the result being obtained again in tensor representation; moreover, the cost of the convolution algorithm is related to the data sizes of the operands. The paper notes that instead of \(\mathbb{R}^n\) one can also treat finite-dimensional subspaces of function spaces: since \(\mathbb{R}^n\) vectors can be considered as grid values of functions, the corresponding procedure can be applied to univariate functions, and operations like the scalar product or the convolution of functions can be performed directly in the tensor format. The paper ends with some generalizations.
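    As a purely illustrative sketch (not the author's algorithm, and not part of the reviewed paper), the following NumPy code shows the two ingredients named above under the common assumption \(n = 2^d\): the tensorisation map, realised here as a plain reshape of a length-\(n\) vector into a \(d\)-fold \(2\times\dots\times 2\) tensor, and the convolution \(u_i = \sum_k v_k w_{i-k}\) evaluated directly from the definition. All function names (tensorise, vectorise, convolve_def) are hypothetical; the efficient convolution in compressed tensor format that the paper is actually about is not reproduced here.

```python
import numpy as np


def tensorise(v):
    """Interpret a vector of length n = 2**d as a d-fold tensor of shape (2, ..., 2).

    Illustrative stand-in for the isomorphism Phi: V -> R^n from the review.
    With a C-order reshape, T[i_{d-1}, ..., i_1, i_0] = v[sum_j i_j * 2**j].
    """
    n = v.size
    d = int(round(np.log2(n)))
    if 2 ** d != n:
        raise ValueError("this sketch assumes n = 2**d")
    return v.reshape((2,) * d)


def vectorise(t):
    """Inverse map: flatten the d-fold tensor back to a vector in R^n."""
    return t.reshape(-1)


def convolve_def(v, w):
    """Convolution u_i = sum_k v_k * w_{i-k}, straight from the definition (cost O(n*m))."""
    n, m = v.size, w.size
    u = np.zeros(n + m - 1)
    for i in range(n + m - 1):
        for k in range(max(0, i - m + 1), min(n, i + 1)):
            u[i] += v[k] * w[i - k]
    return u


if __name__ == "__main__":
    d = 3
    rng = np.random.default_rng(0)
    v = rng.standard_normal(2 ** d)
    w = rng.standard_normal(2 ** d)

    T = tensorise(v)                      # shape (2, 2, 2)
    assert np.allclose(vectorise(T), v)   # Phi is an isomorphism
    assert np.allclose(convolve_def(v, w), np.convolve(v, w))
```

    The point of the paper is precisely to avoid this \(O(n^2)\)-type cost: the convolution is carried out on the (possibly \(O(\log n)\)-sized) tensor representations themselves, so that the work scales with the data sizes of the operands rather than with \(n\).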
    Keywords: tensorisation of vectors; convolution; tensor representations; convolution algorithm; scalar product

    Identifiers