Tensorisation of vectors and their efficient convolution (Q647362)

From MaRDI portal
 


scientific article
Language: English
Label: Tensorisation of vectors and their efficient convolution

    Statements

    Tensorisation of vectors and their efficient convolution (English)
    23 November 2011
    The tensorisation of vectors has been discussed in several recent papers. Tensorisation is the interpretation of an ordinary \(\mathbb{R}^n\) vector as a tensor. For this purpose, the author introduces a tensor space \(V\) and an isomorphism \(\Phi: V\to \mathbb{R}^n\). Certain tensor representations are also introduced; they allow a simple truncation procedure. Black-box tensor approximation methods can be used to reduce the data size of the tensor representation. In particular, if the vector corresponds to a grid function, the resulting data size can become much smaller than \(n\), e.g., \(O(\log n)\ll n\). The author then considers operations between vectors, a first example being the scalar product. The crucial point is that the computational work of an operation should be related to the data sizes of its operands; assuming a data size \(\ll n\), the cost should then also be much smaller than that of the same operation in the standard \(\mathbb{R}^n\) vector format. The main interest of the article is the convolution \(u:= v*w\) with \(u_i= \sum_k v_k w_{i-k}\). The author discusses the convolution of two vectors given in a sparse tensor representation, the result being obtained again in the tensor representation; the cost of the convolution algorithm is related to the operands' data sizes. The paper notes that instead of \(\mathbb{R}^n\) one can also treat finite-dimensional subspaces of function spaces: since \(\mathbb{R}^n\) vectors can be regarded as grid values of functions, the corresponding procedure applies to univariate functions, and operations such as the scalar product or convolution of functions can then be performed directly in the tensor format. The paper ends with some generalizations.
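    As a minimal illustration of the ideas reviewed above (a sketch only, assuming \(n=2^d\); the variable names are hypothetical, and the paper's efficient tensor-format convolution algorithm itself is not reproduced here), a length-\(2^d\) vector can be tensorised into a \(2\times 2\times\cdots\times 2\) tensor, and the convolution \(u_i=\sum_k v_k w_{i-k}\) can be checked against its definition in the standard vector format:

    ```python
    import numpy as np

    # Sketch: tensorisation of an R^n vector with n = 2^d, plus the
    # convolution u_i = sum_k v_k w_{i-k} in the standard vector format.
    # This only illustrates the isomorphism Phi, not the paper's algorithm.
    d = 4
    n = 2 ** d

    rng = np.random.default_rng(0)
    v = rng.standard_normal(n)
    w = rng.standard_normal(n)

    # Tensorisation (the inverse of Phi): interpret the vector as a d-way
    # 2 x 2 x ... x 2 tensor, indexed by the binary digits of i.
    V = v.reshape((2,) * d)
    assert np.array_equal(V.reshape(n), v)  # Phi(Phi^{-1}(v)) == v

    # Convolution in the standard R^n format, for reference.
    u = np.convolve(v, w)                   # full convolution, length 2n - 1
    u_naive = np.array([sum(v[k] * w[i - k]
                            for k in range(n) if 0 <= i - k < n)
                        for i in range(2 * n - 1)])
    assert np.allclose(u, u_naive)
    ```

    The point of the paper is that when \(v\) and \(w\) admit low-rank tensor representations, the convolution can be carried out at a cost tied to those (possibly \(O(\log n)\)) data sizes rather than to \(n\); the direct computation above costs \(O(n^2)\) (or \(O(n\log n)\) via FFT).
    
    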
    Keywords: tensorisation of vectors; convolution; tensor representations; convolution algorithm; scalar product
