cmfrec (Q65690)

Language: English
Label: cmfrec
Description: Collective Matrix Factorization for Recommender Systems
Also known as: (none)

    Statements

    Software versions (publication date):

    3.5.1-1 (11 April 2023)
    2.3.0 (23 November 2020)
    2.3.2 (23 November 2020)
    2.4.1 (6 January 2021)
    2.4.2 (10 January 2021)
    2.4.5 (1 March 2021)
    3.1.0 (20 May 2021)
    3.1.2 (28 June 2021)
    3.2.1 (29 July 2021)
    3.2.2-1 (26 September 2021)
    3.2.2-2 (7 November 2021)
    3.2.2 (30 July 2021)
    3.3.0 (3 January 2022)
    3.3.1 (5 January 2022)
    3.4.1 (10 February 2022)
    3.4.2 (10 February 2022)
    3.4.3-2 (25 October 2022)
    3.4.3 (9 July 2022)
    3.5.0 (26 November 2022)
    3.5.1-2 (28 November 2023)
    3.5.1 (8 March 2023)
    3.5.1-3 (9 December 2023)
    Collective matrix factorization (a.k.a. multi-view or multi-way factorization; Singh, Gordon, (2008) <doi:10.1145/1401890.1401969>) tries to approximate a (potentially very sparse or having many missing values) matrix 'X' as the product of two low-dimensional matrices, optionally aided with secondary information matrices about rows and/or columns of 'X', which are also factorized using the same latent components. The intended usage is for recommender systems, dimensionality reduction, and missing-value imputation. Implements extensions of the original model (Cortes, (2018) <arXiv:1809.00366>) and can produce different factorizations such as the weighted 'implicit-feedback' model (Hu, Koren, Volinsky, (2008) <doi:10.1109/ICDM.2008.22>), the 'weighted-lambda-regularization' model (Zhou, Wilkinson, Schreiber, Pan, (2008) <doi:10.1007/978-3-540-68880-8_32>), or the enhanced model with 'implicit features' (Rendle, Zhang, Koren, (2019) <arXiv:1905.01395>), with or without side information. Can use gradient-based procedures or alternating-least-squares procedures (Koren, Bell, Volinsky, (2009) <doi:10.1109/MC.2009.263>), with either a Cholesky solver, a faster conjugate-gradient solver (Takacs, Pilaszy, Tikk, (2011) <doi:10.1145/2043932.2043987>), or a non-negative coordinate-descent solver (Franc, Hlavac, Navara, (2005) <doi:10.1007/11556121_50>), providing efficient methods for sparse and dense data, and mixtures thereof. Supports L1 and L2 regularization in the main models, offers alternative most-popular and content-based models, and implements functionality for cold-start recommendations and imputation of 2D data.
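    As a rough illustration of the alternating-least-squares idea mentioned in the description, the following self-contained R sketch factorizes a single matrix X with missing values as A %*% t(B) using ridge-regularized least-squares updates. It is a didactic toy under assumed settings (dense toy data, no side-information matrices, no weighting, fixed k and lambda chosen here for illustration), not the package's own implementation or API; the names A, B and solve_row are assumptions of this sketch.

        ## Didactic ALS sketch: approximate X (with NAs) by A %*% t(B).
        ## Not the cmfrec implementation; illustration only.
        set.seed(1)
        m <- 50; n <- 40; k <- 5; lambda <- 0.1
        X <- matrix(rnorm(m * n), m, n)
        X[sample(length(X), 0.6 * length(X))] <- NA      # hide ~60% of entries, as in sparse ratings data

        A <- matrix(rnorm(m * k, sd = 0.1), m, k)        # row (user) factors
        B <- matrix(rnorm(n * k, sd = 0.1), n, k)        # column (item) factors

        ## Ridge-regularized least-squares update for one row of factors,
        ## using only the observed entries of the corresponding row/column of X.
        solve_row <- function(x_vec, Fmat) {
          obs <- which(!is.na(x_vec))
          Fo  <- Fmat[obs, , drop = FALSE]
          solve(crossprod(Fo) + lambda * diag(k), crossprod(Fo, x_vec[obs]))
        }

        for (iter in 1:20) {
          for (i in 1:m) A[i, ] <- solve_row(X[i, ], B)  # update row factors, column factors fixed
          for (j in 1:n) B[j, ] <- solve_row(X[, j], A)  # update column factors, row factors fixed
        }

        X_hat <- A %*% t(B)                              # reconstruction; also imputes the missing cells

    In the collective variant described above, the same alternating scheme additionally ties A and B to factorizations of the side-information matrices about the rows and columns of 'X', so entities with few observed interactions can still be placed in the latent space, which is what underlies the cold-start functionality the description mentions.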