lightgbm (Q81096)
Language: English
Label: lightgbm
Description: Light Gradient Boosting Machine

    Statements

Software versions and release dates:

    Version    Release date
    3.0.0      21 September 2020
    3.0.0.2    1 October 2020
    3.1.0      19 November 2020
    3.1.1      8 December 2020
    3.2.0      22 March 2021
    3.2.1      13 April 2021
    3.3.0      9 October 2021
    3.3.1      30 October 2021
    3.3.2      14 January 2022
    3.3.3      10 October 2022
    3.3.4      16 December 2022
    3.3.5      16 January 2023
    4.2.0      8 December 2023
    4.3.0      18 January 2024
    Tree-based algorithms can be improved by introducing boosting frameworks. 'LightGBM' is one such framework, based on Ke, Guolin et al. (2017) <https://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision>. This package offers an R interface to work with it. It is designed to be distributed and efficient, with the following advantages: 1. faster training speed and higher efficiency; 2. lower memory usage; 3. better accuracy; 4. support for parallel learning; 5. the capacity to handle large-scale data. Owing to these advantages, 'LightGBM' has been widely used in winning solutions of machine learning competitions. Comparison experiments on public datasets suggest that 'LightGBM' can outperform existing boosting frameworks in both efficiency and accuracy, with significantly lower memory consumption. In addition, parallel experiments suggest that, in certain circumstances, 'LightGBM' can achieve a linear speed-up in training time by using multiple machines.
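
    To make the training workflow concrete, the following is a minimal sketch using LightGBM's Python interface (the item itself describes the R package, but the core workflow is analogous). The synthetic data, random seed, and all parameter values here are illustrative assumptions, not part of the item.

        import lightgbm as lgb
        import numpy as np

        # Illustrative synthetic binary-classification data (assumption, not from the item).
        rng = np.random.default_rng(0)
        X = rng.random((500, 10))
        y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

        # Wrap the features and labels in LightGBM's Dataset structure.
        train_set = lgb.Dataset(X, label=y)

        # Hypothetical parameter choices; any supported objective/metric could be used.
        params = {
            "objective": "binary",
            "metric": "binary_logloss",
            "num_leaves": 31,
            "learning_rate": 0.05,
            "verbosity": -1,
        }

        # Train a gradient-boosted tree ensemble for 50 boosting rounds.
        booster = lgb.train(params, train_set, num_boost_round=50)

        # Predict class probabilities on the training features.
        probs = booster.predict(X)
        print(probs[:5])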
