Implementations of algorithms for data analysis based on rough set theory (RST) and fuzzy rough set theory (FRST). The package provides not only the basic concepts of RST and FRST but also implementations of popular algorithms derived from those theories. Its methods fall into several categories by functionality: discretization, feature selection, instance selection, rule induction, and nearest-neighbor-based classification. RST was introduced by Zdzisław Pawlak in 1982 as a mathematical tool for modeling and processing imprecise or incomplete information. Because it relies on the indiscernibility relation between objects/instances, RST requires no additional parameters to analyze the data. FRST extends RST by combining Pawlak's indiscernibility with the notion of vagueness expressed through fuzzy sets, as proposed by Zadeh in 1965.
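To make the indiscernibility-based analysis concrete, the following is a minimal sketch in Python of Pawlak's core constructions: partitioning a decision table into equivalence classes of the indiscernibility relation, then computing the lower and upper approximations of a decision class. The decision table, attribute names, and function names are illustrative inventions, not the RoughSets package's API (which is an R library).

```python
from collections import defaultdict

# Toy decision table (hypothetical data): condition attributes
# "headache" and "temp", decision attribute "flu".
table = {
    "o1": {"headache": "yes", "temp": "high",   "flu": "yes"},
    "o2": {"headache": "yes", "temp": "high",   "flu": "yes"},
    "o3": {"headache": "yes", "temp": "normal", "flu": "no"},
    "o4": {"headache": "no",  "temp": "high",   "flu": "yes"},
    "o5": {"headache": "no",  "temp": "high",   "flu": "no"},
    "o6": {"headache": "no",  "temp": "normal", "flu": "no"},
}
conditions = ["headache", "temp"]

def indiscernibility_classes(table, attrs):
    """Partition objects into equivalence classes of IND(attrs):
    two objects are indiscernible iff they agree on every attribute in attrs."""
    classes = defaultdict(set)
    for obj, row in table.items():
        classes[tuple(row[a] for a in attrs)].add(obj)
    return list(classes.values())

def approximations(classes, target):
    """Lower approximation: union of classes fully contained in target.
    Upper approximation: union of classes that intersect target."""
    lower, upper = set(), set()
    for c in classes:
        if c <= target:
            lower |= c
        if c & target:
            upper |= c
    return lower, upper

classes = indiscernibility_classes(table, conditions)
flu_yes = {o for o, row in table.items() if row["flu"] == "yes"}
lower, upper = approximations(classes, flu_yes)
# o4 and o5 share identical condition values but differ in decision,
# so their class {o4, o5} lies in the boundary region (upper minus lower).
```

The gap between the two approximations is exactly the boundary region that RST uses to quantify uncertainty: here {o4, o5} cannot be classified from the condition attributes alone.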
- NEC: a nested equivalence class-based dependency calculation approach for fast feature selection using rough set theory
- Applications of rough sets in big data analysis: an overview
- Recent fuzzy generalisations of rough sets theory: a systematic review and methodological critique of the literature
- Distributed approach for computing rough set approximations of big incomplete information systems
- ST-Hadoop
- Object similarity measures and Pawlak's indiscernibility on decision tables
- LERS
- ROSE
- ROSETTA
- Wumpus
- 4eMka2
- ReproZip
- DIXER
- Rseslib
- jMAF
- A novel attribute reduction approach for multi-label data based on rough set theory
- Construction of fuzzy similarity relations based on the similarity quality measure
- Parallel attribute reduction in dominance-based neighborhood rough set
This page was built for software: RoughSets