On Realizing Rough Set Algorithms with Apache Spark
Proceedings: Third International Conference on Data Mining, Internet Computing, and Big Data (BigData2016)
Publication Date: 2016-07-21
Authors: Kuo-Min Huang, Hsin-Yu Chen, Kan-Lin Hsiung
Pages: 111-112
Keywords: Data Mining; Granular Computing; Rough Sets; Apache Spark; Hadoop MapReduce
Abstract
In this note, in line with the emerging granular-computing paradigm for huge datasets, we consider a Spark implementation of rough set theory, a powerful mathematical tool for dealing with vagueness and uncertainty in imperfect data.
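The abstract does not reproduce the authors' Spark code, but the rough-set operations it refers to are easy to state. The following plain-Python sketch (hypothetical data and function names, not from the paper) computes the indiscernibility classes of a decision table and the lower and upper approximations of a target concept; in a Spark realization, the class-building step would naturally map to a distributed group-by over attribute-value keys.

```python
from collections import defaultdict

def indiscernibility_classes(table, attributes):
    """Partition objects into equivalence classes of the indiscernibility
    relation: two objects are indiscernible when they agree on all the
    chosen attributes."""
    classes = defaultdict(set)
    for obj, row in table.items():
        key = tuple(row[a] for a in attributes)
        classes[key].add(obj)
    return list(classes.values())

def approximations(classes, target):
    """Lower approximation: union of classes wholly contained in the target.
    Upper approximation: union of classes that intersect the target."""
    lower, upper = set(), set()
    for cls in classes:
        if cls <= target:
            lower |= cls
        if cls & target:
            upper |= cls
    return lower, upper

# Toy decision table (illustrative only).
table = {
    "x1": {"color": "red",  "size": "big"},
    "x2": {"color": "red",  "size": "big"},
    "x3": {"color": "blue", "size": "big"},
    "x4": {"color": "blue", "size": "small"},
}
classes = indiscernibility_classes(table, ["color", "size"])
lower, upper = approximations(classes, {"x1", "x3"})
# x1 and x2 are indiscernible, so only x3 is certainly in the concept,
# while x1, x2, x3 are possibly in it.
```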