
An Effective Approach to Compute Distance of Uncertain Data by Using K-Nearest Neighbor

Journal: International Journal of Science and Research (IJSR) (Vol.4, No. 4)

Publication Date:

Authors : ; ;

Page : 1916-1920

Keywords : KNN; K-L Divergence; Uncertain Data;

Source : Download | Find it from : Google Scholar

Abstract

Generally, an object of uncertain data can be represented by a probability distribution, and clustering uncertain data has become an important problem. In the existing system, the Kullback-Leibler divergence from information theory is used to measure the similarity between uncertain data objects. This work is based on computing probability mass functions for uncertain data with both discrete and continuous values: using the probability mass function, the distance is measured separately for the discrete and continuous cases, and the probabilistic ratio of the continuous and discrete distances determines the similarity between objects. The uncertain data are then clustered with density-based clustering. The main drawback of the existing system lies in selecting the nearest neighbor. To overcome this problem, the proposed work introduces the K-nearest-neighbor algorithm to determine the nearest neighbors. The K-nearest-neighbor algorithm computes the distance between a query scenario and the set of scenarios in the data set, where the distance is measured for both the discrete and continuous cases via the probability mass function; KNN then identifies the nearest neighbors. Hence, the proposed work produces effective results and overcomes the drawback of the existing technique.
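To make the idea concrete, the sketch below is a minimal Python illustration of distance computation between uncertain objects represented as discrete probability mass functions, with Kullback-Leibler divergence as the distance and a K-nearest-neighbor step selecting the closest scenarios to a query. It is not the paper's implementation; the function names (kl_divergence, k_nearest_neighbors), the symmetrised KL distance, and the example PMFs are assumptions made for illustration only.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete PMFs.

    p and q are probability vectors over the same support; eps guards
    against log(0) and division by zero.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def k_nearest_neighbors(query_pmf, scenario_pmfs, k=3):
    """Return indices of the k scenarios whose PMFs are closest to the query.

    The distance is a symmetrised KL divergence (an illustrative choice,
    since plain KL divergence is asymmetric).
    """
    distances = [
        kl_divergence(query_pmf, s) + kl_divergence(s, query_pmf)
        for s in scenario_pmfs
    ]
    return np.argsort(distances)[:k]

# Example: three uncertain objects, each summarised by a PMF over 4 bins.
scenarios = [
    [0.10, 0.20, 0.30, 0.40],
    [0.25, 0.25, 0.25, 0.25],
    [0.40, 0.30, 0.20, 0.10],
]
query = [0.15, 0.25, 0.30, 0.30]
print(k_nearest_neighbors(query, scenarios, k=2))  # -> [0 1]
```

For continuous uncertain values, the same scheme would apply after discretising the distribution into a PMF over bins, which is the assumption behind the 4-bin vectors used above.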
