Mohammed Jahirul Islam, Q. M. Jonathan Wu, Majid Ahmadi, M.A. Sid-Ahmed
Probability theory is the framework for making decisions under uncertainty. In classification, Bayes' rule is used to compute the probabilities of the classes, and a central issue is how to classify raw data rationally so as to minimize the expected risk. Bayesian theory can roughly be boiled down to one principle: to see the future, one must look at the past. The naive Bayes classifier is one of the most widely used practical Bayesian learning methods. K-nearest neighbor is a supervised learning algorithm in which a new query instance is classified according to the majority category among its k nearest neighbors. These classifiers fit no explicit model and rely only on the stored training data. In this paper, after reviewing Bayesian theory, the naive Bayes and k-nearest neighbor classifiers are implemented and applied to a "credit card approval" dataset. The performance of the two classifiers on this application is then compared in terms of correct classification and misclassification, and it is shown how the performance of the k-nearest neighbor classifier can be improved by varying the value of k.
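A minimal sketch of the two classifiers described in the abstract: a majority-vote k-nearest-neighbor predictor and a Gaussian naive Bayes predictor. The "credit card approval" dataset itself is not reproduced here, so the feature values and labels below (two numeric features per applicant, label 1 = approve) are purely illustrative assumptions.

```python
import math
from collections import Counter

def knn_predict(train, query, k):
    """Classify `query` by majority vote among its k nearest neighbors.
    `train` is a list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def naive_bayes_predict(train, query):
    """Gaussian naive Bayes: assume features are conditionally
    independent given the class and normally distributed."""
    by_class = {}
    for x, label in train:
        by_class.setdefault(label, []).append(x)
    n = len(train)
    best_label, best_logp = None, -math.inf
    for label, rows in by_class.items():
        logp = math.log(len(rows) / n)  # log prior P(class)
        for j in range(len(query)):
            vals = [r[j] for r in rows]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals) + 1e-9
            # log of the Gaussian likelihood of feature j given the class
            logp += (-0.5 * math.log(2 * math.pi * var)
                     - (query[j] - mu) ** 2 / (2 * var))
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

# Illustrative data: (income, debt ratio) pairs with approval labels.
train = [((3.0, 0.2), 1), ((2.8, 0.3), 1), ((3.2, 0.1), 1),
         ((1.0, 0.9), 0), ((0.8, 0.8), 0), ((1.2, 1.0), 0)]
query = (2.9, 0.25)

print(knn_predict(train, query, k=3))   # prints 1
print(naive_bayes_predict(train, query))  # prints 1
```

Varying k, as the paper investigates, simply means calling `knn_predict` with different values of k; an even k can produce ties, which `Counter.most_common` breaks arbitrarily here, so odd k is the usual choice.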