
Journal article

An L1-Regularized Naïve Bayes-Inspired Classifier for Discarding Redundant and Irrelevant Predictors

Abstract:
The naïve Bayes model is a simple but often satisfactory supervised classification method. The original naïve Bayes scheme does, however, have a serious weakness, namely, the harmful effect of redundant predictors. In this paper, we study how to apply a regularization technique to learn a computationally efficient classifier that is inspired by naïve Bayes. The proposed formulation, combined with an L1-penalty, is capable of discarding harmful, redundant predictors. A modification of the LARS algorithm is devised to solve this problem. We tackle both real-valued and discrete predictors, assuring that our method is applicable to a wide range of data. In the experimental section, we empirically study the effect of redundant and irrelevant predictors. We also test the method on a high-dimensional data set from the neuroscience field, where there are many more predictors than data cases. Finally, we run the method on a real data set that combines categorical with numeric predictors. Our approach is compared with several naïve Bayes variants and other classification algorithms (SVM and kNN), and is shown to be competitive. © 2013 World Scientific Publishing Company.

Publisher copy:
10.1142/S021821301350019X

Journal:
International Journal on Artificial Intelligence Tools
Volume:
22
Issue:
4
Pages:
1350019-1350019
Publication date:
2013-08-01
DOI:
10.1142/S021821301350019X
EISSN:
1793-6349
ISSN:
0218-2130


Language:
English
Keywords:
Pubs id:
pubs:425894
UUID:
uuid:7a83e39d-1b0b-4134-ba60-835032ded338
Local pid:
pubs:425894
Source identifiers:
425894
Deposit date:
2013-11-16

