Major Advisor: Getachew Mamo (PhD)

The classification of natural language texts has gained growing importance in many real-world applications due to its significant role in crucial tasks such as Information Retrieval, Question Answering, Text Summarization, and Natural Language Understanding. Text classification can be done in two different ways: manually and automatically. In manual text classification, a human annotator interprets the content of a text and categorizes it accordingly. This method usually provides quality results, but it is time-consuming and expensive. Automatic text classification, the task of automatically assigning semantic categories to natural language text, has therefore become one of the key methods for organizing online information. In this paper we present automatic text classification using a neural network approach, considering the multi-label text classification problem, which is a generalization of the traditional two-class or multi-class classification problem. Two different approaches exist for multi-label classification: problem transformation methods transform the multi-label classification task into binary or multi-class classification problems, while algorithm adaptation methods adapt multi-class algorithms so they can be applied directly to the problem. In multi-label classification a set of labels (categories) is given and each training instance is associated with a subset of this label set. The task is to output the appropriate set of labels (generally of unknown size) for a given, unseen testing instance. Some improvements to the existing neural network multi-label classification algorithm BP-MLL are proposed here. The modifications concern the form of the global error function used in BP-MLL. The modified classification system is tested on the OBN and FBC news text datasets.
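The problem-transformation approach mentioned above can be illustrated with its simplest instance, binary relevance, which builds one binary dataset per label. This is a minimal sketch in Python; the function name, the toy news snippets, and the label names are illustrative assumptions, not taken from the paper's data.

```python
# Binary relevance: turn one multi-label dataset into one binary
# classification task per label (a problem-transformation method).
# All names and data here are illustrative, not the paper's.

def binary_relevance_transform(instances, label_set):
    """instances: list of (features, labels) pairs, labels a set.

    Returns {label: [(features, 0 or 1), ...]}: one binary task
    per label, where the target is 1 iff the instance carries it.
    """
    return {
        label: [(x, int(label in labels)) for x, labels in instances]
        for label in label_set
    }

# Toy example: two news items, three candidate categories.
data = [
    ("news item one ...", {"politics"}),
    ("news item two ...", {"sport", "politics"}),
]
tasks = binary_relevance_transform(data, {"politics", "sport", "economy"})
```

Each of the resulting binary tasks can then be handed to any single-label learner; the multi-label prediction for a new instance is the set of labels whose binary classifier fires.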
Experimental results show that the proposed modifications visibly improve the performance of the neural-network-based multi-label classifier. The micro-F1 values for the best setting are 94%, 84% and 77.4% for the three-, six- and eight-class settings respectively, and the macro-F1 values are 94.4%, 83.9% and 77.4% respectively. We conclude that the classifier achieves reasonable performance on three, six and eight classes, performing considerably better on the smaller numbers of classes.

Keywords: Multi-label text classification, machine learning, neural network, back-propagation algorithm, Afaan Oromo news
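The micro- and macro-averaged F1 scores reported above differ in how they aggregate over labels: macro-F1 averages per-label F1 scores, while micro-F1 pools the per-label confusion counts first. A short sketch, with made-up confusion counts purely for illustration:

```python
# Micro- vs macro-averaged F1 for multi-label evaluation.
# The per-label (tp, fp, fn) counts below are invented for the example.

def f1(tp, fp, fn):
    """F1 score from true-positive, false-positive, false-negative counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def micro_macro_f1(counts):
    """counts: {label: (tp, fp, fn)}. Returns (micro_f1, macro_f1)."""
    # Macro: average the per-label F1 scores (each label weighs equally).
    macro = sum(f1(*c) for c in counts.values()) / len(counts)
    # Micro: pool all counts, then compute a single F1 (frequent labels dominate).
    tp = sum(c[0] for c in counts.values())
    fp = sum(c[1] for c in counts.values())
    fn = sum(c[2] for c in counts.values())
    return f1(tp, fp, fn), macro

counts = {"politics": (8, 2, 0), "sport": (5, 1, 1), "economy": (2, 0, 2)}
micro, macro = micro_macro_f1(counts)
```

Because micro-F1 is dominated by frequent labels while macro-F1 treats every label equally, reporting both, as done here, gives a more complete picture on imbalanced news categories.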