Seminar
Date: June 15, 2022, 3 p.m. - Room: Salle du conseil
Explicability of deep neural models for text classification
Norbert TSOPZE, Professor - Université de Yaoundé I
Text is one of the most widely used means of communication between people and of information storage, and many corpora have been saved on different platforms. Exploiting these corpora could help managers in strategy planning and decision making. An important task in text exploitation is classification, which consists in automatically labelling texts. Deep models have shown promising results on the text classification task, but they remain black boxes for the user. We have developed a deep model (CNN+FCN) for text classification and propose to explain the labels it outputs. For the explanation part, we adopt the well-known LRP (Layer-wise Relevance Propagation) algorithm and adapt it to the convolutional part of the model. We conduct experiments on several types of text classification, including resume classification, sentiment analysis and question answering. These experiments reveal the n-grams responsible for each classification. In particular, for resume classification, the qualitative analysis shows that many cases of misclassification are due to mislabeling by the user. In order to simplify and reduce the set of selected features, we also propose a sufficient feature set and a necessary feature set, whose objective is to present a concise set of features responsible for the classification. Experiments show that these sets are, in most cases, responsible for the output of the model and can help simplify explanations for the final user.
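To give a flavour of how LRP redistributes a prediction score back to the input features, here is a minimal NumPy sketch of the standard epsilon-rule for a single dense layer. The function name, toy weights, and stabilizer value are illustrative assumptions, not the speaker's actual implementation; the adaptation to the convolutional part of the model discussed in the talk is not shown.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Epsilon-rule LRP for one dense layer (illustrative sketch).

    a:     (n,)  input activations
    W:     (n,m) weight matrix
    b:     (m,)  biases
    R_out: (m,)  relevance arriving at the layer's outputs
    Returns the relevance (n,) attributed to each input.
    """
    z = a @ W + b                       # forward pre-activations
    s = R_out / (z + eps * np.sign(z))  # stabilized relevance per output unit
    return a * (W @ s)                  # redistribute to inputs

# Toy example: two inputs feeding one output unit.
a = np.array([1.0, 2.0])
W = np.array([[0.5], [0.25]])
b = np.array([0.0])
R_in = lrp_epsilon(a, W, b, np.array([1.0]))
```

With zero bias and a small epsilon, the rule is (approximately) conservative: the input relevances sum to the relevance injected at the output, which is the property that lets LRP scores be read as each feature's share of the prediction.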