Turkish Journal of Electrical Engineering and Computer Sciences




Topic models, such as latent Dirichlet allocation (LDA), allow us to categorize documents by topic: a document is modeled as a mixture of topics, and a topic as a probability distribution over words. However, a key drawback of the traditional topic model is that it cannot exploit the semantic knowledge hidden in documents, so semantically related, coherent, and meaningful topics cannot be obtained. Yet semantic inference plays a significant role in topic modeling, as it does in other text mining tasks. In this paper, a novel NET-LDA model is proposed to tackle this problem. In NET-LDA, semantically similar documents are merged to bring all semantically related words together, and the resulting semantic similarity knowledge is incorporated into the model through a new adaptive semantic parameter. The motivation of the study is to reveal the impact of semantic knowledge on topic modeling research: in a given corpus, different documents may contain different words yet speak about the same topic. For such documents to be identified correctly, the feature space of the documents must be enriched with more powerful features. To accomplish this, the semantic space of documents is constructed from concepts and named entities. Two datasets, in English and Turkish and spanning 12 domains, have been evaluated to show that the model is independent of both language and domain. Compared to the baselines, the proposed NET-LDA performs better in terms of topic coherence, F-measure, and qualitative evaluation.


Aspect extraction, cooccurrence relation, latent Dirichlet allocation (LDA), semantic similarity, topic modeling
