Turkish Journal of Electrical Engineering and Computer Sciences
DOI
10.3906/elk-1907-13
Abstract
In this study, we propose an alternative approach to the sentence modeling problem. Answer selection is difficult because candidate answers may be only loosely related to the question semantically and often lack syntactic closeness to it. Deep learning has recently achieved notable success in semantic analysis, machine translation, and text summarization. The essence of this work, inspired by the human orthographic processing mechanism, is to learn the basic features of the language by applying multiple convolution filters to pre-rendered two-dimensional (2D) representations of sentences, without concern for input or output size. To this end, the semantic relations in sentence structure are first learned by convolutional variational auto-encoders, and then the question and answer spaces learned by these auto-encoders are linked with the proposed intermediate models. We benchmark five variations of the proposed model, which is based on a variational auto-encoder with multiple latent spaces, and show that they achieve lower error rates than the baseline model, a convolutional LSTM.
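To make the described architecture concrete, the sketch below shows a minimal convolutional variational auto-encoder over a fixed-size 2D sentence rendering. It is an illustrative toy under stated assumptions (PyTorch, a 1x32x32 single-channel "sentence image", a 64-dimensional latent space, MSE reconstruction loss); the paper's actual filter sizes, latent dimensions, and the intermediate models that link the question and answer latent spaces are not reproduced here.

```python
# Minimal sketch (not the paper's exact architecture): a convolutional
# variational auto-encoder over a pre-rendered 2D sentence representation.
# Assumptions: 1x32x32 input "sentence image", 64-dim latent space,
# MSE reconstruction loss; all sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvSentenceVAE(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        # Encoder: two strided convolutions reduce 32x32 -> 8x8.
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=4, stride=2, padding=1),   # -> 16x16
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1),  # -> 8x8
            nn.ReLU(),
        )
        self.fc_mu = nn.Linear(32 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(32 * 8 * 8, latent_dim)
        # Decoder: mirror of the encoder using transposed convolutions.
        self.fc_dec = nn.Linear(latent_dim, 32 * 8 * 8)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # -> 32x32
            nn.Sigmoid(),
        )

    def encode(self, x):
        h = self.enc(x).flatten(start_dim=1)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z ~ N(mu, sigma^2) with the reparameterization trick.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        recon = self.dec(self.fc_dec(z).view(-1, 32, 8, 8))
        return recon, mu, logvar


def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    recon_term = F.mse_loss(recon, x, reduction="sum")
    kl_term = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl_term


# Usage: encode a batch of rendered sentences and reconstruct them.
model = ConvSentenceVAE()
sentences = torch.rand(8, 1, 32, 32)  # stand-in for 2D sentence renderings
recon, mu, logvar = model(sentences)
loss = vae_loss(recon, sentences, mu, logvar)
loss.backward()
```

In the pipeline the abstract describes, two such auto-encoders (one for questions, one for answers) would be trained and their latent spaces connected by the proposed intermediate models; that linking step is omitted from this sketch.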
Keywords
Convolutional networks, bi-gram, n-gram, question answering problem, deep learning, variational autoencoder, sentence modeling
First Page
1135
Last Page
1148
Recommended Citation
CEYLAN, ALİ MERT and AYTAÇ, VECDİ (2020) "Convolutional auto encoders for sentence representation generation," Turkish Journal of Electrical Engineering and Computer Sciences: Vol. 28: No. 2, Article 37. https://doi.org/10.3906/elk-1907-13
Available at: https://journals.tubitak.gov.tr/elektrik/vol28/iss2/37
Included in
Computer Engineering Commons, Computer Sciences Commons, Electrical and Computer Engineering Commons