• Derwin Suhartono
  • Aryo Pradipta Gema
  • Suhendro Winton
  • Theodorus David
  • Mohamad Ivan Fanany
  • Aniati Murni Arymurthy



seq2seq, argumentation mining, deep learning, negative log likelihood, BLEU


The goal of this research is to generate a motion-aware claim using a deep neural network approach, namely the sequence-to-sequence learning method. A motion-aware claim is a sentence that is logically correlated to the motion while preserving grammatical structure. Our proposed model takes a motion in the form of a single sentence as input and generates a motion-aware claim, also as a single sentence. We use a publicly available argumentation mining dataset that contains annotated motion and claim data. In this research, we propose a novel approach for argument generation that employs a scheduled sampling strategy to make the model converge faster. BLEU scores and a questionnaire are used to assess the model quantitatively. Our best model achieves a BLEU-4 score of 0.175 ± 0.088. Based on the questionnaire results, we also conclude that respondents find it hard to differentiate between human-made and model-generated arguments.
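The scheduled sampling strategy mentioned above follows Bengio et al. (2015): during training, the decoder is fed its own previous prediction instead of the ground-truth token with a probability that increases over time. A minimal sketch of the inverse-sigmoid decay schedule is shown below; the constant k and the token-selection helper are illustrative assumptions, not the paper's exact configuration:

```python
import math
import random

def teacher_forcing_prob(step, k=1000.0):
    """Inverse-sigmoid decay (Bengio et al., 2015): probability of
    feeding the gold token to the decoder at a given training step.
    Starts near 1 and decays toward 0 as training progresses."""
    return k / (k + math.exp(step / k))

def choose_decoder_input(step, gold_token, predicted_token, k=1000.0):
    """With probability teacher_forcing_prob(step) feed the gold token
    (teacher forcing); otherwise feed back the model's own previous
    prediction (scheduled sampling)."""
    if random.random() < teacher_forcing_prob(step, k):
        return gold_token
    return predicted_token
```

Early in training the decoder almost always sees gold tokens; late in training it mostly conditions on its own outputs, narrowing the train/inference mismatch that slows convergence.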


T. Govier, A Practical Study of Argument, Cengage Learning, 2013.

S. Parsons, N. Oren, C. Reed, and F. Cerutti, Computational Models of Argument, IOS Press, 2014.

R. Levy, Y. Bilu, D. Hershcovich, E. Aharoni, and N. Slonim, “Context dependent claim detection,” Proceedings of the 25th International Conference on Computational Linguistics: Technical Papers COLING’2014, 2014, pp. 1489-1500.

R. Rinott, L. Dankin, C.A. Perez, M.M. Khapra, E. Aharoni, and N. Slonim, “Show me your evidence – an automatic method for context dependent evidence detection,” Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17-21 September 2015, pp. 440–450.

R. Bar-Haim, I. Bhattacharya, F. Dinuzzo, A. Saha, and N. Slonim, “Stance classification of context-dependent claims,” Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, vol. 1, 2017, pp. 251-261.

D. Suhartono, A.P. Gema, S. Winton, T. David, M.I. Fanany, and A.M. Arymurthy, “Hierarchical attention network with XGBoost for recognizing insufficiently supported argument,” Multi-disciplinary Trends in Artificial Intelligence – 11th International Workshop, MIWAI 2017, Lecture Notes in Computer Science, vol. 10607, Springer, 2017, pp. 174-188.

A.P. Gema, S. Winton, T. David, D. Suhartono, M. Shodiq, and W. Gazali, “It takes two to tango: Modification of Siamese long short term memory network with attention mechanism in recognizing argumentative relations in persuasive essay,” Procedia Computer Science, vol. 116, pp. 449-459, 2017.

M. Sato, K. Yanai, T. Miyoshi, T. Yanase, M. Iwayama, Q. Sun, and Y. Niwa, “End-to-end argument generation system in debating,” Proceedings of the ACL-IJCNLP 2015 System Demonstrations, 2015, pp. 109-114.

T. Mikolov, M. Karafiát, L. Burget, J. Cernocký, and S. Khudanpur, “Recurrent neural network based language model,” Interspeech, vol. 2, pp. 3, 2010.

W. Zaremba, I. Sutskever, and O. Vinyals, “Recurrent neural network regularization,” arXiv preprint arXiv:1409.2329, 2014.

Y. Kim, Y. Jernite, D. Sontag, and A.M. Rush, “Character-aware neural language models,” AAAI, pp. 2741-2749, 2016.

Y. Zhang, Z. Gan, K. Fan, Z. Chen, R. Henao, D. Shen, and L. Carin, “Adversarial feature matching for text generation,” In: D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 4006-4015, Sydney, Australia, August, 2017.

K. Cho, B.V. Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” arXiv preprint arXiv:1406.1078, 2014.

I. Sutskever, O. Vinyals, and Q.V. Le, “Sequence to sequence learning with neural networks,” Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

M. Ranzato, S. Chopra, M. Auli, and W. Zaremba, “Sequence level training with recurrent neural networks,” arXiv preprint arXiv:1511.06732, 2015.

S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer, “Scheduled sampling for sequence prediction with recurrent neural networks,” Advances in Neural Information Processing Systems, pp. 1171-1179, 2015.

K. Cho, B.V. Merriënboer, D. Bahdanau, and Y. Bengio, “On the properties of neural machine translation: Encoder-decoder approaches,” arXiv preprint arXiv:1409.1259, 2014.

D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” arXiv preprint arXiv:1409.0473, 2014.

I. Habernal and I. Gurevych, “Which argument is more convincing? Analyzing and predicting convincingness of web arguments using bidirectional LSTM,” Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, vol. 1, 2016, pp. 1589-1599.

C. Stab and I. Gurevych, “Recognizing insufficiently supported arguments in argumentative essays,” Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, vol. 1, 2017, pp. 980-990.

R. Manurung, G. Ritchie, and H. Thompson, “Using genetic algorithms to create meaningful poetic text,” Journal of Experimental & Theoretical Artificial Intelligence, vol. 24, issue 1, pp. 43-64, 2012.

G. Ritchie, R. Manurung, H. Pain, A. Waller, R. Black, and D. Omara, “A practical application of computational humour,” Proceedings of the 4th International Joint Conference on Computational Creativity, 2007, pp. 91-98.

Y. Zhang, Z. Gan, and L. Carin, “Generating text via adversarial training,” Proceedings of the Workshop on Adversarial Training, NIPS 2016, Barcelona, Spain, 2016, pp. 1-6.

Z. Hu, Z. Yang, R. Salakhutdinov, and E.P. Xing, “On unifying deep generative models,” arXiv preprint arXiv:1706.00550, 2017.

Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain adversarial training of neural networks,” The Journal of Machine Learning Research, vol. 17, issue 1, pp. 2096-2030, 2016.

D.P. Kingma and M. Welling, “Auto-encoding variational bayes,” arXiv preprint arXiv:1312.6114, 2013.

C. Napoles, M. Gormley, and B. Van Durme, “Annotated gigaword,” Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction, Association for Computational Linguistics, 2012, pp. 95-100.

I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, MIT press, Cambridge, 2016.

J. Li, M.T. Luong, and D. Jurafsky, “A hierarchical neural autoencoder for paragraphs and documents,” arXiv preprint arXiv:1506.01057, 2015.




How to Cite

Suhartono, D., Gema, A. P., Winton, S., David, T., Fanany, M. I., & Arymurthy, A. M. (2020). SEQUENCE-TO-SEQUENCE LEARNING FOR MOTION-AWARE CLAIM GENERATION. International Journal of Computing, 19(4), 620-628.