Journal of Computer Science and Technology 2017, Vol. 32, Issue 4: 785-795    DOI: 10.1007/s11390-017-1759-2
Special Issue on Deep Learning
Emphasizing Essential Words for Sentiment Classification Based on Recurrent Neural Networks
Fei Hu1,2, Student Member, CCF, Li Li1,*, Senior Member, CCF, Member, ACM, Zi-Li Zhang1, Distinguished Member, CCF, Member, ACM, Jing-Yuan Wang1, Student Member, CCF, Xiao-Fei Xu1, Student Member, CCF
1 College of Computer and Information Science, Southwest University, Chongqing 400715, China;
2 Network Centre, Chongqing University of Education, Chongqing 400065, China

Abstract: With the explosion of online communication and publication, texts have become easily obtainable via forums, chat messages, blogs, book reviews and movie reviews. These texts are usually short and noisy, lacking the statistical signals and contextual information needed for good semantic analysis. Traditional natural language processing methods, such as Bag-of-Words (BOW) based probabilistic latent semantic models, fail to achieve high performance in this short-text environment. Recent research has focused on the correlations between words, i.e., term dependencies, which can help mine the latent semantics hidden in short texts and make them easier to understand. The long short-term memory (LSTM) network can capture term dependencies and is able to remember such information over long periods of time; it has been widely used and has obtained promising results in a variety of problems involving the latent semantics of texts. At the same time, by analyzing the texts, we find that a small number of keywords contribute greatly to the semantics of a text. In this paper, we establish a keyword vocabulary and propose an LSTM-based model that is sensitive to the words in this vocabulary, so that the keywords leverage the semantics of the full document. The proposed model is evaluated on a short-text sentiment analysis task with two datasets: IMDB and SemEval-2016. Experimental results demonstrate that our model outperforms the baseline LSTM by 1%-2% in terms of accuracy and yields significant performance gains over several non-recurrent neural network latent semantic models, especially in dealing with short texts. We also incorporate the idea into a variant of LSTM, the gated recurrent unit (GRU) model, and achieve good performance, which shows that our method is general enough to improve different deep learning models.
Keywords: short text understanding; long short-term memory (LSTM); gated recurrent unit (GRU); sentiment classification; deep learning
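
This page does not give implementation details, but the core idea of the abstract can be illustrated with a minimal sketch, assuming PyTorch: tokens that appear in a keyword vocabulary have their embeddings scaled up before the sequence is fed to the LSTM, so the recurrent layer "remembers" them more strongly. The class name KeywordEmphasisLSTM, the keyword_ids argument and the emphasis factor of 2.0 are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of the keyword-emphasis idea (not the authors' code).
import torch
import torch.nn as nn

class KeywordEmphasisLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes,
                 keyword_ids, emphasis=2.0):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)
        # Per-token emphasis weights: 1.0 for ordinary words, >1.0 for keywords.
        weights = torch.ones(vocab_size)
        weights[keyword_ids] = emphasis
        self.register_buffer("token_weight", weights)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        emb = self.embedding(token_ids)                     # (B, T, E)
        scale = self.token_weight[token_ids].unsqueeze(-1)  # (B, T, 1)
        emb = emb * scale                                   # emphasize keyword embeddings
        _, (h_n, _) = self.lstm(emb)                        # h_n: (1, B, H)
        return self.classifier(h_n[-1])                     # (B, num_classes)

# Toy usage: vocabulary of 10 words; words 3 and 7 are treated as sentiment keywords.
model = KeywordEmphasisLSTM(vocab_size=10, embed_dim=8, hidden_dim=16,
                            num_classes=2, keyword_ids=torch.tensor([3, 7]))
logits = model(torch.tensor([[1, 3, 2, 7, 0]]))             # shape (1, 2)

Because the emphasis is applied to the inputs rather than inside the recurrent cell, the same wrapper would work unchanged with nn.GRU in place of nn.LSTM, which mirrors the abstract's claim that the idea transfers to GRU models.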
Received 2016-12-20;
Funding:

The work was supported by the Scientific and Technological Research Program of Chongqing Municipal Education Commission of China under Grant No. KJ1501405, the National Natural Science Foundation of China under Grant No. 61170192, and the Chongqing Science and Technology Commission (CSTC) under Grant No. cstc2015gjhz40002.

Corresponding author: Li Li     Email: lily@swu.edu.cn
About author: Fei Hu is a Ph.D. candidate in the College of Computer and Information Science, Southwest University, Chongqing. His research interests include deep learning technologies and natural language processing.
Cite this article:
Fei Hu, Li Li, Zi-Li Zhang, Jing-Yuan Wang, Xiao-Fei Xu. Emphasizing Essential Words for Sentiment Classification Based on Recurrent Neural Networks[J]. Journal of Computer Science and Technology, 2017, 32(4): 785-795.
Link to this article:
http://jcst.ict.ac.cn:8080/jcst/CN/10.1007/s11390-017-1759-2