[1] Wang B, Liu B, Wang X, Sun C, Zhang D. Deep learning approaches to semantic relevance modeling for Chinese question-answer pairs. ACM Transactions on Asian Language Information Processing (TALIP), 2011, 10(4): Article No. 21.
[2] Hu H, Liu B, Wang B, Liu M, Wang X. Multimodal DBN for predicting high-quality answers in cQA portals. In Proc. the 51st Annual Meeting of the Association for Computational Linguistics, August 2013, pp.843-847.
[3] Lu Z, Li H. A deep architecture for matching short texts. In Proc. the 27th Advances in Neural Information Processing Systems, December 2013, pp.1367-1375.
[4] Hinton G, Osindero S, Teh Y W. A fast learning algorithm for deep belief nets. Neural Computation, 2006, 18(7): 1527-1554.
[5] Salakhutdinov R, Hinton G. Deep Boltzmann machines. In Proc. the 12th International Conference on Artificial Intelligence and Statistics, April 2009, pp.448-455.
[6] Iyyer M, Boyd-Graber J L, Claudino L, Socher R, Daumé III H. A neural network for factoid question answering over paragraphs. In Proc. the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), October 2014, pp.633-644.
[7] Bordes A, Chopra S, Weston J. Question answering with subgraph embeddings. In Proc. the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), October 2014, pp.615-620.
[8] Blei D M, Ng A Y, Jordan M I. Latent Dirichlet allocation. The Journal of Machine Learning Research, 2003, 3: 993-1022.
[9] Yih W T, Chang M W, Meek C, Pastusiak A. Question answering using enhanced lexical semantic models. In Proc. the 51st Annual Meeting of the Association for Computational Linguistics, August 2013, pp.1744-1753.
[10] Zhang J, Salwen J, Glass M, Gliozzo A. Word semantic representations using Bayesian probabilistic tensor factorization. In Proc. the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), October 2014, pp.1522-1531.
[11] Lei T, Xin Y, Zhang Y, Barzilay R, Jaakkola T. Low-rank tensors for scoring dependency structures. In Proc. the 52nd Annual Meeting of the Association for Computational Linguistics, June 2014, pp.1381-1391.
[12] Pei W, Ge T, Chang B. Max margin tensor neural network for Chinese word segmentation. In Proc. the 52nd Annual Meeting of the Association for Computational Linguistics, June 2014, pp.293-303.
[13] Chang K W, Yih W T, Yang B, Meek C. Typed tensor decomposition of knowledge bases for relation extraction. In Proc. the 2014 Conference on Empirical Methods in Natural Language Processing, October 2014, pp.1568-1579.
[14] Yan Z, Zhou J. A new approach to answerer recommendation in community question answering services. In Proc. the 34th Advances in Information Retrieval, April 2012, pp.121-132.
[15] Qiu X, Tian L, Huang X. Latent semantic tensor indexing for community-based question answering. In Proc. the 51st Annual Meeting of the Association for Computational Linguistics, August 2013, pp.434-439.
[16] Zhou X, Hu B, Chen Q et al. Answer sequence learning with neural networks for answer selection in community question answering. arXiv:1506.06490, 2015. http://arxiv.org/abs/1506.06490, June 2016.
[17] Qiu X, Huang X. Convolutional neural tensor network architecture for community-based question answering. In Proc. the 24th International Joint Conference on Artificial Intelligence, July 2015, pp.1305-1311.
[18] Mansur M, Pei W, Chang B. Feature-based neural language model and Chinese word segmentation. In Proc. the 6th International Joint Conference on Natural Language Processing, October 2013, pp.1271-1277.
[19] Mikolov T, Chen K, Corrado G, Dean J. Efficient estimation of word representations in vector space. arXiv:1301.3781, 2013. http://arxiv.org/abs/1301.3781, June 2016.
[20] Vincent P, Larochelle H, Bengio Y, Manzagol P A. Extracting and composing robust features with denoising autoencoders. In Proc. the 25th International Conference on Machine Learning, June 2008, pp.1096-1103.
[21] Socher R, Pennington J, Huang E H, Ng A Y, Manning C D. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proc. the Conference on Empirical Methods in Natural Language Processing, July 2011, pp.151-161.
[22] Silberer C, Lapata M. Learning grounded meaning representations with autoencoders. In Proc. the 52nd Annual Meeting of the Association for Computational Linguistics, June 2014, pp.721-732.
[23] Zhang H, Yu H, Xiong D, Liu Q. HHMM-based Chinese lexical analyzer ICTCLAS. In Proc. the 2nd SIGHAN Workshop on Chinese Language Processing, July 2003, pp.184-187.
[24] Hu B, Lu Z, Li H, Chen Q. Convolutional neural network architectures for matching natural language sentences. In Proc. the 27th Advances in Neural Information Processing Systems, December 2014, pp.2042-2050.