Journal of Computer Science and Technology
Journal of Computer Science and Technology 2018, Vol. 33 Issue (2) :351-365    DOI: 10.1007/s11390-018-1823-6
Data Management and Data Mining
Collusion-Proof Result Inference in Crowdsourcing
Peng-Peng Chen1,2, Student Member, CCF, ACM, Hai-Long Sun1,2*, Member, CCF, ACM, IEEE, Yi-Li Fang1,2*, Member, CCF, ACM, Jin-Peng Huai1,2, Fellow, CCF, Member, ACM, IEEE
1 State Key Laboratory of Software Development Environment, School of Computer Science and Engineering Beihang University, Beijing 100191, China;
2 Beijing Advanced Innovation Center for Big Data and Brain Computing, Beijing 100191, China

Abstract
In traditional crowdsourcing, workers are expected to provide independent answers to tasks so as to ensure the diversity of answers. However, recent studies show that the crowd is not a collection of independent workers; instead, workers communicate and collaborate with each other. To pursue more rewards with little effort, some workers may collude to provide repeated answers, which damages the quality of the aggregated results. Nonetheless, few efforts have considered the negative impact of collusion on result inference in crowdsourcing. In this paper, we are specifically concerned with the collusion-proof result inference problem for general crowdsourcing tasks on public platforms. To that end, we design a metric, the worker performance change rate, which identifies colluded answers by computing the difference in mean worker performance before and after removing the repeated answers. We then incorporate the collusion detection result into existing result inference methods to guarantee the quality of the aggregated results even in the presence of collusion. With real-world and synthetic datasets, we conducted an extensive evaluation of our approach. The experimental results demonstrate its superiority over state-of-the-art methods.
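The precise definition of the worker performance change rate appears in the full paper; the following is only a minimal illustrative sketch of the idea, under two labeled assumptions: worker performance is approximated by agreement with a majority-vote reference, and the set of workers whose answers are suspected to be repeated has already been identified. The function and variable names here are hypothetical, not taken from the paper.

```python
from collections import Counter

def majority_vote(answers_by_task):
    # answers_by_task: {task: {worker: label}}; reference label per task
    # is the most frequent answer (ties broken by insertion order).
    return {t: Counter(wa.values()).most_common(1)[0][0]
            for t, wa in answers_by_task.items()}

def worker_accuracy(answers_by_task, reference):
    # Each worker's performance = fraction of their answers that
    # agree with the reference labels (an assumption of this sketch).
    correct, total = {}, {}
    for t, wa in answers_by_task.items():
        for w, a in wa.items():
            correct[w] = correct.get(w, 0) + (a == reference[t])
            total[w] = total.get(w, 0) + 1
    return {w: correct[w] / total[w] for w in correct}

def performance_change_rate(answers_by_task, suspected_workers):
    # Relative change in mean worker performance after removing the
    # answers of suspected colluders (the repeated answers).
    ref = majority_vote(answers_by_task)
    before = worker_accuracy(answers_by_task, ref)
    mean_before = sum(before.values()) / len(before)

    pruned = {t: {w: a for w, a in wa.items() if w not in suspected_workers}
              for t, wa in answers_by_task.items()}
    pruned = {t: wa for t, wa in pruned.items() if wa}  # drop emptied tasks
    after = worker_accuracy(pruned, majority_vote(pruned))
    mean_after = sum(after.values()) / len(after)
    return (mean_after - mean_before) / mean_before
```

A large positive change rate after removing a group's repeated answers suggests those answers were dragging down overall worker performance, which is the signal the paper's detection step exploits.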
Keywords: crowdsourcing; quality control; collusion; collaborative crowdsourcing; result inference
Received 2017-04-17.
Fund: This work was supported in part by the National Basic Research 973 Program of China under Grant Nos. 2015CB358700 and 2014CB340304, the National Natural Science Foundation of China under Grant No. 61421003, and the Open Fund of the State Key Laboratory of Software Development Environment under Grant No. SKLSDE-2017ZX-14.
Corresponding Authors: Hai-Long Sun     Email: sunhl@buaa.edu.cn
About the author: Peng-Peng Chen is a Ph.D. student in the School of Computer Science and Engineering, Beihang University, Beijing. His research interests mainly include crowd computing/crowdsourcing and social computing. He is a student member of CCF and ACM.
Cite this article:   
Peng-Peng Chen, Hai-Long Sun, Yi-Li Fang, Jin-Peng Huai. Collusion-Proof Result Inference in Crowdsourcing[J]. Journal of Computer Science and Technology, 2018, 33(2): 351-365.
URL:  
http://jcst.ict.ac.cn:8080/jcst/EN/10.1007/s11390-018-1823-6