Citation: Peng-Peng Chen, Hai-Long Sun, Yi-Li Fang, Jin-Peng Huai. Collusion-Proof Result Inference in Crowdsourcing[J]. Journal of Computer Science and Technology, 2018, 33(2): 351-365. DOI: 10.1007/s11390-018-1823-6

Collusion-Proof Result Inference in Crowdsourcing

Abstract: In traditional crowdsourcing, workers are expected to provide independent answers to tasks so as to ensure the diversity of answers. However, recent studies show that the crowd is not a collection of independent workers; rather, workers communicate and collaborate with each other. To pursue more rewards with little effort, some workers may collude to provide repeated answers, which damages the quality of the aggregated results. Nonetheless, few efforts have considered the negative impact of collusion on result inference in crowdsourcing. In this paper, we are specifically concerned with the collusion-proof result inference problem for general crowdsourcing tasks on public platforms. To that end, we design a metric, the worker performance change rate, to identify colluded answers by computing the difference in mean worker performance before and after removing the repeated answers. We then incorporate the collusion detection result into existing result inference methods to guarantee the quality of the aggregated results even in the presence of collusion. With real-world and synthetic datasets, we conducted an extensive set of evaluations of our approach. The experimental results demonstrate the superiority of our approach in comparison with state-of-the-art methods.
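The abstract does not spell out the exact form of the worker performance change rate; the Python sketch below is one plausible reading, in which worker performance is approximated by agreement with majority-voted labels and the answers of a suspected colluding group are removed wholesale. The function names (majority_vote, mean_worker_performance, performance_change_rate) and the accuracy-based performance proxy are illustrative assumptions, not the paper's actual formulation.

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate a reference label per task by simple majority voting.
    answers: dict mapping worker -> {task: label}."""
    per_task = {}
    for labels in answers.values():
        for task, label in labels.items():
            per_task.setdefault(task, []).append(label)
    return {task: Counter(labels).most_common(1)[0][0]
            for task, labels in per_task.items()}

def mean_worker_performance(answers, reference):
    """Mean per-worker agreement with the reference labels."""
    scores = []
    for labels in answers.values():
        graded = [t for t in labels if t in reference]
        if graded:
            scores.append(sum(labels[t] == reference[t] for t in graded) / len(graded))
    return sum(scores) / len(scores) if scores else 0.0

def performance_change_rate(answers, suspected_workers):
    """Relative change in mean worker performance after removing the
    repeated answers of a suspected colluding group (assumed formulation)."""
    before = mean_worker_performance(answers, majority_vote(answers))
    pruned = {w: a for w, a in answers.items() if w not in suspected_workers}
    after = mean_worker_performance(pruned, majority_vote(pruned))
    return (after - before) / before if before else 0.0

# Example: worker w2 repeats w1's answers exactly, a possible collusion pattern.
answers = {
    "w1": {"t1": "A", "t2": "B"},
    "w2": {"t1": "A", "t2": "B"},
    "w3": {"t1": "A", "t2": "C"},
}
print(performance_change_rate(answers, suspected_workers={"w2"}))
```

Under this reading, a group whose removal raises mean worker performance beyond some threshold would be flagged as colluding, and its answers would be down-weighted or discarded before result aggregation; the threshold and down-weighting scheme are likewise assumptions here.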
