Managing Crowdsourcing Workers

  • Panagiotis Ipeirotis
  • Foster Provost
  • Jing Wang
The emergence of online crowdsourcing services such as Amazon Mechanical Turk presents huge opportunities for distributing micro-tasks at an unprecedented rate and scale.  Unfortunately, the high cost of verification and the unstable employment relationship give rise to opportunistic worker behavior, which in turn exposes requesters to quality risks.  Currently, most requesters rely on redundancy to identify the correct answers.  However, existing techniques cannot separate the true (unrecoverable) error rates from the (recoverable) biases that some workers exhibit, which leads to incorrect assessments of worker quality.  Furthermore, massive redundancy is expensive, significantly increasing the cost of crowdsourced solutions.
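To make the bias-versus-error distinction concrete, consider a toy sketch (not from the paper; the worker names and data are hypothetical): a worker who systematically flips binary labels disagrees with the truth on every answer, yet is perfectly informative once the flip is known, whereas a random spammer with a similar-looking agreement rate cannot be corrected at all.

```python
import random

random.seed(0)
truth = [random.choice([0, 1]) for _ in range(1000)]

# A "biased" worker who always flips the label: 100% raw disagreement,
# yet every answer is recoverable once the flip is identified.
flipper = [1 - t for t in truth]

# A random spammer: ~50% raw disagreement, and nothing is recoverable.
spammer = [random.choice([0, 1]) for _ in truth]

def raw_error(answers, truth):
    """Naive error rate: fraction of answers disagreeing with the truth."""
    return sum(a != t for a, t in zip(answers, truth)) / len(truth)

# Correcting the flipper by inverting the known bias removes all error.
corrected = [1 - a for a in flipper]

print(raw_error(flipper, truth))    # 1.0 -- naively, the "worst" worker
print(raw_error(spammer, truth))    # roughly 0.5
print(raw_error(corrected, truth))  # 0.0 -- the bias was fully recoverable
```

Under a naive error-rate measure the flipper looks strictly worse than the spammer, even though the flipper's answers carry full information about the true labels.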
In this paper, we present an algorithm that can easily separate the true error rates from the biases.  We also describe how to seamlessly integrate “gold” data into the estimation of worker quality.  Next, we propose an approach for actively testing worker quality in order to quickly identify spammers and malicious workers.  Finally, we present experimental results demonstrating the performance of the proposed algorithm.
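One simple way to operationalize the separation can be sketched as follows. This is an illustrative simplification, not the paper's algorithm: it assumes the true labels are available for estimation (in practice they would be estimated jointly, and the function names here are our own). Estimate each worker's confusion matrix, convert each answer into a Bayesian posterior (a "soft label") over the true label, and score the worker by the expected misclassification cost of those soft labels. A purely biased worker then scores near zero residual error, while a random spammer scores near chance.

```python
import random
from collections import Counter

random.seed(0)
LABELS = (0, 1)
truth = [random.choice(LABELS) for _ in range(2000)]

# Hypothetical workers: a label-flipper (pure bias) and a random spammer.
flipper = [1 - t for t in truth]
spammer = [random.choice(LABELS) for _ in truth]

# Empirical prior over true labels.
prior = {t: truth.count(t) / len(truth) for t in LABELS}

def confusion_matrix(answers):
    """cm[t][a] = P(worker answers a | true label is t), estimated from data."""
    counts = Counter(zip(truth, answers))
    cm = {}
    for t in LABELS:
        total = sum(counts[(t, a)] for a in LABELS)
        cm[t] = {a: counts[(t, a)] / total for a in LABELS}
    return cm

def soft_label(answer, cm):
    """Bayesian posterior over the true label, given one worker answer."""
    joint = {t: prior[t] * cm[t][answer] for t in LABELS}
    z = sum(joint.values())
    return {t: joint[t] / z for t in LABELS}

def true_error(cm):
    """Expected 0/1 cost of the worker's soft labels: the unrecoverable
    error that remains after the worker's bias has been corrected."""
    cost = 0.0
    for t in LABELS:
        for a in LABELS:
            post = soft_label(a, cm)
            cost += prior[t] * cm[t][a] * (1 - post[t])
    return cost

print(true_error(confusion_matrix(flipper)))  # ~0.0: bias is recoverable
print(true_error(confusion_matrix(spammer)))  # ~0.5: genuinely uninformative
```

Note how the ranking inverts relative to the naive error rate: the flipper, whose raw disagreement is total, has essentially zero unrecoverable error, while the spammer's error cannot be reduced by any correction.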