Research Article
Wallah: Design and Evaluation of a Task-centric Mobile-based Crowdsourcing Platform
@INPROCEEDINGS{10.4108/icst.mobiquitous.2014.258030,
  author={Abhishek Kumar and Kuldeep Yadav and Suhas Dev and Shailesh Vaya and G. Michael Youngblood},
  title={Wallah: Design and Evaluation of a Task-centric Mobile-based Crowdsourcing Platform},
  proceedings={11th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services},
  publisher={ICST},
  proceedings_a={MOBIQUITOUS},
  year={2014},
  month={11},
  keywords={crowdsourcing, mobile-crowdsourcing, smartphones, task-centric applications},
  doi={10.4108/icst.mobiquitous.2014.258030}
}
Abhishek Kumar
Kuldeep Yadav
Suhas Dev
Shailesh Vaya
G. Michael Youngblood
Year: 2014
Wallah: Design and Evaluation of a Task-centric Mobile-based Crowdsourcing Platform
MOBIQUITOUS
ICST
DOI: 10.4108/icst.mobiquitous.2014.258030
Abstract
Crowdsourcing through web technologies has emerged as a key method and tool for conducting distributed work. New platforms are constantly emerging that aim to bring crowdsourcing opportunities to mobile phones. However, most of these systems are restricted to specific types of tasks and do not address the mobile resource constraints experienced in developing countries such as India.
We propose and design a new platform, Wallah, that aims to address these limitations with a broad vision of making crowdsourcing opportunities pervasively available and feasible to perform (i.e., at all times, at all locations, and with minimal infrastructure investment from crowd-workers). It supports task-centric applications to minimize the impact of small screen sizes and caches crowdsourcing tasks to cope with network limitations. Wallah supports both physical and virtual crowdsourcing tasks. The current version of Wallah implements an end-to-end platform for Android devices and includes five task-centric applications, developed by us, for different categories of crowdsourcing tasks (human OCR, image tagging, language translation, audio transcription, and video tagging). We evaluated the system in a two-week pilot deployment with 59 crowd-workers, in which over 16,000 tasks were performed. We analyzed platform usage in detail and present descriptive statistics on task completion time and task accuracy rates, along with other analyses such as the impact of screen size on task completion time and accuracy. We also conducted a post-study survey to gather participants' qualitative feedback and their perceived difficulty of different crowdsourcing tasks.