Caution When Crowdsourcing: Prolific as a Superior Platform Compared with MTurk
Abstract
Many researchers host surveys on online crowdsourcing platforms such as Amazon’s Mechanical Turk (MTurk) and Prolific. These platforms promise a convenient way to meet sample-size needs while drawing on diverse participant pools that might not otherwise take part in research. Yet the quality of the data they yield is often questionable, so collection must be closely monitored and reviewed. This study aimed to determine independently which crowdsourcing pool best serves researchers recruiting for online surveys. To that end, we analyzed data from a recently completed study that drew participants from both MTurk and Prolific, comparing the platforms on cost and screening the collected data for quality using measures of attention, duration, and internal consistency. We found that only 9.89% of MTurk participants (N = 354) and 43.34% of Prolific participants (N = 345) produced high-quality data; Prolific was also the more affordable option. Researchers considering these platforms can weigh this evidence when developing their own recruitment strategies. Finally, we highlight best practices for social scientists conducting online research, including additional survey and screening techniques.
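The abstract names three screening criteria: attention, duration, and internal consistency. The sketch below illustrates, in pandas, how such screening might be implemented; it is not the authors' procedure. The column names (duration_seconds, the boolean attention-check columns), the 120-second cutoff, and the choice of Cronbach's alpha as the internal-consistency measure are all illustrative assumptions.

```python
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)


def screen_responses(df: pd.DataFrame,
                     scale_cols: list[str],
                     attention_cols: list[str],
                     min_seconds: float = 120.0) -> pd.DataFrame:
    """Drop low-quality responses using attention checks and completion time,
    then report sample-level internal consistency for the retained data.

    Assumes attention_cols hold booleans (True = check passed) and
    'duration_seconds' records total completion time; both are hypothetical
    column conventions, not fields from the study."""
    passed_attention = df[attention_cols].all(axis=1)            # every check answered correctly
    plausible_duration = df["duration_seconds"] >= min_seconds   # not an implausibly fast "speeder"
    kept = df[passed_attention & plausible_duration].copy()

    alpha = cronbach_alpha(kept[scale_cols])
    print(f"Retained {len(kept)}/{len(df)} responses; Cronbach's alpha = {alpha:.2f}")
    return kept
```

In practice the duration cutoff is usually set relative to the survey's median completion time rather than a fixed constant, and the alpha computed on the retained sample is compared against the instrument's published reliability.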
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.