Crowdsourcing is increasingly important in scientific research. According to Google Scholar, the number of papers including the term "crowdsourcing" has grown from fewer than 1,000 papers per year before 2008 to over 20,000 papers in 2016 (see the accompanying figure).
Crowdsourcing, including crowdsourced research, is not always conducted responsibly. Typically this results not from malice but from misunderstanding or a desire to use funding efficiently. Crowdsourcing platforms are complex, and clients may not fully understand how they work. Workers' relationships to crowdwork are diverse, as are their expectations about appropriate client behavior, and clients may be unaware of these expectations. Some platforms prime clients to expect cheap, "frictionless" completion of work without oversight, as if the platform were not an interface to human workers but a vast computer without living expenses.

But researchers have learned that workers are happier and produce better work when clients pay well, respond to worker inquiries, and communicate with workers to improve task designs and quality control processes.6 Workers have varied but often undervalued or unrecognized expertise and skills. Workers on Amazon's Mechanical Turk platform ("MTurk"), for example, are more educated than the average U.S. worker,2 and many advise clients on task design through worker forums. Workers' skills offer researchers an opportunity to shift perspective, treating workers not as interchangeable subjects but as sources of insight that can lead to better research.

When clients do not understand that crowdsourcing work, including research, involves interacting, through a complex and error-prone system, with human workers who have diverse needs, expectations, and skills, they may unintentionally underpay or mistreat workers.