What is it about?

This study examines whether the online crowdsourcing platform Amazon’s Mechanical Turk (MTurk) is reliable for studying risk preferences. Only 43% of 1,202 MTurk participants answered our test consistently, a far lower rate than in laboratory studies. Those who answered inconsistently spent less time on the task and showed different risk preferences, and consistency also varied with demographics. Our findings caution against relying solely on MTurk and recommend larger participant samples for more reliable results.

Why is it important?

This study is important because it questions the reliability of Amazon's Mechanical Turk (MTurk) as a platform for studying risk preferences. The findings reveal substantially more inconsistent responses on MTurk than in traditional laboratory studies, highlighting limitations in participant reliability and underscoring the need for caution when using the platform. The recommendation of larger sample sizes points to the importance of obtaining more robust and dependable results in studies of risk preferences and potentially other psychological phenomena.

Perspectives

I appreciate the study's contribution to awareness of research methodology. Its emphasis on caution and larger sample sizes reflects the importance of reliable results in economic and psychological research. The study could prompt discussions on improving research practices within universities and potentially influence guidelines for using platforms like MTurk.

Martin Altenburger
Webster Vienna Private University

Read the Original

This page is a summary of: The Validity of Amazon’s Mechanical Turk in Assessing Risk Preferences - A Research Note, SSRN Electronic Journal, January 2024, Elsevier. DOI: 10.2139/ssrn.4660601.
