Choice, effort and reliability in online surveys

Mason Shihab, Emily Wen - May 20th, 2025

Does giving respondents control improve surveys? This study, part of our ongoing Methodology matters series, tests the impact of respondent control on response quality and satisfaction and offers insights for future research.

Summary and Takeaways

This study explores whether giving respondents some control over their survey experience impacts the quality of their responses and their satisfaction with the survey experience. We conducted a preregistered experiment to test the impact of survey participant experience on survey response reliability. While the experiment reported here didn’t find strong support for our hypothesized effects, the results suggest ways forward for future research regarding the survey experience.

Introduction and Background

Academics have long been interested in how people react to choices. Sharot et al. (2009) provide behavioral and neurological evidence that participants feel more positively about options after choosing them (and more negatively about the options they did not select). This suggests that our preferences may form after a selection is made, to help justify that selection. Leotti and Delgado’s findings (2011) suggest that individuals seek and prefer opportunities to express some choice, and more recently, Romero Verdugo et al. (2022) reported that participants were more curious about the outcomes of lotteries when they had a hand in selecting them. Lastly, Peters et al. (2024) explore the role of “the pleasure of choice,” or the presence or absence of choice in a decision-making context. They find that, when presented with the option to choose between multiple vaccines, both unvaccinated and vaccinated participants were more likely to get vaccinated. Taken together, there is strong evidence to predict that presenting participants with a choice in a survey will not only improve the way they feel about it, but also increase the likelihood that they see the survey through to the end. Positive feelings may also increase the amount of cognitive effort expended, leading to greater reliability in people’s answers. This final point underpins the opportunity in combining a choice treatment with an effort treatment.

Effort has also been studied thoroughly. In an influential paper, Daniel Kahneman and coauthors studied the “endowment effect,” which suggests that people place more value on things they own than on equivalent items they do not own (1990). This effect may be understood in terms of effort justification, which describes our tendency to value outcomes in relation to the effort we have invested in the processes leading to those outcomes (Festinger, 1957). Almost 70 years ago, Aronson and Mills (1959) conducted an experiment in which participants underwent different initiation processes to become a member of a group: those who underwent a “severe” initiation viewed the group more favorably than those who went through a mild initiation and those who did not undergo any initiation. It seems as though we justify our efforts by placing higher value on the outcome or product of a more “effortful” task (Kim & Song, 2020; Kruger et al., 2004). Does this same principle apply in survey research? Could increasing the effort required to complete a survey increase participants’ emotional investment while taking it? Further, combining this with the choice effect (above) may create an interaction effect, such that the two concepts work together to maximize participants’ desire to complete a survey.

Research Hypotheses

H1A: Participants who are presented with a choice about their survey experience ahead of taking that survey will exhibit a higher (post choice) item response rate compared to those who were not given a choice.

H1B: Participants who have a more effortful survey-taking experience will exhibit a higher item response rate (for those more effortful items) compared with those in the less effortful condition.

H2A: Participants who are presented with a choice about their survey experience ahead of taking that survey will respond with greater internal consistency as measured by Cronbach’s alpha compared to those who were not given a choice.

H2B: Participants who have a more effortful survey-taking experience will respond with greater internal consistency as measured by Cronbach’s alpha compared with those in the less effortful condition.

H3A: Participants who are presented with a choice about their survey experience ahead of taking that survey will successfully complete a mid-survey attention check at a higher rate compared with those who were not given a choice.

H3B: Participants who have a more effortful survey-taking experience will successfully complete a mid-survey attention check at a higher rate compared with those in the less effortful condition.

Exploratory Hypotheses

HA: Participants who are presented with a choice about their survey experience ahead of taking the survey will report greater satisfaction with taking the survey compared to those who were not given a choice.

HB: Participants who have a more effortful survey-taking experience will report lower satisfaction with taking the survey compared with those in the less effortful condition.

Procedure and Data

We ran our 2×2 between-subjects experiment twice on YouGov’s online survey platform between July 25 and August 6, 2024, with a total sample size of n = 2,556. Every participant completed the 6-item “Very Efficient” Need for Cognition (NFC) scale developed by Coelho et al. (2018). This short-form scale, adapted from Cacioppo and Petty (1982), was chosen for its brevity and its ability to reveal consistent or inconsistent patterns in participants’ responses. The items assess participants’ desire to think about challenging or complex topics. Each participant was randomly assigned to one of three treatment conditions or to a control group:

  • Treatment 1 (Choice): Participants in the Choice group could choose whether to add a progress indicator to the survey interface, informing them of how many questions remained in the survey.
  • Treatment 2 (Effort): Participants in the Effort group were required to “drag-and-drop” their answer choices, rather than single-select via a radio button.
  • Treatment 3 (Choice + Effort): Participants in the Choice + Effort group could choose whether to add a progress indicator and were required to “drag-and-drop” their answer choices.

More on the data

The final analytic sample included 2,556 participants recruited via YouGov’s online survey platform and randomly assigned to one of four experimental conditions in a 2×2 factorial design crossing Choice (Yes/No) and Effort (Yes/No). All participants completed a version of the Need for Cognition (NFC) scale, which included six core items and one embedded attention check. In the Effort condition, participants answered using a drag-and-drop interface, while those in the No Effort condition used standard radio buttons. Participants in the Choice condition were given the option to enable a visual progress indicator at the top of the screen; among the 849 participants who were given this choice, 82% (n = 695) opted to display the indicator.
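
To make the structure of the design concrete, the sketch below simulates assignment to the four cells of the 2×2 factorial. It is a minimal illustration only, not the authors’ code; the column names and the random assignment are assumptions.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(seed=0)
    n = 2556  # final analytic sample size reported above

    # One row per respondent; 1 = assigned to the treatment level of each factor.
    design = pd.DataFrame({
        "choice": rng.integers(0, 2, size=n),  # offered the progress-indicator choice
        "effort": rng.integers(0, 2, size=n),  # drag-and-drop interface
    })

    # The four cells of the factorial design and their simulated sizes.
    print(design.groupby(["choice", "effort"]).size())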

Survey compliance was high across the sample: 99% of participants completed all seven items without skipping. Individual item response rates ranged from 99.1% to 99.6%, indicating strong engagement. Performance on the attention check was slightly lower, with 91% of participants selecting the correct response. Item-level timing data were also recorded, allowing us to examine time spent on each question. On average, participants in the Effort condition spent substantially more time on the survey than those in the No Effort condition (M = 125 seconds vs. 89 seconds, respectively).

Completion and attention check rates across all groups are as follows:

  • Effort – Choice: 99.5% completed all items; 92.4% passed the attention check
  • Effort – No Choice: 98.8% completed all items; 90.3% passed the attention check
  • No Effort – Choice: 99.1% completed all items; 91.2% passed the attention check
  • No Effort – No Choice: 98.5% completed all items; 91.8% passed the attention check

Findings

Item Response Rate (H1)

We hypothesized that participants who were given a choice about their survey experience (H1A) and those in the more effortful condition (H1B) would exhibit higher item response rates. Because the overall completion rate was so high, we modeled completion of the NFC items as a binary outcome (complete vs. incomplete) using logistic regression.

The model revealed that neither predictor reached significance, though both trended in the hypothesized direction. Participants in the choice condition were more likely to complete all survey items than those without a choice (β = 0.7, SE = 0.5, z = 1.4, p = .16). Participants in the effortful condition also showed a higher completion rate, though with approximately half the measured effect (β = 0.4, SE = 0.4, z = 0.9, p = .35). To contextualize these results: for the effect of choice to reach significance, we would need an additional 2,344 responses with the same pattern of data, for a total of n = 4,909; for the effect of effort, we would need an additional 8,717 responses, for a total of n = 11,282.
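
For readers who want to run a similar analysis, the sketch below shows how such a main-effects model could be fit. It is an illustration under assumed column and file names, not the authors’ code.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical respondent-level file with binary columns:
    #   completed (1 = answered every NFC item), choice (1 = choice condition),
    #   effort (1 = drag-and-drop condition)
    df = pd.read_csv("survey_responses.csv")

    # Logistic regression of completion on the two treatment indicators,
    # mirroring the main-effects model described above.
    model = smf.logit("completed ~ choice + effort", data=df).fit()
    print(model.summary())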

Internal Consistency (H2)

There was some differentiation between the groups in Cronbach’s alpha. The highest reliability was observed in the No Effort – Choice group (α = .81), followed by No Effort – No Choice (α = .79), Effort – No Choice (α = .77), and Effort – Choice (α = .74). These results suggest that providing participants with a choice may improve internal consistency, but only in the context of a low-effort task. When effort was high, the benefit of choice appeared to diminish. However, these results should be taken with caution: the 95% bootstrapped confidence intervals for the four conditions all overlapped, so we cannot rule out the possibility that the groups do not actually differ.
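
For reference, Cronbach’s alpha and a bootstrapped confidence interval of the kind reported here can be computed as in the sketch below. This is a minimal illustration, not the authors’ code; the item and condition column names are assumptions.

    import numpy as np
    import pandas as pd

    ITEM_COLS = [f"nfc_{i}" for i in range(1, 7)]  # hypothetical names for the six NFC items

    def cronbach_alpha(items: pd.DataFrame) -> float:
        # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
        k = items.shape[1]
        item_var_sum = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var_sum / total_var)

    def bootstrap_alpha_ci(items: pd.DataFrame, n_boot: int = 2000, seed: int = 0):
        # Resample respondents with replacement and take the 2.5th/97.5th percentiles.
        rng = np.random.default_rng(seed)
        n = len(items)
        alphas = [cronbach_alpha(items.iloc[rng.integers(0, n, size=n)]) for _ in range(n_boot)]
        return np.percentile(alphas, [2.5, 97.5])

    # Hypothetical respondent-level file holding the six items plus a 'condition' label.
    df = pd.read_csv("survey_responses.csv")
    for condition, group in df.groupby("condition"):
        lo, hi = bootstrap_alpha_ci(group[ITEM_COLS])
        print(f"{condition}: alpha = {cronbach_alpha(group[ITEM_COLS]):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")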

These results may suggest that presenting survey-takers with a choice alongside a lower-effort interface optimizes reliability, whereas combining choice with a more demanding task may impair consistency. However, we did not find support for hypotheses H2A and H2B: having both a choice and an effortful interface did not increase internal consistency, nor did we find evidence that either manipulation achieved this on its own. As seen in the plot below, both intervals contain zero and are thus consistent with there being no true effect. There is, however, a directional hint that choice could be found to increase internal consistency in a future, larger study.

[Chart: Choice, Effort and Reliability in Online Surveys]

Attention Check (H3)

We did not find any evidence that attention check performance differed significantly between conditions.

Survey Satisfaction (EH)

We did not find any evidence that survey satisfaction differed significantly across conditions.

Discussion and Future Research

While our studies do not offer clear, significant results in favor of our hypotheses, they still hint at the possible presence of the effects we hoped to measure. These effects may simply be more subtle, requiring much larger sample sizes to detect. This is reflected in the fact that larger, though certainly attainable, sample sizes would have been needed for the effects of both treatments on survey completion to reach statistical significance.

In addition, future studies should test for the interaction between the effects of Choice and Effort, as combining them seemed to have the opposite effect on internal consistency. Future studies could also use versions of the NFC scale with additional questions, which could make it easier to detect changes in internal consistency or reveal other effects in factor analyses.

The effects of choice and of the progress indicator on attention-check performance also hold promise for future research. These effects were difficult to disentangle from each other in this study, but future studies should be designed to determine whether giving participants a choice or displaying a progress indicator can improve survey reliability overall.

About the authors

Mason Shihab is a Senior Marketing Data Analyst at the Vanguard Group in Malvern, Pennsylvania. He is a graduate of the Master of Behavioral and Decision Sciences program from the University of Pennsylvania and is affiliated with the Cognitive Affective Influences on Decision Making (CAIDe) Lab at The Ohio State University. His interests include financial decision making, machine learning, and consumer marketing methodologies.

Emily Wen is a Product Research Manager at Highmark Health, where she conducts research to understand the performance and impact of healthcare solutions. She is a graduate of the Master of Behavioral and Decision Sciences program from the University of Pennsylvania. Her interests include social psychology, integrating social science research with digital product development, and experience research methodologies.

About Methodology matters

Methodology matters is a series of research experiments focused on survey experience and survey measurement. The series aims to contribute to the academic and professional understanding of the online survey experience and promote best practices among researchers. Academic researchers are invited to submit their own research design and hypotheses.