Maximize Your Survey Response Rates: Expert Insights from Sheila Robinson


In this podcast episode, James Pann, Ph.D., interviews Sheila Robinson, Ed.D., about surveys and response rates. The conversation focuses on the significance of response rates in surveys and the steps that can be taken to maximize them.

Sheila is a career educator and professional learning designer with experience in K-12 public education and higher education, a certified program evaluator, and a Certified Presentation Specialist (CPS™) with particular interests in survey design, data visualization, and presentation design.

Why Sheila is Interested in Surveys

Sheila explains that her interest in surveys stems from her background in program evaluation work and her fascination with the art and science of asking questions. In addition, she has extensive experience in conducting surveys and understands the critical role that response rates play in the quality of the data collected.

Response Rates are Critical for Survey Research

She discusses the importance of response rates in surveys, as a low response rate can lead to low confidence in the data collected. It can also result in biased data and incorrect conclusions, which can have significant negative consequences for a study. The desired response rate is not a set number and varies based on the population being surveyed.

Best Practices for Improving Response Rates

Sheila talks about best practices for creating a survey instrument and participant engagement, including:

  • Providing the respondent with context and purpose for the survey: It’s crucial to communicate the reason for conducting the survey and how the results will be used. Respondents are more likely to participate when they understand the importance of the survey.
  • Being transparent about the time it will take to complete: Respondents are more likely to complete a survey when they have a clear understanding of how long it will take. Overly long surveys are a significant barrier to participation.
  • Using reminders to keep participants engaged: Reminders can help keep respondents engaged and increase the likelihood of completion. It’s important to strike the right balance between reminding respondents and becoming a nuisance.
  • Finding appropriate incentives: Offering incentives, such as monetary rewards or recognition, can encourage participation. It’s essential to find an incentive that is meaningful to the population being surveyed and not coercive.

Impact of the Length and Complexity of Surveys on Response Rate

Sheila also discusses how the length and complexity of a survey affect its response rate, noting that survey length tends to be inversely related to the response rate.

However, it’s important to balance the length of the survey with the need to gather sufficient data. One way to increase the response rate is to build survey completion time into program activities when possible. This can make the survey experience more convenient and increase the likelihood of participation.

Role of Incentives in Survey Participation

Sheila emphasizes the importance of finding the right incentive for the group. Different populations may respond differently to various incentives, and it’s crucial to understand what motivates the group being surveyed. Offering incentives that are not relevant to the population can be a waste of resources and may even reduce participation.

Pilot Testing to Improve Response Rates

When possible, Sheila suggests piloting a survey with a sample of participants to predict the response rate and make adjustments to increase it. Piloting can surface potential challenges and inform edits to the survey instrument before it is rolled out to the entire target population. Pilot testing can also provide valuable insight into the motivations and preferences of the population being surveyed, which can inform the design of incentives and reminders.

Timeline

00:40 Why Sheila is interested in survey research
03:50 Why survey response rate is critical
05:15 The optimal response rate
07:37 Ensuring accurate demographic representation of the sample
09:30 How to improve response rate
12:01 The survey invitation message is critical
13:26 Transparency in the length of time to complete the survey is important
15:19 Survey reminders can be used tactically to improve the response rate
16:43 Incentives should be used carefully
20:46 Timing of incentives and the principle of reciprocity
23:08 Building survey completion time into program activities can increase response rate
26:17 Importance of piloting the survey prior to use
28:15 Who sends the survey is important
33:20 How to reach Sheila

Check out my previous interview with Sheila about using design thinking in survey design and use.

Enjoy!



James Pann, Ph.D., is a Professor at Nova Southeastern University and a psychologist and evaluator with nearly 25 years of experience. He conducts research and evaluation projects with non-profit organizations in the fields of health, human services, and education, and has received funding from multiple government agencies.

James holds multiple degrees, including a Ph.D. in Counseling Psychology and an M.S. Ed. from the University of Miami, and a BBA in Accounting from the University of Texas at Austin. He is also the host of the EvalNetwork podcast and a frequent conference presenter, and he has published several peer-reviewed research articles and co-authored a book. James currently resides in Miami, Florida with his family and enjoys backpacking trips. Find out more about his work here.
