The Power of Retrospective Pretests to Address Common Survey Research Challenges

Retrospective pretest designs

James Pann interviews Melanie Hwalek, Ph.D., a program evaluation consultant, to discuss the retrospective pretest (RPT) design, focusing on its practical applications and the findings from her recent research detailed in the paper, “Designing a Questionnaire with Retrospective Pre-Post Items: Format Matters.” RPT is particularly useful for evaluating changes in participants’ perceptions or self-assessments following interventions such as workshops or training sessions, and it can address common survey research challenges encountered by evaluation consultants and researchers.

Historical Background and Evolution of RPT

Melanie traces the origins of RPT back to 1947, when it was first used to evaluate training impacts on soldiers’ attitudes. She highlights the significant milestones in RPT’s development, including its discussion in Campbell and Stanley’s seminal 1963 book on quasi-experimental designs, which solidified its methodological relevance.

Advantages of Retrospective Pretest Surveys

Practicality: Melanie emphasizes RPT’s practicality, particularly in situations where pretesting is infeasible or participants are unaware of an intervention until it occurs. Because the method consolidates data collection at a single point in time, it simplifies logistics and reduces potential biases associated with traditional pre/post-testing methods.

Reduction of Response Shift Bias: A significant advantage of RPT is its ability to mitigate response shift bias, which occurs when participants’ understanding of the measured concept changes because of the intervention. For example, after a training session, participants might realize they knew less than they initially thought. RPT asks participants to reassess their prior state of knowledge or attitudes after the intervention, leading to potentially more accurate change measurements. This can prevent misleading outcomes like the boomerang effect, where participants report decreased knowledge or skills post-intervention—not because the intervention failed, but because their enhanced understanding reveals a previous overestimation of their capabilities.
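The arithmetic behind these change measurements is simple: each respondent’s “now” rating minus their retrospective “before” rating, with boomerang responses showing up as negative changes. A minimal sketch, assuming a hypothetical 1–5 rating scale and field names not taken from Hwalek’s study:

```python
# Minimal sketch: change scores from retrospective pretest (RPT) data.
# The 1-5 scale, field names, and sample values are illustrative
# assumptions, not data from the study discussed above.

def summarize_rpt(responses):
    """Each response holds a 'before' and a 'now' rating (1-5 scale).
    Returns the mean change and the share of boomerang responses,
    i.e., respondents whose 'now' rating is below their 'before' rating."""
    changes = [r["now"] - r["before"] for r in responses]
    boomerang = sum(1 for c in changes if c < 0)
    return {
        "mean_change": sum(changes) / len(changes),
        "boomerang_rate": boomerang / len(changes),
    }

sample = [
    {"before": 2, "now": 4},   # typical gain
    {"before": 3, "now": 5},
    {"before": 4, "now": 3},   # boomerang: reports knowing less afterward
    {"before": 1, "now": 4},
]
print(summarize_rpt(sample))  # {'mean_change': 1.5, 'boomerang_rate': 0.25}
```

In a traditional pre/post design, the third respondent would look like a training failure; RPT’s single-sitting reassessment is meant to reduce how often such artifacts appear.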

Disadvantages and Limitations

Despite its benefits, RPT has limitations, including reliance on autobiographical memory, which can be unreliable over long periods. It may also be unsuitable for children or certain interventions where defining a clear ‘before’ state is challenging.

Insights from Dr. Hwalek’s Study on Retrospective Pretest Layouts

Melanie’s recent study, as detailed in her paper, investigated the impact of different RPT questionnaire layouts on data quality. The study involved 1,941 caregivers participating in training workshops, comparing six layouts to see which minimized errors like inattentiveness and the boomerang effect.

Key Findings:

  • Best Layout: Layout 1 was found to be the most effective. It placed questions in the center, with ‘before’ responses on the left and ‘now’ responses on the right. This layout significantly reduced inattentiveness and minimized the boomerang effect, indicating that it helped participants better understand and respond accurately to the survey.
  • Implications for Evaluators: These findings underscore the need for careful consideration of survey design in RPTs to enhance data reliability and validity.
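One common way to operationalize the inattentiveness the study measured is to flag “straight-lining,” where a respondent gives the identical rating to every item in a battery. A minimal sketch of that check, with illustrative data; the study’s actual indicators may differ:

```python
# Minimal sketch of one common inattentiveness check: straight-lining,
# i.e., giving the identical rating to every item in a battery.
# The sample ratings are illustrative assumptions.

def is_straightliner(ratings):
    """True if every rating in the battery is identical."""
    return len(set(ratings)) == 1

respondents = [
    [3, 3, 3, 3, 3],  # flagged: same answer to every item
    [2, 4, 3, 5, 4],  # varied answers, not flagged
]
flags = [is_straightliner(r) for r in respondents]
print(flags)  # [True, False]
```

Comparing the rate of such flags across questionnaire layouts is one way an evaluator could replicate the kind of layout comparison described above.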


The interview with Dr. Hwalek provides comprehensive insights into the retrospective pretest design, reinforcing its utility in evaluating the impact of interventions and assisting program evaluation consultants. Understanding both the methodological strengths and considerations required for its application allows evaluators to use RPT more effectively to produce reliable and insightful outcomes in program evaluation.


00:00  Introduction
00:55  The genesis of Melanie’s interest in RPT
02:54  Understanding response shift bias
04:51  What is RPT?
05:15  Advantages and disadvantages of using RPT
08:14  RPT study question layouts/formats
11:28  Overview of the findings of Melanie’s study
12:19  Measuring inattentiveness in surveys
16:36  Detailed explanation of response shift bias
17:30  Thoughts on RPTs
19:57  Implications of the study for evaluation and research practice
36:42  Different ways to measure inattentiveness in surveys
37:30  Connecting with Melanie

If you like this interview, check out my other interviews with evaluation leaders such as Michael Quinn Patton, David Fetterman, and more. I also have two great interviews with Sheila Robinson about increasing survey response rates and survey development using a design thinking approach.

Please reach out with comments and questions. Thanks!



James Pann, Ph.D. is a Professor at Nova Southeastern University and a highly experienced psychologist and evaluator with nearly 25 years of experience. He conducts research and evaluation projects with non-profit organizations in the fields of health, human services, and education, and has received funding from multiple government agencies.

James holds multiple degrees including a Ph.D. in Counseling Psychology, an M.S. Ed. from the University of Miami, and a BBA in Accounting from the University of Texas at Austin. He is also the host of the EvalNetwork podcast, a frequent conference presenter, and has published several peer-reviewed research articles and co-authored a book. James currently resides in Miami, Florida with his family and enjoys backpacking trips. Find out more about his work here.

