Surveys are a powerful tool for gathering insights, but small missteps in their design can lead to skewed results, disengaged respondents, or even unusable data.
Understanding these pitfalls, and how to avoid them, can make all the difference in ensuring a smooth survey experience. Client Manager Shelley Timms outlines the five most common mistakes, with practical examples of how to avoid each one.
Inconsistent wording between surveys
Inconsistent wording makes it difficult to aggregate responses across different surveys or track trends year-over-year. Even minor wording changes can influence how respondents interpret and answer the question, potentially skewing your data.
🚫 Don’t | ✅ Do |
---|---|
Create surveys from scratch each time | Use a template so the format and wording remain consistent |
Have unclear communication during the survey development stage | Assign a lead to the evaluation project to oversee consistency and proofing |
Modify multiple-choice or dropdown options inconsistently | Use pre-defined Question Bank questions or cross-reference where possible |
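One practical way to keep wording consistent is to store each question once, under a stable ID, and assemble every survey from that single source. The sketch below illustrates the idea in Python; the `QUESTION_BANK` structure and `build_survey` helper are hypothetical, not part of any survey platform's actual API.

```python
# A minimal sketch of a reusable question bank (hypothetical structure,
# not any survey platform's actual API). Each question lives in exactly
# one place, so every survey that references it uses identical wording.

QUESTION_BANK = {
    "connection": {
        "text": "It helped me feel connected to people in the community.",
        "type": "likert",
    },
    "overall_rating": {
        "text": "Overall, how would you rate this event?",
        "type": "rating",
    },
}

def build_survey(title, question_ids):
    """Assemble a survey from bank IDs, failing fast on unknown ones."""
    unknown = [qid for qid in question_ids if qid not in QUESTION_BANK]
    if unknown:
        raise KeyError(f"Not in question bank: {unknown}")
    return {"title": title, "questions": [QUESTION_BANK[q] for q in question_ids]}

# Both surveys share identical wording, so results can be compared year-over-year.
survey_2024 = build_survey("Festival Survey 2024", ["connection", "overall_rating"])
survey_2025 = build_survey("Festival Survey 2025", ["connection", "overall_rating"])
```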
Over-using required questions
Mandatory questions in a survey can frustrate respondents, leading to higher dropout rates and lower data quality. While some required questions are necessary for logic-based flows, forcing respondents to answer every question—especially personal ones—can result in survey fatigue and discomfort.
🚫 Don’t | ✅ Do |
---|---|
Make demographic and personal questions like age, gender, and economic impact mandatory | Only require questions that are essential for logic or sampling accuracy |
Assume required questions improve data quality | Recognise that forcing responses can lead to inaccurate or rushed answers |
Force respondents to provide feedback with mandatory questions | Leave feedback questions optional so respondents can choose to offer constructive comments |
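Part of this check can be automated. The sketch below assumes a simple list-of-dicts survey representation (hypothetical, not a real platform schema) and flags drafts where most questions are mandatory or where demographic questions are required.

```python
# A rough lint for over-used required questions. The survey structure is
# a hypothetical list of dicts, not a real platform's schema.

DEMOGRAPHIC_TOPICS = {"age", "gender", "income"}

def check_required(questions, max_required_ratio=0.3):
    """Warn when too many questions are mandatory, or demographics are."""
    warnings = []
    required = [q for q in questions if q.get("required")]
    if questions and len(required) / len(questions) > max_required_ratio:
        warnings.append(
            f"{len(required)} of {len(questions)} questions are required; "
            "consider making more of them optional."
        )
    for q in required:
        if q.get("topic") in DEMOGRAPHIC_TOPICS:
            warnings.append(f"Demographic question {q['id']!r} should not be required.")
    return warnings

draft = [
    {"id": "q1", "topic": "attendance", "required": True},  # needed for skip logic
    {"id": "q2", "topic": "age", "required": True},         # should be optional
    {"id": "q3", "topic": "feedback", "required": True},
]
for w in check_required(draft):
    print("WARNING:", w)
```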
Double-barrelled questions
Questions that ask about two different issues simultaneously make it unclear which part the respondent is addressing. This can confuse the respondent and make it difficult to report the results accurately.
🚫 Don’t | ✅ Do |
---|---|
Use double-barrelled questions such as “How satisfied are you with our product quality and customer support?” | Separate into two questions: “How satisfied are you with our product quality?” and “How satisfied are you with our customer support?” |
Assume respondents can provide a meaningful answer to a question covering multiple issues | Recognise that splitting questions improves clarity, data reliability and comparisons |
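Double-barrelled wording is often easy to catch mechanically: a question that pairs two subjects with "and" or "or" usually needs splitting. The sketch below is a crude keyword heuristic, not a grammar checker; it will produce false positives (for example, "terms and conditions") and is meant only as a prompt for human review.

```python
import re

# Crude heuristic: flag question wording that joins two subjects with a
# conjunction. This is a prompt for human review, not a grammar checker,
# and will flag harmless phrases too.

CONJUNCTIONS = re.compile(r"\b(and|or)\b", re.IGNORECASE)

def looks_double_barrelled(question_text):
    return bool(CONJUNCTIONS.search(question_text))

questions = [
    "How satisfied are you with our product quality and customer support?",
    "How satisfied are you with our product quality?",
    "How satisfied are you with our customer support?",
]
for q in questions:
    flag = "REVIEW" if looks_double_barrelled(q) else "ok"
    print(f"[{flag}] {q}")
```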
Using complex language
Using overly technical language, jargon, or complex phrasing can alienate respondents and make surveys harder to understand. Accessible language ensures that all respondents, regardless of background or literacy level, can participate and provide meaningful responses.
🚫 Don’t | ✅ Do |
---|---|
Use language such as “It contributed to an enhanced sense of social cohesion within the community.” | Use simple, relatable phrasing: “It helped me feel connected to people in the community.” |
Use industry jargon or complex terminology | Keep language clear and accessible to all respondents |
Assume all respondents have the same level of understanding or literacy | Ensure questions are inclusive by following accessible language practices |
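Readability scores give a quick, if imperfect, signal that wording may be too complex. The sketch below uses the third-party `textstat` package (an assumption: it is not bundled with any survey platform) to compare the jargon-heavy and plain versions of the example above; a Flesch Reading Ease score of roughly 60 or higher is generally considered plain English.

```python
# Requires the third-party textstat package: pip install textstat
import textstat

def flag_complex_wording(text, min_score=60):
    """Return (score, ok) using Flesch Reading Ease; >= 60 is plain English."""
    score = textstat.flesch_reading_ease(text)
    return score, score >= min_score

jargon = "It contributed to an enhanced sense of social cohesion within the community."
plain = "It helped me feel connected to people in the community."

for text in (jargon, plain):
    score, ok = flag_complex_wording(text)
    status = "ok" if ok else "TOO COMPLEX"
    print(f"[{status}] {score:5.1f}  {text}")
```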
Skipping the survey review step
Skipping the testing phase before distributing a survey can result in undetected issues like typos, broken logic, or unclear questions. A poorly tested survey can frustrate respondents and compromise data quality.
🚫 Don’t | ✅ Do |
---|---|
Send out a survey without testing it first | Preview and test the survey internally before launch |
Assume there are no errors in the logic or wording | Have someone else review the survey for clarity and functionality |
Forget to use the survey platform’s test mode | Use built-in preview tools to check for logic errors, typos, and technical issues |
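Some of the review step can also be automated. The sketch below reuses the same hypothetical list-of-dicts survey representation from earlier and runs a few basic pre-launch checks: skip-logic targets must exist, choice questions need options, and no two questions should share an ID. It complements a human preview in the platform's test mode; it does not replace one.

```python
# Basic automated pre-launch checks on a hypothetical survey structure.
# These complement a human review; they do not replace it.

def validate_survey(questions):
    errors = []
    ids = [q["id"] for q in questions]
    # Duplicate IDs usually mean a copy-paste slip.
    seen = set()
    for qid in ids:
        if qid in seen:
            errors.append(f"Duplicate question id: {qid!r}")
        seen.add(qid)
    for q in questions:
        # Every skip-logic target must point at a real question.
        target = q.get("skip_to")
        if target is not None and target not in ids:
            errors.append(f"{q['id']!r} skips to missing question {target!r}")
        # Choice questions need at least two options.
        if q.get("type") == "choice" and len(q.get("options", [])) < 2:
            errors.append(f"{q['id']!r} is a choice question with too few options")
    return errors

draft = [
    {"id": "q1", "type": "choice", "options": ["Yes", "No"], "skip_to": "q3"},
    {"id": "q2", "type": "choice", "options": ["Yes"]},  # too few options
    {"id": "q1", "type": "text"},                        # duplicate id
]
for e in validate_survey(draft):
    print("ERROR:", e)
```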
Are you interested in evaluating with Culture Counts? Contact us here to book a demo.