
Survey sample sizes

How many public survey responses do I need?

You should aim to collect enough responses to form a statistically significant sample of your total audience or attendance. A statistically significant sample means you can be confident your survey results reflect the opinion of your wider audience. The number of responses needed to achieve a margin of error under 5% depends on your attendance numbers and on how much opinion varies between respondents, which can be difficult to predict across different events and activities.

The more responses you collect, the more likely you are to achieve a significant sample. We generally suggest aiming for 100 responses; however, any engagement with the public is a great start. Evaluations with low response rates can still produce very useful data, and engaging with audiences in this way can be beneficial even with a small total sample.

Collecting a representative sample size

The Culture Counts platform enables you to calculate the ‘margin of error’ for your evaluation. This is what we use to gauge the strength of the data collected, rather than relying on capturing a fixed percentage response rate from your audience.

To calculate the margin of error, click on the Properties tab for your evaluation.

Scroll down to the bottom and enter the overall attendance figure for the project. NOTE: this refers to the total attendance for the evaluation as a whole, so if the evaluation contains surveys for multiple events you will need to add their attendance figures together.

Once you’ve entered the overall attendance, click the Reporting tab and the margin of error will appear under the Variance tab.

The screenshot below is from the Example Evaluation AU, which you should also have in your dashboard. We have a total audience size of 28,000 and have surveyed 140 people. If we went by the idea of needing a 10% response rate, we would have to survey 2,800 people, which may be unrealistic in terms of resourcing.

This is why we use margin of error instead. We would always aim for a margin of error below 5%, which in this example we have for the responses to the question capturing ‘Captivation’. This means we can be 95% confident that if we surveyed the entire visitor population, the average outcome for ‘Captivation’ would fall within 4.3% of the average generated by the sample.
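Culture Counts doesn’t publish the exact formula used in the dashboard, but the standard calculation behind a margin of error for a sample mean can be sketched as follows. This is an illustration only: the function name is hypothetical, and it assumes dimension scores sit on a 0–100 scale so the result reads as percentage points.

```python
import math

def margin_of_error(responses, population_size, z=1.96):
    """Margin of error for a sample mean at 95% confidence (z = 1.96),
    with a finite population correction.

    Assumes `responses` are scores on a 0-100 scale, so the result
    reads as percentage points (e.g. 4.3 means +/- 4.3%).
    """
    n = len(responses)
    mean = sum(responses) / n
    # Sample variance (n - 1 denominator) and standard deviation
    variance = sum((x - mean) ** 2 for x in responses) / (n - 1)
    sd = math.sqrt(variance)
    # The finite population correction shrinks the error when the
    # sample is a noticeable fraction of the whole audience
    fpc = math.sqrt((population_size - n) / (population_size - 1))
    return z * (sd / math.sqrt(n)) * fpc
```

Because the standard deviation of the responses appears in the numerator, two surveys with the same number of responses can have quite different margins of error, which is why a fixed response-rate target is a blunt instrument.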

In the example above, the margin of error for the other dimensions is above 5%, which means more survey responses are needed before the averages can be relied on. This doesn’t necessarily mean you need 2,800 responses – somewhere around 350 could do it. Since the reporting dashboard updates live, you can always see how your margin of error is tracking in real time.
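To get a rough feel for how many responses a target margin of error implies, the standard sample-size formula (with finite population correction) can be sketched like this. The function name is hypothetical, and the default sd=50 is the worst-case spread on a 0–100 scale; real survey data usually varies less, which is why fewer responses can suffice in practice.

```python
import math

def required_sample_size(population_size, target_moe, sd=50.0, z=1.96):
    """Responses needed to hit a target margin of error (in percentage
    points on a 0-100 scale) at 95% confidence.

    sd=50 is the worst-case standard deviation for a 0-100 scale
    (equivalent to a 50/50 split in opinion).
    """
    # Sample size for an effectively infinite population
    n0 = (z * sd / target_moe) ** 2
    # Adjust downward for a finite audience
    n = n0 / (1 + (n0 - 1) / population_size)
    return math.ceil(n)

# For an audience of 28,000 and a 5-point target:
# required_sample_size(28000, 5)          -> 379 (worst-case spread)
# required_sample_size(28000, 5, sd=30.0) -> 138 (less varied opinion)
```

With the worst-case spread this lands near 380, in the same ballpark as the figure of around 350 above; if respondents broadly agree, the number needed drops considerably.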

Some things to note:

  • This will only work for surveys that contain dimension questions; no margin of error is calculated for surveys built entirely on custom questions. You can search online for ‘margin of error’ calculators if you need one, but take their results with a grain of salt, as they often calculate the margin of error based on a worst-case scenario.
  • This calculation works best for large sample sizes. For small audiences (for example, workshops with 50 or fewer respondents), agreement tends to vary more relative to the sample size, so the margin of error will be wider.
  • The margin of error is only calculated for the evaluation as a whole – you won’t be able to view this ‘per survey’ as it is a tool used to gain a picture of your data overall.
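To see why generic online calculators tend to overstate the error, compare the worst-case assumption with a more typical spread for the 140-response example above. This is a sketch: the sd values are illustrative, not taken from the platform.

```python
import math

n, N, z = 140, 28000, 1.96           # sample size, audience size, 95% confidence
fpc = math.sqrt((N - n) / (N - 1))   # finite population correction

# Worst case (what generic calculators assume): sd = 50 on a 0-100 scale,
# equivalent to a 50/50 split in opinion.
worst_case = z * 50 / math.sqrt(n) * fpc   # roughly 8.3 percentage points

# A hypothetical observed spread of sd = 26 gives a much tighter margin,
# similar to the 4.3% reported for 'Captivation' above.
observed = z * 26 / math.sqrt(n) * fpc     # roughly 4.3 percentage points
```

The same 140 responses can therefore look inadequate through a worst-case calculator while actually delivering a margin of error below 5% once the real variation in responses is taken into account.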

If you have any further queries, please feel free to reach out to your Client Manager for advice.
