All the quant tools
you need.

Free calculators for market and social researchers.

Margin of Error

How precise is your survey result? Get the range within which the true answer likely falls.

Available

Significance Tester

Is the difference between two results real, or just noise? Test any two percentages.

Available

Batch Significance Test

Paste coded data from two survey waves and instantly flag which changes are statistically significant.

Available

Effective Sample Size

Weighted data isn't worth its full n. Find out what your weighted sample is actually worth statistically.

Available

More tools coming soon

Crosstab builder, subgroup sample size calculator, and more.


Margin of Error

Calculate how precise your survey result is

Survey details
Sample size (n)
The number of people who completed your survey or are included in this result.
Result (%)
The percentage you're reporting — e.g. enter 45 if 45% agreed. Leave at 50 if unknown; this gives the widest (most cautious) margin.
Confidence level
How sure you want to be that the true value falls inside the margin. 95% is the standard for most research reports.
Population size
Total population you're surveying. Leave at 0 if large or unknown — it rarely changes results above ~10,000.
Your margin of error
Lower bound
Upper bound
Pop. correction
What does this mean? The margin of error is the range within which the true answer probably sits. If you report 45% with a ±3% margin, the real figure is likely between 42% and 48%. Bigger samples give smaller margins — but halving the margin requires roughly four times as many respondents.
Formulae used
Margin of error: MoE = z × √( p(1−p) / n )
With finite population correction: MoE = z × √( p(1−p) / n ) × √( (N−n) / (N−1) )
Where z = confidence-level z-score (1.96 at 95%), p = proportion (0.5 for worst case), n = sample size, N = population size
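The two formulae above can be sketched in a few lines of Python (the function name and defaults are illustrative, not part of the tool):

```python
import math

def margin_of_error(n, p=0.5, z=1.96, N=None):
    """Margin of error for a proportion, with optional finite population correction."""
    moe = z * math.sqrt(p * (1 - p) / n)
    if N and N > n:
        moe *= math.sqrt((N - n) / (N - 1))  # finite population correction
    return moe

# n = 400 at p = 0.5 gives roughly ±4.9%, matching the reference table.
print(round(margin_of_error(400) * 100, 1))  # → 4.9
```

Because MoE shrinks with √n, halving the margin requires roughly four times the sample, as the explanation above notes.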
Calculate Target Sample Size

Enter your total population and desired precision to find out how many completed surveys you should aim for.

Target inputs
Population size
The total number of people in the group you want to survey.
Confidence level
95% is recommended and standard for most research.
Desired margin of error
How precise you want your results to be. ±5% is the most common threshold.
Apply finite population correction (FPC)
Recommended when surveying more than 5% of a known population. FPC reduces the required sample size.
Target sample size
Without FPC
With FPC
% of population
Formulae used
Base sample size: n₀ = z² × 0.25 / MoE²
With FPC adjustment: n = ( n₀ × N ) / ( n₀ + N − 1 )
Uses p = 0.5 (worst case). z = confidence z-score, N = population size.
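A minimal Python sketch of the target-size calculation above (function name and signature are illustrative):

```python
import math

def target_sample_size(moe, z=1.96, N=None):
    """Required completes for a desired margin of error, worst case p = 0.5."""
    n0 = z**2 * 0.25 / moe**2          # base sample size
    if N:
        n0 = (n0 * N) / (n0 + N - 1)   # finite population correction
    return math.ceil(n0)

print(target_sample_size(0.05))           # → 385 (the familiar "n = 385" for ±5%)
print(target_sample_size(0.05, N=2000))   # → 323 (FPC reduces the requirement)
```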
Reference: Sample Size & MoE

Common sample sizes and their MoE at 95% confidence, shown for a 50/50 result (worst case) and a 90/10 result.

Completed surveys | MoE at 50% | MoE at 90% | Rating
50    | ±13.9% | ±8.3% | Poor
100   | ±9.8%  | ±5.9% | Poor
150   | ±8.0%  | ±4.8% | Weak
200   | ±6.9%  | ±4.2% | Weak
300   | ±5.7%  | ±3.4% | Acceptable
400   | ±4.9%  | ±2.9% | Acceptable
500   | ±4.4%  | ±2.6% | Good
750   | ±3.6%  | ±2.1% | Good
1,000 | ±3.1%  | ±1.9% | Great
1,500 | ±2.5%  | ±1.5% | Great
2,000 | ±2.2%  | ±1.3% | Great
3,000 | ±1.8%  | ±1.1% | Great

Significance Tester

Is the difference between two results real, or just chance?

Group A
Result (%)
The percentage result for Group A — e.g. enter 55 if 55% agreed.
Sample size (n)
How many people were in Group A.
vs
Group B
Result (%)
The percentage result for Group B — e.g. enter 48 if 48% agreed.
Sample size (n)
How many people were in Group B.
Test settings
Confidence level
95% is standard in research.
Test type
Two-tailed checks for a difference in either direction (recommended). One-tailed tests a specific direction.
Result
Difference
Z-score
p-value
Std error
What does this mean? A significant result means the difference is unlikely to be due to chance. It doesn't mean the difference is large or important — always consider the size of the gap too. With very large samples, even tiny differences can become statistically significant.
Formulae used
Pooled standard error: SE = √( p̂(1−p̂) × (1/n₁ + 1/n₂) )
Z-score: z = | p₁ − p₂ | / SE
p-value (two-tailed): p = 2 × (1 − Φ(z))
Where p̂ = pooled proportion, Φ = standard normal CDF.
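The pooled z-test above fits in a short Python sketch; Φ is built from `math.erf`, and the example inputs are illustrative:

```python
import math

def two_prop_z_test(p1, n1, p2, n2):
    """Pooled two-proportion z-test; returns (z, two-tailed p-value)."""
    p_hat = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
    se = math.sqrt(p_hat * (1 - p_hat) * (1/n1 + 1/n2))  # pooled standard error
    z = abs(p1 - p2) / se
    phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))         # standard normal CDF Φ(z)
    return z, 2 * (1 - phi)                              # two-tailed p-value

# 55% vs 48% with 500 respondents per group:
z, p = two_prop_z_test(0.55, 500, 0.48, 500)
print(round(z, 2), p < 0.05)  # z ≈ 2.21, significant at 95%
```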

Batch Significance Test

Paste coded question data from two waves and flag every significant change at once

Sample sizes
This wave (n)
Total number of respondents in your most recent wave.
Previous wave (n)
Total number of respondents in the previous wave.
Confidence level
95% is standard. A higher level means fewer differences will be flagged significant.
Paste your data
Select three columns in Excel — code label, this wave %, previous wave % — and paste directly.
Code label | This wave % | Previous wave %


Results
Code | This wave | Prev wave | Change | Significant?
How to read this table: Each row is one code. ▲ Sig. increase and ▼ Sig. decrease mean the change is unlikely to be sampling variation; rows without an arrow show changes that are not significant.
Formulae used
Pooled proportion: p̂ = (p₁n₁ + p₂n₂) / (n₁ + n₂)
Standard error: SE = √( p̂(1−p̂) × (1/n₁ + 1/n₂) )
Z-score: z = | p₁ − p₂ | / SE
Two-tailed test. Same pooled z-test as the single significance tester.
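The batch version applies the same test row by row. A sketch, assuming tab-separated input of the kind the paste box describes (label, this wave %, previous wave %); the function and sample labels are illustrative:

```python
import math

def batch_flag(pasted, n1, n2, z_crit=1.96):
    """Flag significant changes in tab-separated rows: label \t this% \t prev%."""
    results = []
    for line in pasted.strip().splitlines():
        label, this_pct, prev_pct = line.split("\t")
        p1, p2 = float(this_pct) / 100, float(prev_pct) / 100
        p_hat = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
        se = math.sqrt(p_hat * (1 - p_hat) * (1/n1 + 1/n2))  # pooled standard error
        z = abs(p1 - p2) / se if se else 0.0
        results.append((label, p1 - p2, z > z_crit))         # (code, change, significant?)
    return results

data = "Brand A\t55\t48\nBrand B\t30\t29"
for label, change, sig in batch_flag(data, n1=500, n2=500):
    print(label, f"{change:+.0%}", "significant" if sig else "not significant")
```

With 500 respondents per wave, the 7-point move is flagged and the 1-point move is not.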

Effective Sample Size

Find out what your weighted sample is actually worth statistically

Why this matters: Weighting adjusts your data to better reflect the population, but it comes at a cost — it reduces statistical power. A weighted sample of 1,000 might behave like only 700 unweighted interviews. This tool tells you the true effective n.
Paste your weight column
Copy the weight variable column from SPSS, Excel or Q. One weight value per row. Typically weights centre around 1.0.


Effective sample size
Actual n
Weight efficiency
Design effect
MoE (effective n)
MoE (nominal n)
Formulae used
Effective sample size: ESS = (Σwᵢ)² / Σwᵢ²
Design effect: DEFF = n / ESS
Weight efficiency: efficiency = ESS / n × 100%
MoE at 95%: MoE = 1.96 × √(0.25 / ESS)
Where wᵢ = individual weight. MoE uses p=0.5 (worst case).
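All four formulae above follow from the weight column alone. A minimal Python sketch (the deliberately skewed example weights are illustrative):

```python
import math

def effective_sample_size(weights):
    """Kish effective sample size, design effect, efficiency and worst-case MoE."""
    ess = sum(weights) ** 2 / sum(w * w for w in weights)  # (Σw)² / Σw²
    n = len(weights)
    deff = n / ess                        # design effect
    efficiency = ess / n * 100            # weight efficiency in %
    moe = 1.96 * math.sqrt(0.25 / ess)    # MoE at 95%, p = 0.5 worst case
    return ess, deff, efficiency, moe

# Half the sample weighted 0.5, half weighted 1.5:
weights = [0.5] * 250 + [1.5] * 250
ess, deff, eff, moe = effective_sample_size(weights)
print(round(ess), round(deff, 2), round(eff, 1))  # → 400 1.25 80.0
```

So 500 interviews under this weight scheme carry the statistical power of 400 unweighted ones, which is exactly the kind of loss the note at the top of this tool describes.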