How Peer Benchmarking Works
Methodology behind MATpulse's similar-schools comparison, aligned with DfE FBIT peer selection.
What peer benchmarking does
Peer benchmarking compares your school against 30 statistically similar schools across key financial, attendance, and performance metrics. It shows where your school sits in the distribution — not whether you are performing well or badly, but how your metrics compare to schools serving similar communities. This is context for your own decision-making, not a judgement.
How similar schools are selected
MATpulse mirrors the DfE's Financial Benchmarking and Insights Tool (FBIT) methodology for selecting comparator schools. Schools are first grouped by phase (primary or secondary) and region. Within each group, the 30 nearest neighbours are identified using weighted Euclidean distance across three variables: number of pupils (weight 1.0), percentage of pupils eligible for free school meals (weight 1.5), and percentage of pupils with SEN (weight 1.0). FSM is weighted more heavily because deprivation is the strongest predictor of both cost structure and pupil outcomes. If a phase-region group has fewer than 30 schools, the algorithm falls back to phase-only matching.
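The selection described above can be sketched as follows. This is an illustrative implementation, not MATpulse's actual code: the weights and the fallback rule come from this section, but how the three variables are scaled before the distance calculation is not stated here, so standardising each variable (z-scores) is an assumption to keep variables on very different scales comparable.

```python
import math
from dataclasses import dataclass

# Per-variable weights from the methodology above. FSM% is weighted
# more heavily because deprivation best predicts cost and outcomes.
WEIGHTS = {"pupils": 1.0, "fsm_pct": 1.5, "sen_pct": 1.0}

@dataclass
class School:
    urn: str
    phase: str      # "primary" or "secondary"
    region: str
    pupils: float
    fsm_pct: float
    sen_pct: float

def _standardise(schools, attr):
    # z-score each variable so pupils, FSM% and SEN% are comparable.
    # (Assumed step -- the scaling used in practice is not documented here.)
    values = [getattr(s, attr) for s in schools]
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
    return {s.urn: (getattr(s, attr) - mean) / sd for s in schools}

def select_peers(target, all_schools, n=30):
    """Return the n nearest neighbours by weighted Euclidean distance."""
    # Group by phase + region first; fall back to phase-only if the
    # group has fewer than n candidate schools.
    group = [s for s in all_schools
             if s.phase == target.phase and s.region == target.region
             and s.urn != target.urn]
    if len(group) < n:
        group = [s for s in all_schools
                 if s.phase == target.phase and s.urn != target.urn]
    pool = group + [target]
    z = {attr: _standardise(pool, attr) for attr in WEIGHTS}

    def distance(s):
        return math.sqrt(sum(w * (z[attr][s.urn] - z[attr][target.urn]) ** 2
                             for attr, w in WEIGHTS.items()))

    return sorted(group, key=distance)[:n]
```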
Why these variables
The DfE's FBIT uses school phase, region, number of pupils, FSM%, and SEN% for its running-cost comparators. The DfE's Attendance Similar Schools model uses a similar approach with FSM%, SEN%, IDACI scores, phase, and school size, selected via a gradient boosted decision tree. MATpulse uses the FBIT set because it is transparent, well-documented, and widely understood by school business managers and governors. We do not use prior attainment (like FFT Aspire's contextual value-added model) because peer benchmarking is descriptive, not predictive — it shows where you sit among similar schools, not whether you are above or below expectation.
What the box-and-whisker plot shows
For each metric, the visualisation shows the distribution of values across your 30 peer schools. The grey box represents the interquartile range (IQR) — the middle 50% of peers (25th to 75th percentile). The vertical line inside the box marks the peer median. The whiskers extend to the minimum and maximum values in the peer group. The coloured dot shows where your school sits. If the dot is inside the box, your school is in the middle range. If it is outside, your value is in the top or bottom quartile among peers.
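The five components of the chart can be computed as below. One detail worth noting: the whiskers described above run to the peer minimum and maximum, not the 1.5 × IQR convention many plotting libraries default to. The function name and dictionary keys are illustrative, not MATpulse's API.

```python
import statistics

def box_plot_summary(peer_values, school_value):
    """Compute the box-and-whisker components for one metric."""
    # statistics.quantiles with n=4 returns the 25th, 50th and 75th
    # percentiles (the default "exclusive" method is assumed here).
    q1, median, q3 = statistics.quantiles(peer_values, n=4)
    return {
        "whisker_low": min(peer_values),   # whiskers span the full range,
        "q1": q1,                          # left edge of the grey box
        "median": median,                  # line inside the box
        "q3": q3,                          # right edge of the grey box
        "whisker_high": max(peer_values),  # not 1.5 x IQR
        "school": school_value,            # the coloured dot
        "in_iqr": q1 <= school_value <= q3,
    }
```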
Statistical detail
Below each chart, MATpulse shows your school's value, the peer median, the peer mean, the standard deviation, and your percentile rank. The median is the middle value and is less sensitive to outliers than the mean. The standard deviation tells you how tightly clustered the peer group is: a small SD means peers are similar, while a large SD means there is wide variation among schools that look similar on paper. The percentile rank tells you what proportion of peers have a lower value than yours.
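These summary figures might be computed as follows. Two assumptions are worth flagging: whether MATpulse uses the population or sample standard deviation is not stated above (the sample version, with an n − 1 denominator, is used here), and ties are treated as "not lower" when counting the percentile rank, matching the definition in this section.

```python
import statistics

def peer_statistics(peer_values, school_value):
    """Summary figures shown under each benchmarking chart."""
    # Percentile rank: proportion of peers with a strictly lower value.
    lower = sum(1 for v in peer_values if v < school_value)
    return {
        "median": statistics.median(peer_values),
        "mean": statistics.mean(peer_values),
        "sd": statistics.stdev(peer_values),  # sample SD (assumed)
        "percentile_rank": 100 * lower / len(peer_values),
    }
```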
What this does not tell you
Peer benchmarking shows where you sit relative to similar schools. It does not tell you whether your spending is too high or too low — that depends on your school's specific circumstances, SEND provision, staffing model, building condition, and strategic priorities. A school investing heavily in specialist SEND support will legitimately have higher staff costs than peers. A school with a large sixth form will have different cost structures from a school without one. The benchmarks provide context for conversations, not answers.
DfE sources
FBIT comparator methodology: financial-benchmarking-and-insights-tool.education.gov.uk. DfE Attendance Similar Schools algorithmic transparency record: gov.uk/algorithmic-transparency-records. IDSR methodology: gov.uk/guidance/school-inspection-data-summary-report-idsr-guide. The peer selection algorithm and all thresholds are documented in MATpulse's open-source codebase.