Starting off by answering some key questions

Useful if: you hear the word ‘evaluation’ and can’t describe it easily.

What is ‘Evaluation’?

  • “Evaluation is the systematic and evidence-based assessment of a policy or intervention to understand its design, implementation, outcomes, overall effectiveness, and value.”

    Let’s break down a few things from that quote:

    • ‘Systematic’: You need to plan it carefully and keep it going throughout your work, not just do it at the end.

    • ‘evidence-based assessment’: You need to gather data, not just have an opinion or view.

    • ‘outcomes’: See the ‘Outcomes’ and ‘Outputs’ section further down.

    • ‘value’: This is more complicated than it seems. This [future link] blog post explores this more.

    In a nutshell, it is how you learn from your work. It helps you understand what is working, what isn’t, and why, so you can improve what you do.

    It is not about judgement or inspection. It is about learning and improving.

  • There are lots of different types of evaluation (pilot evaluation, economic evaluation, impact evaluation, formative, summative…); the list is endless.

    However, I want to keep this simple. If you are getting started, and you want to understand your work, you will most likely wish to conduct an Impact Evaluation (IE) or Implementation and Process Evaluation (IPE).

    Impact Evaluation (IE): This focuses on what is different as a result of the programme. Has it worked…?

    Implementation and Process Evaluation (IPE): This focuses on how the programme was delivered, and why something may or may not have worked.

    Both of these look at different things and in different ways, but for now just knowing the difference is an important step.

  • If you don’t stop to learn from your work, you risk repeating the same mistakes or missing what is actually making a difference.

    Worse, you risk doing harm (and not knowing about it) when you intended to do good.

    Evaluation helps you:

    • Improve your programmes

    • Use time and money more effectively

    • Understand who benefits, and who doesn’t

    • Make informed decisions about change

    In widening participation and social impact work, this matters because resources are limited and the needs are real.

    If you have limited resources, time, and/or money, then evaluation is more important than ever.

  • Myth: Evaluation is complex, time-consuming, and expensive.

    Reality: Evaluation can be simple, quick, and budget-friendly, using methods that align with your resources, from quick feedback loops to more formal studies. How complex and expensive it becomes is really down to you.

    ——————

    Myth: Evaluation is mainly for accountability and proving success.

    Reality: Being accountable is one purpose, but far from the only one. Evaluation drives improvement, helps you learn about your work, helps you do it better, and is your tool for ongoing growth. The context of your work is changing, and this helps you keep up.

    ——————

    Myth: You need to be an expert to do it.

    Reality: You will already have valuable skills to do this. Evaluation starts with being curious about your own work, asking ‘why’ and ‘how’. Your existing problem-solving and project management skills, and your ability to communicate ideas, are all more important than understanding jargon.

    ——————

    Myth: There is one ‘right’ way to do evaluation.

    Reality: Two evaluation experts could come up with two completely different ways to evaluate something, and both work well. Ultimately, the best approach is context-specific, flexible, and adapts to the organisation and ongoing work.

    ——————

    Myth: Quantitative research (numbers, figures) is more scientific than qualitative research (words, meanings, experiences).

    Reality: These methods are different and give you different things. One is not better than the other. It’s just as easy to make numbers and figures say anything you like as it is to misinterpret someone’s experiences. Both require care.

    ——————

    Myth: You do evaluation at the end.

    Reality: My number one pet peeve! You need to plan your evaluation from the very start of your work. If you think about it after the event, it’s too late. How can you measure change if it is too late to measure where you started?!

  • The OfS have a key resource for evaluation, laying out what they describe as their ‘standards of evidence’: OfS resources here.

    One of the biggest guides for doing evaluation in the Higher Education sector is TASO. Absolutely worth a look: TASO resources here.

    Advance HE have a knowledge hub that might be worth exploring, offering tools and examples of good practice: AdvanceHE resources here.

    A personal favourite for me is the Evaluation Collective. This is made up of staff across the HE sector who have to actually DO evaluation. So their knowledge is based on practical experience. Evaluation Collective resources here.

What is Theory of Change (ToC)?

Useful if: you’ve been asked for a ToC and don’t quite know what it’s meant to do.

  • A Theory of Change is a clear explanation of how and why your work is expected to lead to change.

    It links:

    What you do > what happens to the participants > the wider impact

    Some think it is just a diagram. It isn’t. It’s a tool to explore, reinforce, and explain your shared thoughts and ideas.

  • There are lots of different styles, templates, and layouts for a Theory of Change that you can find online (I’ll share some below), but no one way of doing it is ‘right’.

    They will use different terms to describe the same idea: defining the ultimate impact you want to have, and working out the steps to get there.

    ——————

    Some use specific terms and map change vertically like this:

    Long term Impact
    ^
    Short term Outcomes
    ^
    Activities


    Some use different terms and map change horizontally like this:

    Actions > Preconditions > Goal

    ——————

    Some are very detailed, which is great to explore change and really understand the way your programmes work.

    Some are very minimal, which is great to effectively communicate what you’re doing to a wider audience.

    The most important thing is that what you describe in your Theory of Change is backed up by evidence.

    As for the ‘type’, don’t worry too much. Find the one that works for your organisation and for you. It’s the way you use it that matters.

  • In various sectors, you’ll find organisations doing things because they assume they will work. But there has been little thought beyond that assumption.

    How can you explain why your work will make change happen without an evidenced theory behind the change?

    A Theory of Change helps you:

    • Be clear about your purpose

    • Agree assumptions within a team

    • Identify what to evaluate

    • Explain your work to others

    Without it, programmes can become a list of activities rather than a strategy for change.

  • Myth: Theory of Change is just a box-ticking exercise.

    Reality: If not done properly, it absolutely can be.

    When a Theory of Change is created quickly, with no time to think, it can feel like something you produce to satisfy someone else. Then it can sit in a folder, unused.

    When done properly, it can be transformative. The value of a Theory of Change is in the thinking it encourages. If it feels like box-ticking, the problem usually isn’t the tool, it’s that people haven’t been given the time or space to think properly. This particular argument is explored more in one of my blog posts.

    ——————

    Myth: Theory of Change has to be complicated to be strong.

    Reality: Many people assume that a ‘good’ Theory of Change must be detailed, technical, and full of specialist language.

    But being clear is not the same as being ‘basic’. A clear Theory of Change can still be robust, thoughtful, and evidence-informed, without being difficult to understand.

    If stakeholders can’t follow it, that’s not ‘rigour’. That’s a communication problem.

    ——————

    Myth: It’s just a diagram.

    Reality: The diagram is the output, a visual representation of your thinking; it is not the Theory of Change itself.

    The real value lies in the conversations, the thought process, and the evidence:

    • Why do we think this works?

    • What assumptions are we making?

    • What could get in the way?

    A simple visual can be powerful because it captures shared thinking in one place, but the thinking always comes first.

    ——————

    Myth: Once it’s written, it’s done.

    Reality: A Theory of Change is not a contract. It’s a working model.

    As you learn more, through delivery, reflection, or evaluation, your Theory of Change should evolve. Over time, the context of your work changes too.

    Updating it is a sign of learning and understanding, not failure. It’s necessary.

    ——————

    Myth: We don’t have time for Theory of Change.

    Reality: I get it, and for many of you, it might be true. But it’s also the strongest argument to do one.

    Without clarity, teams risk spending time on activities that don’t contribute to change, or struggling to explain their work later.

    A simplified Theory of Change can save time by providing focus.

  • During my secondment at TASO, I led on a project to create an interactive tool to help you create and develop your Theory of Change.

    If you like the TASO Theory of Change approach, I’d encourage you to have a play!

    Access it here: https://toc-builder.taso.org.uk/

‘Outcomes’ and ‘Outputs’

Useful if: you’re not sure what ‘results’, ‘changes’, ‘targets’ or ‘KPIs’ you’re meant to be measuring.

  • Understanding the difference between outputs and outcomes is one of the most important (and sometimes most confusing) parts of evaluation.

    Put simply:

    • outputs describe what you do

    • outcomes describe what changes as a result

    Both matter, but they answer different questions.

    Outputs: the direct products of your work. They are usually things you can count or list.

    They tell you what was delivered, not whether it made a difference.

    Outcomes: the changes that happen because of your work.

    These changes might be:

    • in knowledge or understanding

    • in attitudes or confidence

    • in behaviour or decisions

    • in circumstances or opportunities

  • Outputs are often immediately obvious: something tangible that has happened through your work. For example:

    • Number of workshops delivered

    • Number of students attending

    • Number of mentoring sessions

    • Number of resources produced

    Outcomes are often less tangible: the change that happens as a result of everything you’ve done. For example:

    • Students feel more confident applying to university

    • Participants better understand their options

    • Young people feel a stronger sense of belonging

    • Learners are more likely to persist with study

  • Outputs help you:

    • Track delivery

    • Understand reach and scale

    • Manage projects and resources

    They are often what funders ask for first, because they are clear and measurable.

    Outputs do not tell you:

    • whether people benefited

    • whether learning happened

    • whether behaviour changed

    High activity does not automatically mean high impact.

    ——————

    Outcomes help you understand whether your work is making a difference.

    They allow you to:

    • judge effectiveness

    • improve programme design

    • focus on what really matters

    Outcomes are where learning happens.

  • Myth: “If we measure a lot of outputs we must be having impact”

    Reality: High activity can feel reassuring (lots of sessions delivered, lots of people reached) but delivery alone does not tell you whether change is happening.

    Outputs tell you how busy you were. Outcomes tell you whether it mattered.

    Both are useful, but they answer different questions.

    ——————

    Myth: “Outcomes are just ‘soft’ or subjective”

    Reality: Outcomes are sometimes dismissed as vague or unreliable because they often involve confidence, understanding, or attitudes.

    But these changes are often exactly what education and social programmes are designed to influence.

    What makes an outcome strong is not whether it is “hard” or “soft”, but whether it is clearly defined with good evidence.

    ——————

    Myth: “Outcomes only matter at the end of a programme”

    Reality: Outcomes are often treated as something to look at once everything is finished.

    But outcomes can be short-term, medium-term, or long-term. And noticing early changes can help you adjust and improve delivery while the work is still happening (like what we do in ‘developmental evaluation’).

    Evaluation is most useful when it supports learning during delivery, not just after it.

    ——————

    Myth: “Funders only care about outputs”

    Reality: On the face of it, it often seems that way, doesn’t it? Funders do often ask for outputs, but many are increasingly interested in understanding:

    • what changed,

    • for whom,

    • and why.

    Often this is what they are really after.

    Being able to show outcomes clearly can strengthen relationships with funders and partners, and even bag you increased money, resources, and investment in your work.

  • For Higher Education (or even Further Education) Institutions, I’d take a look at TASO and their evidence toolkits for a starting point: https://taso.org.uk/evidence-toolkits/

    More widely, the OfS themselves have years of reporting on monitoring outcomes linked to Access and Participation Plans that give lots of good examples of what outcome measurement looks like: https://www.officeforstudents.org.uk/data-and-analysis/access-and-participation-plan-data/monitoring-data-and-outcomes-2020-21/

    Outside of education, gov.uk also have a nice little explainer and some resources, from DCMS. https://dcmslibraries.blog.gov.uk/2018/08/28/an-introduction-to-measuring-outcomes/

What is ‘Impact’?

Useful if: you’re under pressure to ‘prove impact’ or ‘show something works’ but aren’t sure what that means.

  • In social impact, education, and public services, impact refers to the longer-term difference your work contributes to.

    Perhaps think ‘the ultimate change that means your issue or problem no longer exists’.

    Impact is not just about what you deliver or even what changes immediately after. It is about how your work helps shift people’s lives, opportunities, or experiences over time.

    Importantly, impact is usually about contribution, not proof.

  • There are varying levels and types of impact, so let’s look at this as broadly as possible.

    ——————

    1. Individual impact: Changes experienced by individuals.

    Examples:

    • Improved standard of living

    • Increased progression into education or work

    2. Organisational impact: Changes within organisations or systems.

    Examples:

    • Improved practice based on learning

    • Systemic or structural change of an organisation

    3. Community or societal impact: Wider change beyond individuals.

    Examples:

    • Increased participation from under-represented groups

    • Reduced inequalities between groups

  • Defining the impact gives you a collective target to aim for. In a sense, this is your mission, the ultimate thing your programme of work is looking to achieve.

    Take some time to understand and define this as accurately as possible. All of the actions, outcomes, and data point to this.

    Fully understanding impact helps organisations:

    • Stay focused on purpose, not just activity

    • Make better decisions

    • Learn what really works

    • Be accountable to communities, funders, and partners

    In education and social justice work, impact matters because good intentions alone are not enough. Knowing whether work contributes to change is essential.

  • Myth: “We have to prove we caused the impact”

    Reality: In complex social systems, no single organisation works in isolation. Evaluation usually explores your contribution to change rather than proving sole causation. Your impact often happens further down the line, and you will need evidence that your work is going to contribute to it.

    That’s where a good evaluation comes in, with a solid Theory of Change.

    ——————

    Myth: “Impact is only long-term and abstract”

    Reality: Often, it can be. But not exclusively. It really depends on the scope and size of your work.

    Impact can be long-term, but it can also be visible through patterns, progression, and cumulative change.

    ——————

    Myth: “Impact only matters to funders”

    Reality: Understanding impact helps teams improve practice and stay aligned with their values, not just meet external requirements.

    Your impact matters to all of your staff, because it helps them understand the work. It matters to your clients or the people you work with, because they need to be bought into the process. It matters to the public you interact with, because it helps them understand why you are needed.

  • For a more detailed definition, UK Research and Innovation (UKRI) have a resource to help: https://www.ukri.org/councils/esrc/impact-toolkit-for-economic-and-social-sciences/defining-impact/

    The Times Higher has a nice article on this from a Higher Education perspective: https://www.timeshighereducation.com/campus/defining-impact-shift-thinking-acting-and-being

    The UK government has a nice paper on this to explain this more holistically. https://assets.publishing.service.gov.uk/media/57a0896de5274a31e000009c/60899_Impact_Evaluation_Guide_0515.pdf

What is being ‘evidence-led’?

Useful if: you’re unconvinced or concerned about taking time away from ‘core delivery’.

  • This phrase is becoming increasingly common across the education sector.

    Being evidence-led means using information to inform decisions, improve practice, and learn from experience.

    It does not mean becoming academic, data-heavy, or ignoring the views or judgements of individual staff. Evidence supports decision-making; it does not replace it.

  • There are lots of ways you can categorise data, so I’m not going to go into too much detail here. However, there are a few types you need to know, explained simply below.

    1. Quantitative evidence: Numbers and figures.

    Examples:

    • Attendance figures

    • Retention rates

    • Survey responses

    2. Qualitative evidence: Descriptive data that explains experiences.

    Examples:

    • Interviews

    • Focus groups

    • Written reflections

    3. Research and literature: Findings from studies or evaluations.

    Examples:

    • Sector research

    • What Works evidence

    • Academic studies

  • If you can say that you, or your organisation, are truly evidence-led, this gives your work real strength.

    This is because being evidence-led means you are committed to:

    • Avoiding assumptions

    • Improving programmes over time

    • Adapting to what is actually happening

    • Using resources more effectively

    In education and social impact work, this helps ensure that effort leads to meaningful change.

  • Myth: “Evidence only means numbers.”

    Reality: Stories and experiences are evidence too.

    In fact, I’d go further. The strongest evidence needs to be put together into a narrative or a story. Evidence needs to be understood and felt for it to be impactful, so you need numbers and figures, and you need detail and description to bring them to life.

    ——————

    Myth: “Everything needs to be measured.”

    Reality: Not necessarily, and not all of the time. Good practice focuses on what is most useful to learn.

    You will almost certainly not have the time or resource to produce good evidence for everything. Think through why you need each piece of evidence before you collect it.

    ——————

    Myth: “Being evidence-led means ignoring anecdotal or lived experience.”

    Reality: This isn’t about erasing views, or silencing opinion. This is about supporting views, and strengthening your practice.

    The important thing, however, is understanding the strength of evidence, and being willing to change your opinion in the face of strong evidence.

    Ultimately, though, evidence strengthens judgement, it doesn’t replace it.

    ——————

    Myth: “If something worked elsewhere, it will work here.”

    Reality: Context matters. People matter. Your approach matters.

    Evidence needs interpretation, not just a copy-paste job. You need to spend the time understanding evidence in your own working context, thinking about your work from every angle.

    Having said all that, sharing evidence and learning from others is absolutely the right approach. So if something is working elsewhere, don’t ignore it! Understand why and use it!

  • The Education Endowment Foundation (EEF) have a great explainer page, with well-thought-out guides: https://educationendowmentfoundation.org.uk/education-evidence/more-resources-and-support/using-research-evidence

    The UK government have a good case study here about ‘evidence-based decision making’, which might be worth a read: https://www.gov.uk/government/publications/evidence-based-decision-making-framework-used-by-the-regulatory-horizons-council/evidence-based-decision-making-framework-used-by-the-regulatory-horizons-council

    And from an education context, UCL have an article I like that discusses ‘evidence-based’ and ‘research-informed’ teaching practices: https://blogs.ucl.ac.uk/ioe/2017/03/23/just-what-is-evidence-based-teaching-or-research-informed-teaching-or-inquiry-led-teaching/
