They spent two years finding out something ‘didn't work’. That’s great news.

Last week, I was lucky enough to be in the room at the swanky TASO offices in London for a research launch that I think deserves lots of attention. Not just because of what’s in it, but because of what it represents.

Two research reports were presented, but I suspect the one focused on using learning analytics data for wellbeing support will get the headlines.

This is because on the surface, the headline sounds disappointing.

A two-year study. Three universities. Randomised controlled trials. And the result?

The intervention didn't work.

I want to explain why I think that is one of the most valuable things to happen in Higher Education evaluation in yonks.

Panel discussion, stage l-r: Dr Eliza Kozman (TASO), Prof Michael Sanders (King's), Prof Deborah Johnston (South Bank), Prof Sandeep Ranote (NHS), Dr Gareth Hughes (Student Minds)

First, context

TASO (The Centre for Transforming Access and Student Outcomes in Higher Education) commissions and publishes independent evidence to help universities improve outcomes for students, particularly those from underrepresented backgrounds. They're well known in the sector as the lead organisation in understanding this stuff, and they're also made up of very talented people.

One of the studies they published focused on using learning analytics.

‘Learning analytics’ is essentially the set of systems that universities use to track student behaviour digitally. We have one of these at Nottingham Trent, and it shows things like ‘are students logging into the library system?’, ‘are they submitting work on time?’ or ‘are they clocking into uni buildings?’. Universities have been collecting this data for years, and many have started using it to identify students who might be struggling.
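To make that concrete, here's a minimal, hypothetical sketch of the kind of rule a platform like this might apply. Every field name and threshold below is invented for illustration; none of it comes from NTU's actual system or the TASO study.

```python
# Hypothetical sketch of a learning analytics 'low engagement' flag.
# Field names and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class StudentActivity:
    library_logins_last_30d: int   # digital library sign-ins
    submissions_on_time: int       # assignments submitted by deadline
    submissions_due: int           # assignments due in the period
    campus_checkins_last_30d: int  # building access 'clock-ins'

def is_low_engagement(a: StudentActivity) -> bool:
    """Crude rule: flag a student when all three signals are low."""
    on_time_rate = (
        a.submissions_on_time / a.submissions_due if a.submissions_due else 1.0
    )
    return (
        a.library_logins_last_30d < 2
        and on_time_rate < 0.5
        and a.campus_checkins_last_30d < 3
    )

# A flagged student might be struggling, or just working from a café.
print(is_low_engagement(StudentActivity(0, 1, 4, 0)))  # True
```

The point isn't the specific thresholds; it's that a rule like this only ever sees behaviour, which matters later in the story.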

The idea behind this research is what has got senior leaders at universities across the country excited for the last few years. Could you use that data to spot students with poor wellbeing, and then send them a targeted message (like an email or a push notification) to nudge them towards support services?

If you can do this, it’s the magic bullet we’ve all been searching for.

Two years. Three universities. Rigorous trial design.

The nudges didn't work.

A ‘null result’, and why it matters

Here's the important bit.

A null result means “we looked really carefully, and we couldn't find evidence of a statistically significant difference with our treatment group”.

It doesn't mean the research was wasted.

It doesn't mean the researchers did something wrong. It means the thing being tested (which, in this case, was data-triggered emails and notifications to students) didn't produce the change it was designed to produce.
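If it helps to see it, here's a toy example (simulated numbers only, nothing to do with the actual trial data) of what a null result looks like when you analyse a randomised trial in Python:

```python
# Toy illustration of a null result (simulated data, not the trial's).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two randomised groups of students: 'treatment' received the
# data-triggered nudge emails, 'control' did not. We simulate a world
# where the nudges genuinely do nothing.
control = rng.normal(loc=50, scale=10, size=500)    # wellbeing scores
treatment = rng.normal(loc=50, scale=10, size=500)

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

if p_value >= 0.05:
    # Null result: no statistically significant difference detected.
    print("No evidence the nudges shifted wellbeing.")
```

In a simulation like this, the p-value will usually sit well above 0.05, and the honest conclusion is ‘no detectable effect’, not ‘we proved nothing happened’.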

In a lot of cases, when this happens in evaluation, it quietly disappears.

The study doesn't get published. The organisation moves on. Nobody talks about it. In a disappointingly massive number of cases, the results are literally ignored and people keep doing the thing that didn’t work.

TASO published it anyway.

That is genuinely rare. And genuinely brave. Commissioning a two-year, multi-site randomised controlled trial is a significant investment of time, money, and organisational credibility. Publishing the result when it comes back negative takes courage, because it opens you up to exactly the kind of lazy headline that says “they spent all that money and found nothing” or “using data doesn’t work, scrap everything”.

But that framing misunderstands what evaluation is for. And in this case, completely ignores the nuance and learning that was discussed in the room and throughout the day.

The question now is, what do you do with it?

A null result isn't nothing. It's information.

Imagine this. You're a GP, and you suspect a patient might have a particular condition. You run the test. It comes back negative. That's not a waste of the test. That negative result is telling you something important… it's not that.

Now you know where to look next.

The TASO study tells us several genuinely useful things. Here are a few things discussed throughout the day:

1) Learning analytics can't reliably identify students with poor wellbeing.

A student who stops logging into the library portal might be struggling, and the learning analytics platform is clocking their ‘low engagement’.

Or this ‘low engagement’ student has a strategy of working from their local café, borrowing books from their mates, or has downloaded everything they need. They’re buzzing on coffee and smashing it, without setting foot on campus for weeks.

The data alone can't tell you which.

That's important. Because a lot of universities have been building systems and processes on exactly that assumption.

2) The data isn't useless.

The excellent Carly Foster, one of the researchers on the project, made a point in the room that really stuck with me. Whilst the analytics platform couldn't give you a blanket way of identifying students with poor wellbeing, it did surface a smaller group (just a couple of hundred students out of tens of thousands) who had both low engagement and poor wellbeing.

That's a different, more specific picture. And that group is a prime candidate for a particular type of targeted, relational support.

Learning analytics, in other words, isn't a catch-all lens for wellbeing. But it might be one lens, used carefully, alongside others. That’s great stuff. It’s not “ignore these results” or “bin everything”; it’s a meaningful next step for what we could explore in future research.

3) The emails didn't work (on their own).

For me, one of the key debates to come out of the day was around the nudge itself. Specifically, the emails.

On the whole, emails sent automatically in response to analytics data did not produce measurable change in student engagement or uptake of support services.

Some said that means emails don’t work, and we need to try other mediums. I don’t agree.

The story is more complicated than that.

There is evidence in this research, and elsewhere, that how the email is written (the subject line, who it appears to come from, the tone and content of the message, the length) changes the extent to which students respond to it. It can engage them… but only in the short term.

We’ve seen some of this already at NTU when testing different versions of these emails. Research on this will be published soon, but in short, some versions got more opens, more clicks, more initial interaction.

The problem is, if you get a student to open an email but there isn't a strong enough follow-up (a person, a conversation, a real offer of support) that initial wave of engagement just crashes and disappears.

You've caught their attention… and then dropped it.

Which tells us something important: the email isn't the intervention. The email is, at best, a door. What matters is what's behind the door.

So what does work?

Obligatory cheesy selfie from TASO’s office. Why not.

The second report published alongside the first points pretty clearly in a direction most of us already suspected, but now have stronger evidence for.

Relationships.

Wellbeing support activities that build genuine connections with staff and peers fill a gap that data and nudges simply can't.

That's not a surprising finding. But it matters, because it pushes back against a drift in HE towards thinking that if we can just get the data systems right, the rest will follow.

It won't. At least, not on its own.

Linking interventions.

The thing for me is that these two reports are looking at different parts of the puzzle. The ‘using learning analytics for wellbeing’ bit, I suspect, didn’t work because a) engagement ≠ wellbeing, and b) a ‘communication’ is not an intervention. The ‘wellbeing interventions with small cohorts’ paper did show evidence of impact, because it focused on the intervention itself, but didn’t go much into the ‘initial communication’ part.

They’re looking at different parts of the process.

You can’t drop a large-scale nudge programme in isolation and expect long-term, sustainable results, and you can’t scale a small but intensive programme of support up to every student. But maybe there is a way to link the two together to get results.

Of course, all this takes time, resource, and more research. And it isn’t simple.

So, sorry senior leaders at universities… no magic bullet.

Why I think this deserves your attention

For me personally, it isn’t the detail in the paper that has got my attention in all this. It’s the fact that they published this in the way they did.

I work in evaluation. I know how hard it is to publish results that don't go the way you hoped.

The pressure from funders, from institutions, from the sector, is almost always in the direction of positive stories.

"Here's what worked." "Here's the impact." "Here's the change."

TASO made a different call. They ran the trial, found the null result, and published it. Importantly, at their event it was with nuance, with context, and with a genuine attempt to say: here is what we now know, and here is where to look next.

That is what rigorous, honest evaluation looks like. It doesn't always look like success. But it always moves us forward.

And that is what the sector needs more of.

The challenge is, will others follow? Let’s hope we see more of this when the Higher Education Evaluation Library (HEEL) launches later this year.

TASO’s research was published on 12 March 2026.
Read both reports on the TASO website now (links below).


Research report 1: https://taso.org.uk/libraryitem/report-improving-student-wellbeing-using-analytics/

Research report 2: https://taso.org.uk/libraryitem/report-evaluating-wellbeing-interventions-with-small-cohorts/
