7 (now 8!) secrets to good Measurement, Evaluation and Learning (MEL) 

10 years ago, I’d helped to build enough measurement and evaluation systems to recognise the common pitfalls – and how these could be flipped to drive greater learning and impact. I put my thoughts together in a “7 Secrets of Good Monitoring & Evaluation” presentation, which went a little bit viral and still gets more than a thousand views per month.

Want more than 7 secrets? Join our Complete Guide to MEL (Measurement, Evaluation and Learning) course to learn what you need to create a practical measurement framework and insightful evidence-based reports about progress and outcomes. 

A lot has changed in ten years, but reviewing my presentation, I notice how much of what I said then remains true today.

Let’s walk through the secrets that stand the test of time, with a sprinkle of new ideas.

Before we dive into how to do the thing, let’s look at what the thing actually is. What does “good” evaluation and impact measurement look like?

One of my favourite definitions describes it as: 

The systematic collection of information to improve decision-making and enhance organisational learning, with the ultimate aim of bringing about programs that better meet needs and lead to improvements in targeted social, economic and environmental conditions.

I like it because the emphasis is on learning that can lead to greater impact – it’s about more than having a good report or proving you’ve spent funding as you said you would.

In fact, that’s one of the reasons I no longer talk about Monitoring and Evaluation. Instead, we talk about Measurement, Evaluation and Learning (MEL) – a helpful reminder that our aim is to measure actual impact, both in the conditions that are holding problems in place and in changes in people’s lives (not just monitoring ‘outputs’). It’s also a reminder that capturing learnings to adapt and improve our interventions should be both a core driver and an outcome of the process.

So, now we’re clear on the what, let’s look at the how of good MEL. 

Secret number 1. Many – if not most – MEL initiatives fail 

Hands up if you’ve ever collected data that ended up collecting dust. We’ve all been there at some point.  

When I first started in the field, most MEL initiatives I saw failed for one of two reasons: they either never completed their data collection, or never actually used the data for program improvement.

While this is improving, there are still far too many organisations investing significant time and resources in data collection without investing the same effort in reviewing the data and putting its insights into action.

Organisations getting the most out of MEL are the ones that are embracing it as something that serves their learning needs first. Accountability is an added – and important – bonus. 

Secret number 2. Good MEL is deliberate

Good MEL is not something that just happens – it needs proper scoping and planning. Even today, no matter the scale of the initiative I’m working on, I still follow the same scoping and planning process.

Secret number 3. Develop a shared story for the change you want to make, early on 

Your Theory of Change is the rationale driving your choice of intervention – a shared narrative describing the journey towards the change you want to make.

Creating or revisiting your Theory of Change as part of your MEL process builds ownership and clarity for everyone contributing to the initiative, as well as providing the chance to test assumptions, spot any gaps, and make adjustments to strengthen the design of the program. A good Theory of Change provides the groundwork for developing your Key Evaluation Questions and the structure for reporting against desired outcomes.  
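To make this tangible, here’s a minimal sketch in Python (my choice of tool, with entirely hypothetical program details) of how a Theory of Change can be captured as structured data, so that every assumption is explicit and can later be linked to evaluation questions:

```python
from dataclasses import dataclass, field

@dataclass
class TheoryOfChange:
    """A simple results chain: activities -> outputs -> outcomes -> impact."""
    problem: str
    activities: list[str]    # what the program does
    outputs: list[str]       # what it directly delivers
    outcomes: list[str]      # changes in behaviour or conditions
    impact: str              # the long-term change you are aiming for
    assumptions: list[str] = field(default_factory=list)  # to test during MEL

# Entirely hypothetical example
toc = TheoryOfChange(
    problem="Smallholder farmers have low, unstable yields",
    activities=["Run farmer field schools", "Distribute drought-tolerant seed"],
    outputs=["500 farmers trained", "Seed packs delivered"],
    outcomes=["Farmers adopt improved practices", "Yields stabilise"],
    impact="Improved household food security",
    assumptions=["Trained farmers have the land and time to apply new practices"],
)

# Each assumption is a candidate for testing during the evaluation
for assumption in toc.assumptions:
    print("Assumption to test:", assumption)
```

The point isn’t the code – it’s that writing the chain down forces you to name the assumptions you’ll want to test.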

Secret number 4. Get your Key Evaluation Questions right   

Your Key Evaluation Questions (KEQs) form the bedrock of good MEL and the basis of your data collection. Get them right and you’ve set yourself up for a good return on your efforts.

They’re not the questions you’d actually ask in a survey – they’re the questions that guide your entire MEL plan in assessing whether your program or initiative was successful. They are the explicitly evaluative “so what?” questions in the example below, ones that require a value judgement and interpretation to be answered.

  • What happened? You had 500 farmers participate in an initiative.
  • So what? Was 500 farmers enough to achieve the results we expected? Was that good or bad?

And my top two tips for these questions? Don’t try to craft them until you have your audience, purpose and “evaluand” (the thing you’re evaluating) clear. And don’t have too many of them – four or five will suffice.
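To illustrate those two tips, here’s a small, hypothetical sketch of how KEQs might be recorded alongside their audience, purpose and evaluand – the questions and names are illustrative, not prescriptive:

```python
from dataclasses import dataclass

@dataclass
class KeyEvaluationQuestion:
    question: str  # the evaluative "so what?" question
    audience: str  # who needs the answer
    purpose: str   # what decision the answer will inform

EVALUAND = "Farmer field-school initiative"  # the thing being evaluated

# Illustrative only - craft your own once audience, purpose and evaluand are clear
keqs = [
    KeyEvaluationQuestion(
        question="To what extent did farmers adopt the promoted practices, "
                 "and was uptake good enough?",
        audience="Program team",
        purpose="Decide whether to adjust the training model",
    ),
    KeyEvaluationQuestion(
        question="How well did the initiative reach the farmers most in need?",
        audience="Funders and community partners",
        purpose="Assess equity of reach before scaling",
    ),
]

assert len(keqs) <= 5, "Keep it tight - four or five KEQs will suffice"
```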

Secret number 5. Choose mixed methods to help tell your program’s story   

While there’s no magic formula to tell you which tools are right for a specific evaluation, we can confidently say that a mix of stories and numbers is the secret sauce. Combining quantitative and qualitative data with richer descriptions will best help your key audiences understand and learn from your impact.

Another handy hint? Don’t choose your method(s) first. Look at what you want to achieve (your MEL scope) and let that help guide your approach in terms of data collection. 
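As a toy illustration (hypothetical data, and assuming pandas is available), mixing methods can be as simple as pairing a quantitative summary with the qualitative stories that explain it:

```python
import pandas as pd

# Quantitative: hypothetical post-training survey scores (1-5 scale)
scores = pd.DataFrame({
    "region": ["North", "North", "South", "South"],
    "confidence": [4, 5, 2, 3],
})
summary = scores.groupby("region")["confidence"].mean()

# Qualitative: interview excerpts tagged by region
stories = {
    "North": '"The field school showed me exactly when to plant."',
    "South": '"The seed arrived too late for this season."',
}

# The numbers say *what* changed; the stories help explain *why*
for region, mean_score in summary.items():
    print(f"{region}: average confidence {mean_score:.1f} - {stories[region]}")
```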

Secret number 6. Share the love with your team!   

Figuring out what impact you are making is part of everyone’s role – and everyone in your organisation or initiative needs to have ownership and understanding of it.

I’m pleased to say this is something we’re seeing more of, with more organisations investing in building a learning culture. That means getting everyone to speak the same language and share a common purpose in improving program outcomes. And that’s only ever going to lead to better results.

Secret number 7. Start with the end in mind   

I borrowed this one from the art of Japanese management: when it comes to reporting and sharing your findings, plan your MEL reports at the start of the initiative.

  • What will the reports feature?  
  • What would be most useful?   
  • Will you have different reports and/or dashboards for different audiences?  

While the above questions remain relevant, the idea of a MEL report has evolved significantly (in some quarters at least). Reports need no longer be static 50-odd page documents. They can be real-time dashboards, participatory video stories and dynamic infographics.

The key here is using the available tools to best communicate with your different audiences. They may not all need the same level of detail or method of delivery. And this needs to be considered and planned for at the start of your MEL work. 
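To give one concrete (and entirely hypothetical) example of what a lightweight live report could look like, here’s a minimal sketch using Streamlit – an assumption on my part, as any dashboard tool would do:

```python
# A minimal sketch of a live MEL dashboard (assumes Streamlit is installed).
# Save as dashboard.py and run with: streamlit run dashboard.py
import pandas as pd
import streamlit as st

st.title("Farmer Initiative - Progress Dashboard")

# Different audiences rarely need the same level of detail
audience = st.selectbox("View for", ["Funder summary", "Program team"])

st.metric("Farmers participating", 500, delta=120)  # hypothetical figures

if audience == "Program team":
    # A more granular, operational view for the delivery team
    adoption = pd.DataFrame(
        {"practice_adoption_pct": [62, 48, 71]},
        index=["North", "Central", "South"],
    )
    st.bar_chart(adoption)
```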

Secret number 8. Always centre equity in evaluation

I’ve been thinking a lot about centring equity in evaluation, especially after hearing from First Nations peoples about how they are looking to decolonise evaluation, and what we can do better.

At a practical level, this means deliberately making space to assess not only who has benefited from the program or initiative, but who has lost out. Who hasn’t had an adequate voice in the process? What are the impacts of this? We need to do more of this – and we need to do it for every evaluation. 
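At the data level, one practical starting point – a hypothetical sketch, assuming pandas and illustrative outcome data – is to routinely disaggregate results by the groups most at risk of missing out:

```python
import pandas as pd

# Hypothetical outcome data - in practice, disaggregate by the dimensions
# that matter in your context (gender, location, disability, age, ...)
outcomes = pd.DataFrame({
    "group": ["Women", "Women", "Men", "Men", "Youth", "Youth"],
    "benefited": [1, 0, 1, 1, 0, 0],  # 1 = reported benefit, 0 = did not
})

# Who benefited - and, just as importantly, who lost out?
by_group = outcomes.groupby("group")["benefited"].mean().sort_values()
print(by_group)  # the lowest rows point to whose voices to seek out first
```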

These secrets just scratch the surface of what it takes to do good MEL. If you’d like to know more, I’ve poured more of my experience into a 15-week course where we dive deep into MEL theory and principles, building your practical skills to design and deliver MEL.  

Join our Complete Guide to MEL (Measurement, Evaluation and Learning) course to learn what you need to create a practical measurement framework and insightful evidence-based reports about progress and outcomes.