The 9 Most Common Measurement, Evaluation and Learning Questions – Answered

Properly planned for and used, your impact measurement system and the thinking that comes with it can be one of your most powerful tools for driving and amplifying change.

My Career in MEL – Part 1

Thinking about a new career? You’re not alone: research shows that during COVID around a third of us seriously considered alternative employment. And one thing that’s keeping people in their current roles? Feeling engaged with their work. We recently spoke to evaluation experts Zazie Tolmer and Dr Jess Dart to get their insights on the skills they started with, the skills they’ve needed to build, and what you should be considering for a career in the changing MEL landscape.

Challenges in MEL: Valuing Women’s Voices and Experiences

I won’t lie – it’s been difficult to feel positive about the gains we’ve made towards a more equal society this International Women’s Day. There are the regressive effects of the global pandemic, which have seen women disproportionately lose paid work while their childcare and home responsibilities have sky-rocketed as schools locked down. And in Australia, we’ve also spent a dispiriting few weeks watching our political leaders systematically fail to listen to and value the experiences of women.

So I’m taking this opportunity to practise something we learned to do in 2020 – taking time out to reflect on the positives and the things we can control: how we can ensure that in our world of MEL (Measurement, Evaluation and Learning), the voices and experiences of women are being heard and acted upon.

Using Shawna Wakefield and Daniela Koerppen’s excellent 2017 paper, Applying Feminist Principles to Program Monitoring, Evaluation, Accountability and Learning, as a guide, I can proudly say that our approach is fundamentally feminist.

As a matter of course, Clear Horizon promotes genuine participation, uses participatory techniques and fosters co-ownership throughout the MEL process. We recognise that evaluation is political, and that it has the power to influence significant change. As MEL practitioners, we need to be aware of both the impact and the biases we can bring to our MEL work and the broader social impact space. According to Wakefield and Koerppen, a feminist approach involves “self-awareness and potential biases of the professionals and institutions involved; looking at the importance of trust, time and resources to develop both the [MEL] processes and the capacities required to undertake them, and last but not least, accountability and continuous learning.”

For us, this means interrogating the gendered nature of our MEL framing and the way we do our questioning, analysis and reporting, and being on the lookout to ensure that structural inequities and gender imbalances are not further entrenched by the act of evaluation. A good example of applying feminist thinking in MEL is when we are developing indicators for our measurement systems. As evaluators, we need to be aware of the choices and decisions that we make, considering how these are “inevitably shaped by the gender and power dynamics in a given context”.

For example, when reporting on educational outcomes, the default is often to look at measures such as the proportion of students who successfully completed their learning. This might paint a very different picture than if we were to compare the proportion of girls versus the proportion of boys successfully completing their learning, or to look at the types of subjects completed by gender, and whether the stereotypes of STEM subjects for boys and “softer” topics for girls are still playing out, with very real impacts on future career choices and outcomes. Measures disaggregated by gender can provide deeper insights into the differing experiences and opportunities afforded to girls and women.
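To make the point concrete, here is a minimal sketch in Python using entirely hypothetical student records (the data and field names are invented for illustration, not drawn from any real programme): the single aggregate completion rate hides a gap that only becomes visible once the same records are grouped by gender.

```python
from collections import defaultdict

# Hypothetical student records: (gender, completed_learning) - illustrative only
records = [
    ("girl", True), ("girl", False), ("girl", True), ("girl", False),
    ("boy", True), ("boy", True), ("boy", True), ("boy", False),
]

# Aggregate indicator: one overall completion rate for the whole cohort
overall = sum(completed for _, completed in records) / len(records)
print(f"Overall completion: {overall:.0%}")

# Disaggregated indicator: completion rate computed separately by gender
by_gender = defaultdict(list)
for gender, completed in records:
    by_gender[gender].append(completed)

for gender, outcomes in sorted(by_gender.items()):
    rate = sum(outcomes) / len(outcomes)
    print(f"{gender}: {rate:.0%}")  # girl: 50%, boy: 75% for this toy data
```

The aggregate figure alone would suggest a reasonably healthy completion rate, while the disaggregated view reveals that girls in this toy cohort complete at a markedly lower rate than boys – exactly the kind of insight the paragraph above argues for.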

I was also immensely pleased to see one of our flagship techniques, Most Significant Change (MSC), recognised as a methodology with the potential to challenge inequalities and influence gender relations when supported by appropriate MEL systems. Clear Horizon often draws on MSC as a qualitative evaluation tool, largely because its participatory, inclusive and empowering approach ensures the voices of the most disadvantaged are given equal weight, and because it provides space to interrogate programmatic changes realised beyond the knowledge or lived experience of those developing the metrics of impact. It also values listening and story-telling, and offers the ability to influence change through learning.

And while the above gives me hope that our practices – while not perfect – are a good model for listening to and valuing the experiences of women, we’ve still got a lot of work to do to support feminist MEL approaches in the broader social impact space. Just as well we’re up for the challenge.


Part 2 – Jazz Players of the Evaluation World: Meet our experts on Systems-Change and Place-Based Approaches.

In this article, we ask Dr Jess Dart, Anna Powell and Dr Ellise Barkley about their top tips for overcoming some of the key challenges of evaluating systems change and place-based approaches, and how to get everyone on the same songsheet.

A conversation with Dr Jess Dart, Anna Powell and Dr Ellise Barkley about the challenges and opportunities presented by systems change and place-based approaches, and why evaluators in the space are truly the jazz players of the evaluation world.

Developing a Theory of Change – is there a ‘right’ way?

Considered by many change-makers the ultimate ‘road map’ for creating social impact, a good Theory of Change is an integral part of any intervention or initiative aiming to effect change. We asked experts from different fields about the approaches they use and uncovered some significant differences, which raised the question – is there a right way to develop a theory of change?

Logan Together – “Game-Changer” for Place-Based Evaluation

Clear Horizon was commissioned by the Commonwealth and Queensland Governments to lead the co-design of the Place-Based Evaluation Framework. The framework was tested with Logan Together, a place-based initiative aiming to shift the dial on developmental outcomes for local children aged 0-8.

While evaluation practice for place-based approaches (including collective impact initiatives) has been developing in Australia over the last 5-10 years, until now there has not been a consolidated guide for practice. Evaluating place-based approaches is challenging due to the emergent and complex nature of the work, and these approaches often include embedded ‘systems change’ efforts that are difficult to measure.

Enter Clear Horizon. We worked with over 150 stakeholders to lead the co-design of the Place-based Evaluation Framework, in collaboration with delivery partners The Australian Centre for Social Innovation (TACSI), Collaboration for Impact (CFI) and the Community Services Industry Alliance (CSIA). The Place-based Evaluation Framework is available through the Department of Social Services website, and comes with a toolkit of over 80 tools useful for evaluating systems change and place-based approaches.

To test the framework, we worked in close partnership with Logan Together as proof of concept, using the Place-based Evaluation Framework to create a measurement, evaluation and learning strategy for the collective impact initiative.

While Logan Together had many enablers for success in place – an experienced backbone team, committed funders and partners, and an engaged community – there was no agreed plan for how to measure success in the short to medium term, and no framework for determining whether or not they were on track.

Recognising there is no ‘one-size-fits-all’ evaluation methodology, we developed clear and phase-appropriate evaluation systems and measures of success, incorporating learning and community voice into the evaluative process.

Our approach ensured we captured the complexity of the work, while still providing a practical roadmap to keep partners and community on track. And it’s setting the standard for place-based and collective impact evaluation across the country.

“The framework is a game changer for how government [is] approaching and evaluating place-based initiatives.

It has created some much-needed standards and guidance for community change movements looking to better understand their progress and set shared expectations with their partners and funders.”

Matthew Cox, Director, Logan Together


DFAT Evaluation of Investment Monitoring Systems

At Clear Horizon, we have been grappling with how to effectively – and efficiently – improve the monitoring, evaluation and learning of programmes. Over many years of experience, and across the range of programmes and partners we work with, one thing remains abundantly clear: the quality of monitoring is the cornerstone of effective evaluation, learning and programme effectiveness. In the international development sector, where large investments operate in extremely complex environments, monitoring is more important still.

At the end of 2017, Byron’s new year’s resolution for 2018 was to “dial M for monitoring” and to put even more emphasis on improved monitoring systems. Having conducted stocktakes of MEL systems across a range of aid portfolios, and having been involved in implementing or quality assuring over 60 Department of Foreign Affairs and Trade aid investments, we have seen really clear messages emerge about what works and what doesn’t. This culminated in presentations at the 2018 Australian Aid Conference and the 2018 Australian Evaluation Conference, where Byron and Damien presented on how to improve learning and adaptation in complex programmes by using rigorous evidence generated from monitoring and evaluation systems.

So we at Clear Horizon welcome the findings and recommendations in DFAT’s Office of Development Effectiveness Evaluation of DFAT Investment Monitoring Systems 2018. Firstly, we welcome the emphasis on improved monitoring systems for investments – this is essential to improving aid effectiveness. Secondly, we strongly agree that higher-quality MEL systems are outcome focused, have strong quality assurance of data and evidence, and have data that serves multiple purposes (i.e. accountability, improvement, knowledge generation). Thirdly, we agree that a culture of performance oversight and improvement among partners and stakeholders is essential – this needs to continue to be fostered both internally and externally.

To achieve this, as recommended, it is essential that technical advice and support is provided to programme teams, investment managers and decision makers. This need not be resource intensive, and it must be able to demonstrate its own value for money. What is extremely important in this recommendation, however, is that the advice is coherent, consistent and context specific. Too often we see a dependency on a single generalist M&E person within the programme team, required to provide the full gamut of advice – covering a range of monitoring approaches, evaluation approaches, different sectors, and sometimes even different countries. Good independent advice often requires a range of people providing input on different aspects of monitoring, evaluation and learning – one reason why, at Clear Horizon, we maintain a panel of MEL specialists, some focusing on evaluation capacity building, others on conducting independent evaluations or building MEL systems.

Standardising expectations and advice across aid portfolios about what constitutes good, fit-for-purpose monitoring, evaluation and learning is essential for all of us. We have been fortunate enough to be involved in developing different models of providing third-party embedded design, monitoring and evaluation advice. The ‘Quality and Improvement System Support’ approach provides consistent technical advice across entire aid portfolios, such as the one developed for Indonesia; ‘Monitoring and Evaluation House’ in Timor-Leste, in partnership with GHD, is based on a neutral-broker approach to improving the use of evidence in programme performance; and the ‘Monitoring and Evaluation Technical Advisory Role’ in Myanmar places a stronger emphasis on supporting programme teams through technical and management support.

This report echoes our belief that more monitoring and evaluation is not necessarily the answer; rather, collaborating to do it better and building a culture of performance is ultimately what we are striving for.