“Should evaluators avoid conflict?”  Anthea posed this incisive question while presenting her case study, only for Scott to retort that there is a “difference in managing conflict as opposed to avoiding conflict”.  And off it went, an amazing discussion by the panel and the audience members on the relationship between evaluation, conflict management – and yes, understanding failures. The panel had, by a conservative estimate, over 200 years of evaluation experience between them – and that’s just counti...
What a refreshing talk (call to arms?) from Andy, particularly at 4pm on Day 2 of such a thought-inspiring conference!  Andy has thrown down the gauntlet to all of us to make sure that we remain relevant.  And Jess Dart, who convened the keynote, also taught us how to say Anthropocene, the proposed geological epoch in which human activity has become the dominant influence on the planet and its environment. Fortunately, Andy has helped us navigate through this complex space.  He ha...
Just spent a stimulating three days in Canberra at the Collaborate for Impact conference.  It’s all about Collective Impact, and it was great to see how the model is evolving to be more structured and codified. There are now 75+ collective impact initiatives around Australia – quite a feat eh? I am most impressed by their focus on really investing in the early stages of a collaborative venture, creating the common agenda and holding environment – then stepping this out right through the itera...
Over the last year Clear Horizon has formed a close working partnership with SOLIDARITAS, a niche M&E consultancy based in Jakarta, Indonesia.  This partnership is built on a shared passion for tailoring meaningful M&E solutions for organisations across a range of social, environmental and aid effectiveness sectors. We recently partnered with SOLIDARITAS to build an organisational M&E system for the UNDP’s Pulse Lab data innovation project in Jakarta. After working together on two very well-...
This year Clear Horizon has been continuing to learn about the nexus between the growing practice of Design Thinking in social and environmental interventions and evaluative practice.  We’ve enjoyed pulling together our own ideas as well as learning from others through the AES Special Interest Group on Design and Evaluation (the SIG). So far, as part of the SIG, we’ve held two webinars, one in which we set up a framework for understanding how evaluators (including Clear Horizon) are engaging with the ...
Since joining Clear Horizon earlier this year, I’ve really enjoyed engaging with clients on different ways to elicit and present program outcomes.  An interesting and compelling way to collect program outcomes is through personal stories of change using the Most Significant Change (MSC) technique. Globally, there is increasing interest in, and application of, the MSC technique from a range of organisations and funders who recognise the value of narrative-based outcomes. MSC is a relatively ver...
Recently Clear Horizon ran a Masterclass entitled “Organisational outcomes and meaningful measures”. This Masterclass arose from our own organisational strategic planning sessions, where it became apparent that we needed to consolidate our knowledge in this space. We are increasingly being asked to provide monitoring and evaluation (M&E) support at an organisational level, rather than at the project or program level. This has presented both opportunities and challenges, as organisational M&E has di...

This slide show provides an overview of the SIPSI methodology as presented at the AES 2016 conference. It also provides a case study of how it was used in DFAT Indonesia. 

SIPSI technique.pdf

I have been involved in evaluation since the early 2000s, and in that time the discipline has always recognised the complexity of the world in which programs are designed and implemented. In many cases, though, evaluation (and here I am talking about the discipline of evaluation) has not been able to respond to the complexity of so-called “wicked problems” as well as it might. I should clarify that this was not always a shortcoming of ‘evaluation’ itself. Rather it was often the re...
Recently we have noticed a groundswell of interest in how to evaluate policy influence – a notoriously slippery thing to catch hold of. This is partly due to the multitude of forces that drive policy change, but also because change usually happens over a long period, with many different actors and interests involved. In addition, external ‘environmental’ factors such as unforeseen events often change the agenda, creating opportunities and risks for policy makers. How do we untangle this morass? This is...