There are many contexts in which evaluation approaches need to deal with complexity – in systems change, place-based approaches, social innovation and design, to name a few. On this page, you’ll find tools Clear Horizon finds useful (listed below according to their purpose). Click on each tool for a brief overview and link to a PDF resource.

  • Place-based Evaluation Framework
  • Narrative techniques for capturing emergent changes
  • Data gathering techniques
  • Tools to get to the ‘so what’
  • Tools specifically for testing developmental prototypes/early pilots
  • Tools for assessing contribution
  • Guidance on ethics, privacy and safety

Developed by Dr Rick Davies and co-authored by Dr Jess Dart of Clear Horizon, Most Significant Change (MSC) is a form of participatory monitoring and evaluation. It is a qualitative method that involves collecting stories from stakeholders and potential beneficiaries of an initiative in order to generate and analyse data about significant changes and impact through a structured participatory process.

It is participatory because many project stakeholders are involved both in deciding the sorts of change to be recorded and in analysing the data. It is a form of monitoring because it occurs throughout the program cycle and provides information to help people manage the program. It contributes to evaluation because it provides data on impact and outcomes, which are useful for assessing the performance of a program or initiative as a whole.

Click here for a summary of the MSC process and handy templates. For additional resources, follow the MSC link.

Developed by Clear Horizon’s Dr Jess Dart, SIPSI is a story-based approach that draws on work Clear Horizon has been doing in international development. It is a mash-up of MSC (Most Significant Change) and outcomes harvesting. It works well with an impact log, which is where you record potential instances to develop into SIPSI narratives.

In human-centred design for social innovation or a place-based approach (PBA), SIPSI can be used to capture and understand some of the less tangible impacts associated with an initiative’s influence on policy and systems, transformational change, and the glue that sticks systems together.

Click here for more details on conducting SIPSI, including a suggested reporting format and a ranking rubric.

Episode studies involve “tracking back” from a policy change to understand the multitude of forces, events, documents and decisions involved in producing that change (Overseas Development Institute: A guide to monitoring and evaluating policy influence, 2011). An episode study produces a narrative about what led to the policy change before assessing the relative role or contribution of an organisation or intervention in that process. The key difference between episode studies and case studies is that, in an episode study, the starting point is the policy change rather than the intervention.

Click here for further details on how to conduct an episode study and a suggested reporting format.


An impact log is a practical way to collect informal and anecdotal evidence and record examples of influence as part of ongoing monitoring. Developed by the Overseas Development Institute (ODI), it can be used for a variety of purposes: for example, as an ‘uptake log’, using an email inbox or database to collect evidence from field staff; or as a ‘media tracking log’, compiling quotes, newspaper cuttings and the like to track how specific issues are covered in the media. A key feature of these logs is that they act like search mechanisms, using the eyes and ears of a larger group of people to record signals of impact. They are easy to administer, and the data collected can then be collated and coded into themes.


A fuller description of the technique can be found in ODI: A guide to monitoring and evaluating policy influence, 2011.
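
As a purely illustrative sketch (not part of the ODI tool, which prescribes no particular format), the snippet below shows one way entries in a simple CSV impact log might be collated and tallied by theme. The file name and the date, source, entry and theme columns are hypothetical.

```python
# Illustrative only: collate a hypothetical CSV impact log and count entries per theme.
import csv
from collections import Counter

def collate_impact_log(path):
    """Read a simple CSV impact log and tally its entries by theme."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))  # expects columns such as date, source, entry, theme
    themes = Counter(row["theme"].strip().lower() for row in rows if row.get("theme"))
    return rows, themes

if __name__ == "__main__":
    entries, theme_counts = collate_impact_log("impact_log.csv")  # hypothetical file name
    print(f"{len(entries)} entries logged")
    for theme, count in theme_counts.most_common():
        print(f"  {theme}: {count}")
```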

Where accessible and appropriate, mobile devices, apps and platforms can be practical tools for ‘real-time’ monitoring, recording and sharing, and for adaptive management, as well as improving resource efficiency. They present opportunities for programs and services delivered remotely or across a large area. For example, apps or online tools can be used in place of traditional paper-based surveys, and digital visualisations or graphic displays can generate interest and potentially encourage use. A word of advice: pick a digital method that is fit for purpose and suitable for the context, as well as user friendly for both the end user and the person developing the content.


Click here for details on the advantages and challenges of using technology, its purpose in evaluation, and further resources.
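
By way of example only (no particular app or platform is assumed), the sketch below takes a CSV export of digitally collected feedback with a hypothetical ‘rating’ column and produces a simple bar chart of the responses.

```python
# Illustrative only: chart digitally collected survey ratings from a hypothetical CSV export.
import csv
from collections import Counter

import matplotlib.pyplot as plt

def plot_rating_distribution(path):
    """Count each rating in the exported responses and display a bar chart."""
    with open(path, newline="", encoding="utf-8") as f:
        ratings = [row["rating"] for row in csv.DictReader(f) if row.get("rating")]
    counts = Counter(ratings)
    labels = sorted(counts)
    plt.bar(labels, [counts[label] for label in labels])
    plt.xlabel("Rating")
    plt.ylabel("Number of responses")
    plt.title("Feedback collected via a digital survey")
    plt.show()

if __name__ == "__main__":
    plot_rating_distribution("survey_export.csv")  # hypothetical export file
```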

A semi-structured interview is an informally guided process in which only some questions are pre-determined and new questions emerge from the discussion. A loose interview guide ensures that the same topics are covered with each respondent, but the exact wording of questions is not necessarily fixed in advance. These interviews are used to understand key stakeholders’ impressions or experiences and to gauge whether a program is having an influence. They can be conducted in person or remotely by telephone or video conference.


Click here for details on materials and timeframe, tips for conducting these interviews, ethical considerations, and templates.

When running events and training sessions, it is important to get feedback from attendees to support the project’s continuous improvement and to provide project staff with data for interim and final reports. Useful insights can also be gleaned by observing participants, provided the process is clear and transparent to both the facilitator and the participants.

Click here to access templates for gathering participant feedback and recording observations.

A reflection workshop is a participatory, utilisation-focused approach to monitoring, evaluation and learning (MEL) designed by Clear Horizon to facilitate dialogue among project staff (and often donors) about context, activities, outcomes, impacts, cross-cutting issues and management responses. Building on the process of evaluation summit workshops, it engages relevant stakeholders to make sense of MEL data and reach agreement on evidence, findings and recommendations.

Click here for more details on how to conduct these workshops, practical tips, and the theoretical background.

Developmental testing, a process often used by designers, involves rapid cycles of iterative prototyping. In social innovation, this means testing ideas and prototypes to gather evidence for improvement and feed the learnings back into the design and development of a project, initiative or program. The tools listed below are some of the ways to integrate feedback loops into developmental evaluation, as well as to help evaluate the project at a later stage:

  • Program logic 
  • Negative program theory
  • Non-participant observation
  • Mini-experiments
  • EvalC3
Click here to learn more about each method.

The ‘what else’ test is a basic guide for non-evaluators to strengthen their contribution claims. It helps project staff think about what else, other than or in addition to their work, might have caused the change or result. This is important when it is not possible (or necessary) to prove that a project, initiative or program caused an outcome on its own.

Click here for background information and the steps for conducting this simple test.

This paper proposes a framework to guide place-based work on ethics, privacy and safety when engaging community members in design, research and evaluation activities, with a focus on the collection of new information. The annexes include tools, such as checklists, to help you meet ethical expectations. As a work in progress, the paper does not yet cover how to apply ethical, privacy and safety considerations to the use of existing information.

Click here for more details.