Breath In Blog 2 – Did we get to generative listening?

In parallel to our reflections on our work through the Breath In sessions, we have been working out how we can run Breath Ins in a way that is worthwhile for all involved, that respects our associated ‘responsibilities’ and manages some of the inherent conflicts present in the group.

Participating in the Breath In sessions are a CEO, three consultants, a government employee working in a central backbone and a backbone leader based out of an NGO. We come together well because we are all practitioners and all have a connection to Clear Horizon.

This is our third Breath In, and it feels as if, after a rocky start, we have come to a much better place where some of the obvious conflicts have settled down. There is a much greater level of trust and understanding of each other and our different contexts. This is allowing us to have more open discussions … maybe even generative discussions.

Through place-based work I have been introduced to different theories; one of them is Otto Scharmer's Theory U, a change management theory with a strong focus on listening as a means for transformation. He describes four levels of listening:

1. Downloading – "yeah, I know that already…" – re-confirming what I already know (I-in-ego/Politeness). Listening from the assumption that you already know what is being said, so you listen only to confirm habitual judgements.

2. Factual – picking up new information: facts, debates, speaking our mind (I-in-it/Debate). Factual listening is when you pay attention to what is different, novel or disquieting compared with what you already know.

3. Empathic – seeing something through another person's eyes: "I know exactly how you feel", forgetting my own agenda (I-in-thou/Inquiry). Empathic listening is when the listener pays attention to the feelings of the speaker. It opens the listener and allows an experience of "standing in the other's shoes" to take place. Attention shifts from the listener to the speaker, allowing for deep connection on multiple levels.

4. Generative – "I can't explain what I just experienced" (I-in-now/Flow). This deeper level of listening is difficult to express in linear language. It is a state of being in which everything slows down and inner wisdom is accessed. In group dynamics it is called synergy; in interpersonal communication it is described as oneness and flow.

I found it useful to reflect on our Breath In journey through these four levels of listening. I can't speak for everyone else, so, from my perspective, I have observed myself do the first level of listening really well! I think I had moments of factual listening (comparing the work across our experiences) and instances of empathic listening with one person at a time, but I'm not sure I was able to reach much beyond that.

I’m curious as to how everyone else felt. I’m also aware that the transformation needs to happen in each of us first before it can happen in the group. So I suppose I have some homework to do!

The above is a reflection on the deeper experience of the Breath In. At a different level, that of developing understanding and theory, I think we achieved more than we have in the past – see the previous blog!

DFAT Evaluation of Investment Monitoring Systems

At Clear Horizon, we have been grappling with how to effectively – and efficiently – improve the monitoring, evaluation and learning of programmes.  Over many years of experience, and across the range of programmes and partners we work with, one thing remains abundantly clear: the quality of the monitoring is the cornerstone of effective evaluation, learning and programme effectiveness.  In the international development sector, with its large investments operating in extremely complex environments, monitoring is even more important.

At the end of 2017, Byron's new year's resolution for 2018 was to "dial M for monitoring", and to put even more emphasis on improved monitoring systems.  Having conducted stocktakes of MEL systems across a range of aid portfolios, and having been involved in implementing or quality assuring over 60 Department of Foreign Affairs and Trade aid investments, we have seen really clear messages emerge about what works and what doesn't.  This culminated in presentations at the 2018 Australian Aid Conference and the 2018 Australian Evaluation Conference, where Byron and Damien presented on how to improve learning and adaptation in complex programmes by using rigorous evidence generated from monitoring and evaluation systems.

So we at Clear Horizon welcome the findings and recommendations in DFAT's Office of Development Effectiveness Evaluation of DFAT Investment Monitoring Systems 2018. Firstly, we welcome the emphasis on improved monitoring systems for investments – it is essential to improving aid effectiveness.  Secondly, we strongly agree that higher quality MEL systems are outcome focused, have strong quality assurance of data and evidence, and use data that serves multiple purposes (i.e. accountability, improvement, knowledge generation). Thirdly, we agree that partners and stakeholders with a culture of performance oversight and improvement are essential – this needs to continue to be fostered both internally and externally.

To achieve this, as recommended, it is essential that technical advice and support is provided to programme teams, investment managers and decision makers.  This need not be resource intensive, and must be able to demonstrate its own value for money.  What is extremely important in this recommendation, however, is that the advice is coherent, consistent and context specific.  Too often we see a dependency on a single generalist M&E person within the programme team who is expected to provide the whole gamut of advice – covering a range of monitoring approaches, evaluation approaches, different sectors, and sometimes even different countries.  Good independent advice often requires a range of people providing input on different aspects of monitoring, evaluation and learning – one reason why at Clear Horizon we have a panel of MEL specialists, with some focusing on evaluation capacity building, others on conducting independent evaluations, and others on building MEL systems.

Standardising expectations and advice across aid portfolios about what constitutes good, fit-for-purpose monitoring, evaluation and learning is essential for all of us.  We have been fortunate enough to be involved with developing different models of providing third party embedded design, monitoring and evaluation advice.  The ‘Quality and Improvement System Support’ approach provides consistent technical advice across entire aid portfolios, such as the one developed for Indonesia; ‘Monitoring and Evaluation House’ in Timor Leste, in partnership with GHD, is based on a neutral broker approach to improving the use of evidence in programme performance; and the ‘Monitoring and Evaluation Technical Advisory Role’ in Myanmar places a stronger emphasis on supporting programme teams through technical and management support.

This report echoes our belief that more monitoring and evaluation is not necessarily the answer; rather, collaborating to do it better and building a culture of performance is ultimately what we are striving for.

2019 New Year's resolutions blog

The New Year has once again reared its head, leaving the dusty resolutions of 2018 on the cupboard shelf next to the re-gifted ‘Bad Santa’ present from last December's Christmas party (unless you got homemade sweets or condiments, that is!). Whether our Clear Horizonites had relaxing tropical holidays or productive working staycations here in Melbourne, all team members are ready and eager for an exciting 2019.

Last year saw Clear Horizon's first steps (of many) into digital evaluation techniques, huge steps towards creating frameworks for evaluating place-based initiatives, and the fine-tuning of Clear Horizon's approach to evaluating co-design processes. Needless to say it was a big year! In 2019 we are looking ahead to hone our participatory skills, move further into the digital space and build on the co-design work from 2018.

2019, we’re ready for you!

Some of our staff have shared their goals for this year.

Jen Riley, Digital Transformation Lead

“Digital Transformation super highway for Evaluation”

In 2019, I am looking forward to leading Clear Horizon in digitally transforming from the inside out. I want to learn more about artificial intelligence, machine learning and blockchain, and what these new developments mean for the social sector. I am especially interested in how we harness the digital transformation super highway for evaluation and make data collection, reporting and evaluation more automated, agile and innovative to meet the demands of evaluating complex social issues. I am excited about getting the Clear Horizon Academy, an online digital learning space for co-evaluators, up and going, and seeing Track2Change, our data visualisation and reporting platform, become part of everything we do at Clear Horizon.

Kaisha Crupi, Research Analyst

“Breathing life into quantitative data”

In 2019, I would like to further develop my quantitative skills in evaluation. Just as I enjoy bringing qualitative voices to life in an evaluation, I would like to build my skills with quantitative data so that the same can be done there. It's not just about making pretty graphs and charts – it's about making meaning of the numbers and polishing them so they are robust and as effective as they can be.

Georgia Vague, Research Analyst

“Using the context that matters”

Being a new member of Clear Horizon since late 2018, my resolution for 2019 is two-fold. Firstly, I would like to strengthen my data-analysis skills, particularly how to analyse large amounts of data using the most appropriate, context-specific techniques. Secondly, I want to gain confidence in my facilitation skills, particularly in participatory workshops. This means being aware of any unconscious bias that I might hold and really placing the client and participant voice at the centre of our evaluations.

Eunice Sotelo, Research Analyst

“Capacity development for all”

If 2018 was a big year of learning and discovery, 2019 is no different. In fact, I want to extend myself further – honing skills in facilitation and stakeholder engagement – while continuing to expand my evaluation toolkit. I’m also keen to dig deeper into capacity building, internally at Clear Horizon and with our clients. I think we can do better at making our practice more inclusive and accessible, and what better way than to ‘teach’ by example.

Ellise Barkey, Senior Principal

“Applying, trialling and improving our approaches to co-design”

In 2019 I am looking forward to continuing my learning with the inspired communities and partners around Australia working to create positive change for families, children and young people. My resolution is to deepen my understanding and practice of designing relevant and flexible approaches and tools that cater for the diverse learning and evaluation needs of these fabulous collectives driving place-based approaches and systems level change. Clear Horizon’s work last year developing the Place-based Evaluation Framework for the Commonwealth and Queensland Governments made good ground towards a relevant framework, and was a fascinating exercise as it was co-designed with many stakeholders. This year, I look forward to applying, trialling and improving on these approaches with partners and clients, and embracing a learning stance through the challenges and successes.

Jess Dart, CEO

“Building co-evaluation – getting everyone involved!”

In 2019 I want to think deeply about how we strengthen practice and tools around collaborative and participatory evaluation – the time has come to re-invigorate this practice! The world of co-design has really begun to make inroads, so the time is ripe to build the practice of co-evaluation. I am going to dedicate my year to it!  I would love to see more diverse stakeholders really engaging in planning and analysis and co-designing recommendations.

Victoria Pilbeam, Consultant

“Learn about and from Indigenous evaluation approaches”

In 2019, I want to learn about and from Indigenous approaches to evaluation. Our team is increasingly getting invited to work with Traditional Owners in natural resource management spaces. We need to understand Indigenous evaluation methodologies to engage respectfully and effectively with rights holders. More broadly in the Sustainable Futures team we are always evaluating at the interface between people and environment.  Evaluation methodologies based on a holistic understanding of people and nature could play an important role in informing our practice.

Qualitative Comparative Analysis – a method for the evaluator’s tool-kit

I recently attended a five-day course on Qualitative Comparative Analysis run by the Australian Consortium for Social and Political Research at the Australian National University. Apart from wanting to be a university student again, if only for a week, I wanted to better understand QCA and its use as an evaluation method.

QCA is a case-based method that attempts to bridge qualitative and quantitative analysis through capturing the richness and complexity of individual cases, while at the same time attempting to identity cross-case patterns. QCA does this through comparing factors across a number of cases in order to identify which combination/s of factors are most important for a particular outcome.

The strength of QCA is that it enables evaluators to identify not only how factors combine to generate a particular outcome (as outcomes are rarely due to one factor), but also whether there is only one combination of factors or several different combinations that can lead to the outcome of interest, and in what contexts these combinations occur. QCA is also ideal for evaluations with medium-sized Ns (e.g. 5 to 50 cases), as in this range there are often too many cases for evaluators to identify patterns across cases without a systematic approach, but too few cases for most statistical techniques.
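
To make the cross-case comparison more concrete, here is a minimal, hypothetical sketch (in Python) of the first step of a crisp-set QCA: building a truth table that groups cases by their configuration of conditions and checks how consistently each configuration is associated with the outcome. The case names, conditions and scores below are invented for illustration only; a real QCA also involves calibration and logical minimisation, usually with dedicated software such as the R 'QCA' package or fsQCA.

```python
from collections import defaultdict

# Hypothetical crisp-set data: each case scored 1/0 on a few conditions
# (factors) and on the outcome of interest. All names are invented.
cases = {
    "Case A": {"strong_leadership": 1, "community_buyin": 1, "adequate_funding": 0, "outcome": 1},
    "Case B": {"strong_leadership": 1, "community_buyin": 1, "adequate_funding": 1, "outcome": 1},
    "Case C": {"strong_leadership": 0, "community_buyin": 1, "adequate_funding": 1, "outcome": 0},
    "Case D": {"strong_leadership": 1, "community_buyin": 0, "adequate_funding": 0, "outcome": 0},
    "Case E": {"strong_leadership": 1, "community_buyin": 1, "adequate_funding": 0, "outcome": 1},
}
conditions = ["strong_leadership", "community_buyin", "adequate_funding"]

# Build a truth table: group cases by their configuration of conditions.
truth_table = defaultdict(list)
for name, scores in cases.items():
    configuration = tuple(scores[c] for c in conditions)
    truth_table[configuration].append((name, scores["outcome"]))

# For each configuration, report which cases share it and how consistently
# it is associated with the outcome (1.00 = every case with this mix of
# factors achieved the outcome).
for configuration, members in sorted(truth_table.items()):
    consistency = sum(outcome for _, outcome in members) / len(members)
    print(dict(zip(conditions, configuration)),
          [name for name, _ in members],
          f"consistency={consistency:.2f}")
```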

I left the course with an understanding of QCA as a useful addition to our evaluation tool-kit. Apart from enabling evaluators to identify patterns across cases, it allows us to test theories of change and, in particular, whether the relationship between intermediate outcomes and end-of-program outcomes holds true, or whether there are other factors required to achieve higher-order outcomes. It can also be used to triangulate the findings of other methods, such as key success factors identified through a contribution analysis.

There are of course a number of limitations. QCA requires both expertise in applying the method and in-depth case knowledge, as well as the time needed to collect comparable data across cases and then return to the data to further define factors and outcomes as contradictions arise when trying to identify cross-case patterns.

If you want a good overview of QCA, including the key steps for undertaking a QCA, check out:

And useful references for applying QCA in evaluation include:

Area’s to Consider When Delivering Training

As part of our internal staff capacity building at Clear Horizon, we organise fortnightly learning and development sessions. Last week we discussed adult learning principles and styles, and how these guide the facilitation process of training activities and workshops that we deliver.

In the 1970’s, Malcolm Knowles coined the term “andragogy” which refers to methods and principles used in adult education. Later in 1984, he identified six adult learning principles including:

  • The need to know: Adults need to know why they need to learn something before they learn it.
  • Self-concept: Adults like self-direction. They grow to be independent learners, responsible for their own decisions.
  • Experience: Adults come to training with a great deal of ‘life’ experience which should be drawn upon and used as a learning resource.
  • Readiness to learn: Adults are more ready and willing to learn things that are relevant to them and that may help them to cope with real life situations.
  • Orientation to learning: Adults learn best when they can immediately apply what they have learnt to real life situations.
  • Motivation: Adults learn best when they are motivated to do so, with intrinsic motivators being more effective than extrinsic ones.

Additionally, adult learning styles are the other important area to consider when delivering training.  Adult learning styles refer to the learning approaches that individuals naturally prefer to maximise their personal learning experience. Peter Honey and Alan Mumford, building on the work of Kolb, identified four adult learning styles:

  • Activists are those people who learn by doing.
  • Reflectors are people who learn by observing and thinking about what happened.
  • Theorists are people who like to understand the theory behind the actions.
  • Pragmatists are people who need to be able to see how to put the learning into practice in the real world.

A question was raised about whether adult learning principles for non-Western learners may differ from the principles identified by Knowles. Please share your thoughts on the differences between Western and non-Western adult learning principles via our Twitter @ClearHorizonAU.

Resources:

Mobbs, Richard. Honey and Mumford. Retrieved from: https://www2.le.ac.uk/departments/doctoralcollege/training/eresources/teaching/theories/honey-mumford

Adult Learning Australia. Retrieved from: https://ala.asn.au/adult-learning/the-principles-of-adult-learning/

Designing Rubrics for Evaluation

Last week, I attended an Australasian Evaluation Society (AES) workshop on "Foundations of Rubric Design". It was a thought-provoking workshop. Kystin Martens, the presenter, explained and challenged our understanding of rubric design, and presented some practical tips for developing and using rubrics properly in evaluation. Here are my key takeaway points from the workshop:

Why do we use rubrics in evaluation?

A rubric is a tool, matrix or guide that outlines specific criteria and standards for judging different levels of performance. These days, more and more evaluators are using rubrics to guide their judgement of program performance. Rubrics enable evaluators to transform data from one form to another, for example from qualitative evidence into quantitative data. Rubrics also provide an opportunity to analyse and synthesise evidence into an overall evaluative judgement transparently throughout the evaluation process.

Three systematic steps to create a rubric for evaluation

There are three logical steps to develop a rubric:

  • Establish criteria: criteria are dimensions or essential elements of quality for a given type of performance. For example, criteria for a good presentation include the content and creativity of the presentation; the coherence and organisation of the materials; and speaking skills and interaction with the audience.
  • Construct standards: standards are scaled levels of performance, gradations of quality or a rating of performance, for example a scale from poor and adequate to excellent, or a scale from novice, apprentice and proficient to distinguished.
  • Build descriptors for each criterion and standard: a descriptor is a narrative or detailed description that articulates what performance at each level of a standard looks like. For example, a descriptor for poor speaking skills might note that the presenters were often inaudible and/or hesitant, relied heavily on notes and went over the required time. (A simple illustrative sketch of how these three pieces fit together follows this list.)
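
To illustrate how criteria, standards and descriptors fit together, here is a minimal, hypothetical sketch in Python that represents a presentation rubric as a nested data structure. The criteria, standards and wording are invented for illustration and are not from the workshop materials.

```python
# Hypothetical rubric: criteria -> standards -> descriptors.
presentation_rubric = {
    "speaking skills": {
        "poor": "Often inaudible or hesitant; relies heavily on notes; runs over time.",
        "adequate": "Mostly audible and well paced; occasional reliance on notes.",
        "excellent": "Clear, confident delivery that engages the audience and keeps to time.",
    },
    "coherence and organisation": {
        "poor": "Material is disorganised and hard to follow.",
        "adequate": "Material follows a logical order with some gaps.",
        "excellent": "Material is well structured with a clear narrative thread.",
    },
}

def describe(rubric, criterion, standard):
    """Return the descriptor for a given criterion and standard."""
    return rubric[criterion][standard]

# Example: what does 'poor' performance on speaking skills look like?
print(describe(presentation_rubric, "speaking skills", "poor"))
```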

Ensuring reliability of judgement and creating gold standards in evaluation

A calibration process is required so that all evaluators assess program performance consistently and in alignment with the scoring rubric. This process ensures that all evaluators produce a similar evaluation score when assessing the same program performance. It is a critical process for creating gold standards for assessment and increasing the reliability of the assessment data. For example, when we evaluate a multi-country development program and deploy more than one evaluator to assess it, we need to make sure that all evaluators agree on the rubric and understand the performance expectations expressed in it, so that they are able to interpret and apply the rubric consistently.
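
As a rough illustration of what a calibration check might involve, here is a hypothetical sketch that compares two evaluators' rubric ratings of the same programs and computes simple percent agreement. In practice, a more formal reliability statistic (such as Cohen's kappa) and a facilitated discussion of disagreements would usually follow; all names and ratings below are invented.

```python
# Hypothetical ratings by two evaluators assessing the same programs
# against a shared rubric (standards: poor / adequate / excellent).
evaluator_a = {"Program 1": "adequate", "Program 2": "excellent", "Program 3": "poor"}
evaluator_b = {"Program 1": "adequate", "Program 2": "adequate", "Program 3": "poor"}

# Simple percent agreement: the share of programs given the same rating.
matches = sum(evaluator_a[p] == evaluator_b[p] for p in evaluator_a)
agreement = matches / len(evaluator_a)
print(f"Percent agreement: {agreement:.0%}")

# Flag the programs where ratings differ, so the evaluators can discuss
# and re-calibrate their interpretation of the rubric.
disagreements = [p for p in evaluator_a if evaluator_a[p] != evaluator_b[p]]
print("Discuss and re-calibrate:", disagreements)
```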

If you have any experience using rubrics in evaluation, please share and tweet your experience and thoughts with us @ClearHorizonAU.

References

Tools for Assessment. Retrieved from: https://www.cmu.edu/teaching/assessment/examples/courselevel-bycollege/hss/tools/jeria.pdf

Rubrics: Tools for Making Learning Goals and Evaluation Criteria Explicit for Both Teachers and Learners. Retrieved from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1618692/

Rogers, Patricia. Rubrics. Retrieved from: https://www.betterevaluation.org/en/evaluation-options/rubrics

Design & Evaluation – We’re Better Together

Last month Clear Horizon and The Australian Centre for Social Innovation (TACSI) had a great time delivering our new, sold-out course on reconciling the worlds of Human Centred Design (HCD) and evaluation. Hot off the press, we're proud to introduce the Integrated Design, Evaluation and Engagement with Purpose (InDEEP) Framework (Figure 1), which underpinned the course.

Figure 1: The InDEEP Framework

The InDEEP Framework

The Integrated Design, Evaluation and Engagement with Purpose (InDEEP) Framework (Figure 1) has been developed through reflection on two years of collaboration between TACSI (the designers) and Clear Horizon (the evaluators). At a high level, the InDEEP Framework conceptualises the relationships between design and potential evaluation. Simply, the top half of the diagram sets out the design cycle in five phases (discover, define, prototyping, piloting and scaling). The bottom half of the diagram lists potential evaluative inputs that can be useful at each design phase (design research, developmental testing, pilot evaluation and broader impacts).

The journey starts by setting up the design and evaluation relationship for success, by carefully thinking through governance structures, role clarity and scope. In relation to scope, consideration should first be given to which design phase the project is currently in, or where it is expected to be when the evaluation occurs. Once the design phases of interest have been diagnosed, the different types of evaluation needed can be thought through. It's definitely a menu of types of evaluation, and if you chose them all you'd be pretty full!

If the design project is in the discover to early prototyping phases, it is likely that developmental evaluation approaches will be appropriate; in these phases evaluation should support learning and the development of the ideas. If the design project has moved from late prototyping through to the broader impacts phases, then more traditional formative and summative evaluation may be more appropriate; in these phases there is more of an accountability towards achieving outcomes and impacts.

The InDEEP Framework also acknowledges that there are some evaluative tools that are useful at all phases of the design cycle. In the diagram these are described as golden threads and include both modelling (e.g. Theory of Change) and facilitating learning (e.g. having a critical friend). If needed, process and/or capability evaluation can also be applied at any design phase.
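
For readers who like to see the relationships laid out explicitly, here is a rough, illustrative summary of the mapping described above, expressed as a small Python data structure. The phase-to-input pairings are our simplified reading of Figure 1, not an exact reproduction of the framework.

```python
# Illustrative (simplified) summary of the InDEEP mapping: each design
# phase paired with the evaluative inputs that tend to be most useful there.
indeep_mapping = {
    "discover": ["design research", "developmental evaluation"],
    "define": ["design research", "developmental evaluation"],
    "prototyping": ["developmental testing", "developmental evaluation"],
    "piloting": ["pilot evaluation", "formative/summative evaluation"],
    "scaling": ["broader impact evaluation", "summative evaluation"],
}

# 'Golden threads' that run across every phase of the design cycle.
golden_threads = ["Theory of Change modelling", "facilitated learning (e.g. a critical friend)"]

phase = "prototyping"
print(f"In the {phase} phase, consider: {', '.join(indeep_mapping[phase] + golden_threads)}")
```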

Key insights from the course

Course participants came from different sectors and levels of government. They represented a cohort of people applying HCD approaches to solve complex problems (from increasing student engagement in education through to the digital transformation of government services). At some level everyone came to the course looking for tools to help them demonstrate the outcomes and impact of their HCD work. Some key reflections from participants included:

  • Theory of Change is a golden thread
  • Evaluation can add rigour and de-risk design
  • It's important to quarantine some space for a helicopter view (developmental evaluation)
  • Be ready to capture your outcomes and impact

Theory of Change is a golden thread

Theory of Change proves to be one of our most versatile and flexible tools in design and evaluation. In the design process it is able to provide direction in the scoping phase (broader goals), absorb learnings and insights in the discover phase (intermediate outcomes), and test out theories of action during prototyping. When you're ready for a meatier evaluation, Theory of Change provides you with a solid evaluand to refine, test and/or prove in the piloting and scaling phases. At all stages it surfaces assumptions and is a useful communication device.

Evaluation can add rigour and de-risk design

HCD is a relatively new mechanism being applied to policy development and social programming. It can take time for HCD processes to move through the design cycle to scaling, and it is also assumed that some interventions may fail; this has the potential to make some funders nervous. Participants confirmed that developmental approaches which document key learnings and pivot points in design can help to communicate to funders what has been done. More judgemental process and capability building evaluation can also help demonstrate to funders that innovation is on track.

It's important to quarantine some space for a helicopter view (developmental evaluation)

The discover, define and early prototyping phases are the realm of designers, who primarily need space to be creative and ideate (be in the washing machine). In these early phases developmental evaluation can enable pause points for the design team to zoom out, take stock of assumptions, and make useful adjustments to the design (a helicopter view of the washing machine). Although participants found the distinction between design and developmental evaluation useful, they took away the challenge that design teams do not often distinguish the roles. One solution to this challenge was to rotate the developmental evaluation role within design teams or, resources permitting, bring an external developmental evaluator onto the design team.

Be ready to capture outcomes and impact

As designs move into late prototyping, piloting and scaling, teams come under increased pressure to document outcomes and impact. In many instances this is in some part to show funders what has been achieved through the innovation process. The key message for participants was that it is important to plan for capturing outcomes early on. One way to be ready is to have a Theory of Change. If you are expecting to have a population-level impact it may also be important to set up a baseline early on. Equally, if you are chasing more intangible outcomes like policy change, then you should think through techniques like Outcomes Harvesting and SIPSI so that you are ready to systematically make a case for causation.

We would love your feedback on the InDEEP Framework – to join the conversation, tweet us at @ClearHorizonAU. If you are interested in learning more, we are running our InDEEP training again early next year; see our public training calendar for more details.

Tom Hannon and Jess Dart

Powerful insights and stories of MSC taking place across Africa!

Since joining Clear Horizon earlier this year, I've really enjoyed engaging with clients on different ways to elicit and present program outcomes.  An interesting and compelling way to collect program outcomes is through personal stories of change using the Most Significant Change (MSC) technique.

Globally, there is increasing interest and application of the MSC technique from a range of organisations and funders, who recognise the value of narrative-based outcomes. MSC is a relatively versatile technique that can be used in many different contexts and sectors including international development, health, education and agriculture.

In May this year, I was privileged to facilitate three MSC workshops in Ghana, Zambia and Kenya with 60 alumni from the Australia Awards Africa Program, which provides post-graduate scholarships to Australian universities for up to two years. During the workshops, we reflected on and analysed the stories of significant change that had come about for the alumni since completing their studies and returning to Africa. The technique was chosen as it is a participatory form of monitoring and evaluation and does not start with any pre-defined indicators of success and so allows for unexpected (and even unintended) outcomes to be expressed.

The key elements of MSC that were used during these workshops included the collection, review and selection of alumni stories. In each location, four stories from 20 participants were selected as those describing the most significant changes. Together we analysed the themes from the 12 selected stories and found that increased confidence, critical thinking skills and new opportunities were common themes. The ability to have impact or influence within and beyond the alumni's immediate workplace – in their communities, countries and even globally – was also a strong theme and aligns well with the Australia Awards program aims. Finally, documenting why a story was chosen over others (i.e. why it was considered to be the most significant) is a key component of the technique, as it elicits the underlying values represented in the stories of change.

Written by Marty Pritchard

What is Collaborative Outcomes Reporting?

Collaborative Outcomes Reporting (COR) is a participatory approach to impact evaluation. It centres on a performance story that presents evidence of how a program has contributed to outcomes and impacts. This performance story is then reviewed by both technical experts and program stakeholders, which may include community members.

Developed by Jess Dart of Clear Horizon, COR combines contribution analysis and Multiple Lines and Levels of Evidence (MLLE), mapping existing and additional data against the program logic to produce a performance story.  Performance story reports are essentially short reports about how a program contributed to outcomes. Although they may vary in content and format, most are short, describe the program context and aims, relate to a plausible results chain, and are backed by empirical evidence. The aim is to tell the ‘story’ of a program's performance using multiple lines of evidence.

COR adds processes of review, such as an expert panel or a summit process where stakeholders in the intervention, for example, community members, check for the credibility of the evidence about what impacts have occurred and the extent to which these can be credibly attributed to the intervention. It is these components of expert panel review (outcomes panel) and a collaborative approach to developing outcomes (through summit workshops) that differentiate COR from other approaches to outcome and impact evaluation.

Find out more about COR in a short paper by Dr Jess Dart here.

Can MSC play a role in program design?

In my most recent blog I explored a light bulb moment around Developmental Evaluation. Since then I have been musing about the role of the Most Significant Change technique (MSC) in design and Developmental Evaluation. Although MSC certainly wouldn't be the only tool you would use, I think it could be an exciting part of a Developmental Evaluator's tool kit.

Developmental Evaluation is appropriate to use in innovative, complex and adaptive environments; it allows evaluation to occur even when the end point (or the path to get there) isn't known. Developmental Evaluation sees the evaluator working collaboratively with the social entrepreneur (or other program designers) in the design phase of a new program. It is a way to get rapid feedback on the design and approach, and on how the program can be improved.

MSC can be an insightful tool for capturing emergent or unknown outcomes and helps us to make sense of impact and causality. A strength of MSC is that it enables the perspective of the user to be known and understood. Three ways we think MSC could help provide user feedback as part of a developmental evaluation are:

  • Using MSC for historical analysis
  • Using MSC to test social innovations
  • Using MSC to envisage alternative solutions

MSC for historical analysis

Using MSC in planning helps examine the historical context and how program participants have experienced and valued past interventions. In the past we have used MSC to inform the development of a community action strategy. The MSC question was broad, for example: "from your point of view, what is the most significant change that has resulted from any intervention in this community?" After trained volunteers collected the MSC stories, the stories were selected in large group settings. The outputs from this process were used to inform the situation analysis, in a similar way to the technique Appreciative Inquiry. One difference between this use of MSC in planning and Appreciative Inquiry is that MSC does not necessarily seek only positive stories; it can also reveal the most significant negative changes. This information can be an insightful input into the design of a new program.

MSC for testing social innovations

When piloting a new social innovation, MSC can be used to help users articulate the impact of the pilot on their lives and their communities. In this way it can be used to rapidly test possible innovations that are being piloted, particularly their immediate outcomes. The key here is allowing the users to interpret the benefits and negative impacts of the innovation in their own words.

MSC for envisaging solutions

MSC has also been used in a future-orientated manner to help develop future visions and goals. In this context, instead of collecting stories about the past, participants are invited to write a story about their ‘desired future’. The process follows six steps:

  • Set a future point in time – for example 5 years.
  • In a group setting, brainstorm a range of possible future scenarios that might arise from this program if it is successful (or unsuccessful).
  • Individually or in sub-groups, choose one scenario from this list that represents what you would most like to see happen.
  • Flesh this scenario out into a story – with a beginning, middle and end – as if it had already happened. For example, describe the changes that happen in a participant’s life, and what difference it made to them.
  • End with why you chose that particular scenario to write a story about.
  • You can then share stories and select the most significant one, but in so doing also develop a set of future outcomes you wish to see, and a set of values.

This technique is akin to scenario planning – or visioning. This is an accessible way to develop a future vision that is grounded in how people may see a new program.

MSC is a versatile M&E tool which, as well as being used to uncover the impacts of current or completed programs, can also play a role in the design phase – either by drawing out historical lessons, providing feedback on pilot interventions or envisaging the desired future.

Clear Horizon is running public training on MSC in Melbourne next week (May 26th & 27th) and again in Perth on August 25th. For further information and to register head to the Clear Horizon Website.

Are you excited about using MSC as a tool to inform program design? To join the conversation please tweet your thoughts and tag us (@ClearHorizonAU).