Logan Together Progress Report released

Last week the Queensland Government released the Progress Report and ‘Statement of Achievement’ that Clear Horizon produced for Logan Together. Logan Together is one of Australia’s largest and most well-known collective impact initiatives, involving over 100 partners working together to improve the wellbeing of children (0–8 years) in Logan, Queensland.

The Progress Report provides a comprehensive assessment of Logan Together’s progress since inception, identifies recommendations for areas to strengthen, and celebrates the stories of success so far. For more about the background and commissioning of the Report click here.

What did the results show?

The findings showed that the Logan Together collective is making sound and positive progress towards the longer-term goals of its ‘Roadmap’ via a collective impact approach. The collective had clearly contributed to community-level and systemic changes, and the backbone team had played a catalysing and enabling role. Importantly, there was evidence of small-scale impact for families and children and early instances of change.

Outcomes for families and children include:

  • improved engagement of certain at-risk cohorts, such as women not accessing maternity services or families with young children experiencing tenancy difficulties and instability;
  • improved parental awareness of childhood development needs and milestones in targeted communities;
  • early instances of improvement in kindy enrolment for small cohorts; and
  • changes resulting from increased reach of services.

Systems-level changes include increased cross-sector collaboration and breaking down of silos, integrated approaches to strategic delivery, innovation of new services and models, changes in practice, shifts in mindset and attitudes, and early changes in resource flows. Logan Together also contributed to outcomes ‘beyond place’ through its advocacy and policy reform efforts.

In some cases, changes have been achieved that would not otherwise have happened and in other examples Logan Together has advanced the progress of outcomes being achieved. Logan Together has also made good progress in developing frameworks for shared measurement and learning, and is starting to generate a body of evidence around their collective work. 

The progress study is one example of the measurement, evaluation and learning work that Clear Horizon is doing with backbone organisations and a diverse range of collectives.

It was linked to the work Clear Horizon led in developing the Place-based Evaluation Framework, a national framework for evaluating place-based approaches (pending release). Like the progress study, the framework was commissioned by the Commonwealth and Queensland Governments, in partnership with Logan Together.

If you want to read more about the methods we used, including outcomes harvesting, see our case study.

Well done Logan Together, and thank you for the opportunity to work with you and your partners on your monitoring, evaluation and learning journey.

What’s missing in the facilities debate

Both facilities themselves and debates on their effectiveness have proliferated in recent years. Working papers, blog posts, conference presentations (see panel 4e of the 2019 AAC), DFAT internal reviews, and even Senate Estimates hearings have unearthed strong views on both sides of the ledger. However, supporting evidence has at times been scarce.

Flexibility – a two-edged sword?

One root cause of this discord might be a lack of clarity about what most facilities are really trying to achieve – and whether they are indeed achieving it. This issue arises because the desired outcomes of facilities are typically defined in broad terms. A search of DFAT’s website unearths expected outcomes like “high quality infrastructure delivery, management and maintenance” or – take a breath – “to develop or strengthen HRD, HRM, planning, management, administration competencies and organisational capacities of targeted individuals, organisations and groups of organisations and support systems for service delivery.” While these objectives help to clarify the thematic boundaries of the facility (albeit fuzzily), they do not define a change the facility hopes to influence by its last day. This can leave those responsible for facility oversight grasping at straws when it comes to judging, from one year to the next, whether progress is on track.

Clearly, broad objectives risk diffuse results that ultimately don’t add up to much. With ‘traditional’ programs, the solution might be to sharpen these objectives. But with facilities – as highlighted by recent contributions to the Devpolicy Blog – this breadth is desired, because it provides flexibility to respond to opportunities that emerge during implementation. The decision to adopt a facility mechanism therefore reflects a position that keeping options open will deliver greater dividends than targeting a specific endpoint.

Monitor the dividends of flexibility

A way forward might be to better define these expected dividends of flexibility in a facility’s outcome statements, and then monitor them during implementation. This would require hard thinking about what success would look like – for each facility in its own context – in relation to underlying ambitions like administrative efficiency, learning, relationships, responsiveness to counterparts, or multi-sectoral coherence. If done well, this may provide those in charge with something firmer to grasp as they go about judging and debating the year-on-year adequacy of a facility’s progress, and will assist them to manage accordingly.

This is easier said than done of course, but one tool in the program evaluation toolkit that might help is called a rubric. This is essentially a qualitative scale that includes:

  • Criteria: the aspects of quality or performance that are of interest, e.g. timeliness.
  • Standards: the levels of performance or quality for each criterion, e.g. poor/adequate/good.
  • Descriptors: descriptions or examples of what each standard looks like for each criterion in the rubric.
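For readers who think in code, the three components of a rubric can be sketched as a simple data structure. This is a hypothetical Python illustration only – the 'timeliness' criterion and its descriptors are invented for the example, not drawn from any DFAT rubric:

```python
# A rubric maps each criterion to its standards, and each standard to a
# descriptor of what that level of performance looks like in practice.
# (Hypothetical content: the 'timeliness' descriptors are invented.)
rubric = {
    "timeliness": {
        "good": "Almost all agreed milestones met on or ahead of schedule.",
        "adequate": "Most milestones met, with minor slippage explained.",
        "poor": "Frequent unexplained delays against agreed milestones.",
    },
}

# Look up what 'adequate' performance on timeliness looks like.
print(rubric["timeliness"]["adequate"])
```

The point of the structure is simply that every criterion carries the same ordered set of standards, each anchored by a concrete descriptor.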

In program evaluation, rubrics have proved helpful for clarifying intent and assessing progress for complex or multi-dimensional aspects of performance. They provide a structure within which an investment’s strategic intent can be better defined, and the adequacy of its progress more credibly and transparently judged.

What would this look like in practice?

As an example, let’s take a facility that funds Australian government agencies to provide technical assistance to their counterpart agencies in other countries. A common underlying intent of these facilities is to strengthen partnerships between Australian and counterpart governments. Here, a rubric would help to explain this intent by defining partnership criteria and standards. Good practice would involve developing this rubric based both on existing frameworks and the perspectives of local stakeholders. An excerpt of what this might look like is provided in Table 1 below.

Table 1: Excerpt of what a rubric might look like for a government-to-government partnerships facility

Strong
  • Clarity of partnership’s purpose: Almost all partnership personnel have a solid grasp of both the long-term objectives of the partnership and agreed immediate priorities for joint action.
  • Sustainability of incentives for collaboration: Most partnership personnel can cite significant personal benefits (intrinsic or extrinsic) of collaboration – including in areas where collaboration is not funded by the facility.

Moderate
  • Clarity of partnership’s purpose: Most partnership personnel are clear about either the long-term objectives of the partnership or the immediate priorities for joint action. Few personnel have a solid grasp of both.
  • Sustainability of incentives for collaboration: Most partnership personnel can cite significant personal benefits of collaboration – but only in areas where collaboration is funded by the facility.

Emerging
  • Clarity of partnership’s purpose: Most partnership personnel are unclear about both the long-term objectives of the partnership and the immediate priorities for joint action.
  • Sustainability of incentives for collaboration: Most partnership personnel cannot cite significant personal benefits of collaboration.

Once the rubric is settled, the same stakeholders would use the rubric to define the facility’s specific desired endpoints (for example, a Year 2 priority might be to achieve strong clarity of purpose, whereas strong sustained incentives for performance might not be expected until Year 4 or beyond). The rubric content would then guide multiple data collection methods as part of the facility’s M&E system (e.g. surveys and interviews of partnership personnel, and associated document review). Periodic reflection and judgments about standards of performance would be informed by this data, preferably validated by well-informed ‘critical friends’. Refinements to the rubric would be made based on new insights or agreements, and the cycle would continue.
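The idea of comparing agreed judgments against year-by-year desired endpoints can also be sketched programmatically. The sketch below is hypothetical – the criteria names, target years and judgments simply mirror the Year 2 / Year 4 example above:

```python
# Ordered scale of standards, weakest to strongest.
SCALE = ["emerging", "moderate", "strong"]

# Hypothetical desired endpoints: the standard expected for each
# criterion by a given year (mirroring the Year 2 / Year 4 example).
targets = {
    2: {"clarity of purpose": "strong", "sustainable incentives": "moderate"},
    4: {"clarity of purpose": "strong", "sustainable incentives": "strong"},
}

# Judgments agreed at the Year 2 reflection point, informed by M&E data.
judgments_year2 = {"clarity of purpose": "strong", "sustainable incentives": "emerging"}

def below_target(targets_for_year, judgments):
    """Return the criteria judged to fall short of the desired standard."""
    return [criterion for criterion, desired in targets_for_year.items()
            if SCALE.index(judgments[criterion]) < SCALE.index(desired)]

print(below_target(targets[2], judgments_year2))  # → ['sustainable incentives']
```

In practice, of course, the standards would be contested and deliberated rather than mechanically compared – but the structure makes the expectation explicit enough to argue about.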

In reality, of course, the process would be messier than this, but you get the picture.

How is this different to current practice?

For those wondering how this is different to current facility M&E practice, Table 2 gives an overview. Mostly, rubric-based approaches would enhance, rather than replace, what is already happening.

Table 2: How might a rubric-based approach enhance existing facility M&E practice?

Objective setting
  • Existing practice: Facility development outcomes are described in broad terms, to capture the thematic boundaries of the facility. Specific desired endpoints are unclear.
  • Proposed enhancement: Facility outcomes also describe expected dividends of flexibility, e.g. responsiveness, partnership. Rubrics help to define what standard of performance is expected, by when, for each of these dividends.

Focus of M&E data
  • Existing practice: M&E data focuses on development results, e.g. what did we achieve within our thematic boundaries?
  • Proposed enhancement: M&E data also focuses on expected facility dividends, e.g. are partnerships deepening?

Judging overall progress
  • Existing practice: No desired endpoints – for the facility as a whole – to compare actual results against.
  • Proposed enhancement: Rubric enables transparent judgment of whether – for the facility as a whole – actual flexibility dividends met expectations (for each agreed criterion, to desired standards).

Not a silver bullet – but worth a try?

To ward off allegations of rubric evangelism, it is important to note that rubrics could probably do more harm than good if they are not used well. Pitfalls to look out for include:

  • Bias: it is important that facility managers and funders are involved in making and owning judgments about facility performance, but this presents obvious threats to impartiality – reinforcing the role of external ‘critical friends’.
  • Over-simplification: a good rubric will be ruthlessly simple but not simplistic. Sound facilitation and guidance from an M&E specialist will help. DFAT centrally might also consider development of research-informed generic rubrics for typical flexibility dividends like partnership, which can then be tailored to each facility’s context.
  • Baseless judgments: by their nature, rubrics deal with multi-dimensional constructs. Thus, gathering enough data to ensure well-informed judgments is a challenge. Keeping the rubric as focused as possible will help, as will getting the right people in the room during deliberation, to draw on their tacit knowledge if needed (noting added risks of bias!).
  • Getting lost in the weeds: this can occur if the rubric has too many criteria, or if participants are not facilitated to focus on what’s most important – and minimise trivial debates.

If these pitfalls are minimised, the promise of rubrics lies in their potential to enable more:

  • Time and space for strategic dialogue amongst those who manage, oversee and fund facilities.
  • Consistent strategic direction, including in the event of staff turnover.
  • Transparent judgments and reporting about the adequacy of facility performance.

Rubrics are only ever going to be one piece of the complicated facility M&E puzzle. But used well, they might just contribute to improved facility performance and – who knows – may produce surprising evidence to inform broader debates on facility effectiveness, debates which show no sign of abating any time soon.

The post What’s missing in the facilities debate appeared first on Devpolicy Blog from the Development Policy Centre.

For details on the author, please click on the blog title immediately above, which will redirect you to the Devpolicy Blog.

Other recent articles on aid and development from devpolicy.org
The fate of leadership which aspires to leave no-one behind
Launch of the Development Studies Association of Australia
Managing the transition from aid: lessons for donors and recipients
Market systems and social protection approaches to sustained exits from poverty: can we combine the best of both?
Aid and the Pacific in the Coalition’s third term

Series on Indigenous Evaluation – Connection and Community

This blog forms Part 2 of a four-part series outlining the learnings and reflections of my two colleagues and me, who were fortunate enough to attend the 2019 Indigenous Peoples Evaluation conference in Rotorua, New Zealand.

The conference was an inspiring and transformative experience for me, as a non-Indigenous person and evaluator. Having worked as a youth worker prior to entering evaluation, I particularly resonated with the keynote speech given by Marcus Akuhata-Brown, who, among many things, works with at-risk youth and young offenders. Marcus explored the topic of connection and its role in supporting young people and others when crisis occurs. The following is my short reflection, as a non-Indigenous person, on Marcus’s talk and its implications for evaluation and our society.

Stepping into the forest: Connection in community

Marcus used the analogy of a forest to explore the importance of connection in our society. He described the sensation of standing on the edge of a naturally grown forest and stepping into it: feeling the sudden humidity and experiencing smells and sounds that you could not sense from the outside. This sensation can only occur as a result of the many interdependent and connected species surviving together and supporting each other as a whole. Marcus contrasted this with stepping into a manufactured pine forest, with only a single species surviving neatly and independently. In the second instance you can smell a strong scent of pine but will not feel the same level of sensation as in the first. Although the first forest is very complex and harder to understand when you look at the whole, you can appreciate a self-sustaining ecosystem, which will survive for an incredibly long time with no human intervention.

This analogy is very powerful when reflecting on our own societies and the tendency, particularly in Western-dominated culture, to pursue objectivity, independence and scientific rationalism. Our desire to simplify, neaten things up and search for the absolute and independent truth results in a narrow understanding of communities and a separation of people from one another. Humans are complicated and communities are complex, but what results is not something to shy away from; instead, it is something to embrace and work with to better our societies as a whole.

Connection as a support system

Marcus also reflected on his work with young people who have fallen into crisis. In situations where the young person is well connected to their community or ancestors, they have a support system and somewhere to go to heal and get back on their feet. I know from my own personal experience of working with young people experiencing homelessness that a young person who does not have this level of connection can easily fall through the gaps in society, become isolated and lack an attachment to life. The vast majority of the young people I worked with at the homelessness crisis accommodation were there due to a lack of family and community support. When things went wrong for them, they had to cope alone and were not ready.

Marcus therefore urged the audience and our society to “separate yourself from what separates you from others”, and to let go of the things that don’t allow us to connect to place, such as phones, the internet and televisions.

Complexity & Evaluation Conference April 2019

I had the pleasure of being a co-convenor of this year’s evaluation and complexity conference, along with Mark Cabaj and Kate McKegg, hosted by Collaborate for Impact. The theme this year was “finding our way together”. We were particularly interested in participatory methods and Indigenous evaluation. The conference had two provocateurs – Skye (who, we are delighted, has joined the Clear Horizon team) and Liz – who provided questions and reflections at the end of each session, from an Indigenous evaluation perspective and an adaptive leadership perspective respectively. There are lots of awesome resources here.

Zazie Tolmer from Clear Horizon, Mark Cabaj and Kate McKegg kicked off the conference with a plenary on what systems change is all about. They started with the cosmos and worked backwards. It was a rapid start into the subject matter, and it felt like we picked up where we left off at last year’s conference. Next up was a presentation from the Kimberley: Des and Christy introduced us to how a collective of Indigenous leaders – called Empowered Communities – is approaching the work of systems change. They had some great resources to share.

I presented alongside Kerry Ferrance on “co-evaluation”, sharing some of our latest thinking around a new take on participatory evaluation for systems change initiatives. We showcased the co-evaluation we conducted with GROW – a systems change initiative in Geelong that focuses on tackling disadvantage by mobilising local business to employ local workers. I was a bit nervous putting the idea and term co-evaluation out there for the first time, as I am shakily writing a book on this topic (I have some doubts, as all shaky writers might understand). There was some really useful feedback, particularly that summative co-evaluation could offer an important contribution – especially when a systems change initiative has been mobilised from the community up. Imposing an external evaluation on this sort of initiative can be particularly inappropriate, and here summative co-evaluation might serve as a great alternative.

Skye also worked with Nan Wehipeihana to produce a booklet on Indigenous evaluation.

Key takeaway messages and insights from the Clear Horizon team are:

  • There is a growing interest and body of knowledge on evaluating systems change that weaves together working with power, participation and complexity.
  • The role we hold as evaluators needs further exploration and defining – our roles often expand out to change makers, sense makers and complex space holders.
  • As evaluators we want to disrupt systems and shift power too!
  • Participatory evaluation isn’t necessarily a decolonising approach. Indigenous people have their own legitimate forms of evaluation that shouldn’t be discounted and are a valuable addition to the toolkit.
  • Systems thinking, à la Meadows, reminds us of the many elements that make up systems. This conference brought to light the many system ‘stocks’ that are traditionally ignored – in particular, Indigenous knowledge and ways of knowing, seeing and doing. Instead of being ‘capacity builders’ we need to become ‘capacity revealers’, to supercharge the change effort.
  • We often feel safe within structures and guiding processes, but perhaps more work needs to be done to safeguard ethical evaluative practice.
  • Des and Christy reminded us of the importance of setting up early agreed governance arrangements and processes when working in systems change efforts.

Skye will be sharing more of her reflections on the conference and the work she did with Nan in a coming blog. Watch this space for more on this topic.

Monitoring Evaluation & Learning (MEL) reflection

Our Monitoring Evaluation and Learning (MEL) course ran at the end of March 2019. This is our most comprehensive course, running over five days.

We had an enthusiastic group of participants from a variety of organisations including some of our newer staff.

Carina Calzoni was the lead trainer for the course, with some of our staff coming in to offer additional insights in their areas of expertise.

Carina reflects on the March MEL program:

“As a presenter, the main highlight was discussions and interaction with the course participants. They were all fully engaged throughout the course and were keen to learn. We had many very interesting and in-depth discussions about how and where to apply MEL in different settings and organisational contexts. It was also great having several Clear Horizon staff (Kaisha, Samiha, Caitlin and Ed) presenting different parts of the course. This really helped to maintain the momentum over the five days.”

If you want to gain extensive training in monitoring and evaluation, our next MEL course runs in Melbourne from the 21st to the 25th of October.

GovComms podcast – Social problems & digital solutions

Jen Riley, our Digital Transformation lead, recently spoke with David Pembroke on a GovComms podcast. The topic of discussion was ‘Social problems & digital solutions’.

Areas discussed in this episode:

  • The importance of simplicity in communication
  • Preparing an evaluation checklist
  • The qualities of a good evaluator
  • The shifting focus from output to outcomes
  • The impact of technology on change measurement
  • Making data work for us (and not vice versa)
  • Developing a toolkit for social change
  • Avoiding data overwhelm
  • Resources for those who want to learn more

Click here to listen to Jen’s podcast Social problems & digital solutions.

Series on Indigenous Evaluation

As part of Clear Horizon’s commitment to supporting Indigenous self-determination, three consultants travelled to Rotorua, New Zealand, to participate in the first Indigenous Peoples’ Conference on Evaluation. To say that we were privileged, humbled, moved and challenged would be an understatement.

We would like to acknowledge with sincere gratitude the hospitality, generosity, wisdom and insight extended to us by the conference organisers from Mā Te Rae, our hosts from the Ohomairangi Marae, the speakers, panelists and presenters as well as the broader community of Indigenous evaluators from around the world with whom we shared the space.  

The three days traversed high-level ontological reflections on traditional knowledge from diverse worldviews and value systems, down to community-defined indicators for wellbeing and co-authored stories of change. Indigenous evaluators, social change advocates and Māori elders provided insights and raised important questions, prompting both personal and professional reflection and holding significant implications for the field of evaluation and its role in social change. For example: challenging the dominance of Western paradigms and the structures perpetuating the exclusion of Indigenous voices; decolonising access to knowledge and ensuring data sovereignty; acknowledging the inter-generational experience of trauma for Indigenous peoples; upholding self-determination for communities; and the critical centrality of people and place, relationships and connection, in supporting wellbeing and creating intergenerational change.

This blog marks the beginning of a series, delving into what we took away from the conference:  

  • Part 2: Connection and community
  • Part 3: We are a tree without roots
  • Part 4: Self-determination as the defining principle

Ultimately, altruistic intentions are insufficient. In the words of activist Lilla Watson:

“If you have come here to help me, you are wasting your time. But if you have come because your liberation is bound up with mine, then let us work together.”

With this intention in mind, we move forward with humility, with curiosity, prepared to listen more and prepared to expose ourselves to situations in which we feel uncomfortable, but that allow us to expand our understanding of the communities, partners and clients we work with, in supporting and striving for meaningful social change. 

Spotlight on the big M

A reflection on the recent Office of Development Effectiveness (ODE) ‘Evaluation of DFAT Investment Level Monitoring Systems’ (Dec 2018)

Disclaimer: the views expressed below are solely those of Damien Sweeney and do not represent Clear Horizon’s.

The ODE recently released a report on DFAT Investment Level Monitoring Systems, with the purpose of improving how Australian Aid investments are monitored. The report focused on the design and use of monitoring systems by DFAT investment managers and managing contractors.

My first observation was the title of the report, specifically the term ‘monitoring systems’. Why? Because so often Monitoring is joined to Evaluation (M&E), which in my experience can (and often does) lead to confusion between what is monitoring and what is evaluation – sometimes with the focus shifting to evaluation, at the expense of monitoring. This confusion between the M and the E is most often seen in field/implementation staff, who are often responsible for the actual data collection on a day-to-day basis.

I’ve been reflecting on this issue a fair bit over the past decade, having provided M&E backstopping to programs facing a distinct lack of monitoring and adaptive management, as well as from developing monitoring, evaluation and learning (MEL) frameworks and plans (the jargon and acronyms in this field!).

Differentiating between little ‘e’, and big ‘E’

Monitoring is commonly defined as the systematic collection of data to inform progress, whereas evaluation is a more periodic ‘evaluative’ judgement, making use of monitoring, and other information.

However, as the evaluation points out, good monitoring is critical for continual improvement, by managing contractors (and other implementers) and DFAT investment managers. Continual improvement through monitoring requires an evaluative aspect too, as managing contractors (field/implementation teams, M&E advisors, leadership) and DFAT investment managers reflect on progress, and make decisions to keep going, or adjust course. I refer to this regular reflection process as little ‘e’, as differentiated from more episodic assessment of progress against key evaluation questions, or independent evaluations, which is the big ‘E’ (in M&E).

Keeping monitoring systems simple

Einstein is credited with the quote “Everything should be made as simple as possible, but not simpler”. This should be a principle of all monitoring systems, as it will promote ownership across all responsible parties, from the M&E advisors who develop systems to those who will collect data and use it for continual improvement.

I have often seen cases where field/implementation teams don’t understand, and therefore don’t feel ownership of, complex M&E systems. A literature review supporting the report (Attachment A) notes that better-practice monitoring systems are kept as simple as possible, to avoid the lack of implementation that generally accompanies complex monitoring systems (too many indicators, too much information, and resultant paralysis).

The need for a performance (and learning) culture

Interestingly, but not surprisingly, a survey of managing contractors noted that ‘good news’ often took precedence. This goes back to the importance of a performance culture across DFAT and managing contractors (and subcontractors) that embraces the opportunity to learn and improve (safe-fail vs fail-safe). There needs to be more incentive for managing contractors and investment managers to reflect, learn and adapt, and not just focus on the positives.

The importance of fostering a strong performance (and learning) culture is expressed in the recommendations. Learning should not come only from periodic evaluations, but be a regular and continuous process, with the regularity of reflection driven by the operational context (more complex contexts requiring more regular reflection on what monitoring information is indicating). I know of investments where implementation staff informally meet on a weekly or fortnightly basis to track progress and make decisions on how to improve delivery.

Building capacity

The literature review notes the importance of staff capacity for effective monitoring. I like to use the term capability (knowledge and skills) along with capacity (time and resources), as both are required, yet are distinct from each other. The literature review focused on the importance of managing contractors recruiting and holding on to staff who could design and manage monitoring systems. However, my experience indicates that it is not the M&E advisors who are the constraint on, or enabler of, good monitoring systems, but the ownership of the system by those who implement the programs. Therein, for me, lies a key to good monitoring systems: getting field/implementation staff on board in the design and review of monitoring systems, so that they understand what is to be collected and why, including how it helps their work by improving performance.

What we’re doing at Clear Horizon to focus on monitoring emergent outcomes and facilitate adaptive management

Clear Horizon has been developing fit-for-purpose plans and tools for our partners and clients, linking theory and practice and continually reflecting and learning on how to improve this.

I’m currently workshopping with my Clear Horizon Aid Effectiveness colleagues how we can make M&E tables more clearly accentuate the M, and how this informs the E. More to come on that front! Byron Pakula will be presenting at next week’s Australasian Aid Conference a presentation we developed titled ‘No plan survives contact with the enemy – monitoring, learning and evaluation in complex and adaptive programming’, which takes in key issues raised in ODE’s evaluation. So check that one out if you’re in Canberra.

What are your thoughts on ODE’s evaluation of monitoring systems?

BreathIn Blog 1 – What are we learning about ToC and reporting?

At a recent BreathIn session,…

(Wait, a what? A BreathIn is like a community of practice; it’s where we get to stop and reflect collectively across the work we and others are doing, to test, stretch and create ideas and practice.)

… Jess Dart, Ellise Barkley, Mila Waise, Anna Powell, Liz Bloom and I gathered to reflect on what we have learnt recently about working on place-based initiatives, the generic theory of change model we have all had a hand in developing, and our learnings around evaluation in this space.

And so what have we learnt? What did we come up with? Here are our significant take-aways from the day:

1.       It is difficult to develop a generic Theory of Change model for place-based work. Because:

The transformative intent and complexity of the work does not lend itself to a single two-dimensional diagram. There are many layers to the work; a common refrain during our discussion about the Theory of Change model was ‘it happens across the model’, for example ‘leadership – that needs to be in every box of the Theory of Change’. Ellise shared a model that was developed by one of the groups she worked with. It was three-dimensional, made of boxes, passageways and levels, and there were choices to be made as you navigated your way through it. I think where we got to is that place-based work is ultimately about transformation, and that transformation needs to happen within each individual, at all levels (like a contagion), before we get instances and then widespread transformation at the system level and see the benefits at the population level.

This transformation often happens at the interstices, or gaps, between the nodes in a system, as well as within the nodes. Interstices can be physical or intangible; they can be literal or figurative gaps. This is why you often hear people discuss a) the importance of relationships and intangibles in this work and b) the importance of experiencing the work to really understand it.

This work is intrinsically linked with movement building, which means the work becomes inherently political and relational. It forces us to engage with the deep assumptions that underpin our own worldviews, those of others, and those underpinning the systems we are trying to transform. We are often having to deconstruct and destroy what is, in order to rebuild ourselves, our system and our place towards what is desirable. It helps to take a learning stance when doing this work.

2. Evaluation certainly needs repurposing and rethinking in this context

A common starting point with any kind of evaluation is to think through who it is for (the audience). But when you are working on an initiative that aims to transform through collaboration, the work belongs to everyone. There is not one primary audience but many audiences; everything is owned by everyone. Furthermore, a key purpose of the evaluation work seems to be to articulate, explain and demonstrate the work and its impacts. This requires looking at the whole as well as the sum of the parts (see the contribution point below). Writing to cater for these multiple narratives and audiences is a balancing act. In this context the relationship between communication and evaluation is much closer than evaluators usually like it to be.

A common tool used to guide an evaluation is a set of key evaluation questions. Following on from our discussion about the theory of change, the key evaluation question that came to mind was: how (well) are we transforming? The standard questions of efficiency and effectiveness are not really appropriate. For systems change initiatives, we know it takes a lot more time and resources to do this work – ten years seems to be a good start. We also know that over-investing in clarifying outcomes can divert people from really working out what needs to be done; as the outcomes are still emerging, this is exactly what they are working out. In these adaptive initiatives the theory of change is never finished and never completely right – it needs to keep evolving as we learn. Jess likes to say we need to keep them “loose and hold them lightly”.

A common issue in evaluation is how to address contribution: that is, the need to show the distinct lines of contribution for different partners, in terms of how they are contributing to the observed changes and outcomes. On this we had greater clarity. Firstly, Clear Horizon’s “What Else Tool” is a useful starting point for thinking through your contribution story. Secondly, it is important to clearly distinguish between what the ‘backbone’ impacts and what the ‘collective’ impacts. For example, you may need two separate reports or nested reports, but you must acknowledge the different contributions.

Finally, I think we were all reminded of the evaluator’s opportunity (and maybe responsibility) to be an integral part of the transformation effort. This only underscores the importance of investing time in a Breath In!

From Mila: Thanks, Zazie, for the opportunity to reflect on the Breath In session. I would only add:

Whether we are exploring a common theory of change for place-based initiatives or reporting and evaluation for collective impact initiatives, my biggest take-away from the Breath In session is the need to use an equity or social justice lens in our work, as much as scientific, partnership or public policy paradigms. Due to the complex nature of disadvantage and vulnerability experienced by children, young people and families, we are constantly required to adapt, think outside the box and test different interventions.

One thing we know for sure is that different ways of thinking and working are required in response to the variations in context, circumstance and drivers within a place. Families, communities and places are dynamic, and our collective understanding of what is desirable, positive, acceptable or challenging for individuals and communities keeps changing. Hence, developing a generic or common theory of change for initiatives tackling social issues in place is complicated.

The guiding light in these circumstances – and hopefully a common worldview that can help bridge the different disciplines and competing needs – is the concepts of human rights and equity that have long supported individuals and communities to reach their full potential against the odds: access, equity, rights and participation.

Breath In Blog 2 – Did we get to generative listening?

In parallel to our reflections on our work through the Breath In sessions, we have been working out how we can run Breath Ins in a way that is worthwhile for all involved, respects our associated ‘responsibilities’ and manages some of the inherent conflicts present in the group.

Participating in the Breath In sessions are a CEO, three consultants, a government employee working in a central backbone, and a backbone leader based in an NGO. We come together well because we are all practitioners and all have a connection to Clear Horizon.

This is our third Breath In, and it feels as if, after a rocky start, we have come to a much better place where some of the obvious conflicts have settled down. There is a much greater level of trust and understanding of each other and our different contexts. This is allowing us to have more open discussions … maybe even generative discussions.

Through place-based work I have been introduced to different theories; one of them is Otto Scharmer’s change management theory, Theory U, which has a strong focus on listening as a means of transformation. He describes four levels of listening:

1. Downloading – “yeah, I know that already…”: re-confirming what I already know (I-in-ego/Politeness). Listening from the assumption that you already know what is being said, so you listen only to confirm habitual judgements.

2. Factual – picking up new information: facts, debates, speaking our mind (I-in-it/Debate). Factual listening is when you pay attention to what is different, novel or disquieting compared with what you already know.

3. Empathic – seeing something through another person’s eyes: “I know exactly how you feel”, forgetting my own agenda (I-in-thou/Inquiry). Empathic listening is when the listener pays attention to the feelings of the speaker. It opens the listener and allows an experience of “standing in the other’s shoes” to take place. Attention shifts from the listener to the speaker, allowing for deep connection on multiple levels.

4. Generative – “I can’t explain what I just experienced” (I-in-now/Flow). This deeper level of listening is difficult to express in linear language. It is a state of being in which everything slows down and inner wisdom is accessed. In group dynamics it is called synergy; in interpersonal communication it is described as oneness and flow.

I found it useful to reflect on our Breath In journey through these four levels of listening. I can’t speak for everyone else, so from my perspective: I have observed myself do the first level of listening really well! I think I had moments of factual listening (comparing the work across our experiences), and instances of empathic listening with one person at a time, but I’m not sure I was able to reach much beyond that.

I’m curious as to how everyone else felt. I’m also aware that the transformation needs to happen in each of us first before it can happen in the group. So I suppose I have some homework to do!

The above is a reflection on the deeper experience of the Breath In. At a different level – that of developing understanding and theory – I think we achieved more than we have in the past. See the previous blog!