What it takes to build a Liveable Company

If this year has proven anything, it’s that we are capable of change and of rising to new challenges. In that spirit, we’re challenging ourselves to better walk the talk and build a liveable company. But what does that mean?

Clear Horizon looks to Health Futures

How COVID-19 is forcing a re-examination of systemic injustices, and why we’ve created Health Futures to help social impact initiatives adapt.

With the onset of COVID-19, health has been spurred into public consciousness. Many of us have been subjected to the clinical and infection-control aspects of the disaster response: social distancing, masks, drive-through testing. But we’re also now keenly aware of the complexities of public health, the way it overlaps sectors, and the way the pandemic is exacerbating many of the issues that social impact programs are already grappling with.

For one, COVID-19 has brought home the importance of cultivating wellbeing – not just preventing illness – and the ways in which wellbeing is facilitated by social connection, stable income, and access to green space, among many other factors.

In addition, policy discussions surrounding the pandemic in Australia have surfaced the many pre-existing systemic inequities that also manifest as health inequities. We’re seeing renewed debate about an increasingly casualised workforce without sick leave, the privatisation and deregulation of aged care, the inadequacy of social benefits, the detention of people seeking asylum (and lack of a safety net for those living in the community), the cramped and inhospitable conditions of public housing, the difficulties facing those fleeing domestic violence, not to mention the widespread chronic illness (and other structural violence) experienced by First Nations peoples. The fact that health is also a justice issue has never been more apparent.

The social distancing requirements of the pandemic response have also accelerated the development and growth of digital health products and telehealth services. These initiatives are not just continuing to provide the same services in a different medium, but many are also taking advantage of their format to reach groups that previously may have been excluded (such as people in regional areas) and to intervene in new and creative ways. Many of these digital programs support behaviour change or provide resources, particularly in the areas of mental health, chronic and lifestyle diseases, substance addictions and diagnosis/screening.

Clear Horizon is responding to the emerging needs and opportunities in the social change sector through the creation of a dedicated Health Futures team specialising in innovative health and wellbeing initiatives. The new business group will focus on mental health promotion; chronic diseases such as cancer, stroke and obesity; substance addictions; domestic violence; and health systems intersections, for example connecting people to disability support services. We’re most keen to assist both domestic and international initiatives that are community-led, have a digital component, involve developmental evaluation, take a systems approach to change, and/or involve innovative health service designs and integration.

Our approach will be to tackle problems holistically, offering cross-sectoral expertise and a solid grounding in systems thinking. This enables us to better draw links between apparently unrelated processes or concepts, as well as to consider the effect of context on an initiative and how it might interact with broader systems. And as the demand for digital health initiatives continues to grow as part of the “new normal”, we’ll be drawing on our Digital team’s expertise to support the technical intricacies of developing and testing digital health products, as well as their privacy and data storage implications. This is a crucial consideration for confidential health data.

We’ve already had the opportunity to work on some cutting-edge projects in the health and wellbeing space. One of our long-term partnerships, with the Fay Fuller Foundation and The Australian Centre for Social Innovation, involves developmental evaluation services for Our Town, which is a twelve-year initiative that takes a community-led approach to addressing mental health and wellbeing. These meaningful, strategic partnerships are where we believe we can achieve the best health and wellbeing outcomes for people and communities, and the organisations looking to support them.

The Health Futures team is led by Dr Jess Dart (Founding Director), and our members are Samiha Barkat (Principal Consultant), Edgar Daly (Senior Consultant) and Alessandra Prunotto (Consultant). To find out more about our services and training, please get in touch.

The Cook and the Chef – How Designers & Evaluators can Collaborate or Clash in the Kitchen

What happens when Designers and Evaluators start cooking up social innovations together? Is it a case of the Cook and the Chef, where there’s collaboration throughout, or is it more My Kitchen Rules, with paring knives out? Here are four kitchen scenarios we’ve observed – but we want to know: which kitchen are you cooking in?

Part 2 – Jazz Players of the Evaluation World: Meet our experts on Systems-Change and Place-Based Approaches.

In this article, we ask Dr Jess Dart, Anna Powell and Dr Ellise Barkley about their top tips for overcoming some of the key challenges of evaluating systems change and place-based approaches, and how to get everyone on the same songsheet.

The Jazz Players of the Evaluation World: Meet our experts on Systems-Change and Place-Based Approaches.

A conversation with Dr Jess Dart, Anna Powell and Dr Ellise Barkley about the challenges and opportunities presented by systems change and place-based approaches, and why evaluators in the space are truly the jazz players of the evaluation world.

Developing a Theory of Change – is there a ‘right’ way?

Considered by many change-makers to be the ultimate ‘road map’ when it comes to creating social impact, a good Theory of Change is an integral part of any intervention or initiative aiming to effect change. We asked experts from different fields about the approaches they use and uncovered some significant differences, which raised the question – is there a right way to develop a theory of change?

Changing self and system in COVID-19

This article was written by Anna Powell and Alessandra Prunotto

How do you change a system that is in a state of flux – and at times, in chaos?

The systems change initiatives we support at Clear Horizon are now tackling this question. Before COVID-19, the typical patterns of systems were very difficult to shift – in many respects, the systems we live and work in here in Australia have been built over more than 200 years. Now, they move like quicksilver.

The pandemic has required systems change-makers to reorient themselves in systems that are in flux. Change-makers are working out new opportunities and leverage points to effect positive change now. (Let’s not waste a good crisis.)

Of the many potential systems levers in this context, one meaningful area to focus on is how you yourself show up in the system. While systems change leaders do this all the time, during a period of heightened instability working on yourself helps keep you anchored to purpose and hold steady through the state of flux. Developmental evaluators can play a crucial role in supporting this reflective and action-orientated process.

You are the system

According to The Water of Systems Change by Kania, Kramer and Senge (2018), systems change requires change-makers to identify the intangible aspects of a system, including relationships, power distribution, institutional norms and constraints, attitudes, and assumptions.

One of these intangibles is mental models, a powerful lever for shifting the conditions holding a system in place. Mental models are ‘Habits of thought—deeply held beliefs and assumptions and taken-for-granted ways of operating that influence how we think, what we do, and how we talk’ (Kania, Kramer & Senge 2018).

Deeply implicit, mental models are tricky to identify, especially in yourself and those like you. They are also confronting to challenge, because you have to question the power structures that have shaped these mental models – often structures you have benefited from in some way.

But it has to happen. Systems change is ultimately change in people. Mental models are held by individuals, and individuals are fractals of a system. Learning and change on a personal and team level is a part of systems change.

Triple-loop learning for self-reflection

Reflection can be uncomfortable. In a group setting, it needs high levels of vulnerability and trust. This process can be softened by a neutral developmental evaluator playing the role of ‘curious friend’ – someone close enough to create a sense of warmth, but far enough removed to identify cognitive dissonance.

These difficult reflections need to be systematic and driven by pertinent questions. Mark Cabaj’s paper Evaluating Systems Change Results discusses a concept called ‘triple-loop learning’ that we’ve found useful for being more deliberate when reflecting on the self in the system.

In our previous blog post, we explained how single and double loops of learning are useful for strategic learning in emergent contexts. Cabaj explains that while these loops deal respectively with learning about what we are doing (implementation) and what we are thinking (strategy/context), triple-loop learning deals with how we are being. It directs us to ask questions about our emotional triggers, our habitual responses, our social norms/dynamics and our individual and shared values and narratives.

Brave government: triple-loop learning in action

The value of this triple-loop learning was highlighted through our work in supporting a government client with their efforts to change the way they work with communities. They were aiming to shift an entrenched power dynamic in how government works with communities and transform ways of working within their own offices.

As our staff built a trusting relationship with this government team, together we identified their positioning of power and authority as a key systems lever. Before COVID-19, we had begun facilitating reflections on how they were stepping into their power, moving further into the tricky space of personal and team adaptive work. Seeing senior officials confront their own discomfort was a testament to their commitment to change.

The team identified that their adaptive work and reflection meant that they would be able to take forward new ways of being and working wherever they went in government. By starting to build a tolerance of the discomfort needed to learn and change, they were modelling the learning culture they wanted to see across government. They were moving towards a new mode of ‘being’ in a system that they could take with them no matter which department they found themselves in. A systematic triple-loop reflection process was key to identifying these valuable, yet intangible, moves toward meaningful change.

Adapting programs in a time of rapid change

COVID-19 is a time of flux. In a matter of months, we’ve seen social and economic upheaval across the globe.

In times of uncertainty and transition, change-making organisations need to be agile and responsive. Our survival and our ability to help others depends on it. With a few tweaks, a well-embedded measurement, evaluation and learning (MEL) system can be a vital resource for adapting programs in a time of rapid change.

Theory of change and learning loops

As practitioners, we are well aware of the benefits of MEL for adapting programs, albeit in more stable periods.

Theory of change processes help organisations work out what to do, what to measure, and how to set up the systems for collecting and interpreting data. Most programs make decisions using two “loops” of learning, a concept popularised by Chris Argyris in the early 90s.

The first is “single loop” learning, which relates to implementation. It is usually done by the program staff closest to the action, who frequently reflect on what’s working and what’s not.

The second is “double loop” learning, which is related to broader strategy. This learning process is usually done by the decision-makers at the helm of an organisation, who need to reflect on whether the goals and activities are still relevant to the context and whether their big assumptions still hold true. These double loop decisions happen less often, maybe every few months or twice a year.

When the context is constantly shifting, there is an even greater need to take a learning and adaptive planning approach. In the context of COVID-19, we’ve identified two key changes to make. The first is to shorten the cycles of data collection and feedback. The second is to ramp up your double-loop learning.

Let’s unpack how you’d change your approach to learning and adaptive planning by walking through an example we often use to teach theory of change at Clear Horizon Academy: the carpark model.

An example of adaptive planning: the carpark model

Imagine that you’re the owner of a carpark in the CBD. Before the pandemic, your main challenge was that cars were being stolen and this was discouraging parkers. Your vision for your carpark was that it would be safe and profitable, free from car thieves and full of happy customers.

You had created a theory of change to transform your carpark, and through this process decided that adding CCTV cameras was the one major influencing activity that would increase the number of customers.

The carpark model’s theory of change: a diagram showing causal pathways from the influencing activities to the broader goals.

For a while, this had worked. Not only had the CCTV cameras deterred thieves, but they had also attracted more parkers. This had created more eyes in the carpark, creating ‘natural surveillance’ and reducing opportunity for theft. Times were good, and your change program was working.

Then, COVID-19 hit.

Now, with everyone working from home, far fewer people are driving into the city. There’s a hospital nearby, so a few workers are still parking there. But each day there are fewer customers, and theft is starting to spike.

You realise that COVID-19 has created major changes in your context that are inhibiting your goals. So you decide to shorten your cycles of data collection and feedback, with a focus on double loop learning.

You take the following steps:

Step 1

The first aim is to gather data about this new context, with more regularity. You set up systems for single loop learning: daily reports from the carpark attendants on their observations, as well as pulse surveys for customers to understand what is concerning them. You set up a weekly time to review CCTV footage to try to spot patterns in the thefts and understand their cause.
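
To make this shortened single-loop cycle concrete, here is a minimal sketch in Python of how the daily attendant reports and pulse-survey comments from this hypothetical carpark might be logged and scanned for patterns. The data structure, field names and thresholds are illustrative assumptions, not part of any Clear Horizon tool.

```python
# Hypothetical single-loop monitoring log for the carpark example.
from dataclasses import dataclass, field
from statistics import mean
from typing import Dict, List

@dataclass
class DailyReport:
    date: str                                           # e.g. "2020-05-04"
    occupancy: int                                      # cars parked that day
    thefts: int                                         # incidents reported by attendants
    concerns: List[str] = field(default_factory=list)   # pulse-survey comments

def theft_spike(reports: List[DailyReport], window: int = 7, factor: float = 1.5) -> bool:
    """Flag a spike when today's thefts exceed the recent average by `factor`."""
    if len(reports) <= window:
        return False
    recent_average = mean(r.thefts for r in reports[-(window + 1):-1])
    return reports[-1].thefts > factor * max(recent_average, 1)

def top_concerns(reports: List[DailyReport], last_n: int = 7) -> Dict[str, int]:
    """Count recurring customer concerns over the last `last_n` days."""
    counts: Dict[str, int] = {}
    for report in reports[-last_n:]:
        for concern in report.concerns:
            counts[concern] = counts.get(concern, 0) + 1
    return dict(sorted(counts.items(), key=lambda kv: kv[1], reverse=True))
```

Reviewed weekly alongside the CCTV footage, outputs like these feed the double-loop questions in Step 2, such as whether ‘safety’ still means what you thought it did.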

Step 2

Armed with information, now is the time to use double loop learning to revisit the theory of change. Are your broader goals still relevant? Do you still need a safe and profitable carpark in this context?

You determine your goals are still relevant. However, you note there is a need to broaden the definition of safety. From your weekly scans, you realise your customers are concerned about hygiene in the carpark – so you redefine safety in terms of security as well as health.

Step 3

Now that you’re moving toward adjusted goals, you need to reconsider your activities. Will they work and be effective in this new context? What might hinder or help each causal pathway to your goals?

You realise that one of your major assumptions is no longer holding up. In a pre-COVID-19 world, you assumed that more people in the carpark would create natural surveillance, discouraging thieves and attracting more customers.

But from your data scans, you find out that a bustling carpark is having the opposite effect! Customers are concerned about being close to others and having more people touching the ticket machines, which increases their chance of catching COVID-19.

Step 4

You come up with additional strategies to work with your revised assumptions. To balance natural surveillance with social distancing, you prohibit parking in every second car space. You also install contactless payment technology and put up signs highlighting your attention to public health. As you begin to implement, you use your new single-loop feedback systems to check whether these strategies are working.

What we can learn from this example

For any organisation trying to continue their important change work, this approach can greatly help with adapting programs in a time of rapid change like COVID-19. Regularly gathering implementation data and scheduling time to re-assess your strategy will set you up to do your best in this chaotic period.

M&E and international development in COVID-19

The world is moving under our feet. I am sure I am not the only person to feel it. Everyone is struggling to come to grips with how COVID-19 has changed our lives. I work in M&E within international development, and with the border closures, I cannot travel to work with partner organisations.

Even if I could travel, those partners are in emergency mode, responding to COVID-19 in the best way they can. Planned M&E activities are often put on the backburner for more pressing priorities.

When I have thought about the work they are doing, as well as the everyday heroism of nurses, doctors and cleaners, I have struggled at times with my own relevance as an M&E consultant in international development. How can I help during COVID-19?

In this blog, I want to share some of my initial thoughts on how M&E can help international development to respond to COVID-19. I say they are initial as they will evolve over the course of this pandemic. At Clear Horizon, we will continue to share and refine our thoughts on how M&E can support communities to respond, as well as adapt to the challenges and opportunities raised by this seismic change.

Supporting evidence-based decision-making

International development programs are delivering urgent materials such as Personal Protective Equipment (PPE) to partner countries or funding other organisations to do this. However, in the rush to deliver essential materials and services, setting up data systems that provide regular updates on activities, outputs, ongoing gaps and contextual changes is often neglected.

While this is understandable, setting up M&E systems that provide regular reporting is essential. Decision-making will be limited at best if it is not informed by evidence of what is happening on the ground.

For example, if we understand what programs and donors are funding, where they are working, and where gaps remain, we can better coordinate with other programs to ensure that we are not duplicating efforts. There is also an accountability element, as our work is often funded by the taxpayer. At a time when people are losing their jobs in record numbers, we have an obligation to report how their taxes are being spent.

We can learn from humanitarian responses how to develop lean M&E systems. Examples include fortnightly situation reports, which outline key results, next steps and any relevant context changes in one to two pages.

There is also an opportunity to draw on digital technology to give organisations clearer oversight of their pandemic response. Online dashboards such as the ones we use through our Track2Change platform can provide organisations with a close-to-real-time overview of where they are working, what they are delivering and what key outcomes they are achieving.

Driving learning and improvement

More importantly, M&E will help us to understand what works, in what contexts and for whom. We work in complex environments where there are multiple interventions working alongside one another, along with dynamic economic, political, cultural and environmental factors that interact with and influence these interventions. Non-experimental evaluation methods such as contribution analysis and process tracing help us to understand why an intervention may or may not work in certain contexts due to certain contributing factors, and thus what needs to be in place for this intervention to be effectively replicated elsewhere.

But M&E does not only support learning at set points in the program cycle, such as a mid-term evaluation; it can also support learning on an ongoing basis. For example, regular reflection points could be built into programming, with program staff coming together face-to-face or through online video conferencing. In these meetings, stakeholders can reflect on evidence collected through routine monitoring, as well as on rapid evaluative activities such as case studies of successful or less successful examples.

During these sessions, program M&E staff would help program teams reflect on this evidence to identify what is working well or not so well, why, and what to do next. Ongoing and regular reflection also draws on the tacit knowledge of staff – that is, knowledge which arises from their experience – to inform program adaptation. This sort of knowledge is essential in dynamic and uncertain contexts where the pathway forward is not always clear and decisions may be more intuitive in nature.

Clarifying strategic intent

Thinking about your strategy is not necessarily at the top of your mind when responding to a pandemic. However, as the initial humanitarian phase of COVID-19 winds down, and we start to think about how to work in this new normal, program logic (or program theory/theory of change depending on your terminology) is a useful thinking tool for organisations.

Simply put, program logic helps organisations visually map out the pathways from their activities to the outcomes they seek to achieve. Program logic not only helps in the design of new programs to address COVID-19, but also in the repivoting of existing programs.

COVID-19 will require organisations to consider how they may have to adapt their activities and whether these may affect their programs’ outcomes. For example, an economic governance program may have to consider how to provide support to a partner government remotely and this may mean they have to lower their expectations of what is achievable. By articulating a program logic model, an organisation can then develop a shared understanding of their program’s strategic intent going forward.

You can, of course, also use program logic to map out and get clear about the COVID-19 response itself. For example, we used program logic to map out one country’s response to COVID-19 in order to identify the areas our client is working in and the outcomes they are contributing towards, such as improved clinical management and community awareness, so that they can monitor, evaluate and report on their progress.

Locally led development

This crisis poses an opportunity for local staff to have a larger role in evaluations. This not only builds local evaluation capacity, but enables local perspectives to have a greater voice.

Typically, international consultants fly in for a two-week in-country mission to collect data and then fly home and write the report. National consultants primarily support international consultants in on-ground data collection.

In Papua New Guinea, we are promoting a more collaborative approach to M&E, in which international and national consultants work together across the whole evaluation process, from scoping through data collection, analysis and reporting. As national consultants are on the ground and better able to engage with key stakeholders, their insights and involvement are essential to ensuring that evaluation findings are practical and contextually appropriate.

But I think we should look for ways to go beyond this approach to one that has local staff driving the evaluation and us international consultants providing remote support when required. For example, we could provide remote on-call support throughout the evaluation process, as well as participate in key workshops and peer review evaluation outputs such as plans and reports.

This could be coupled with online capacity building for local staff that provides short and inexpensive modules on key evaluation topics. At Clear Horizon we are starting to think more about this sort of capacity building through our Clear Horizon Academy, which provides a range of M&E training courses including Introduction to Monitoring, Evaluation and Learning.

COVID-19 screwed your MEL plans? What next.

It might be the least of your worries in this time of COVID-19, but measurement, evaluation and learning (MEL) has a crucial role to play to help organisations adapt.

Understandably, in these last few weeks many social change and government organisations have been focusing on emergency response activities – moving their programs online, offering previously face-to-face services over videoconference or text, and making sure their communities don’t fall through the gaps as services pivot.

But as we settle in for the long haul, now’s the time to turn your attention back to those MEL plans that have gone out-of-date in the space of weeks. Focusing on MEL in a time like this can help you to ensure the changes in your organisation’s operations are still aligned to your broader goals. Or if needed, it can help you re-evaluate those goals entirely.

This is the first of a series of blogs that will explore exactly why MEL is so useful not just in ‘peacetime’, but also during this prolonged global crisis, where planning more than a few weeks ahead seems about as useful as sunbathing on a crowded beach.

In the following posts, we’ll take a more in-depth look at topics such as using M&E to adapt to COVID-19 in international development, why strategic learning is important for leading in complexity, and how to use Theory of Change to reorient your programs.

But in the meantime, here are some ways you can start to refamiliarise yourself with that MEL framework you might have left in the dust at the start of March.

What to do with that out-of-date MEL framework?

  1. Update the frequency and content of reporting. In the early days of a crisis we know it is critical to get the right information to the right people. Find out what data is useful to people, and when, then update your reporting systems. Chances are you will need to provide more regular information, at least for a while.
  2. Revisit your Theory of Change. Update it to reflect activities moving online and look at what this does to your impact pathways and assumptions. You can use your Theory of Change like a canvas to explore positive impact pathways under these new conditions.
  3. Update your intermediate outcomes. You may need to add new short-term outcomes such as immediate relief, stress levels, material safety, or knowledge of new government benefits.
  4. Update your measures where needed. Instead of tracking attendees at a session, you might track the number of online visits, chats and so on.
  5. Reconsider your data collection methods. You may need to move to more qualitative methods during this phase of COVID-19, because life for many people in vulnerable situations may be changing so rapidly that there will be many unintended outcomes your service contributes to. Use the Most Significant Change technique or a similar method to capture what is changing and how your service has contributed to those changes.
  6. Think about automating data collection. Going online with your programs offers a world of automation opportunity. If you are offering some sort of chat response to your clients or a collaborative space, you can automate the attendee list. You can also automate small surveys, either through text message or, as we prefer for security reasons, by sending people a Microsoft Forms link (a sketch of one approach follows this list).
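
To make the automation idea in item 6 a little more concrete, here is a minimal sketch in Python of tallying responses from a survey export, for example the spreadsheet an online form tool lets you download. The file name, column heading and answer wording are hypothetical assumptions for illustration; this is not a description of any particular platform’s API.

```python
# Hypothetical sketch: tally responses to one closed question from a survey
# export (CSV). The column name and file path are illustrative assumptions.
import csv
from collections import Counter
from typing import Dict

def summarise_survey(csv_path: str,
                     question_column: str = "How are you coping this week?") -> Dict[str, int]:
    """Count responses to a single closed question in a survey export."""
    counts: Counter = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            answer = (row.get(question_column) or "").strip()
            if answer:
                counts[answer] += 1
    # Most common answers first, ready to paste into a regular report.
    return dict(counts.most_common())

if __name__ == "__main__":
    # Example weekly run against a hypothetical export file.
    for answer, n in summarise_survey("weekly_pulse_survey.csv").items():
        print(f"{answer}: {n}")
```

Run weekly, a small script like this can feed the more frequent reporting cycle described in item 1 without anyone re-keying survey responses.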

But a word of warning with privacy: make sure that whatever platform you are using to engage your clients is secure. You can read more about digital evaluation and privacy considerations here.

Stay tuned for the next blog post in this series.