Posts

Hands Up Mallee – The community taking children’s wellbeing into their own hands

We’ve all heard it takes a village to raise a child. But what does that look like in practice? Hands Up Mallee offers us a glimpse – bringing together the community, organisations and services to support children and families to thrive. Here’s how we’re helping them achieve greater community health and wellbeing outcomes.

The Cook and the Chef – How Designers & Evaluators can Collaborate or Clash in the Kitchen

What happens when Designers and Evaluators start cooking up social innovations together? Is it a case of the Cook and the Chef, where there’s collaboration throughout, or is it more My Kitchen Rules, with paring knives out? Here are four kitchen scenarios we’ve observed – but we want to know: which kitchen are you cooking in?

Part 2 – Jazz Players of the Evaluation World: Meet our experts on Systems-Change and Place-Based Approaches.

In this article, we ask Dr Jess Dart, Anna Powell and Dr Ellise Barkley about their top tips for overcoming some of the key challenges of evaluating systems change and place-based approaches, and how to get everyone on the same songsheet.

The Jazz Players of the Evaluation World: Meet our experts on Systems-Change and Place-Based Approaches.

A conversation with Dr Jess Dart, Anna Powell and Dr Ellise Barkley about the challenges and opportunities presented by systems change and place-based approaches, and why evaluators in the space are truly the jazz players of the evaluation world.

Logan Together – “Game-Changer” for Place-Based Evaluation

Clear Horizon was commissioned by the Commonwealth and Queensland Governments to lead the co-design of the Place-Based Evaluation Framework. The framework was tested with Logan Together, a place-based initiative aiming to shift the dial on developmental outcomes for local children aged 0-8.

While evaluation practice for place-based approaches (including collective impact initiatives) has been developing in Australia over the last 5-10 years, until now there has been no consolidated guide for practice. Evaluating place-based approaches is challenging due to the emergent and complex nature of the work, and they often include embedded ‘systems change’ efforts which are difficult to measure.

Enter Clear Horizon. We worked with over 150 stakeholders to lead the co-design of the Place-based Evaluation Framework, in collaboration with delivery partners The Australian Centre for Social Innovation (TACSI), Collaboration for Impact (CFI), and Community Services Industry Alliance (CSIA). The Place-based Evaluation Framework is available through the Department of Social Services website, and has a toolkit of over 80 tools useful for evaluating systems change and place-based approaches.

To test the framework, we worked in close partnership with Logan Together as proof of concept, using the Place-based Evaluation Framework to create a measurement, evaluation and learning strategy for the collective impact initiative.

While Logan Together had many enablers for success in place – an experienced backbone team, committed funders and partners, and an engaged community – there was no agreed plan for how to measure success in the short to medium term, nor a framework for determining whether or not they were on track.

Recognising there is no ‘one-size-fits-all’ evaluation methodology, we developed clear and phase-appropriate evaluation systems and measures of success, incorporating learning and community voice into the evaluative process.

Our approach ensured we captured the complexity of the work, while still providing a practical roadmap to keep partners and community on track. And it’s setting the standard for place-based and collective impact evaluation across the country.

“The framework is a game changer for how government [is] approaching and evaluating place-based initiatives.

It has created some much-needed standards and guidance for community change movements looking to better understand their progress and set shared expectations with their partners and funders.”

Matthew Cox, Director, Logan Together

 

Resilient Sydney

Clear Horizon’s Sustainable Futures team are working with the Resilient Sydney Office to develop an M&E Framework for the five-year Resilient Sydney Strategy.

The Strategy aims to strengthen Sydney’s capacity to prepare for, respond to and recover from disaster, whilst ensuring all of Sydney’s communities can access opportunities to thrive. The Strategy aims to effect change across the systems of the city to achieve these objectives, and is being delivered through collaborative initiatives, underpinned by a collective impact model for systemic change.

With the Strategy’s focus on systemic change, collaboration and collective impact, the Sustainable Futures team have been developing an M&E Framework informed in part by the Place-based Evaluation Framework (Dart, 2018). This will ensure the Strategy’s M&E will work with the phased nature of systems change, and across the different scales of change. In addition, to align with the collective impact model used, the Framework distinguishes between the work and outcomes of the backbone organisation (i.e. the Resilient Sydney Office) and those of the broader partnership.

Working with the Resilient Sydney Office on this M&E Framework has been a really exciting opportunity for our team for a number of reasons. The first is the clear alignment with our team’s passion and vision for driving real and positive change. The second is that the complexity the Strategy is dealing with demands that we continue to innovate, test and refine our M&E approaches, to ensure they remain useful and fit-for-purpose, and can meaningfully engage with the complexity of evaluating influence on systems change. We are thoroughly enjoying the challenges this project has thrown at us and are excited to see where it goes next!

Human Development Monitoring and Evaluation Services contract for 2019-2023

Clear Horizon, in partnership with Adam Smith International (ASI), has been awarded the Papua New Guinea Human Development Monitoring and Evaluation Services (HDMES) contract for 2019-2023 by the Australian Government’s Department of Foreign Affairs and Trade (DFAT). We are really excited to be working with the HDMES team, which will be based in Port Moresby, alongside ASI, DFAT, the Government of Papua New Guinea, and partners across the health and education sectors.

Health and education are central to PNG’s sustainable development. The health system struggles to meet the needs of its growing population, while the country’s education system lacks sufficient funding, well-trained teachers and administrators, and the planning and management necessary to effectively utilise its limited resources. Australia, as the largest donor to PNG, aims to make a positive contribution to the health and education systems. Between 2015 and 2018, $264.4m was specifically allocated to education and $276.9m to health investments. In health, this focuses on workforce planning, communicable diseases, family planning, sexual and reproductive health, and maternal and child health. In education, Australia’s objective is to support teachers to improve the quality of teaching and learning, improve primary school infrastructure, and reduce the barriers that prevent children attending and staying at school for a quality education.

Through HDMES, Adam Smith International in collaboration with Clear Horizon will provide external, independent M&E Services to DFAT and GoPNG regarding health and education investments at the program and portfolio levels. Support will include developing portfolio level strategies, performance assessment frameworks and annual reports; advising on baselines and M&E frameworks for programs; quality assuring design, monitoring and evaluation deliverables; and conducting independent evaluations of DFAT investments.

Logan Together Progress Report released

Last week the Queensland Government released the Progress Report and ‘Statement of Achievement’ that Clear Horizon produced for Logan Together. Logan Together is one of Australia’s largest and most well-known collective impact initiatives, involving over 100 partners working together to improve the wellbeing of children (0-8 years) in Logan, Queensland.

The Progress Report provides a comprehensive assessment of Logan Together’s progress since inception, identifies recommendations for areas to strengthen, and celebrates the stories of success so far. For more about the background and commissioning of the Report click here.

What did the results show?

The findings showed that the Logan Together collective is making sound and positive progress towards the longer-term goals of their ‘Roadmap’ via a collective impact approach. The collective had clearly contributed to community-level and systemic changes, and the backbone team had played a catalysing and enabling role. Importantly, there was evidence of small-scale impact for families and children, and early instances of change.

Outcomes for families and children include improved engagement of certain at-risk cohorts, such as women not accessing maternity services or families with young children experiencing tenancy difficulties and instability; improved parental awareness of childhood development needs and milestones in targeted communities; early instances of improvement in kindy enrolment for small cohorts; and changes resulting from increased reach of services.

Systems-level changes include increased cross-sector collaboration and breaking down of silos, integrated approaches to strategic delivery, innovating new services and models, changes in practice, shifts in mindset and attitudes, and early changes in resource flows. Logan Together also contributed to outcomes ‘beyond place’ through their advocacy and policy reform efforts.

In some cases, changes have been achieved that would not otherwise have happened and in other examples Logan Together has advanced the progress of outcomes being achieved. Logan Together has also made good progress in developing frameworks for shared measurement and learning, and is starting to generate a body of evidence around their collective work. 

The progress study is one example of the measurement, evaluation and learning work that Clear Horizon is doing with backbone organisations and a diverse range of collectives.

It was linked to the work Clear Horizon led to develop the Place-based Evaluation Framework, a national framework for evaluating place-based approaches (pending release). Like the progress study, the framework was commissioned by the Commonwealth and Queensland Governments, with Logan Together as a partner.

If you want to read more about the methods we used, including outcomes harvesting, see our case study.

Well done Logan Together, and thank you for the opportunity to work with you and your partners on your monitoring, evaluation and learning journey.

What’s missing in the facilities debate

Both facilities themselves and debates on their effectiveness have proliferated in recent years. Working papers, blog posts, conference presentations (see panel 4e of the 2019 AAC), DFAT internal reviews, and even Senate Estimates hearings have unearthed strong views on both sides of the ledger. However, supporting evidence has at times been scarce.

Flexibility – a two-edged sword?

One root cause of this discord might be a lack of clarity about what most facilities are really trying to achieve – and whether they are indeed achieving it. This issue arises because the desired outcomes of facilities are typically defined in broad terms. A search of DFAT’s website unearths expected outcomes like “high quality infrastructure delivery, management and maintenance” or – take a breath – “to develop or strengthen HRD, HRM, planning, management, administration competencies and organisational capacities of targeted individuals, organisations and groups of organisations and support systems for service delivery.” While these objectives help to clarify the thematic boundaries of the facility (albeit fuzzily), they do not define a change the facility hopes to influence by its last day. This can leave those responsible for facility oversight grasping at straws when it comes to judging, from one year to the next, whether progress is on track.

Clearly, broad objectives risk diffuse results that ultimately don’t add up to much. With ‘traditional’ programs, the solution might be to sharpen these objectives. But with facilities – as highlighted by recent contributions to the Devpolicy Blog – this breadth is desired, because it provides flexibility to respond to opportunities that emerge during implementation. The decision to adopt a facility mechanism therefore reflects a position that keeping options open will deliver greater dividends than targeting a specific endpoint.

Monitor the dividends of flexibility

A way forward might be to better define these expected dividends of flexibility in a facility’s outcome statements, and then monitor them during implementation. This would require hard thinking about what success would look like – for each facility in its own context – in relation to underlying ambitions like administrative efficiency, learning, relationships, responsiveness to counterparts, or multi-sectoral coherence. If done well, this may provide those in charge with something firmer to grasp as they go about judging and debating the year-on-year adequacy of a facility’s progress, and will assist them to manage accordingly.

This is easier said than done of course, but one tool in the program evaluation toolkit that might help is called a rubric. This is essentially a qualitative scale that includes:

  • Criteria: the aspects of quality or performance that are of interest, e.g. timeliness.
  • Standards: the levels of performance or quality for each criterion, e.g. poor/adequate/good.
  • Descriptors: descriptions or examples of what each standard looks like for each criterion in the rubric.

In program evaluation, rubrics have proved helpful for clarifying intent and assessing progress for complex or multi-dimensional aspects of performance. They provide a structure within which an investment’s strategic intent can be better defined, and the adequacy of its progress more credibly and transparently judged.

What would this look like in practice?

As an example, let’s take a facility that funds Australian government agencies to provide technical assistance to their counterpart agencies in other countries. A common underlying intent of these facilities is to strengthen partnerships between Australian and counterpart governments. Here, a rubric would help to explain this intent by defining partnership criteria and standards. Good practice would involve developing this rubric based both on existing frameworks and the perspectives of local stakeholders. An excerpt of what this might look like is provided in Table 1 below.

Table 1: Excerpt of what a rubric might look like for a government-to-government partnerships facility

Standard: Strong
  • Criterion 1 – Clarity of partnership’s purpose: Almost all partnership personnel have a solid grasp of both the long-term objectives of the partnership and agreed immediate priorities for joint action.
  • Criterion 2 – Sustainability of incentives for collaboration: Most partnership personnel can cite significant personal benefits (intrinsic or extrinsic) of collaboration – including in areas where collaboration is not funded by the facility.

Standard: Moderate
  • Criterion 1 – Clarity of partnership’s purpose: Most partnership personnel are clear about either the long-term objectives of the partnership or the immediate priorities for joint action. Few personnel have a solid grasp of both.
  • Criterion 2 – Sustainability of incentives for collaboration: Most partnership personnel can cite significant personal benefits of collaboration – but only in areas where collaboration is funded by the facility.

Standard: Emerging
  • Criterion 1 – Clarity of partnership’s purpose: Most partnership personnel are unclear about both the long-term objectives of the partnership and the immediate priorities for joint action.
  • Criterion 2 – Sustainability of incentives for collaboration: Most partnership personnel cannot cite significant personal benefits of collaboration.

Once the rubric is settled, the same stakeholders would use the rubric to define the facility’s specific desired endpoints (for example, a Year 2 priority might be to achieve strong clarity of purpose, whereas strong, sustained incentives for collaboration might not be expected until Year 4 or beyond). The rubric content would then guide multiple data collection methods as part of the facility’s M&E system (e.g. surveys and interviews of partnership personnel, and associated document review). Periodic reflection and judgments about standards of performance would be informed by this data, preferably validated by well-informed ‘critical friends’. Refinements to the rubric would be made based on new insights or agreements, and the cycle would continue.

In reality, of course, the process would be messier than this, but you get the picture.
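To make the structure a little more concrete, here is a minimal sketch of how a rubric and its agreed endpoints could be represented and compared, written in Python purely for illustration. The class names, the Year 2 endpoints and the judged standards below are hypothetical examples loosely drawn from Table 1, not part of any actual facility’s M&E system.

```python
# A minimal sketch of a facility rubric, assuming Python. All names, endpoints
# and judgments below are hypothetical illustrations based on Table 1, not an
# actual facility's M&E system.

from dataclasses import dataclass

STANDARDS = ["Emerging", "Moderate", "Strong"]  # ordered lowest to highest


@dataclass
class Criterion:
    """One aspect of performance, with a descriptor for each standard."""
    name: str
    descriptors: dict  # standard -> what that standard looks like


def meets_expectation(judged: str, expected: str) -> bool:
    """True if the judged standard is at or above the agreed endpoint."""
    return STANDARDS.index(judged) >= STANDARDS.index(expected)


# Excerpt of the partnership rubric from Table 1 (descriptors abbreviated)
criteria = [
    Criterion(
        name="Clarity of partnership's purpose",
        descriptors={
            "Strong": "Almost all personnel grasp long-term objectives and immediate priorities.",
            "Moderate": "Most personnel are clear about objectives or priorities; few about both.",
            "Emerging": "Most personnel are unclear about both objectives and priorities.",
        },
    ),
    Criterion(
        name="Sustainability of incentives for collaboration",
        descriptors={
            "Strong": "Benefits of collaboration cited, including beyond facility-funded areas.",
            "Moderate": "Benefits cited only where collaboration is facility-funded.",
            "Emerging": "No significant personal benefits of collaboration cited.",
        },
    ),
]

# Hypothetical agreed Year 2 endpoints and the standards judged from M&E data
year_2_endpoints = {
    "Clarity of partnership's purpose": "Strong",
    "Sustainability of incentives for collaboration": "Moderate",
}
year_2_judgments = {
    "Clarity of partnership's purpose": "Strong",
    "Sustainability of incentives for collaboration": "Emerging",
}

for criterion in criteria:
    on_track = meets_expectation(
        year_2_judgments[criterion.name], year_2_endpoints[criterion.name]
    )
    print(f"{criterion.name}: {'on track' if on_track else 'below expectation'}")
```

In practice, of course, the judgments themselves would come from the surveys, interviews and document review described above, with critical friends helping to keep them honest.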

How is this different to current practice?

For those wondering how this is different to current facility M&E practice, Table 2 gives an overview. Mostly, rubric-based approaches would enhance rather than replace what is already happening.

Table 2: How might a rubric-based approach enhance existing facility M&E practice?

M&E step: Objective setting
  • Existing practice: Facility development outcomes are described in broad terms, to capture the thematic boundaries of the facility. Specific desired endpoints are unclear.
  • Proposed enhancement: Facility outcomes also describe expected dividends of flexibility, e.g. responsiveness, partnership. Rubrics help to define what standard of performance is expected, by when, for each of these dividends.

M&E step: Focus of M&E data
  • Existing practice: M&E data focuses on development results, e.g. what did we achieve within our thematic boundaries?
  • Proposed enhancement: M&E data also focuses on expected facility dividends, e.g. are partnerships deepening?

M&E step: Judging overall progress
  • Existing practice: No desired endpoints – for the facility as a whole – to compare actual results against.
  • Proposed enhancement: The rubric enables transparent judgment of whether – for the facility as a whole – actual flexibility dividends met expectations (for each agreed criterion, to desired standards).

Not a silver bullet – but worth a try?

To ward off allegations of rubric evangelism, it is important to note that rubrics could probably do more harm than good if they are not used well. Pitfalls to look out for include:

  • Bias: it is important that facility managers and funders are involved in making and owning judgments about facility performance, but this presents obvious threats to impartiality – reinforcing the role of external ‘critical friends’.
  • Over-simplification: a good rubric will be ruthlessly simple but not simplistic. Sound facilitation and guidance from an M&E specialist will help. DFAT centrally might also consider development of research-informed generic rubrics for typical flexibility dividends like partnership, which can then be tailored to each facility’s context.
  • Baseless judgments: by their nature, rubrics deal with multi-dimensional constructs. Thus, gathering enough data to ensure well-informed judgments is a challenge. Keeping the rubric as focused as possible will help, as will getting the right people in the room during deliberation, to draw on their tacit knowledge if needed (noting added risks of bias!).
  • Getting lost in the weeds: this can occur if the rubric has too many criteria, or if participants are not facilitated to focus on what’s most important – and minimise trivial debates.

If these pitfalls are minimised, the promise of rubrics lies in their potential to enable more:

  • Time and space for strategic dialogue amongst those who manage, oversee and fund facilities.
  • Consistent strategic direction, including in the event of staff turnover.
  • Transparent judgments and reporting about the adequacy of facility performance.

Rubrics are only ever going to be one piece of the complicated facility M&E puzzle. But used well, they might just contribute to improved facility performance and – who knows – may produce surprising evidence to inform broader debates on facility effectiveness, something which shows no sign of abating any time soon.

The post What’s missing in the facilities debate appeared first on Devpolicy Blog from the Development Policy Centre.

For details on the author, please click on the blog title immediately above, which will redirect you to the Devpolicy Blog.


Design & Evaluation – We’re Better Together

Last month Clear Horizon and The Australian Centre for Social Innovation (TACSI) had a great time delivering our new, sold-out course on reconciling the worlds of Human Centred Design (HCD) & evaluation. Hot off the press, we’re proud to introduce the Integrated Design, Evaluation and Engagement with Purpose (InDEEP) Framework (Figure 1), which underpinned the course.

Figure 1: The InDEEP Framework

The InDEEP Framework

The Integrated Design, Evaluation and Engagement (InDEEP) Framework (Figure 1) has been developed through reflection on two years of collaboration between TACSI (the designers) and Clear Horizon (the evaluators). At a high level, the InDEEP Framework conceptualises the relationship between design phases and potential evaluation inputs. Simply, the top half of the diagram sets out the design cycle in five phases (discover, define, prototyping, piloting and scaling). The bottom half of the diagram lists potential evaluative inputs that can be useful at each design phase (design research, developmental testing, pilot evaluation and broader impacts).

The journey starts by setting up the design and evaluation relationship for success, by carefully thinking through governance structures, role clarity and scope. In relation to scope, consideration should first be given to which design phase the project is currently in, or where it is expected to be when the evaluation occurs. Once the design phases of interest have been diagnosed, the different types of evaluation needed can be thought through. It’s definitely a menu of types of evaluation, and if you chose them all you’d be pretty full!

If the design project is in the discover to early prototyping phases, it is likely that developmental evaluation approaches will be appropriate; in these phases evaluation should support learning and the development of ideas. If the design project has moved from late prototyping through to the broader impacts phases, then more traditional formative and summative evaluation may be more appropriate; in these phases there is more of a focus on accountability for achieving outcomes and impacts.

The InDEEP Framework also acknowledges that there are some evaluative tools that are useful at all phases of the design cycle. In the diagram these are described as golden threads and include both modelling (e.g. Theory of Change) and facilitating learning (e.g. having a critical friend). If needed, process and/or capability evaluation can also be applied at any design phase.
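As a rough illustration of that menu, the sketch below (in Python, purely for illustration) shows one way the phase-to-evaluation mapping could be written down. The data structure and the helper function are our own shorthand reading of the diagram, not something prescribed by the InDEEP Framework.

```python
# An illustrative sketch of the InDEEP 'menu': which evaluative inputs tend to
# suit which design phase. The mapping below is a simplified reading of the
# framework, not an official specification.

DESIGN_PHASES = ["discover", "define", "prototyping", "piloting", "scaling"]

EVALUATION_BY_PHASE = {
    "discover": ["design research", "developmental evaluation"],
    "define": ["design research", "developmental evaluation"],
    "prototyping": ["developmental testing", "developmental evaluation"],
    "piloting": ["pilot evaluation (formative/summative)"],
    "scaling": ["broader impact evaluation (summative)"],
}

# 'Golden threads' that are useful at every phase of the design cycle
GOLDEN_THREADS = ["Theory of Change modelling", "facilitated learning (critical friend)"]


def suggest_evaluation(phase: str) -> list:
    """Return candidate evaluative inputs for a given design phase."""
    if phase not in DESIGN_PHASES:
        raise ValueError(f"Unknown design phase: {phase}")
    return EVALUATION_BY_PHASE[phase] + GOLDEN_THREADS


# Example: a project in the prototyping phase
print(suggest_evaluation("prototyping"))
```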

Key insights from the course

Course participants came from different sectors and levels of government. They represented a cohort of people applying user-centred design (UCD) approaches to solve complex problems (from increasing student engagement in education through to the digital transformation of government services). At some level, everyone came to the course looking for tools to help them demonstrate the outcomes and impact of their UCD work. Some key reflections from participants included:

  • Theory of Change is a golden thread
  • Evaluation can add rigour and de-risk design
  • It’s important to quarantine some space for a helicopter view (developmental evaluation)
  • Be ready to capture your outcomes and impact

Theory of Change is a golden thread

Theory of Change proves to be one of our most versatile and flexible tools in design and evaluation. In the design process it’s able to provide direction in the scoping phase (broader goals), absorb learnings and insights in the discover phase (intermediate outcomes), and test out theories of action during prototyping. When you’re ready for a meatier evaluation, Theory of Change provides you with a solid evaluand to refine, test and/or prove in the piloting and scaling phases. At all stages it surfaces assumptions and is a useful communication device.

Evaluation can add rigour and de-risk design

UCD is a relatively new mechanism being applied to policy development and social programming. It can take time for UCD processes to move through the design cycle to scaling, and it is also assumed that some interventions may fail; this has the potential to make some funders nervous. Participants confirmed that developmental approaches which document key learnings and pivot points in design can help to communicate to funders what has been done. More judgmental process and capability-building evaluation can also help demonstrate to funders that innovation is on track.

It’s important to quarantine some space for a helicopter view (developmental evaluation)

The discover, define and early prototyping phases are the realm of designers, who primarily need space to be creative and ideate (be in the washing machine). In these early phases, developmental evaluation can enable pause points for the design team to zoom out, take stock of assumptions, and make useful adjustments to the design (a helicopter view of the washing machine). Although participants found the distinction between design and developmental evaluation useful, they took away the challenge that design teams did not often distinguish the roles. One solution to this challenge was to rotate the developmental evaluation role within design teams, or, resources permitting, to bring an external developmental evaluator onto the design team.

Be ready to capture outcomes and impact

As designs move into the late prototyping, piloting and scaling phases, teams come under increased pressure to document outcomes and impact. In many instances this is in part to show funders what has been achieved through the innovation process. The key message for participants was that it is important to plan for capturing outcomes early on. One way to be ready is to have a Theory of Change. If you are expecting to have a population-level impact, it may also be important to set up a baseline early on. Equally, if you are chasing more intangible outcomes like policy change, then you should think through techniques like Outcomes Harvesting and SIPSI so that you are ready to systematically make a case for causation.

We would love your feedback on our InDEEP Framework – to join the conversation, tweet us at @ClearHorizonAU. If you are interested in learning more, we are running our InDEEP training again early next year; see our public training calendar for more details.

Tom Hannon and Jess Dart