AES International Evaluation Conference Day 2!

It’s been an amazing AES conference so far – lots of interesting topics and great conversations. Like Jess, the highlight for me so far has been the keynote speaker from day one, Dr Tracey Westerman – an Aboriginal woman from the Pilbara in WA who has been a trailblazer in Aboriginal mental health. The key takeaway message for me was that measurement matters – but even more importantly, the right measures matter. She described how, in many cases of Aboriginal youth suicide, there had been no prior mental health assessment. And when assessment tools are used, they are Western-based and not culturally appropriate, which can lead to misdiagnosis. For over 20 years, Tracey has argued that it is not appropriate to ‘modify’ existing measures because of their inherent racism; the only way is to develop new tools from the ground up. Tracey has developed seven tools specifically for Aboriginal youth mental health with very little funding – no easy feat. It was a truly inspiring presentation from an amazingly passionate and optimistic woman who really cares about her people.

A highlight from day 2 was a panel of designers and evaluators from Australia and New Zealand – Jess Dart, Kate McKegg, Adrian Field, Jenny Riley and Jacqueline (Jax) Wechsler – who explored how we might move out of the traditional box of program evaluation to make a bigger difference. They discussed the role of evaluators in supporting people to move beyond measuring, to think through whether we are doing the right things and whether we are really making a difference across complex systems. Questions covered included: where can evaluators add value in a co-design process, does evaluation get in the way and slow things down, and do evaluators need new skills to help analyse and make sense of big data? Jenny reminded us that evaluators are curious learners, and that we need to get on board with the digital revolution.

One really interesting concurrent session I attended was on the use of rubrics, by Julian King, Kate McKegg, Judy Oakden and Adrian Field. They presented the basics of rubrics and then described how rubrics can be a tool for democratising evaluative reasoning, engaging stakeholders and communicating results. They presented two very different examples – one in a developmental evaluation, and the other using rubrics to evaluate the value for money of an agricultural funding program. I found the second example particularly interesting, having experienced the challenges of answering the value-for-money question. Using a rubric in this way is great for balancing the multiple dimensions of value from different perspectives.

Another memorable moment was an ignite session (a really short presentation). Damien Sweeny and Dave Green from Clear Horizon did a great job of presenting a rather convincing argument for placing more emphasis on monitoring over evaluation – Big M vs small e, as they call it. And they cheekily suggested changing the name of the AES to AMeS. An interesting thought.

The day finished with a second keynote speaker, Gary VanLandingham, from the Askew School of Public Administration and Policy. He reminded us of the vast amount of evaluative information available through ‘What Works’ warehouses. They are a good place to start when beginning an evaluation, but they come with warnings. The main caution for me is that they privilege certain types of data over others, and they don’t include what doesn’t work, or things not measured using experimental approaches (such as randomised control trials and quasi-experimental methods).

The day was topped off by a short ferry ride to Luna Park where we had a lovely conference dinner overlooking the opera house. Sydney is a very beautiful city and a great setting for a wonderful conference.

Now for day three….

Have you visited our booth at the conference?

AES International Evaluation Conference Day 1!

Dr Tracey Westerman, a proud Aboriginal woman, had me totally gripped throughout her keynote presentation on Day 1 at the AES International Evaluation Conference. She began with the statistic that Australia has the highest rate of child suicide in the world. But she cautioned us to be optimistic and focus also on the positive outcomes that are occurring, such as the six young Aboriginal people graduating from medicine in WA this year alone. She stressed that education is the most powerful solution in the world and described how in one generation her family ‘closed the gap’ – she gained a doctorate despite living very remotely, with a family background of very limited formal education.

She walked us through the importance of developing assessment tools that are culturally sensitive and to avoid confusing causes with risk factors. It seems many tools in existence today are culture-blind and can lead to stereotyping and actual discrimination. She has developed a whole range of specific assessment tools that are culturally sensitive, including an assessment tool for work-based culture. She made the case that there hasn’t been the right sort of research with Aboriginal people, that the causes are different, and need to be assessed in culturally sensitive ways.

She’s working on ‘building an army’ of Indigenous psychologists across the country to address child suicide and influence child protection. She ended with the note that there is nothing that can’t be achieved by Aboriginal people if they believe in themselves.

After this I had the privilege of moderating a panel about the client-consultant relationship, a topic dear to my heart and my business! The panel was from the Victorian Department of Education and Training (DEET), as well as consultants from Melbourne Uni and Deloitte Access Economics. DEET have set up a ‘state of the art’ supplier panel with over 30 suppliers on it, and are working to deepen the partnership between evaluation suppliers and commissioners, as well as embedding a culture of evaluation. They were generous in sharing their challenges, including lots of tricky moments around data sharing and IP.

Just before lunch I had the pleasure of indulging in a session of evaluation theory led by Brad Astbury and Andrew Hawkins from ARTD. They explored seven great thinkers of evaluation, who laid the theoretical foundations of our discipline over its seven-decade journey. So lovely to wallow in theory – I remember savouring that learning when studying for my PhD. Their conversation was framed around Foundations of Program Evaluation by Shadish, Cook and Leviton (1991) – it was my evaluation textbook back then, and good to see it’s still valued!

It was a busy day for me, and I also convened a fun panel on digital disruption. We had a great spread of panellists, with a designer from Paper Giant, Ruben Stanton; a data scientist, Kristi Mansfield; a social procurement expert, Chris Newman; as well as our very own Chief Innovation Officer, Jenny Riley. We explored the trends, the opportunities and the scary stuff that might come with the fourth industrial revolution – yes, the robots are here! I saw a few people jump in their seats when Jen described how Survey Monkey stores data overseas and is not bound by the same data security regulations. We also looked into the amazing opportunities for evaluators to be the sensemakers of Big Data. When the firehose of data hits you in the face, maybe the evaluators will be there to bring you calmly back to the most important questions. We also explored the opportunities for evaluators to get involved in evaluating new technology and innovation, and to help consider how ready the system is to receive these innovations. I didn’t want it to end!

The day was topped off with a keynote from David Fetterman on empowerment evaluation. Empowerment evaluation is now 26 years old! David explained how empowerment evaluation is a self-evaluation approach designed to help people help themselves. It felt familiar!

Progress 2019

Progress 2019 was held over two days in June to address the pressing social and environmental issues the world is currently facing. The conference is a biennial event at which progressive thinkers and change makers in the social and environmental space come together to discuss current issues. Over 1500 attendees from Australia and around the world attended the event at Melbourne Town Hall. Headline speakers included Anat Shenker-Osorio, Ellie Mae O’Hagan, Bruce Pascoe, Kumi Naidoo, Owen Jones and Behrouz Boochani. The two-day program was emceed by Yassmin Abdel-Magied.

Kaisha Crupi and Shani Rajendra represented Clear Horizon at Progress 2019 in a bid to understand how Clear Horizon can better engage in this space. Their key learnings are as follows:

  1. We work in complex and messy systems that are tricky to navigate

We found that intersectionality was a common thread throughout this year’s conference. The key speakers noted that Australia cannot achieve true social change unless that change includes all Australians, especially the interests of First Nations people. Panels such as “First Nations Justice” and “First Nations Land Justice” highlighted the significant gaps in current social change movements. Similarly, the “A Very Human Climate Crisis” panel reinforced that Australia cannot achieve true social change unless it is coupled with environmental change. Progress 2019 drove home the message that our everyday challenges in the social and environmental change space are complex and multi-faceted in nature. It was an important reminder that we all need to work collaboratively across different initiatives to tackle such complex issues.

The discussion on intersectionality tied in well with other discussions at the conference around embedding into progressive practice a greater understanding of the complex systems in which we operate. In his keynote address, Kumi Naidoo, the Secretary-General of Amnesty International, spoke to the role of systems change. He outlined three areas where the world needs to push the boundaries: system re-design, system innovation and system transformation. Meeting these requirements would put us in a better position to achieve social and environmental change. Kumi’s sentiments were echoed by Lyn Morgain, Chief Executive at CoHealth, who stated that:

“We need to acknowledge that transformation is iterative, dynamic and murky.” 

For those of us at Clear Horizon, the discussion of intersectionality and systems thinking was really exciting, as this is how we like to work. Clear Horizon prefers to work with organisations that are already working across multi-faceted issues, particularly where social and environmental justice intersect. This also includes systems-thinking projects and organisations.

  2. Working in complexity is best done through strong partnerships

Each panel discussion at Progress reinforced the importance of partnerships in trying to achieve social and environmental change. At the “Philanthropists & Changemakers: Effective Partnerships to Win Systemic Change” conversation, the panellists discussed how, although developing relationships and partnerships is challenging, a true partnership is rewarding once it is fully developed. It was noted that the key element is coming to an agreed understanding in order to build trust. Building trust takes time, and understanding what everyone wants to achieve from the partnership (both as individuals and together) takes careful research and negotiation. We learnt that all organisations can benefit from developing strong partnerships with like-minded organisations and working together towards a common goal.

This discussion reinforced how lucky we are to work in a space with existing partnerships that have formed over the years and are now incredibly strong. Clear Horizon has long-standing relationships with several organisations. We have found that, when working in increasingly complex environments, a wide range of strong partnerships helps us have more social and environmental impact than if we were going it alone.

  3. We need to make space for marginalised voices

We were reminded throughout the conference of the importance of creating space for people from underrepresented groups in the social and environmental change space. Speakers at Progress highlighted the sometimes tokenistic “let them speak” approach to inclusion that is common in current practice. Instead, change makers should look towards organisations that provide a supportive platform and tools that people from underrepresented groups can choose to use when speaking for themselves. Phillip Winzer from Seed said that the question organisations should be asking themselves is:

“What can we do to support Aboriginal and Torres Strait Islander people to implement the solutions they want?”

This question made us reflect on how organisations currently work, both Clear Horizon and the other organisations we work with. We also thought about current Clear Horizon strategies for recruiting people from underrepresented communities, as well as how we work with these communities. This includes people advising us on how to better create spaces where people feel welcome and accepted.

Progress taught us that no matter how progressive or values-driven organisations are, to achieve social and environmental change, they need to be proactive in involving underrepresented groups. As Kumi Naidoo said in his keynote address:

“If progressive organisations do not internalise fully the challenges we face, then we are part of the problem.”

  4. We need to think about how we communicate

Progress also showed us that the social and environmental sectors need to think about how they connect with wider audiences. This includes using more accessible, plain language so everyone can understand: avoiding jargon, providing image descriptions for people with a visual impairment, and communicating with drawings and cartoons rather than text. Anat Shenker-Osorio, Founder and Principal of ASO Communications, gave a presentation on communication in campaigning which highlighted the need for strengths-based communication that avoids “othering”. This was quite an important learning for organisations trying to amplify alternative voices in the spaces where they work.

From a Clear Horizon perspective, we saw that we can further develop our practices. This includes how we communicate in our workshops with people with a disability (through digital storytelling or verbal visual descriptions), avoiding our evaluation and design (and sometimes sector-specific) jargon, and having clear messaging and metrics of success.

Conclusion 

Progress 2019 was a thought-provoking two days. A major learning for us was how Clear Horizon could include more diverse voices in our spaces. This could be done through more meaningful partnerships and by working towards jargon-free, accessible communication. We felt that, as a values-based, mission-driven organisation, it is essential for Clear Horizon to continue engaging with progressive movements such as Progress. We also realised that our current work in systems thinking, monitoring and evaluation, and design provides us with a useful toolkit to contribute to the social and environmental justice space. We hope to work with more organisations striving for this in the future.

Human Development Monitoring and Evaluation Services contract for 2019-2023

Clear Horizon, in partnership with Adam Smith International (ASI), has been awarded the Papua New Guinea Human Development Monitoring and Evaluation Services (HDMES) contract for 2019-2023 by the Australian Government’s Department of Foreign Affairs and Trade (DFAT). We are really excited to be working with the HDMES team, to be based in Port Moresby, along with ASI, DFAT, the Government of Papua New Guinea, and partners across the health and education sectors.

Health and education are central to PNG’s sustainable development. The health system struggles to meet the needs of its growing population, while the country’s education system lacks sufficient funding, well-trained teachers and administrators, and the planning and management necessary to effectively utilise its limited resources. Australia, as the largest donor to PNG, aims to make a positive contribution to the health and education systems. Between 2015 and 2018, $264.4m was specifically allocated to education and $276.9m to health investments. In health, this focuses on workforce planning, communicable diseases, family planning, sexual and reproductive health, and maternal and child health. In education, Australia’s objective is to support teachers to improve the quality of teaching and learning, improve primary school infrastructure, and reduce the barriers that prevent children attending and staying at school for a quality education.

Through HDMES, Adam Smith International in collaboration with Clear Horizon will provide external, independent M&E Services to DFAT and GoPNG regarding health and education investments at the program and portfolio levels. Support will include developing portfolio level strategies, performance assessment frameworks and annual reports; advising on baselines and M&E frameworks for programs; quality assuring design, monitoring and evaluation deliverables; and conducting independent evaluations of DFAT investments.

Logan Together Progress Report released

Last week the Queensland Government released the Progress Report and ‘Statement of Achievement’ that Clear Horizon produced for Logan Together. Logan Together is one of Australia’s largest and most well-known collective impact initiatives, involving over 100 partners working together to improve the wellbeing of children (0-8 years) in Logan, Queensland.

The Progress Report provides a comprehensive assessment of Logan Together’s progress since inception, identifies recommendations for areas to strengthen, and celebrates the stories of success so far. For more about the background and commissioning of the Report click here.

What did the results show?

The findings showed that the Logan Together collective is making sound and positive progress towards the longer-term goals of its ‘Roadmap’ via a collective impact approach. The collective had clearly contributed to community-level and systemic changes, and the backbone team had played a catalysing and enabling role. Importantly, there was evidence of small-scale impact for families and children and early instances of change.

Outcomes for families and children include improved engagement of certain at-risk cohorts, such as women not accessing maternity services or families with young children experiencing tenancy difficulties and instability; improved parental awareness of childhood development needs and milestones in targeted communities; early instances of improvement in kindy enrolment for small cohorts; and changes resulting from increased reach of services.

Systems-level changes include increased cross-sector collaboration and breaking down of silos, integrated approaches to strategic delivery, innovating new services and models, changes in practice, shifts in mindset and attitudes, and early changes in resource flows. Logan Together also contributed to outcomes ‘beyond place’ through their advocacy and policy reform efforts.

In some cases, changes have been achieved that would not otherwise have happened and in other examples Logan Together has advanced the progress of outcomes being achieved. Logan Together has also made good progress in developing frameworks for shared measurement and learning, and is starting to generate a body of evidence around their collective work. 

The progress study is one example of the measurement, evaluation and learning work that Clear Horizon is doing with backbone organisations and a diverse range of collectives.

It was linked to the work Clear Horizon led to develop the Place-based Evaluation Framework, a national framework for evaluating PBAs (pending release). Like the progress study, the framework was commissioned by the Commonwealth and Queensland Governments, and partnered by Logan Together.

If you want to read more about the methods we used, including outcomes harvesting, see our case study.

Well done Logan Together, and thank you for the opportunity to work with you and your partners on your monitoring, evaluation and learning journey.

What’s missing in the facilities debate

Both facilities themselves and debates on their effectiveness have proliferated in recent years. Working papers, blog posts, conference presentations (see panel 4e of the 2019 AAC), DFAT internal reviews, and even Senate Estimates hearings have unearthed strong views on both sides of the ledger. However, supporting evidence has at times been scarce.

Flexibility – a two-edged sword?

One root cause of this discord might be a lack of clarity about what most facilities are really trying to achieve – and whether they are indeed achieving it. This issue arises because the desired outcomes of facilities are typically defined in broad terms. A search of DFAT’s website unearths expected outcomes like “high quality infrastructure delivery, management and maintenance” or – take a breath – “to develop or strengthen HRD, HRM, planning, management, administration competencies and organisational capacities of targeted individuals, organisations and groups of organisations and support systems for service delivery.” While these objectives help to clarify the thematic boundaries of the facility (albeit fuzzily), they do not define a change the facility hopes to influence by its last day. This can leave those responsible for facility oversight grasping at straws when it comes to judging, from one year to the next, whether progress is on track.

Clearly, broad objectives risk diffuse results that ultimately don’t add up to much. With ‘traditional’ programs, the solution might be to sharpen these objectives. But with facilities – as highlighted by recent contributions to the Devpolicy Blog – this breadth is desired, because it provides flexibility to respond to opportunities that emerge during implementation. The decision to adopt a facility mechanism therefore reflects a position that keeping options open will deliver greater dividends than targeting a specific endpoint.

Monitor the dividends of flexibility

A way forward might be to better define these expected dividends of flexibility in a facility’s outcome statements, and then monitor them during implementation. This would require hard thinking about what success would look like – for each facility in its own context – in relation to underlying ambitions like administrative efficiency, learning, relationships, responsiveness to counterparts, or multi-sectoral coherence. If done well, this may provide those in charge with something firmer to grasp as they go about judging and debating the year-on-year adequacy of a facility’s progress, and will assist them to manage accordingly.

This is easier said than done of course, but one tool in the program evaluation toolkit that might help is called a rubric. This is essentially a qualitative scale that includes:

  • Criteria: the aspects of quality or performance that are of interest, e.g. timeliness.
  • Standards: the levels of performance or quality for each criterion, e.g. poor/adequate/good.
  • Descriptors: descriptions or examples of what each standard looks like for each criterion in the rubric.

In program evaluation, rubrics have proved helpful for clarifying intent and assessing progress for complex or multi-dimensional aspects of performance. They provide a structure within which an investment’s strategic intent can be better defined, and the adequacy of its progress more credibly and transparently judged.
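To make the anatomy concrete, here is a minimal sketch of a rubric as a nested data structure, using the ‘timeliness’ criterion and the poor/adequate/good standards from the examples above. The descriptor wording is an illustrative assumption, not drawn from any real facility.

```python
# A rubric as a nested mapping: criterion -> standard -> descriptor.
# The criterion ("timeliness") and standards (poor/adequate/good) come
# from the examples above; the descriptors are invented for illustration.

rubric = {
    "timeliness": {  # criterion
        "good": "Deliverables consistently arrive on or before deadline.",
        "adequate": "Most deliverables arrive on time; delays are minor.",
        "poor": "Deliverables are routinely late or missing.",
    },
}

def descriptor(criterion: str, standard: str) -> str:
    """Look up the descriptor for a criterion at a given standard."""
    return rubric[criterion][standard]

print(descriptor("timeliness", "adequate"))
```

Laying a rubric out this way makes the three ingredients explicit: the keys of the outer mapping are the criteria, the inner keys are the standards, and the values are the descriptors.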

What would this look like in practice?

As an example, let’s take a facility that funds Australian government agencies to provide technical assistance to their counterpart agencies in other countries. A common underlying intent of these facilities is to strengthen partnerships between Australian and counterpart governments. Here, a rubric would help to explain this intent by defining partnership criteria and standards. Good practice would involve developing this rubric based both on existing frameworks and the perspectives of local stakeholders. An excerpt of what this might look like is provided in Table 1 below.

Table 1: Excerpt of what a rubric might look like for a government-to-government partnerships facility

Criterion 1: Clarity of partnership’s purpose
  • Strong: Almost all partnership personnel have a solid grasp of both the long-term objectives of the partnership and agreed immediate priorities for joint action.
  • Moderate: Most partnership personnel are clear about either the long-term objectives of the partnership or the immediate priorities for joint action. Few personnel have a solid grasp of both.
  • Emerging: Most partnership personnel are unclear about both the long-term objectives of the partnership and the immediate priorities for joint action.

Criterion 2: Sustainability of incentives for collaboration
  • Strong: Most partnership personnel can cite significant personal benefits (intrinsic or extrinsic) of collaboration – including in areas where collaboration is not funded by the facility.
  • Moderate: Most partnership personnel can cite significant personal benefits of collaboration – but only in areas where collaboration is funded by the facility.
  • Emerging: Most partnership personnel cannot cite significant personal benefits of collaboration.

Once the rubric is settled, the same stakeholders would use the rubric to define the facility’s specific desired endpoints (for example, a Year 2 priority might be to achieve strong clarity of purpose, whereas strong sustained incentives for performance might not be expected until Year 4 or beyond). The rubric content would then guide multiple data collection methods as part of the facility’s M&E system (e.g. surveys and interviews of partnership personnel, and associated document review). Periodic reflection and judgments about standards of performance would be informed by this data, preferably validated by well-informed ‘critical friends’. Refinements to the rubric would be made based on new insights or agreements, and the cycle would continue.

In reality, of course, the process would be messier than this, but you get the picture.
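The endpoint-setting and judgment steps described above can be sketched in code. Everything here is hypothetical: the criteria names, the ordering of standards and the Year 2 judgments are invented for illustration, not drawn from any actual facility.

```python
# Sketch of checking judged standards against desired endpoints by year.
# Standards are ordered so that judgments can be compared to targets.

ORDER = {"emerging": 0, "moderate": 1, "strong": 2}

# Desired endpoints: (criterion, year due) -> standard expected by then.
# Mirrors the example above: strong clarity by Year 2, strong sustained
# incentives not expected until Year 4.
endpoints = {
    ("clarity of purpose", 2): "strong",
    ("sustained incentives", 4): "strong",
}

# Hypothetical judgments reached at the Year 2 reflection, informed by
# surveys, interviews and document review.
year2_judgments = {
    "clarity of purpose": "strong",
    "sustained incentives": "moderate",
}

def on_track(criterion: str, year: int, judgments: dict) -> bool:
    """True if the judged standard meets or exceeds any endpoint due by `year`."""
    target = endpoints.get((criterion, year))
    if target is None:
        return True  # no endpoint due this year
    return ORDER[judgments[criterion]] >= ORDER[target]

print(on_track("clarity of purpose", 2, year2_judgments))    # endpoint due and met
print(on_track("sustained incentives", 2, year2_judgments))  # no endpoint due yet
```

The point of the sketch is simply that, once endpoints are pinned to criteria and years, "is progress adequate?" becomes an explicit comparison rather than an unanchored debate.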

How is this different to current practice?

For those wondering how this differs from current facility M&E practice, Table 2 gives an overview. Mostly, rubric-based approaches would enhance rather than replace what is already happening.

Table 2: How might a rubric-based approach enhance existing facility M&E practice?

Objective setting
  • Existing practice: Facility development outcomes are described in broad terms, to capture the thematic boundaries of the facility. Specific desired endpoints are unclear.
  • Proposed enhancement: Facility outcomes also describe the expected dividends of flexibility, e.g. responsiveness, partnership. Rubrics help to define what standard of performance is expected, by when, for each of these dividends.

Focus of M&E data
  • Existing practice: M&E data focuses on development results, e.g. what did we achieve within our thematic boundaries?
  • Proposed enhancement: M&E data also focuses on expected facility dividends, e.g. are partnerships deepening?

Judging overall progress
  • Existing practice: There are no desired endpoints – for the facility as a whole – to compare actual results against.
  • Proposed enhancement: The rubric enables transparent judgment of whether – for the facility as a whole – actual flexibility dividends met expectations (for each agreed criterion, to desired standards).

Not a silver bullet – but worth a try?

To ward off allegations of rubric evangelism, it is important to note that rubrics could probably do more harm than good if they are not used well. Pitfalls to look out for include:

  • Bias: it is important that facility managers and funders are involved in making and owning judgments about facility performance, but this presents obvious threats to impartiality – reinforcing the role of external ‘critical friends’.
  • Over-simplification: a good rubric will be ruthlessly simple but not simplistic. Sound facilitation and guidance from an M&E specialist will help. DFAT centrally might also consider development of research-informed generic rubrics for typical flexibility dividends like partnership, which can then be tailored to each facility’s context.
  • Baseless judgments: by their nature, rubrics deal with multi-dimensional constructs. Thus, gathering enough data to ensure well-informed judgments is a challenge. Keeping the rubric as focused as possible will help, as will getting the right people in the room during deliberation, to draw on their tacit knowledge if needed (noting added risks of bias!).
  • Getting lost in the weeds: this can occur if the rubric has too many criteria, or if participants are not facilitated to focus on what’s most important – and minimise trivial debates.

If these pitfalls are minimised, the promise of rubrics lies in their potential to enable more:

  • Time and space for strategic dialogue amongst those who manage, oversee and fund facilities.
  • Consistent strategic direction, including in the event of staff turnover.
  • Transparent judgments and reporting about the adequacy of facility performance.

Rubrics are only ever going to be one piece of the complicated facility M&E puzzle. But used well, they might just contribute to improved facility performance and – who knows – may produce surprising evidence to inform broader debates on facility effectiveness, something which shows no sign of abating any time soon.

The post What’s missing in the facilities debate appeared first on Devpolicy Blog from the Development Policy Centre.

For details on the author, please click on the blog title immediately above, which will redirect you to the Devpolicy Blog.


Series on Indigenous evaluation – Connection and Community

This blog forms Part 2 of a four-part series outlining the learnings and reflections of my two colleagues and me, who were fortunate enough to attend the 2019 Indigenous Peoples Evaluation conference in Rotorua, New Zealand.

The conference was an inspiring and transformative experience for me as a non-Indigenous person and evaluator. Having worked as a youth worker before entering evaluation, I particularly resonated with the keynote speech given by Marcus Akuhata-Brown, who, among many things, works with at-risk youth and young offenders. Marcus explored the topic of connection and its role in supporting young people and others when crisis occurs. The following is my short reflection, as a non-Indigenous person, on Marcus’s talk and its implications for evaluation and our society.

Stepping into the forest: Connection in community

Marcus used the analogy of a forest to explore the importance of connection in our society. He described the sensation of standing on the edge of a naturally grown forest and stepping into it: feeling the sudden humidity, and experiencing smells and sounds that you could not perceive from the outside. This sensation can only occur because of the many interdependent, connected species surviving together and supporting each other as a whole. Marcus contrasted this with stepping into a manufactured pine forest, with only a single species surviving neatly and independently. In the second instance you can smell a strong scent of pine, but you will not feel the same level of sensation as in the first. Although the first forest is very complex and harder to understand as a whole, you can appreciate in it a self-sustaining ecosystem that will survive for an incredibly long time without human intervention.

This analogy is very powerful when reflecting on our own societies and the tendency, particularly in Western-dominated culture, to pursue objectivity, independence and scientific rationalism. Our desire to simplify, neaten things up and search for an absolute, independent truth results in a narrow understanding of communities and a separation of people from one another. Humans are complicated and communities are complex, but this is not something to shy away from; instead, it is something to embrace and work with to better our societies as a whole.

Connection as a support system

Marcus also reflected on his work with young people who have fallen into crisis. In situations where the young person is well connected to their community or ancestors, they have a support system and somewhere to go to heal and get back on their feet. I know from my own experience of working with young people experiencing homelessness that a young person who does not have this level of connection can easily fall through the gaps in society, become isolated and lack an attachment to life. The vast majority of the young people I worked with at the homelessness crisis accommodation were there due to a lack of family and community support. When things went wrong for them, they had to cope alone and were not ready.

Marcus therefore urged the audience, and our society, to "separate yourself from what separates you from others" and to let go of the things that don't allow us to connect to place, such as phones, the internet and television.

Complexity & Evaluation Conference April 2019

I had the pleasure of being a co-convenor of this year's evaluation and complexity conference, alongside Mark Cabaj and Kate McKegg, hosted by Collaborate for Impact. The theme this year was "finding our way together", and we were particularly interested in participatory methods and Indigenous evaluation. The conference had two provocateurs – Skye (who we are delighted to say has since joined the Clear Horizon team) and Liz – who provided questions and reflections at the end of each session from an Indigenous evaluation and an adaptive leadership perspective respectively. There are lots of awesome resources here.

Zazie Tolmer from Clear Horizon, Mark Cabaj and Kate McKegg kicked off the conference with a plenary on what systems change is all about. They started with the cosmos and worked backwards. It was a rapid start into the subject matter and felt like we picked up right where we left off at last year's conference. Next up was a presentation from the Kimberley, where Des and Christy introduced us to how a collective of Indigenous leaders, called Empowered Communities, is approaching the work of systems change. They had some great resources to share.

I presented alongside Kerry Ferrance on "co-evaluation", sharing some of our latest thinking around a new take on participatory evaluation for systems change initiatives. We showcased the co-evaluation we conducted with GROW, a systems change initiative in Geelong that focuses on tackling disadvantage by mobilising local businesses to employ local workers. I was a bit nervous to put the idea and the term "co-evaluation" out there for the first time, as I am shakily writing a book on the topic (I have some doubts, as all shaky writers might understand). There was some really useful feedback, particularly that summative co-evaluation could offer an important contribution. When a systems change initiative has been mobilised from the community up, imposing an external evaluation can be particularly inappropriate, and here summative co-evaluation might serve as a great alternative.

Skye also worked with Nan Wehipeihana to produce a booklet on Indigenous evaluation.

Key takeaway messages and insights from the Clear Horizon team are:

  • There is a growing interest and body of knowledge on evaluating systems change that weaves together working with power, participation and complexity.

  • The role we hold as evaluators needs further exploration and definition – our roles often expand to change makers, sense makers and complex space holders.

  • As evaluators we want to disrupt systems and shift power too!

  • Participatory evaluation isn't necessarily a decolonising approach. Indigenous people have their own legitimate forms of evaluation that shouldn't be discounted and are a valuable addition to the toolkit.

  • Systems thinking, à la Meadows, reminds us of the many elements that make up systems. This conference brought to light the many system 'stocks' that are traditionally ignored – in particular, Indigenous knowledge and ways of knowing, seeing and doing. Instead of being 'capacity builders' we need to become 'capacity revealers', to supercharge the change effort.

  • We often feel safe within structures and guiding processes, but perhaps more work needs to be done to safeguard ethical evaluative practice.

  • Des and Christy reminded us of the importance of setting up early, agreed governance arrangements and processes when working in systems change efforts.

Skye will be sharing more of her reflections on the conference, and the work she did with Nan, in an upcoming blog. Watch this space for more on this topic.

Monitoring Evaluation & Learning (MEL) reflection

Our Monitoring, Evaluation and Learning (MEL) course ran at the end of March 2019. This is our most comprehensive course, running over five days.

We had an enthusiastic group of participants from a variety of organisations including some of our newer staff.

Carina Calzoni was the lead trainer for the course, with some of our staff coming in to offer additional insights in their areas of expertise.

Carina reflects on the March MEL program:

“As a presenter, the main highlight was discussions and interaction with the course participants. They were all fully engaged throughout the course and were keen to learn. We had many very interesting and in-depth discussions about how and where to apply MEL in different settings and organisational contexts. It was also great having several Clear Horizon staff (Kaisha, Samiha, Caitlin and Ed) presenting different parts of the course. This really helped to maintain the momentum over the five days.”

If you want to gain extensive training in monitoring and evaluation, our next MEL course runs in Melbourne from 21 to 25 October.

GovComms podcast – Social problems & digital solutions

Jen Riley, our Digital Transformation lead, recently spoke with David Pembroke on a GovComms podcast. The topic of discussion was Social problems & digital solutions.

Areas discussed in this episode:

  • The importance of simplicity in communication
  • Preparing an evaluation checklist
  • The qualities of a good evaluator
  • The shifting focus from output to outcomes
  • The impact of technology on change measurement
  • Making data work for us (and not vice versa)
  • Developing a toolkit for social change
  • Avoiding data overwhelm
  • Resources for those who want to learn more

Click here to listen to Jen’s podcast Social problems & digital solutions.