In this post our Founding Director, Dr Jess Dart, shares some reflections from the recent Complexity and Evaluation in an Uncertain World Conference in Melbourne.

A big focus of the Complexity and Evaluation in an Uncertain World Conference was on Developmental Evaluation. Michael Quinn Patton Skyped in for the morning with his amazing words of wisdom. I particularly loved his description of why we need evaluators. Patton used the example of bad breath: he explained that it is hard to assess whether you have bad breath – blowing into your hand and smelling it does not work – and more often than not you need to get a second opinion.

Hope is the main thing I’ll take away from the conference. There is hope that many extremely interesting roles exist for evaluators who can be agile and co-evolve with programs. A highlight for me was the afternoon session by Mark Cabaj, who talked enthusiastically about the six different niches that Developmental Evaluation dwells in. They were:

Pre-formative for programs that are yet to be developed
Model replication for programs that are being transferred from one context to another (for example scaling up and out)
Ongoing development for programs that are expected to evolve over time
Cross-scale initiatives for programs that are attempting to achieve systemic change (for example innovation strategies)
Crisis for programs that work in disaster settings
Patch evaluations for programs operating on multiple tracks simultaneously

Cabaj’s examples were excellent and inspiring. They finally answered a question for me – is Developmental Evaluation really a new thing? My previous conception was that it was similar but slightly different to participatory and/or embedded evaluation models that focus on learning and supporting the program team – Clear Horizon’s bread and butter! Now I get that Developmental Evaluation’s key point of difference is more about the stage of the program: programs are not yet formed when developmental evaluators engage. So for me, it is more akin to program design or action research.

The conference confirmed that as evaluators we must be agile and bring a repertoire of tools that yield to the pace of the design if we want to serve the development of programs. While there are opportunities for us to hone new skills – for example rapid ideas generation and prototyping – it was also clear that evaluators already have some of the necessary skills. We bring critical (or evaluative) thinking and methodological rigour to the design process. At Clear Horizon we have used needs assessments and situation analysis to inform collaborative options and theories of change for new programs. This makes for exciting work – I am smitten.

What tools have you found useful to contribute to program design? To join the conversation please tweet your thoughts and tag us (@ClearHorizonAU).