Evidence-based mantra. The day kicked off with Kathryn Newcomer, President-Elect of the American Evaluation Association (AEA), who described the “evidence-based mantra” that has American decision-makers in its grip. Kathryn gave us a fascinating glimpse into how evidence does, and doesn’t, influence government policy in America. She painted a picture of a myriad of actors demanding and collecting evidence on impact, in particular ‘DEBIS’: “demonstrated evidence-based intervention solutions”. Most hold prescriptive notions of rigour around what constitutes evidence, with the Obama administration using a three-tiered classification of evidence: (1) preliminary/exploratory, (2) moderate/suggestive and (3) strong/causal. Some even call the Obama administration the golden era of evidence! Kathryn noted that some groups are now stressing that it’s not enough just to demonstrate whether something works; we also need to know in what contexts it will work, and who it will and won’t work for, so that policies and programs can be better tailored to different contexts.

A learning agenda for Government. Kathryn also talked about various strategies that offer some hope. One was a learning agenda: asking Government portfolio managers what they need to know to implement their programs, and then ensuring that evaluations answer those needs. She also noted that the Netherlands conducts a ‘learning audit’, examining the extent to which government agencies are actually learning from the data!

Significant Policy Improvement tool. I presented my new tool to a full room of enthusiastic evaluators. It’s a mash-up of Most Significant Change and Outcomes Harvesting. I used this technique with the Australian aid program in Indonesia, where we harvested about 30 candidate changes across the aid program, documented them in a particular format and put them through a verification panel. From this process, 15 were classified as significant. I promise to write a paper on it; meanwhile, here are the slides that I presented.

Conference wrap-up. John Gargani, President of the AEA, gave us a hilarious overview of the conference, weaving in cat memes and a visit to Santa while somehow managing to cover the main themes of the conference too. I would sum them up as: 1) the need for unity across the disparate evaluation organisations, 2) how to work respectfully and mindfully across cultures, 3) the emerging place for human-centred design and how it helps us have empathy for those we are designing programs for, 4) the role of evidence in evaluation and design, and 5) that what we all have in common is the desire to create positive impact. I loved the conference this year and am full of inspiration for the growing Australasian and global evaluation community!

Dr Jess Dart