Wednesday afternoon at the #Eval17 AEA conference was quite extraordinary - I sat there with over 4,000 evaluators in the plenary as Kathy Newcomer, president of the AEA, presented her crisp 7 challenges and 7 solutions for learning and evaluation. The last of her 7 challenges was about evaluation being blind to bias and racism – these are baked into policies and programs, and we are not immune. She urged us to incorporate equity into our evaluations.

Then this morning our keynote was Melvin Hall; his talk ‘Dialogues on Race & Class in America’ was very powerful. He challenged us with the following question – is there one policy we can develop right now to address inclusion? Race and racism were built into the Constitution and fabric of the United States. He quoted Cronbach’s thesis, saying that we have to look beyond regurgitating what lay people say in evaluations. We need to dig deeper and go further. Issues of race and class are hidden, and we need critical analysis to unpack assumptions. He urged that society can only benefit from learning from evaluation when our evaluation society is clear on its role. Evaluators need to be strong on the things that are important for the benefit of society.

Later I went to a panel on ‘getting real about causality and complexity’. Sanjeev Sridharan urged us to assess a) the complexity of the context and b) the degree of contestation between stakeholders before choosing an evaluation design. In complex settings he suggested we consider the idea of an ‘unbiased estimate’ instead of getting lost in attribution. Nonetheless, he reckons it is linguistically lazy to say that a program “works” – we need to say how it works (a realist approach). The final speaker challenged the first and laid out some fancy quasi-experimental designs for working in complex settings, including three- and four-armed experimental designs with enhanced or staged treatments.

Then there was the charming group of European game designers, showing us how you can use games at different levels of complexity, and for testing capacity or assessing attributes. That was great fun. My last session of the day was a mind-boggling example of working across 16 collective impact clusters in Nigeria, presented by Charles Abani. They had a very neat tool that used a blend of outcomes harvesting and strategy mapping to enable NGOs to map out where they were working and what they were achieving, and to re-plan and adapt together. They also used social network mapping to show major changes in connections.

I also presented today on ‘Human Centred Design and Evaluation: an Australian perspective’. Slides here. It was live-streamed! A very lovely, receptive group.

Jess Dart