Evaluative practices

Evaluation is undergoing a shift in how it is done and in what is evaluated. It is moving beyond a summative process of measuring a project's outcomes towards a learning method that produces ongoing innovation.


Aligning design and evaluation to generate new ways of knowing

Evaluation tends to sit in the background of change projects, coming to the surface when evidence is needed to continue the project, to pivot in another direction, or to declare that it has failed to achieve the project’s expectations.

Conventionally, it has also operated on a transactional basis – measuring the return on the investment made by a funding organisation or a corporate or government sponsor.

Recent developments, which we will explore here, question both the idea of measuring as well as what is being measured and what those measurements are based on. These questions indicate fundamental shifts in the nature of evaluation.

Interestingly, the questions align with what we think of as designerly approaches to change, namely that change is iterative rather than fixed; the process continually addresses the impact of change on those involved directly and indirectly; and that the evaluative process itself is a learning experience, continually reflecting on the experience and what is working, not working or surprising us.

It can be useful to see this as extending evaluation instead of replacing the conventional form with another one. In other words, the established and transactional form of ROI (return on investment) is still required. Instead, it is enhanced by bringing in new considerations and alternative knowledge systems.

I’ve extracted ideas from two panel sessions (see below) that point to questions about what to evaluate and how, and to the emergent practices coming from these explorations.

As Penny Hagen (see below) notes, evaluation becomes the learning dimension of change: reviewing and reflecting as we go, making agile moves to improve and discover in a cyclical rather than linear manner and, importantly, co-designing how we learn from the process.

In your development of an implementation model, you are asked to embed evaluation into the DNA of the model. That is, to see it as a positive value that allows you to gain a clear view on how the process is going and who it is impacting. In this way, evaluation becomes a 'driver' of change rather than a detached view of the change-experience.

As your project is speculative, it is not possible to test your assumptions in the field. However, it is useful to challenge what you think will work using design research methods. For example, the personas you generate will include people who have alternative cultural knowledge systems. What mechanisms do you have in your process that enable these alternative ways of knowing to drive change? Who do you include in your team or advisors to ensure that there is deep knowledge sharing, not a tokenistic nod to cultural diversity? What are the ‘charters’ you refer to in the formation of your model? For example, the UN Declaration of Human Rights, the UN Sustainable Development Goals, or the Australian Indigenous Design Charter. Recent government reports are also valuable, such as Women’s Workforce Participation and Respect@Work.

Sketching an evaluation practice

A good place to start is to sketch out where and how evaluation might work in your overarching implementation model. Map out the skeleton of the implementation model, its stages as you see them, the people/personas, the milestone points and alerts (where you think things could go awry or where there is potential for innovation).

Overlay this with where and what, how and with whom you might evaluate. Do this before listening to the talks below, then return afterwards and consider the new moves you might make using an enhanced evaluation practice.

Once you have a rough sketch of how the implementation model and your evaluation process intersect, you can begin to define the critical moments and objectives for your model.

Begin with a Theory of Change

A Theory of Change: “explains how activities are understood to produce a series of results that contribute to achieving the final intended impacts. It can be developed for any level of intervention – an event, a project, a programme, a policy, a strategy or an organization. A theory of change can be developed for an intervention:
• where objectives and activities can be identified and tightly planned beforehand, or
• that changes and adapts in response to emerging issues and to decisions made by partners and other stakeholders.” (Rogers, P., Theory of Change, Methodological Briefs: Impact Evaluation No. 2, UNICEF, https://www.betterevaluation.org/sites/default/files/Theory_of_Change_ENG.pdf)

In other words, a ToC (sometimes referred to as a Theory of Transformation) is a way to note down the collective objectives for your project and the means to achieve them. As a theory rather than a plan, it is open to being tested and changed as required. There are many different ways to represent a ToC diagrammatically. A simple map or chart is all that is required for this project.
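As an illustration, such a map can be sketched as a simple staged chain from inputs through to impact. The stages and entries below are illustrative placeholders only, not drawn from this course or from the UNICEF brief.

```python
# A minimal sketch of a Theory of Change as a staged map.
# All stage names and entries are hypothetical examples.
theory_of_change = {
    "inputs": ["funding", "design team", "community advisors"],
    "activities": ["co-design workshops", "prototype trials"],
    "outputs": ["tested service prototype"],
    "outcomes": ["increased community participation"],
    "impact": ["improved community wellbeing"],
}

def as_chart(toc):
    """Render the ToC as a simple text chart, one stage per line."""
    return "\n".join(f"{stage}: {', '.join(items)}" for stage, items in toc.items())

print(as_chart(theory_of_change))
```

Because it is a theory rather than a plan, any entry in the map can be revised as evidence emerges; the chart is simply redrawn with the updated assumptions.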

In Evaluation For Impact, TACSI (The Australian Centre for Social Innovation) uses this diagram to show the alignment between design and developmental and capability evaluative practices (https://tacsi.org.au/journal/design-and-evaluation-for-impact/):

Alternative views of evaluative practices

Romlie Mokak is a Commissioner at the Australian Productivity Commission. A Djugun man and a member of the Yawuru people, he was appointed in 2018 as the first Indigenous Policy Evaluation Commissioner. In his role he has given prominence to Indigenous forms of evaluation, both to ascertain the effectiveness of programs for Indigenous development and to consider how such evaluation might play out in arenas beyond an Indigenous focus.

He is joined in a seminar by Craig Ritchie, a Dhunghutti man from the Biripi nation and Chief Executive Officer of the Australian Institute of Aboriginal and Torres Strait Islander Studies (AIATSIS); Adjunct Professor Muriel Bamblett (Victorian Aboriginal Child Care Agency); Dr Ganesh Nana (New Zealand Productivity Commission); and Professor Gemma Carey (Centre for Social Impact), on evaluation from an Indigenous perspective.
While they accept the need for conventional measures to track the health of Indigenous communities, they propose broader, more holistic ways to shift towards a positive evaluation approach. This is achieved by injecting Indigenous values into the process.

Craig Ritchie first outlines the assumption, embedded in colonising power structures, that the Indigenous population would be eliminated.

An alternative narrative starts to appear when the perspectives of the ‘excluded’ are incorporated into the overarching account of how humans live in a society.

Together, the panel addresses the idea of measurement and what is being measured.

The full panel session, Impact 2021, is linked in the resources below. This provides a nuanced discussion on how changing the evaluation process opens the potential to see and grow new frameworks.

Niho Taniwha

Penny Hagen and Sophia Beaton are part of a movement in Aotearoa New Zealand that is becoming known as whānau-led design. Whānau (pronounced ‘far-nau’) is Māori for family and community, and the relationships and responsibilities they encompass. In their work at The Southern Initiative, the teams worked directly with Māori to find an appropriate form of evaluative practice that starts with whānau values.

Here, Penny Hagen talks about the emergence of this new evaluative approach, Niho Taniwha, using the language of prototypes, values and place. You can find the toolset here: https://www.aucklandco-lab.nz/resources-summary/niho-taniwha. This is an example of innovation in the implementation process that can influence how evaluation and design are integrated in future. Naturally, this is beyond the scope of this course. Nevertheless, this method of embedding evaluation as a driver in the change process points to the thinking required.

Penny identifies three points of interest in this work: outcomes for whānau; systems changes; and strategic learning. The latter is used internally, between the design teams, as well as externally with stakeholders. Niho Taniwha becomes a way for the teams to evaluate their work, reflect on the learnings and draw these learnings into the Niho Taniwha framework in order to understand them better.

As the work developed, they saw not only a need to embed evaluation but also an opportunity to embody the learning experience. This manifested in dance and three-dimensional models as a means to access a deeper level of understanding.

Penny Hagen and Sophia Beaton describe the evolution of their own evaluation practices in this rich exploration of the multi-dimensional Niho Taniwha.

Resources