Evaluation Collective

Evaluation should be collective and transformative action

Blog post by Dr Liz Austen

The Evaluation Collective defines evaluation as an approach that helps us understand and explore what does and doesn't work in a given context, and that is of value to stakeholders.

The aim of evaluation is actionable evidence-informed learning and continuous improvement of process and impact. We stand against the performative, task-focused, tick box evaluation machine, and define evaluation as a collective responsibility within organisations, focusing attention and effort on transformative educational outcomes. (Evaluation Manifesto 2022)

The Evaluation Collective was established in January 2022, when like-minded folk gathered to share their evaluation reflections (as well as their evaluation frustrations, challenges and triumphs). We all had different job titles, worked in different institutions and with differing strategic drivers. Our simmering collective angst centred on how we could develop evaluation practices in the 'participation' space, primarily to enhance student success. Whilst we might claim we were ahead of the curve, we didn't anticipate the momentum that would be generated around evaluation in the months that followed, triggered by the appointment of a new Director of Access and Participation at the Office for Students. This gave us a hook and the drive to build the Evaluation Collective into a network for the benefit of the HE sector.

Rejecting performative evaluation

The embedding of evaluation within learning and teaching activities intended to enhance student success has a mixed record, largely because of the way it has traditionally been positioned. Although challenging, I believe evaluation in these spaces can be empowering and generate positive change if we collectively move against the performative, task-focused, tick box evaluation machine. In various spaces, I have challenged practitioners to take ownership of evidence generation and to explore and understand their contexts, in and on their own terms. Their practices are routinely evaluated, often through student evaluations of teaching or national survey instruments, which can be challenged on the grounds of poor response rates and methodological flaws (for example, Duna Sabri's suggestion that these kinds of surveys can act as a kind of 'fact-totem', assuming a power that exceeds their significance and validity). These are examples of performative evaluations, administered summatively, where no-one can quite make sense of what the evidence means, how to respond or what it implies for practice development.


I have previously provided an overview of some of the principles of effective evaluation in teaching, learning and assessment spaces (Austen 2021), and many are relevant to this rejection of performative evaluation. I have described, for example, the importance of developing an evaluative culture with strategic oversight, and the need to provide resources and capacity-building of evaluative skills and knowledge. However, I consider working alongside students to evaluate teaching, learning and assessment practices as particularly crucial to transformative, non-performative evaluation.

How does change happen?

Across the higher education sector, we are quickly realising that a Theory of Change (ToC) can help us design effective interventions and evaluation plans, but I believe this is currently an underused tool in learning, teaching and assessment spaces. A preoccupation with 'doing something' – an activity-led approach, often with the best of intentions – can overlook a clear aim or rationale for change (an outcome- or theory-led approach). This often means jumping into the middle of intervention development without paying sufficient attention to the hows, whys and what-fors. A Theory of Change is a useful framework for working out where to embed meaningful student involvement in the exploration of change (hopefully it's somewhere at the beginning) and for ensuring actions align to student outcomes. It is worth exploring why ToC might be underused in this space and how we can ignite interest among learning and teaching practitioners.

An evaluative cycle

In learning and teaching spaces, change-based activity informed by student involvement is often framed as 'enhancement' or 'improvement' and intertwined with quality review cycles. Whilst an evaluator's long-term goal might be a comprehensive Theory of Change, I suggest that a simpler, more familiar model might be better for re-igniting this conversation. This model (above), for working alongside students in an evaluative cycle, positions student voices as influential at the key points of development. Firstly, in researching the rationale for change, developing understanding, or challenging assumptions about how change might occur. This evidence-informed learning can then be embedded through co-design of new or adapted activities. Student voices can also provide evidence of impact within and beyond course learning outcomes through the application of appropriate evaluation methods (that's a whole other discussion). Embedding the gathering of impact evidence within existing interactions with students (such as during teaching sessions) is crucial: it reduces the evaluative burden on students, provides genuine opportunities for dialogue and increases the quality of the data we gather.

How will you know whether an intervention or initiative is working? Your students will tell you! However, student voices are only one source of evidence, and sometimes we can give too much weight to these perspectives. It is important to utilise a range of evaluative data sources to reflect on teaching, learning and assessment design and impact. These can be sketched out in an accompanying evaluation plan, noting that this can be messy, and messy is ok.

Evaluation case studies

There is strength in embedded practitioner-led evaluation, and this aligns with notions of action research. Commissioning independent evaluation can also be useful where the context requires an alternative lens, or where it can provide additional capacity and resource. We have run a variety of projects at Sheffield Hallam which are evaluative in nature and have meaningful student involvement. Some are practitioner-led, designed or adopted by Course Leaders, and others are led or facilitated by independent evaluators from the Student Engagement, Evaluation and Research (STEER) team.

In 2022, we completed a piece of independently commissioned evaluation to explore effective processes and impact outcomes of a postgraduate professional development course. The focus was on exploring impact beyond learning outcomes – what else did students experience during and after the course? This project was based on participatory evaluation methods. An Advisory Group of 10 current students and graduates co-created the Theory of Change for this evaluation, designed the methods of data gathering (peer-led interviews) and reflected on emerging findings.

We have also conducted an evaluation which explored course enhancement from an appreciative slant. This approach is often useful when initiatives (an undergraduate course in this example) have been running for some time and are experiencing some difficulty (e.g. declining or fluctuating student satisfaction scores). The appreciative slant avoids a deficit lens and asks ‘what is working well?’ and ‘what would the perfect course look like?’ before building on this evidence to construct outcomes, design change and evaluate impact. In this example, facilitated conversations with staff and students influenced the design of potentially transformative change.

We are also currently evaluating the impact of simulated placements and how they inform the experience and professional development of nursing degree students. As with the postgraduate professional development course evaluation, we are taking a student- and practitioner-led approach, asking two advisory groups (students and delivery staff) to steer the research questions and design. We are also experimenting with a 'most significant change' approach, to gather grounded and unmediated student experience data and to understand how students see and experience the simulated placement without 'leading' their reflections.

Finally, some detail on the infrastructure of student involvement in evaluation at Hallam. We have a Student Researcher Scheme, run centrally with recruitment support from Campus Jobs Employment Centre. The STEER team train students each year to work alongside staff on learning, teaching and assessment enhancement projects and evaluation projects. We build community by holding a monthly meeting with student researchers to discuss how things are going. All activity (attendance at training, monthly meetings, project work) is paid at an hourly rate.


How collective is the Evaluation Collective?

The Evaluation Collective is building networks across the UK HE sector through engagement in our events, mailing list and blogs. We want to encourage others to share thoughts, ideas, and case studies of practice so we can build collective knowledge and learning. If you are testing an approach to evaluation in any space that might be of interest to others, please do consider sharing your thinking and findings via this blog.

We have not yet reached out as an Evaluation Collective to our student communities, and there is much more we could learn from working alongside them to design interventions and evaluate process and impact. There is a risk that the desire to significantly increase evaluation activity produces more fuel for the tick box evaluation machine. Ensuring there is a collective commitment to embed meaningful student participation in discussions about the rationale for change, as well as about how best to gather impact evidence, may counter this and ensure we progress towards this first ask of our Evaluation Manifesto.

Liz Austen

February 2023

Dr Liz Austen is Head of Evaluation and Research (Student Experience, Teaching and Learning) at Sheffield Hallam University. Her role includes external evaluation consultancy across the higher education sector and research and evaluation for her institution which focuses on improving student experiences and outcomes. 
