Evaluating your activities and expenditure
Evaluation is part of the strategic, evidence-led approach that we want to see in access agreements.
Access agreement expenditure is substantial, so decisions about it should be based on sound evidence of effectiveness, to ensure you focus investment on the activities and financial support that will have the greatest impact. This evidence might come from evaluation of your own work, from other institutions’ effective practice, or a combination of both.
To achieve this, it is important that you evaluate your activities and programmes and share this evidence with others, so that the sector can identify what works well and what needs improving.
Your access agreement must describe how you intend to monitor and evaluate the activity and financial support it sets out, and your progress towards your targets.
Why we ask about your evaluation
There has been substantial investment in widening participation (WP) work and we want to ensure that it is used to best effect, both within individual institutions and across the sector. Better evaluation of which approaches to improving access, student success and progression have the most impact will improve understanding of what works best and enable effective practice to be shared across the sector.
The cost of evaluation is countable in your access agreement (see our guidance on reporting evaluation expenditure for more details).
Also, as the Government has emphasised, OFFA has a responsibility to encourage and support more, and better, evaluation. In the most recent Government guidance to OFFA (February 2016), the Secretary of State for Business, Innovation and Skills and the Minister of State for Universities and Science told us:
“We look to you to continue to encourage institutions to invest wisely in widening participation, basing their decisions on robust evaluation plans and evidence…. We would like you to require more from institutions in the information they provide to you about how they use evaluation and reflective practice and the expertise they draw on to help them make their investment decisions. It is, of course, up to institutions to invest their own money as they see fit, but it is in their interests to take evidence-led approaches and we would like you to take less account of investment for which there is little justification, based on evidence and the institution’s targets and performance.”
This guidance also said that bursaries “should be backed up by clear and robust evaluation plans and supporting evidence that shows that the investment is proportionate to the contribution they make towards widening participation” and that evaluation of access agreement activity “needs to measure the impact against the aim of the intervention at the point the intervention takes place and how this helps to widen participation”.
Good practice in evaluation
We acknowledge the diverse approaches to widening participation within the sector, and that there is no ‘one size fits all’ approach to evaluation.
We also understand that different institutions’ evaluation strategies will be at different stages. The information you provide within your access agreement will help us to understand more about this at a sector-wide level, and how we can offer further support to institutions in this area.
We are currently supporting a number of projects to help institutions improve their evaluation. We have recently published research into the evaluation of financial support, which provides a toolkit institutions can use to evaluate their financial support provision. We have also published proposed standards of evaluation practice, with related guidance, for the evaluation of outreach for disadvantaged young people, and an evaluation tool to help institutions consider their approach to outreach for disadvantaged adult learners.
The difference between monitoring and evaluation
We are interested in the evaluation of your work rather than simply how you monitor your practice. The definition we use is from the HEFCE/Progression Trust toolkits for practitioners:
- Monitoring is the collection and analysis of data during a project and the comparison of this data against the targets and plans made for WP. Monitoring is part of project management, and helps to ensure cost effectiveness and project progress.
- Evaluation is an assessment of the effectiveness and impact of what has been done. Data gathered for monitoring is often used in evaluations, but the aims of the two activities are different.
In our monitoring returns, we use Kirkpatrick’s evaluation model as a framework for assessing the levels of impact institutions are gaining from their evaluation work.
Build in evaluation from the start
Throughout the process of designing and implementing your activities and financial support, you should be asking yourself: how will we understand and evaluate this? You need to build effective evaluation into your fair access plans right from the start.
You do not need to tell us in detail how this will be done; the precise approach is for you to decide. However, you will need to demonstrate in your access agreement that you have robust evaluation plans in place.
Review and reflect
We would like to see evidence of reflective practice in your access agreements. We encourage you to review your evaluation plans alongside the development of your access agreement, so that you have a strong rationale to inform your activities and programmes to improve access, student success and progression. We would also like to hear how you have used evaluation to make improvements to individual activities, and your overall strategy.
Use all the resources of your institution
As part of your whole-institution approach, we encourage you to make use of the expertise you have within your institution, such as academics and researchers, to help ensure your monitoring and evaluation is robust. We also encourage you to share experience and best practice with other institutions through your collaborative networks. We expect you to draw on research evidence, where available.
Think broadly and long-term
A key challenge for institutions and for OFFA is to find better ways to understand and measure progress on widening participation and fair access. For example, people reached by access work may not go on to the institution that delivered it: they may go elsewhere, or decide to do something else entirely. Without tracking these individuals and their choices, it is difficult to evaluate how well the measures are working.
So, when considering your approach to evaluation, you should consider how you could measure the long-term impact of your activities, such as:
- the improved attainment of participants
- evidence of long-term impact on attitudes and aspirations
- the tracking of participants to higher education.
We strongly encourage you to collaborate with other institutions, organisations and researchers to design effective ways of capturing the long-term impact of your activities. Any expenditure on this work is countable in your access agreement.
Where you undertake work that does not have a direct impact on recruitment to your own institution, we would expect you to implement a range of evaluative methods, both qualitative and quantitative, so that you may demonstrate the impact of such work. Evidence might include improved academic attainment in partner schools, or increased applications to higher education in general from such schools. Collaborative targets are one approach to evaluating whether activities have impacts beyond recruitment to a single institution.
Where you are engaged in long-term outreach and attainment raising activities, consider evaluating the impact of these activities based on learner outcomes at regular intervals during the programme – for example, at key transition points such as the end of a term or academic year – as well as evaluating the success of the overall programme.
Evaluate to better target expenditure
As part of your evaluation strategy, we will be looking for increased demonstration of the effectiveness of your access agreement spend. In particular, we will look for evidence that robust evaluation of the outcomes of your activities and interventions, especially in key strategic areas such as school attainment, has informed your decisions on where to invest your access agreement expenditure.
Evaluation in FECs
We understand that for many further education colleges (FECs), widening participation is often based more on internal progression than on specific activities, so you may place less emphasis on evaluation. However, we are interested in understanding the specific contribution that FECs make to WP and how you most successfully support internal progression. Your access agreement could include examples of the information or evidence you have on the impact of your WP work, including from the perspective of internal progression.
Other appropriate evidence of the impact of your work could include, for example, progression and retention rates for groups of students who have received extra support for the transition from further to higher education, or the improvement in progression and retention rates over time after such support has been put in place. You should also include brief details of plans to put more extensive evaluation processes in place, where these are not currently established.
For useful information on evaluation methods and information about Kirkpatrick’s Model, we recommend the series of widening participation toolkits drawing together knowledge from Aimhigher and the Lifelong Learning Networks.
We are currently supporting a project to improve the evaluation of outreach activity which you may find useful.