Adam Hutsby shares his research findings and observations on the value of self and peer assessment


Given the ever-increasing focus within education on assessment, this seemed a logical topic for my MEd dissertation. My research examined “the effectiveness of different formative assessment techniques on Year 12 and Year 13 Economics students”, looking primarily at self- and peer-assessment as well as comment-only marking. As Faulk (2008) notes, only a limited number of studies have focused on formative assessment in economics education.

My study was the culmination of two years of research on five target classes within the Economics department at a co-educational boarding school; the target classes included both IB and A-Level cohorts. Quantitative data came from three surveys of 56 Sixth-form economics students, together with staff surveys of 7 departmental colleagues, 10 teachers from other subject disciplines, and 5 economics teachers in other schools across England. Economics lessons were also observed to provide further quantitative data, chiefly the incidence of differing questioning techniques. The survey design was based on similar surveys used by Falchikov (1986) and Stefani (1992); whilst these involved university undergraduates in the biological sciences, they were the closest research I could find, both in terms of age group and in being specific to a single subject, and they had a similar sample size. Qualitative data was gathered from two focus groups, each involving four students, as well as from open-ended comments from lesson observations.


My target groups appeared much more comfortable using self- and peer-assessment techniques than my non-target classes. This was evidenced by their willingness to self- or peer-assess without complaint, whereas some of my other groups were still of the opinion that all essays should be marked by teachers. Moreover, as each class is taught by two teachers, my colleagues also remarked that our shared target classes were much more relaxed and happy to self- and peer-assess than their other teaching groups.

Figure 1 shows student perceptions of self-assessment; my findings were largely compatible with those of Falchikov (1986) and Stefani (1992). Across the three studies, self-assessment was viewed as beneficial for student learning, with an average of 79.7% of students rating it as such.

Figure 2 shows student perceptions of peer-assessment, and again my findings largely complemented the studies of Falchikov and Stefani. This time, 85% of my students suggested peer-assessment was beneficial, which was very similar to the 82% obtained by Stefani, although higher than Falchikov’s 65%. One area in which my students’ perceptions differed from the other studies concerned difficulty: on average, my students reported peer-assessment to be less difficult. My focus group students suggested this was because they found self-assessment more difficult, which Figure 1 corroborates, showing that slightly more of my students found self-assessment ‘hard’ than in Stefani’s study.

I would suggest this is because of the order in which I introduced self- and peer-assessment. When first implementing them with my target classes, I followed the advice of Sadler (1989) and Wragg (2001), believing self-assessment to be a prerequisite for successful peer-assessment, as students need to be able to provide sensitive and honest feedback to themselves before they can do this with peers. Students’ initial attempts at self-assessment, with the exception of one AS-Level class, were unsuccessful because they lacked the necessary skills and experience in judging their work against mark schemes and, perhaps more importantly, were poor at setting realistic targets to remedy their weaknesses. Many targets were either too generic, such as “increase my score”, or insufficiently actionable, such as “improve analysis” without any thought as to how this would be done. Gradually, however, with sustained practice they began to feel more comfortable assessing their own work. This could account for why they rated self-assessment as ‘harder’ than peer-assessment: by the time they progressed to peer-assessment they already had relatively good self-assessment abilities.

Figure 1: Student Perceptions of Self Assessment From the Falchikov (1986), Stefani (1992) and Hutsby (2013) Surveys

Self-assessment makes you: (tick all that apply; scores given as %)

Column A              Falchikov (1986)   Stefani (1992)   Hutsby (2013)   Average
Dependent             12                 2                7               7
Not think more        0                  0                0               0
Not learn any more    3                  0                0               1
Lack confidence       27                 4                5               12
Uncritical            3                  2                2               2.3
Unstructured          3                  0                2               1.7


Column B              Falchikov (1986)   Stefani (1992)   Hutsby (2013)   Average
Independent           41                 76               73              63.3
Think more            91                 100              91              94
Learn more            59                 85               68              70.7
Gain confidence       41                 62               55              52.7
Critical              94                 95               96              95
Structured            79                 82               80              80.3


Self-assessment is: (tick all that apply; scores given as %)

Column A              Falchikov (1986)   Stefani (1992)   Hutsby (2013)   Average
Time consuming        62                 100              89              83.7
Not enjoyable         32                 54               48              44.7
Hard                  91                 76               77              81.3
Not challenging       6                  2                4               4
Not helpful           6                  3                11              6.7
Not beneficial        9                  3                14              8.7


Column B              Falchikov (1986)   Stefani (1992)   Hutsby (2013)   Average
Time saving           9                  0                2               3.7
Enjoyable             24                 19               23              22
Easy                  3                  3                4               3.3
Challenging           82                 97               86              88.3
Helpful               71                 90               82              81
Beneficial            65                 90               84              79.7

From Figure 1 and Figure 2, it is clear my students found self-assessment challenging (86%) and felt it made them more critical (96%). The fact they believed self-assessment helped them learn more, and also think more, demonstrates the learning gains it yielded. As for enjoyment, informal student discussions and the focus groups suggested students particularly enjoyed peer-assessment because reading each other’s responses showed them how to better structure their own work.

Figure 2:  Student Perceptions of Peer Assessment From the Falchikov (1986), Stefani (1992) and Hutsby (2013) Surveys


Peer-assessment makes you: (tick all that apply; scores given as %)

Column A              Falchikov (1986)   Stefani (1992)   Hutsby (2013)   Average
Dependent             24                 11               16              17
Not think more        0                  0                0               0
Not learn any more    3                  6                2               3.7
Lack confidence       21                 11               21              17.7
Uncritical            3                  0                0               1
Unstructured          0                  0                0               0


Column B              Falchikov (1986)   Stefani (1992)   Hutsby (2013)   Average
Independent           35                 68               64              55.7
Think more            82                 96               98              92
Learn more            62                 80               93              78.3
Gain confidence       35                 56               46              45.7
Critical              77                 93               89              86.3
Structured            53                 72               73              66


Peer-assessment is: (tick all that apply; scores given as %)

Column A              Falchikov (1986)   Stefani (1992)   Hutsby (2013)   Average
Time consuming        59                 91               88              79.3
Not enjoyable         29                 35               30              31.3
Hard                  71                 78               68              72.3
Not challenging       9                  2                4               5
Not helpful           0                  2                0               0.7
Not beneficial        0                  2                0               0.7


Column B              Falchikov (1986)   Stefani (1992)   Hutsby (2013)   Average
Time saving           3                  6                4               4.3
Enjoyable             24                 35               39              32.7
Easy                  9                  9                7               8.3
Challenging           79                 94               82              85
Helpful               62                 67               84              71
Beneficial            65                 82               86              77.7


A major side effect of summative assessment is that it has fostered a culture within schools of comparing marks. This harms both high- and low-ability pupils; at its most extreme, it can lead lower-ability pupils to answer exams randomly because they expect to fail anyway, defeating the whole purpose of assessment (Paris et al, 1991). This culture is now deeply embedded across the country, making it incredibly difficult to change, and I believe it will only end if a sizeable number of teachers switch to comment-only marking, as comparing grades is almost inevitable for students. In my particular school, this is compounded further by the £36,000-a-year fees, as parents and students expect work to be graded.

Comment-only marking was a completely new intervention for my target classes, and initially it was met with resistance. The focus group students said they were at first very “anxious” about comment-only marking, as they liked marks, which enabled them to view their progress more easily. However, of the eight students in the focus groups, seven said they believed they had “benefited” from the process. This was especially the case with the lower-ability pupils.

According to the literature, a major barrier to a greater focus on formative assessment is the ‘cognitive conflict’ teachers face between recognising the benefits of comment-only marking and school-wide marking policies that often require all work to be graded or scored. For instance, Gardner (2006, p.15) discusses “potential conflicts with school policy” surrounding comment-only marking. My HoD, however, was very happy for me to trial comment-only marking with my target classes, so I did not encounter the barrier many face. I continued to record marks in my mark book in order to track student progress over time, although these marks were never recorded on the students’ work.

I realised at a very early stage that whilst I have always provided detailed comments on student work, I never really provided enough lesson time for students to read and reflect upon them. So although I appreciated that students rarely read the comments to the extent I would have liked, my own teaching did not encourage this. My surveys and focus groups indicated that the brighter students were sufficiently conscientious to read my comments outside lesson time, but the weaker students did not, yet it is arguably the weaker students, by definition, who would benefit most from reading them. To reinforce this potential learning gain, lesson time is now made available for working on improvements. This is important because it gives students an opportunity to discuss any questions arising from the comments, and it demonstrates that I value this learning activity sufficiently for it to warrant lesson time.

From the focus groups, one high-achieving student thought her work had become poorer because she associated more detailed feedback with inferior work; as a high-achieving student she was used to short comments rather than detailed feedback. This supports the work of Wiliam (2011), who also found that high-achieving students perceived more feedback as a sign that the quality of their work was declining. Wiliam (2011, p.129) suggested a solution: what he terms the “three questions approach”, in which the teacher writes three questions on each student’s work for them to reflect on. The students then answer these three questions at the bottom of their piece of work during lesson time, emphasising that the teacher values the activity highly. The main advantage of this approach is that every student has three questions to consider irrespective of ability, so all students have the same amount of work to do, and the questions posed by the teacher provide an excellent form of differentiation to help accelerate student progress.


Even though Black and Wiliam (1998) suggest the evidence on the benefits of peer-assessment is inconclusive, there are important indirect benefits to take into consideration. Whilst students are peer-assessing each other’s work, teacher time is freed up to develop appropriate strategies for any students who may benefit from targeted intervention. These learning gains would not have been possible without the extra teacher time peer-assessment makes available.

Self- and peer-assessment were popular with my students, especially peer-assessment, as they enjoyed opportunities to discuss their work with peers in student-friendly language. Comment-only marking was initially met with mixed feelings, mainly due to its unfamiliarity given the culture of the school, as students expected a mark to accompany the comments. For the effects of comment-only marking to be more impressive, it will require a more widespread approach by teachers (Black et al, 2003). As the school’s teaching and learning mentor commented in his survey response, “the culture of the school is such that students expect to be spoon-fed, and formative assessment needs a whole school change in emphasis rather than a piecemeal approach”.

Some strategies worked better than others: peer-assessment, for example, compared with the use of writing frames. Some also yielded results sooner than others; increased wait-time quickly resulted in fewer incorrect student answers, whereas comment-only marking took longer for students to adjust to. Whilst self- and peer-assessment take time to develop and embed within Economics teaching and learning, the benefit to departments that persevere will be better student performance, not only in tracking tests but also in the final summative examination.

It is clear from the literature and my own research that formative assessment is neither an educational panacea nor a quick-fix solution; it requires a whole-school focus if the full benefits are to be realised. As much of the literature reports, introducing formative techniques is time consuming, and this acts as a major barrier to implementation (Black and Wiliam, 1998; Clarke, 2005). As a result, despite its virtues, it is often ignored as teachers prioritise activities they absolutely have to complete. As a Humanities teacher at my school commented in the survey, “whilst I am convinced of the benefits of formative assessment techniques, I feel there is a pressure on time”.

One of the major foundations of good assessment practice is that assessment criteria are clearly communicated to students, and this applies irrespective of the form of assessment selected. Overall, the formative techniques produced pupils who were highly engaged in lessons and not afraid to make mistakes. Moreover, as Economics teachers we need to be aware of the inter-relationships between the form of assessment, student motivation, and a genuine interest in learning for its own sake, rather than students viewing learning as a means to an end.

Perhaps the main implication of my study was the important learning gain achievable through students explaining ideas to their peers. Whilst students are comfortable interrupting each other, they are less comfortable interrupting the teacher. However, if teachers allow students to interrupt the teaching process to resolve queries or misconceptions, learning can be greatly enhanced: deeper misconceptions are avoided, and, crucially, students do not ‘forget’ the question they were going to ask by waiting until the end of the teacher’s explanation. Clearly, not all teachers will be comfortable with this, as some may view it as an erosion of teacher authority; however, my study demonstrated that allowing it can keep students in their ‘growth zone’. As I only teach sixth-form classes, usually of around 12-16 students, such interruptions are feasible without behaviour management issues, but I question how generalisable this would be with younger students and larger class sizes.

Adam Hutsby is Deputy Head of Sixth Form at Malvern College



Black, P. and Wiliam, D. (1998). Inside the Black Box: Raising Standards Through Classroom Assessment. Assessment in Education, Vol. 5 (1), pp.12-33.

Black, P., Harrison, C., Lee, C., Marshall, B. and Wiliam, D. (2003). Assessment for Learning: Putting it into Practice. Buckingham: Open University Press.

Clarke, S. (2005). Formative Assessment in the Secondary Classroom. London: Hodder Murray.

Falchikov, N. (1986). Product Comparisons and Process Benefits of Collaborative Peer Group and Self Assessments. Assessment and Evaluation in Higher Education, Vol. 11 (2), pp.146-166.

Faulk, D. (2008). Formative and Summative Assessment in Economics Principles Courses: Are Applied Exercises Effective? (Contribution to the Annual Meeting of the American Economics Association). Available at:

Gardner, J. (ed.) (2006). Assessment and Learning. London: SAGE Publications.

Sadler, R. (1989). Formative Assessment and the Design of Instructional Systems. Instructional Science, Vol. 18, pp.119-144.

Stefani, L. (1992). Comparison of Collaborative Self, Peer and Tutor Assessment in a Biochemistry Practical. Biochemical Education, Vol. 20 (3), pp.148-151.

Wiliam, D. (2011). Embedded Formative Assessment. Bloomington: Solution Tree Press.

Wragg, E.C. (2001). Assessment and Learning in the Secondary School. London: Routledge.