Exploring the Reliability, Time Efficiency, and Fairness of Comparative Judgement in the Admission of Architecture Students

Abstract

It is common in architecture education to quantify the quality of assignments into grades, often assigned by one or two teachers using rubrics. However, this approach has several downsides: it suggests an objective precision that is debatable for the creative assignments typical of architecture, and it makes the assessment dependent on the judgement of only one or two people. Comparative judgement (CJ) offers an alternative to rubric-based assessment: assessors repeatedly compare pairs of student assignments, and the outcomes of these pairwise comparisons are aggregated into a ranking instead of a grade.
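To illustrate how pairwise comparisons can be aggregated into a ranking, the sketch below fits a Bradley-Terry model, a common statistical model in the CJ literature. The abstract does not specify which model the study used, and the function name and judgement data here are hypothetical.

```python
from collections import defaultdict

def bradley_terry(comparisons, n_iter=100):
    """Estimate Bradley-Terry strengths from (winner, loser) pairs.

    comparisons: list of (winner, loser) tuples from pairwise judgements.
    Returns a dict mapping item -> strength; higher means ranked better.
    """
    items = {x for pair in comparisons for x in pair}
    wins = defaultdict(int)          # total wins per item
    pair_counts = defaultdict(int)   # comparisons per unordered pair
    for w, l in comparisons:
        wins[w] += 1
        pair_counts[frozenset((w, l))] += 1

    p = {i: 1.0 for i in items}      # uniform initial strengths
    for _ in range(n_iter):          # iterative MM updates
        new_p = {}
        for i in items:
            denom = sum(c / (p[i] + p[j])
                        for pair, c in pair_counts.items() if i in pair
                        for j in pair if j != i)
            new_p[i] = wins[i] / denom if denom else p[i]
        total = sum(new_p.values())  # normalise to fix the overall scale
        p = {i: v / total for i, v in new_p.items()}
    return p

# Hypothetical judgements over four portfolios: ("A", "B") means A beat B.
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"),
              ("C", "D"), ("B", "D"), ("A", "C")]
strengths = bradley_terry(judgements)
ranking = sorted(strengths, key=strengths.get, reverse=True)
print(ranking)  # e.g. ['A', 'B', 'C', 'D']
```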

We used a mixed-methods approach to compare the reliability, time efficiency, and fairness of CJ and rubric-based assessment in the selection of students for an undergraduate architecture programme at Delft University of Technology in the Netherlands. Teachers involved in the rubric-based selection procedure were asked to re-assess a random sample of the assignments using CJ. We compared the reliability and time investment of the two methods, and asked the assessors in a focus group setting which method they perceived as more reliable and fair. This direct comparison of rubric-based assessment and CJ is new; previous studies have examined the two assessment methods only in isolation.
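The abstract does not state how reliability was quantified. One common approach in the CJ literature is split-half reliability: the judgements are randomly split into two halves, a ranking is estimated from each half, and the two rankings are correlated. A minimal sketch, reusing the hypothetical bradley_terry function above:

```python
import random
from scipy.stats import spearmanr  # SciPy's Spearman rank correlation

def split_half_reliability(comparisons, seed=0):
    """Randomly split the judgements in two, fit a Bradley-Terry
    ranking to each half, and correlate the resulting strengths.
    A correlation near 1 suggests the ranking is stable."""
    rng = random.Random(seed)
    shuffled = comparisons[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    p1 = bradley_terry(shuffled[:half])  # hypothetical function defined above
    p2 = bradley_terry(shuffled[half:])
    common = sorted(set(p1) & set(p2))   # items that appear in both halves
    rho, _ = spearmanr([p1[i] for i in common],
                       [p2[i] for i in common])
    return rho
```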

Findings indicate that CJ can serve as a more reliable and time-efficient alternative to rubric-based assessment. However, teachers still perceive rubrics as more reliable and fair. Although this research is particularly relevant in the context of architecture, it contributes to wider discussions about the reliable and fair assessment of creative student assignments.