Thesis
On the comparability of different programming language routes through A-level Computer Science
- Abstract:
Fairness in assessment is as important as validity and reliability. When question or paper choice is used in an assessment, it is important that the various routes are comparable so that fairness to all students is maintained. Previous literature on question choice suggests that offering choice often leads to significant differences in exam performance by route, sometimes amounting to several marks. This dissertation compares the five possible routes through a Computer Science A-level assessment. One paper could be answered in any of five programming languages, and it was possible that not all languages were equally accessible to students or equally applicable to the exam paper tasks. Test bias was evaluated using recently developed IRT-based differential test functioning procedures, originally designed to assess unfairness due to demographic characteristics. These enabled a detailed test-level, sub-section, and item-level analysis at various levels of student ability. Some small differences between two routes were found, but overall the assessment behaved in a fair and equitable manner. It is likely that this was achieved through careful question design and the strict application of a common paper and generic mark scheme. It is recommended that the performance of the programming paper be monitored in future exam series.
A few anomalies observed in the data suggested the possibility that classroom experience might have had more of an impact on students’ performances than programming language choice. This suggests that there is a need for research targeted at better understanding how teachers are preparing their students for the exam, and how the students are approaching the paper.
While choice has not led to unfairness in this analysis, concerns around the use of choice in assessment remain. Further research is proposed to look at the impact of choice on exam fairness in a broader selection of subjects that do not use generic questions or mark schemes. In addition, research is suggested to better understand student and teacher attitudes to the pros and cons of offering choice in assessment.
Finally, it is suggested that, since students are judged on their overall test scores rather than their item scores, test fairness should be analysed at test level rather than at item level, as is common practice. The use of differential test functioning analysis is recommended for any assessment where concerns of unfairness towards a sub-group of students exist.
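The differential test functioning approach described in the abstract can be illustrated with a minimal sketch: calibrate item parameters for each route, sum the item response curves into a test characteristic curve per route, and compare the curves across the ability range. The item parameters, route names, and weighting scheme below are invented for illustration and are not taken from the dissertation.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item characteristic curve: probability of a correct response
    at ability theta, given discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def expected_test_score(theta, items):
    """Test characteristic curve: expected total score at ability theta."""
    return sum(p_2pl(theta, a, b) for a, b in items)

# Hypothetical calibrated item parameters (a, b) for the same four-item
# paper taken via two different language routes.
route_a = [(1.2, -0.5), (0.9, 0.0), (1.5, 0.8), (1.1, 1.2)]
route_b = [(1.2, -0.4), (0.9, 0.1), (1.5, 0.9), (1.1, 1.2)]

theta = np.linspace(-4, 4, 801)          # ability grid
t_a = expected_test_score(theta, route_a)
t_b = expected_test_score(theta, route_b)

# Signed and unsigned DTF indices: the difference between the two test
# characteristic curves, averaged over a standard-normal ability density.
w = np.exp(-theta**2 / 2)
w /= w.sum()                             # normalised ability weights
sdtf = np.sum((t_a - t_b) * w)           # signed difference in expected marks
udtf = np.sum(np.abs(t_a - t_b) * w)     # unsigned (magnitude) difference

print(f"Signed DTF:   {sdtf:+.3f} marks")
print(f"Unsigned DTF: {udtf:.3f} marks")
```

A signed DTF near zero with a small unsigned DTF indicates that a student of any given ability can expect roughly the same total mark on either route, which is the test-level notion of fairness the dissertation argues for.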
- DOI:
- Type of award: MSc taught course
- Level of award: Masters
- Awarding institution: University of Oxford
Terms of use
- Copyright holder: Harrison, E
- Copyright date: 2020