A guideline-based approach to support the assessment of students’ ability to apply object-oriented concepts in source code

Bibliographic Details
Main Authors: Norazlina Khamis, Norhayati Daut
Format: Article
Language: English
Published: UTM Press 2015
Subjects:
Online Access:https://eprints.ums.edu.my/id/eprint/18642/1/ABSTRACT.pdf
https://eprints.ums.edu.my/id/eprint/18642/2/FULL%20TEXT.pdf
https://eprints.ums.edu.my/id/eprint/18642/
http://www.jurnalteknologi.utm.my/index.php/jurnalteknologi/article/view/4919
https://doi.org/10.11113/jt.v78.4919
Description
Summary: There are many approaches to assessing students’ ability in object-oriented (OO) programming, but little is known about how to assess their ability to apply fundamental OO concepts in their written source code. One major problem with programming assessment is the variation in marks given by different assessors. Often, the grades given also do not gauge whether students know how to apply OO approaches. Thus, a new assessment approach is needed to fill this gap. The objective of this study is to construct and validate, through expert consensus, a set of evaluation criteria for fundamental OO concepts together with a guideline called GuideSCoRE, to help instructors assess students’ ability to apply OO concepts in their program source code. The evaluation criteria are derived from the fundamental OO concepts found in Malaysian OO programming syllabuses and validated through a three-round Delphi approach. The proposed evaluation criteria were mapped to related OO design heuristics and OO design principles. The guideline (GuideSCoRE), constructed from the evaluation criteria using the Goal-Question-Metric approach, is used by instructors when assessing students’ source code. An inter-rater reliability analysis among six instructors found moderate agreement on assessment scores (κ values mainly between 0.421 and 0.575), indicating that while the guideline does not completely eliminate variation between raters, it helps reduce it.
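Note: the κ values above are Cohen’s kappa statistics, which correct raw inter-rater agreement for agreement expected by chance, κ = (p_o − p_e) / (1 − p_e); on the commonly cited Landis and Koch scale, values between 0.41 and 0.60 are read as moderate agreement. As an illustration only, and assuming pairwise Cohen’s kappa between two raters (the article does not specify the kappa variant, and the scores below are hypothetical placeholders, not data from the study), a pairwise value can be computed as follows:

    # Illustrative sketch: pairwise Cohen's kappa between two raters'
    # scores for the same set of student submissions.
    # The scores are made-up placeholders, not data from the article.
    from sklearn.metrics import cohen_kappa_score

    rater_a = [3, 2, 4, 4, 1, 3, 2, 5]  # hypothetical scores from rater A
    rater_b = [3, 3, 4, 5, 1, 3, 2, 4]  # hypothetical scores from rater B

    kappa = cohen_kappa_score(rater_a, rater_b)
    print(f"Cohen's kappa: {kappa:.3f}")  # values in 0.41-0.60 read as "moderate"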