Abstract
Novice programmers make many errors on their way to becoming experts, a fact well known to teaching faculty in introductory programming courses. These errors play a major role in both formative and summative assessment of students. Computer Science research today trends toward automatic assessment of programs, growing more remote from the student who wrote the program. In an attempt to create a better understanding of novice programmers and the errors they make, this thesis examines the errors novice programmers produce in programming assignments and the misunderstandings that may have caused them.

We use a qualitative approach to analyze assignments, interviews and observations with 23 students of a second-semester course in object-oriented programming, using Java, at the University of Oslo. 33 solutions to mandatory assignments were analyzed to identify errors. 14 students were interviewed about their solutions to identify the misunderstandings that caused the errors we found. Nine students participated in think-aloud observations to add further insight into how students of the targeted course approach problem solving. Finally, the misunderstandings are analyzed to review the feasibility of using misunderstandings to guide formative and summative assessment.

We have identified multiple student errors in both results and design. Many errors and misunderstandings revolved around generic class parameters, a topic that little existing research covers. Our findings suggest that misunderstandings concerning the principles of object-oriented design lead to further misunderstandings about the important aspects that must be considered to write good programs. We did not find a feasible way to use pre-identified misunderstandings as a sole metric for assessment, but believe our research may be used to help create a statistical analysis of the frequency with which misunderstandings cause errors.

With a larger number of participants, such a study could supply examiners with an additional tool to help them gain insight into the understanding of the students they assess.