Assessment matters. Indeed, scientific progress largely depends on the extent to which assessments can provide reliable and valid measures of variables – be they well-defined and observable variables in the natural sciences or complex and unobservable variables in the social sciences (Duckworth & Yeager, 2015). With the rapid development of information and communication technologies, new possibilities arise for assessing complex psychological skills and human behavior (Mayrath, Clarke-Midura, & Robinson, 2012; Shute & Rahimi, 2017). Computer-based assessments (CBAs), for example, now allow researchers to capture complex constructs, such as collaborative problem-solving and computational thinking skills, that have recently gained importance across domains and contexts (Greiff, Holt, & Funke, 2013; Grover & Pea, 2013; Scherer, 2015), and to assess constructs that have been considered essential skills for decades with more innovative and perhaps more authentic item formats (e.g., mathematical, reading, and scientific literacy; OECD, 2016). Beyond the core testing purpose of distinguishing between students of different knowledge, skill, and performance levels, CBAs can also be used to assess student learning – without any high-stakes consequences based on a single, final score. In this sense, CBAs are powerful tools for both assessment of learning (i.e., summative assessment) and assessment for learning (i.e., formative assessment; Shute & Rahimi, 2017).
The potential of CBA is widely recognized, especially in the areas of educational and psychological testing (Drasgow, 2016). Moreover, international large-scale assessments in education, such as the Programme for International Student Assessment (PISA), the Programme for the International Assessment of Adult Competencies (PIAAC), the Trends in International Mathematics and Science Study (TIMSS), the Progress in International Reading Literacy Study (PIRLS), and the International Computer and Information Literacy Study (ICILS), have shifted from paper-and-pencil testing towards computer-based assessment of educationally relevant constructs. These constructs comprise not only "traditional" skills (e.g., mathematical, reading, and scientific literacy) but also "new" skills that have become relevant for students in the 21st century (e.g., complex and collaborative problem solving, ICT literacy, computational thinking). The core potential of CBAs lies in the provision of novel, interactive tasks (OECD, 2013) and the possibility of obtaining information on test-taking behavior (Goldhammer, Martens, Christoph, & Lüdtke, 2016; Greiff, Wüstenberg, & Avvisati, 2015). Taking an educational measurement perspective, Zenisky and Luecht (2016) summarize the core innovations of computer-based assessment, highlighting the assessment and psychometric modeling of complex constructs, automated scoring and test assembly (Gierl, Latifi, Lai, Boulais, & De Champlain, 2014; Veldkamp, 2015), and the availability of process data that describe not only performance (for example, via the correctness of item responses) but also strategic behavior, sequences, and patterns of actions (Greiff, Niepel, Scherer, & Martin, 2016). The aim of this special issue is to present both the core innovations of CBAs in various domains and contexts and the challenges associated with them.
This item's license is: Attribution-NonCommercial-NoDerivatives 4.0 International