Improving e-assessment through pedagogy and a commitment to data ethics

9 September, 2022
Photo by Katerina Holmes at Pexels

Health restrictions have revolutionized the way students attend class, study, communicate and interact with their teachers. Assessment, too, has shifted to a new stage.


Distance assessment carried out through electronic media was being explored at fully online institutions before the pandemic drove the world into lockdown. However, it wasn’t exactly a pressing issue for other educational institutions. Since then, the procedural implications and ethical dilemmas of such assessment have become a hot topic.


Electronic assessment in a nutshell

As an integral part of teaching and learning, assessment is the process of gathering, describing or quantifying information about learner performance. Its principles remain unchanged in online learning environments, the only difference being how these principles are implemented as compared to traditional learning environments (Rovai, 2000). To make sure everyone is on the same page, let’s take a look at what we mean by electronic assessment or e-assessment. The JISC’s (2007) definition recognizes that it is a broad-based term covering a range of activities in which digital technologies are used in assessment. Such activities include assessment design and delivery, marking – by computers, or humans assisted by scanners and online tools – and the processes of reporting, storing and transferring data associated with public and internal assessments.


Face-to-face universities had never had to take on online assessment at scale, whereas online institutions were already dealing with this challenge in one way or another. Due to COVID-19, educational institutions were forced to shift to online assessment and struggled to overcome the related technological and pedagogical challenges. The literature on e-assessment during this crisis makes one thing clear: faculty and students have to collaborate to provide a response that integrates methodological and technological decisions, while ensuring fairness, legal certainty and transparency for all internal and external stakeholders (García-Peñalvo et al., 2020).


Data as a source of qualitative insights

When e-assessment is used formatively, information and communication technologies (ICT) support teachers and learners as they iteratively gather and analyse information about the learning process, in order to assess results against prior achievement and the attainment of intended and unintended learning outcomes (Nikolova, 2012). In this regard, e-assessment has developed significantly since the focus turned to learning analytics. Palmiero and Cecconi (2019) home in on the data produced in digital learning environments, finding that computer-based testing provides information that could not be retrieved if the tests were done on paper. The so-called log files reveal different cognitive styles and approaches to tasks. The focus thus shifts from the final outcomes to the process that determines these outcomes. From this perspective, the use of what the literature calls “process data” sets the boundary between formative and summative assessment and gives rise to a new form of e-assessment.
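
To make the notion of process data more concrete, the short sketch below shows the kind of information a log file can yield (time spent on each item, answer changes, the final response) that a paper test simply cannot capture. It is a minimal illustration with an invented log format, not the instrumentation used by Palmiero and Cecconi.

```python
# Minimal sketch: deriving "process data" from computer-based testing logs.
# The event format and field names are hypothetical, for illustration only.
from collections import defaultdict

# Each event: (student_id, item_id, seconds_from_start, action, value)
events = [
    ("s01", "q1", 0,   "view",   None),
    ("s01", "q1", 35,  "answer", "B"),
    ("s01", "q1", 80,  "answer", "C"),   # the student revisits and changes the answer
    ("s01", "q2", 90,  "view",   None),
    ("s01", "q2", 140, "answer", "A"),
]

def process_metrics(events):
    """Summarise time on task and answer changes per student and item."""
    first_seen, last_seen = {}, {}
    answers = defaultdict(list)
    for student, item, t, action, value in sorted(events, key=lambda e: e[2]):
        key = (student, item)
        first_seen.setdefault(key, t)   # first time the item was opened
        last_seen[key] = t              # most recent interaction with the item
        if action == "answer":
            answers[key].append(value)
    return {
        key: {
            "time_on_item_s": last_seen[key] - first_seen[key],
            "answer_changes": max(len(answers[key]) - 1, 0),
            "final_answer": answers[key][-1] if answers[key] else None,
        }
        for key in first_seen
    }

print(process_metrics(events))
# ('s01', 'q1') -> 80 s on the item, 1 answer change; ('s01', 'q2') -> 50 s, no changes
```

None of this is visible in the final score alone, which is precisely the shift from outcomes to process described above.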


Students’ perceptions

Since the pandemic broke out, digital environments have become the most widely used setting for study and assessment. Integrating technology into assessment seems to bring students closer to the assessment process (which is often perceived as hostile), thanks to their familiarity with digital tools (Nirchi, 2021). Ranieri and Nardi (2018) explore the potential and limitations of computer-based testing (CBT) as compared to traditional paper-based testing (PBT) in a study at the University of Florence. Their aim was to verify whether and to what extent an electronic mode of assessment can become a suitable alternative to PBT, allowing the assessment process to be managed more efficiently, especially in large higher education classes. Three hundred and seventy-two participants who underwent CBT also answered a questionnaire on their perceptions, preferences and level of satisfaction. The results show that the students responded very positively to the digital system, especially to the possibility of receiving immediate feedback.


Nevertheless, some critical issues have emerged relating to on-screen reading. Some students worry about not being able to read the whole document because of the need to scroll up and down, which may be a barrier when it comes to getting a good view of the test. The careful design of testing tools can prevent this difficulty. In terms of emotional well-being, some students have suffered panic attacks during online exams when they were unable to manage problems caused by their internet connection or technology in general. In Italy, many newspapers reported several cases in which school and university students could not be assessed because panic hindered their performance (Palma, 2020). 


Is online cheating a problem we can prevent? 

Due to the increase in remote learning, colleges and universities have invested in ways to prevent assessment misconduct. To curb deceptive academic practices, some have implemented measures such as webcam monitoring, biometric identification and third-party proctoring services, which can be perceived as invasive. We will come back to this later in the article, but first let’s look at two research-based approaches to preventing cheating.


Researchers at Rensselaer Polytechnic Institute in Troy, New York, have worked on a method to combat cheating. Their strategy focuses on collusion among students and involves the simple process of staggering the delivery of test questions based on students’ level of competency. In this way, the most accomplished students begin the test after others have already started, while still giving everyone the same amount of time to answer (Burt, 2021).
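
As a rough illustration of how such staggering could be scheduled, here is a minimal sketch; the competency scores, the five-minute offset and the exam duration are invented for the example and are not the Rensselaer researchers’ actual parameters.

```python
from datetime import datetime, timedelta

# Hypothetical competency scores (higher = more accomplished); invented data.
students = {"ana": 0.92, "ben": 0.55, "chen": 0.74, "dana": 0.31}

def staggered_schedule(scores, exam_start, stagger=timedelta(minutes=5),
                       duration=timedelta(minutes=60)):
    """Start the least accomplished students first and the most accomplished
    last, in waves, while giving everyone the same total time to answer."""
    ordered = sorted(scores, key=scores.get)   # ascending competency
    schedule = {}
    for wave, student in enumerate(ordered):
        start = exam_start + wave * stagger
        schedule[student] = (start, start + duration)
    return schedule

for name, (start, end) in staggered_schedule(students, datetime(2022, 9, 9, 9, 0)).items():
    print(f"{name}: {start:%H:%M}-{end:%H:%M}")
# dana: 09:00-10:00, ben: 09:05-10:05, chen: 09:10-10:10, ana: 09:15-10:15
```

The design aims to make collusion less rewarding: the strongest students’ answers are not yet available while the others are already working through the same questions.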


In Gallant and Stephens (2020), Dr Tricia Bertram Gallant (University of California) and Dr Jason M. Stephens (University of Auckland) argue that colleges and universities have an ethical obligation to respond to the problem of cheating and should encourage students’ moral and civic development. They recommend a shift from the punitive approach to a developmental one when dealing with cheating, working from a preventive perspective.


Ethical dilemmas in the use of proctoring systems

The pandemic has accelerated the use of proctoring systems, especially in the US and Europe. This artificial intelligence-based software checks the student’s computer or device during an online exam and collects a large amount of personal data, making it possible to detect whether students are cheating. Other solutions also try to verify the identity of the student being tested.


Proctoring systems generally work in the same way: students connect to a reserved page on the university’s website, access the test room and go through a preliminary check in which they show the entire room, test their microphone and present their identity card. The actual test then starts and the camera monitors for any suspicious movements, flagging them so that the teacher can review that specific moment and ascertain whether the student was in fact cheating. Identification takes place via the webcam, while the microphone is used to check for any suspicious noises or voices. Close attention is paid to body movements, the student’s gaze, and all mouse/touchpad, keyboard and screen activity.
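
The automated part of this workflow essentially amounts to flagging time-stamped moments for later human review rather than issuing verdicts. The sketch below mimics that logic; the event types, threshold and merging rule are invented and do not reproduce any particular proctoring product.

```python
# Hypothetical monitoring events: (seconds_from_start, event_type).
session_events = [
    (120, "gaze_off_screen"),
    (122, "gaze_off_screen"),
    (450, "second_voice_detected"),
    (900, "window_focus_lost"),
]

SUSPICIOUS = {"gaze_off_screen", "second_voice_detected", "window_focus_lost"}

def flag_for_review(events, merge_window=30):
    """Group nearby suspicious events into segments the teacher can review
    in the recording before deciding whether any cheating took place."""
    flagged = sorted(t for t, kind in events if kind in SUSPICIOUS)
    segments = []
    for t in flagged:
        if segments and t - segments[-1][1] <= merge_window:
            segments[-1][1] = t        # extend the current segment
        else:
            segments.append([t, t])    # open a new segment
    return [tuple(seg) for seg in segments]

print(flag_for_review(session_events))
# [(120, 122), (450, 450), (900, 900)] -> moments to review, not verdicts
```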

With e-proctoring, there are two main problems: the processing of personal data and bias in the software. Although proctoring is used to prevent students from cheating, concerns about invasive data processing, the risk of data loss and consent obligations have prompted some students to rebel against it. In Italy, for instance, Bocconi University received a €200,000 fine for using a proctoring system (Dimalta, 2021).


While these legal matters regarding data protection are being resolved, some institutions have turned to alternative options. At the University of the People, while some students choose to be examined via ProctorU for a fee, there is an alternative face-to-face proctoring option for sitting final exams. In this case, the exam is sent to a proctor who must be chosen by the students themselves (following a set of rules and free of charge). They must then identify themselves to this person and, finally, take the exam in front of this supervisor figure (UoP, 2022).


Conclusions

Being forced to go digital by the pandemic cannot be the only motivation for digitally transforming university and school assessment methods. Both electronic media and society are evolving, and educational institutions must continue to build on the lessons they have learned. Beyond the initial rapid adaptation measures, real and profound pedagogical and technological work must underpin this shift away from traditional assessment. E-assessment needs to be grounded in instructional design and pedagogy, ethical principles, data protection laws, and mid-course testing and student-teacher feedback. As for the student response, although we do not know what will happen in the future, practice and institutional support should presumably lead students to feel more confident with electronic media.


References

Burt, C. (1 March 2021). RPI researchers uncover new method to reduce cheating. University Business. Available at: https://universitybusiness.com/rpi-researchers-uncover-new-method-to-reduce-cheating/

Dimalta, D. (30 September 2021). Controllo remoto degli studenti, vizio di tanti: il Garante non sanzioni solo Bocconi [Remote monitoring of students, a widespread vice: the data protection authority should not sanction Bocconi alone]. Agenda Digitale: Network Digital 360. Available at: https://www.agendadigitale.eu/sicurezza/privacy/controllo-remoto-degli-studenti-tanti-peccano-il-garante-non-sanzioni-solo-bocconi/

Gallant, T. B., & Stephens, J. M. (2020). Punishment Is Not Enough: The Moral Imperative of Responding to Cheating With a Developmental Approach. Journal of College and Character, 21(2), 57-66. https://doi.org/10.1080/2194587X.2020.1741395. Available at: https://www.tandfonline.com/doi/abs/10.1080/2194587X.2020.1741395?journalCode=ujcc20

García-Peñalvo, F. J., Corell, A., Abella-García, V., & Grande, M. (2020). Online assessment in higher education at the time of COVID-19. Education in the Knowledge Society, 21. https://doi.org/10.14201/eks.23086. Available at: https://revistas.usal.es/index.php/eks/article/view/eks20202112

JISC. (2007). Effective practice with e-assessment. HEFCE. Available at: http://www.jisc.ac.uk/media/documents/theme

Nikolova, M. (2012). Characteristics and Forms of the Electronic Assessment of the Knowledge. Proceedings of the University of Ruse, Volume 51, Book 6.1 (Mathematics, Informatics and Physics), 93-98. ISSN 1311-3321. Available at: https://conf.uni-ruse.bg/bg/docs/cp12/6.2/6.2-15.pdf

Nirchi, S. (2021). La valutazione dei e nei sistemi formativi e-learning [Assessment of and in e-learning training systems]. Roma: Roma TrE-Press. Available at: https://romatrepress.uniroma3.it/libro/la-valutazione-dei-e-nei-sistemi-formativi-e-learning/

Palma, E. (28 April 2020). Esami universitari online, tra incertezze, paure e ansie degli studenti: "Preoccupano possibili problemi di rete" [Online university exams amid students' uncertainty, fear and anxiety: "Possible network problems are a concern"]. NewSicilia. Available at: https://newsicilia.it/agrigento/cronaca/esami-universitari-online-tra-incertezze-paure-e-ansie-degli-studenti-preoccupano-possibili-problemi-di-rete/551050

Palmiero, C., & Cecconi, L. (2019). Use of Learning Analytics in formative and summative evaluation. Journal of E-Learning and Knowledge Society, 15(3), 89-99. https://doi.org/10.20368/1971-8829/1135019. Available at: https://www.je-lks.org/ojs/index.php/Je-LKS_EN/article/view/1135019

Ranieri, M., & Nardi, A. (2018). Su carta o sullo schermo? Studio sulle percezioni delle verifiche digitali in ambito universitario [On paper or on screen? A study of perceptions of digital testing in higher education]. Italian Journal of Educational Technology, 26(3), 56-70. https://doi.org/10.17471/2499-4324/1011. Available at: https://ijet.itd.cnr.it/article/view/1011

Rovai, A. P. (2000). Online and traditional assessments: what is the difference? The Internet and Higher Education, 3(3), 141-151. https://doi.org/10.1016/S1096-7516(01)00028-8. Available at: https://www.sciencedirect.com/science/article/pii/S1096751601000288

University of the People (2022). Proctored exams. Available at: https://catalog.uopeople.edu/ug_term1_item/academic-regulations/proctored-exams

About the authors
Desirée Rosa Gómez Cardosa
Specialist in educational innovation in the Observation Operative Group of the eLearning Innovation Center of the Universitat Oberta de Catalunya. Her speciality is the detection and analysis of educational trends, innovation and technology in the Observatory of Educational Trends and Innovation of the eLinC. She holds a BA and MA in Art History from the Universitat de Barcelona and a Postgraduate Degree in e-learning Management from the UOC.
With degrees in Modern Literature (1995) and Information and Library Studies (1999), she is now in her third year of the Digital Education course at the University of Modena and Reggio Emilia and is doing her Erasmus traineeship at the UOC. She is a secondary-school humanities teacher. She was a librarian for ten years and strongly believes in lifelong learning.