Perceptions of AI-Based Assessment Tools in Higher Education
DOI: https://doi.org/10.36902/jcas.v2.i1.12

Keywords: AI Assessment, Higher Education, Automated Grading, Academic Integrity, Perceived Fairness, Privacy, Technology Acceptance

Abstract
Artificial intelligence (AI) is reshaping assessment in higher education through automated grading, adaptive testing, intelligent feedback, and remote proctoring. This paper examines how higher-education stakeholders (students, instructors, and administrators) perceive AI-based assessment tools, focusing on perceived usefulness, fairness, reliability, ethical issues (privacy and bias), and effects on academic integrity and learning outcomes. Using a mixed-methods design, the study gathered quantitative survey data (N = 420) and qualitative interview data (n = 24) across a diverse sample of universities that piloted or implemented AI assessment tools between 2020 and 2025. Quantitative measures comprised validated scales for perceived usefulness, trust in automation, perceived fairness, privacy concern, and self-reported academic behavior. Hypotheses were tested with independent-samples t-tests comparing student and instructor attitudes, multiple regression predicting acceptance of AI assessment from perceived usefulness and ethical concerns, and ANOVA examining variation across disciplines and prior exposure to AI tools; a sketch of this analysis pipeline follows below. Qualitative data were analyzed thematically for attitudes toward pedagogy, academic integrity, transparency, and professional development needs.
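To make the analysis pipeline concrete, the following Python sketch reproduces the three hypothesis tests described above. It is illustrative only: the file name and column names (acceptance, usefulness, fairness, privacy_concern, role, discipline) are hypothetical placeholders, not the study's actual instrument variables.

import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Load survey responses (hypothetical file and column names)
df = pd.read_csv("survey_responses.csv")

# Independent-samples t-test: student vs. instructor acceptance
students = df.loc[df["role"] == "student", "acceptance"]
instructors = df.loc[df["role"] == "instructor", "acceptance"]
t_stat, p_t = stats.ttest_ind(students, instructors)

# Multiple regression: z-score the variables first so the fitted
# coefficients are standardized betas, matching the abstract's reporting
cols = ["acceptance", "usefulness", "fairness", "privacy_concern"]
df[cols] = df[cols].apply(stats.zscore)
model = smf.ols("acceptance ~ usefulness + fairness + privacy_concern",
                data=df).fit()
print(model.summary())

# One-way ANOVA: acceptance compared across disciplines
groups = [g["acceptance"].to_numpy() for _, g in df.groupby("discipline")]
f_stat, p_f = stats.f_oneway(*groups)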
The results show conditional acceptance of AI-based assessment: stakeholders value efficiency and personalized feedback but hold persistent reservations about algorithmic bias, opaque grading, privacy risks arising from data handling and remote proctoring, and the potential erosion of human judgment in assessment. Regression results indicate that perceived usefulness (β = .42, p = .001) and perceived fairness (β = .29, p = .002) positively predict acceptance, while privacy concern negatively predicts it (β = -.31, p = .001). T-tests show stronger pragmatic acceptance among students than among instructors (t(418) = 3.12, p = .002). ANOVA reveals disciplinary differences: STEM participants reported greater trust in automated grading of objective tasks than humanities participants (F(3, 416) = 6.21, p < .001). Qualitative themes emphasized the need for human review, explainable scoring, robust data governance, and staff training.
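Read together, the reported coefficients imply a standardized regression model of roughly the following form (variable names paraphrased from the abstract):

Acceptance = β₀ + 0.42·Usefulness + 0.29·Fairness − 0.31·PrivacyConcern + ε

That is, a one-standard-deviation increase in perceived usefulness is associated with a 0.42 SD increase in acceptance, while the same increase in privacy concern lowers acceptance by 0.31 SD, holding the other predictors constant.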
The study contributes theoretically by integrating technology-acceptance and socio-technical fairness perspectives, and offers practical recommendations: design transparent algorithms, adopt layered integrity safeguards, train educators, and involve stakeholders in assessment redesign. The authors conclude that AI-based assessment can complement higher-education assessment practice provided implementation is accompanied by safeguards for fairness, transparency, and pedagogical fit.
License
Copyright (c) 2026 Sajida Batool, Naeem Akhtar, Azra Jamil (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.