A methodology for using large language models to compile funds of assessment tools in higher education
Journal: RUDN Journal of Informatization in Education (Vol. 23, No. 1)
Publication Date: 2026-03-02
Authors: Dmitry Nazarov; Svetlana Begicheva
Pages: 57–74
Keywords: competency-based assessment model; accreditation expertise; query engineering; assessment materials specification; attention mechanism; transformers; validity; reliability
Abstract
Problem statement. Modern university education has a steadily growing need for scalable and normatively correct assessment tools that provide objective verification of competency development and compliance with accreditation requirements. Practice reveals a shortage of unified procedures, significant labor costs in the manual development of tasks, and differences in methodological approaches between departments. Against this background, generative artificial intelligence technologies are in demand, as they make it possible to accelerate the preparation of tasks of different types without compromising quality.

Methodology. The study is based on the principles of the competency-based assessment model and on the transformer architecture (attention mechanism) as the theoretical basis for task generation. A reproducible algorithm is proposed: specification of assessment tools against the competency matrix and course work programs; query engineering with explicit requirements for accuracy and structure; generation; expert verification and pilot testing; refinement and integration. The pilot was conducted on material from the discipline "Neural Network Algorithms" in the Business Informatics program. Multiple-choice tests, matching and sequencing tasks, and code-correction cases were produced. Labor costs, comprehensibility, difficulty level, and content validity were assessed.

Results. The approach significantly shortened the development cycle: generating the task bank took hours instead of the typical several days, and 50 questions were ultimately included in the fund of assessment tools. Of the 120 generated items, 17 questions and 23 answers required editorial corrections. The pilot with student participation showed high comprehensibility of the wording at medium and high difficulty levels, and expert assessment confirmed that the content matched the goals of the discipline and the requirements of objectivity.

Conclusion. The presented methodology ensures reproducible formation of a fund of assessment tools, reduces the labor intensity of preparing materials, and improves the manageability of assessment quality in the context of accreditation procedures. The universality of the approach allows its extension to other disciplines by adapting the specifications and query templates. Practical value is enhanced by integration with a Learning Management System and with internal quality-control procedures. Prospects include expanded psychometric testing (reliability, fairness), the development of subject-oriented query libraries, and further automation of results analysis.
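The query-engineering and verification steps described in the abstract could be sketched as follows. This is an illustrative assumption, not the authors' actual templates: the `ItemSpec` fields, the prompt wording, and the structural checks are hypothetical placeholders for a specification tied to a competency matrix, a structured generation query, and an automatic filter applied before expert review.

```python
# Hypothetical sketch of the methodology's query-engineering step:
# an assessment-item specification is turned into a structured query
# for a generative model, and each generated item is checked against
# basic structural requirements before it reaches expert verification.
# All names and fields here are illustrative, not from the paper.

from dataclasses import dataclass

@dataclass
class ItemSpec:
    competency: str      # competency code from the curriculum matrix
    topic: str           # topic from the course work program
    item_type: str       # e.g. "multiple_choice", "matching", "code_fix"
    difficulty: str      # "low" | "medium" | "high"
    n_options: int = 4

def build_query(spec: ItemSpec) -> str:
    """Compose a generation query with explicit accuracy/structure requirements."""
    return (
        f"Generate one {spec.item_type} question on the topic "
        f"'{spec.topic}' assessing competency {spec.competency}.\n"
        f"Difficulty: {spec.difficulty}. Provide exactly "
        f"{spec.n_options} answer options and mark the correct one.\n"
        "Requirements: unambiguous wording, a single correct answer, "
        "plausible distractors, no references to external materials."
    )

def passes_structural_check(item: dict, spec: ItemSpec) -> bool:
    """Minimal automatic filter applied before expert review."""
    return (
        bool(item.get("question"))
        and len(item.get("options", [])) == spec.n_options
        and item.get("correct") in item.get("options", [])
    )
```

Items that pass the structural check would still go through the expert verification and pilot-testing stages described above; the filter only removes obviously malformed generations early.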