JOURNAL ARTICLE

Evaluation of automatically generated English vocabulary questions

Yuni Susanti, Takenobu Tokunaga, Hitoshi Nishikawa, Hiroyuki Obari

Year: 2017 · Journal: Research and Practice in Technology Enhanced Learning · Vol: 12 (1) · Article: 11 · Publisher: Springer Nature

Abstract

This paper describes evaluation experiments for questions created by an automatic question generation system. Given a target word and one of its word senses, the system generates a multiple-choice English vocabulary question asking for the option closest in meaning to the target word as used in a reading passage. Two evaluations were conducted, covering two aspects: (1) how well the questions measure English learners' proficiency and (2) their similarity to human-made questions. The first evaluation is based on responses from English learners who were administered both the machine-generated and the human-made questions; the second is based on subjective judgements by English teachers. Both evaluations showed that the machine-generated questions reached a level comparable to the human-made questions on both counts.
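To make the question format concrete: the system described takes a target word and one of its senses and produces a stem plus one correct option and several distractors. The paper's actual pipeline is not public, so the sketch below is purely illustrative; the miniature sense inventory, the function name `make_question`, and the specific distractors are all hypothetical stand-ins for a real lexical resource such as WordNet.

```python
import random

# Hypothetical miniature sense inventory standing in for a lexical
# resource; (target word, sense gloss) -> answer and distractors.
SENSE_INVENTORY = {
    ("bright", "intelligent"): {
        "answer": "clever",
        "distractors": ["shiny", "cheerful", "vivid"],
    },
}

def make_question(target, sense, passage, seed=0):
    """Build a multiple-choice closest-in-meaning question in the
    style the abstract describes: a stem quoting the passage, plus
    the correct answer shuffled among distractors."""
    entry = SENSE_INVENTORY[(target, sense)]
    options = [entry["answer"]] + entry["distractors"]
    random.Random(seed).shuffle(options)
    stem = (f'In the passage "{passage}", the word "{target}" '
            f"is closest in meaning to:")
    return {"stem": stem, "options": options, "answer": entry["answer"]}

q = make_question("bright", "intelligent",
                  "She was a bright student who learned quickly.")
```

The returned dictionary contains everything needed to render one item: the stem, four shuffled options, and the keyed answer.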

Keywords:
Computer science, Vocabulary, Natural language processing, Artificial intelligence, Linguistics, Similarity, Judgement, Reading, Psychology

Metrics

Cited by: 22
FWCI (Field-Weighted Citation Impact): 1.83
References: 27
Citation Normalized Percentile: 0.87

Topics

Natural Language Processing Techniques
Topic Modeling
Speech and dialogue systems
(all under Physical Sciences → Computer Science → Artificial Intelligence)