JOURNAL ARTICLE

On the Effects of Automatically Generated Adjunct Questions for Search as Learning

Abstract

Actively engaging learners with learning materials has been shown to be very important in the Search as Learning (SAL) setting. One active reading strategy relies on so-called adjunct questions, i.e., manually curated questions geared towards essential concepts of the target material. However, manual question creation is impractical given the vast amount of online content. Recent research has explored the effects of Automatic Question Generation (AQG) on human learning, but these studies have primarily focused on controlled online reading scenarios with a limited set of documents. The impact of adjunct questions on learning in the SAL setting, which involves learning through web search, is not yet well understood. This paper addresses this gap with a between-subjects user study (N = 144) in which automatically generated adjunct questions were integrated into the reading interface of a search system, and investigates their effect on participants' learning. We employed three question generation strategies as well as a control condition: (i) synthesis questions; (ii) factoid questions targeting random text spans; and (iii) factoid questions targeting terms and phrases relevant to the information need at hand. We present four major findings: (i) participants who received adjunct questions exhibited significantly more fine-grained reading behaviour, such as longer document dwell times and more scrolling, than those without adjunct questions; however, the influence of adjunct questions on learning outcomes depends on the AQG strategy. (ii) Question types significantly influence participants' reading behaviour. (iii) The adjunct questions' target spans significantly influence learning outcomes. Lastly, (iv) participants' prior knowledge levels affect how adjunct questions shape their learning outcomes and how they react to different AQG strategies.
Our findings have significant design implications for learning-oriented search systems. The data and code are available at https://github.com/zpeide/AQG-AdjunctQuestions.

Keywords:
Adjunct, Computer science, Reading (process), Information retrieval, Control (management), World Wide Web, Artificial intelligence, Human–computer interaction, Linguistics

Metrics

Cited By: 1
FWCI (Field-Weighted Citation Impact): 0.64
Refs: 44
Citation Normalized Percentile: 0.63

Topics

Topic Modeling (Physical Sciences → Computer Science → Artificial Intelligence)
Expert finding and Q&A systems (Physical Sciences → Computer Science → Information Systems)
Information Retrieval and Search Behavior (Physical Sciences → Computer Science → Information Systems)

Related Documents

BOOK-CHAPTER

Adjunct Questions: Effects on Learning

Michele M. Dornisch

Year: 2012 Pages: 128-129
JOURNAL ARTICLE

Evaluation of automatically generated English vocabulary questions

Yuni Susanti, Takenobu Tokunaga, Hitoshi Nishikawa, Hiroyuki Obari

Journal: Research and Practice in Technology Enhanced Learning Year: 2017 Vol: 12 (1) Pages: 11-11
BOOK-CHAPTER

Pedagogical Evaluation of Automatically Generated Questions

Karen Mazidi, Rodney D. Nielsen

Lecture Notes in Computer Science Year: 2014 Pages: 294-299
JOURNAL ARTICLE

Web Search Using Automatically Generated Facets

D. Suresh Babu

Year: 2019 Pages: 37-40