JOURNAL ARTICLE

QUESTION ANSWERING SYSTEM FOR HOSPITALITY DOMAIN USING TRANSFORMER-BASED LANGUAGE MODELS

Sathish Dhanasegar

Year: 2022   Journal: International Research Journal of Computer Science   Vol: 9 (5)   Pages: 110-134

Abstract

Recent research demonstrates significant success on a wide range of Natural Language Processing (NLP) tasks using Transformer architectures. Question answering (QA) is an important NLP task: QA systems enable users to ask a question in natural language and receive an answer accordingly. Most questions in the hospitality industry are content-based, with the expected response being specific information rather than "yes" or "no." The system must therefore understand the semantics of a question and return a relevant answer. Despite several advancements in Transformer-based models for QA, we are interested in evaluating how they perform on unlabeled data using a pre-trained model, which can also be fine-tuned. This project aims to develop a question-answering system for the hospitality domain, in which texts contain hospitality content and the user can ask questions about them. We use an attention mechanism to train a span-based model that predicts the positions of the start and end tokens of the answer within a paragraph. Using the model, users can type their questions directly into an interactive user interface and receive a response. The dataset for this study is created from response templates of an existing dialogue system and follows the structure of the Stanford Question Answering Dataset (SQuAD 2.0), which is widely used for QA models. In Phase 1, we evaluate the pre-trained QA models BERT, RoBERTa, and DistilBERT on answer prediction and measure the results using Exact Match (EM) and ROUGE-L F1-Score. In Phase 2, we fine-tune the QA models and their hyper-parameters by training on the hospitality datasets and compare the results. The fine-tuned RoBERTa model achieved the best ROUGE-L F1-Score and EM of 71.39 and 52.17, respectively, a relative increase of 4% in F1-Score and 8.7% in EM over the pre-trained model.
The results of this project will be used to improve the efficiency of the dialogue system in the hospitality industry.
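The abstract describes a span-based reader that predicts start and end token positions within a paragraph. The selection step, choosing the span whose start and end scores sum highest, can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the function name `best_span`, the plain-list logits, and the `max_len` cap are all assumptions made for the example.

```python
def best_span(start_logits, end_logits, max_len=30):
    """Pick (start, end) maximizing start_logits[s] + end_logits[e], s <= e.

    A span-based QA head emits one start score and one end score per token;
    the answer is the highest-scoring valid span, capped at max_len tokens.
    """
    best, best_score = (0, 0), float("-inf")
    for s, s_logit in enumerate(start_logits):
        # only consider end positions at or after the start, within max_len
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best
```

The nested loop is O(n x max_len); production readers typically vectorize this search and additionally mask spans that overlap the question tokens.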
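As a hedged sketch of the data format and metrics named in the abstract: the record below uses SQuAD 2.0 field names (its hospitality content is invented for illustration), and `exact_match` / `rouge_l_f1` are simplified stand-ins for the official EM and ROUGE-L F1 scorers (the official SQuAD normalization also strips articles, which is omitted here).

```python
import re

# A hospitality QA pair using SQuAD 2.0 field names; the content is
# invented for illustration. `answer_start` is a character offset into
# `context`, and `is_impossible` marks unanswerable questions.
example = {
    "context": "Check-in starts at 3 pm. The spa is open from 9 am to 8 pm.",
    "question": "When does check-in start?",
    "answers": {"text": ["3 pm"], "answer_start": [19]},
    "is_impossible": False,
}

def normalize(text):
    # Simplified SQuAD-style normalization: lowercase, drop punctuation,
    # collapse whitespace. (The official script also removes articles.)
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return " ".join(text.split())

def exact_match(prediction, reference):
    # EM: 1 if the normalized strings are identical, else 0.
    return int(normalize(prediction) == normalize(reference))

def lcs_length(a, b):
    # Dynamic-programming longest common subsequence over token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def rouge_l_f1(prediction, reference):
    # ROUGE-L F1: harmonic mean of LCS-based precision and recall.
    pred, ref = normalize(prediction).split(), normalize(reference).split()
    if not pred or not ref:
        return float(pred == ref)
    lcs = lcs_length(pred, ref)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(pred), lcs / len(ref)
    return 2 * p * r / (p + r)
```

EM rewards only verbatim answers, while ROUGE-L F1 gives partial credit for overlapping tokens, which is why the two scores in the abstract differ so widely.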

Keywords:
Computer science, Question answering, Ask price, Transformer, Paragraph, Sentence, Artificial intelligence, Natural language processing, Natural language, Information retrieval, World Wide Web

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 0
Citation Normalized Percentile: 0.05

Topics

Topic Modeling
Physical Sciences →  Computer Science →  Artificial Intelligence

Related Documents

JOURNAL ARTICLE

Transformer-based Language Models for Factoid Question Answering at BioASQ9b

Urvashi Khanna, Diego Mollá

Journal:   arXiv (Cornell University) Year: 2021 Pages: 247-257
JOURNAL ARTICLE

Gradual unfreezing transformer-based language models for biomedical question answering

Urvashi Khanna

Journal:   OPAL (Open@LaTrobe) (La Trobe University) Year: 2022
DISSERTATION

Using language models in question answering

Andreas Merkel

University:   SciDok (Saarland University and State Library) Year: 2008