This study explores the effectiveness of combining BERT (Bidirectional Encoder Representations from Transformers) with convolutional neural networks (CNN) and multilayer perceptrons (MLP) for a specific task. The results show promising performance for the BERT + CNN model, with precision, recall, and F1-score values of 0.91, 0.86, and 0.89, respectively. The BERT + MLP model exhibits consistent performance with an F1-score of 0.875. A comparative analysis against a previous study utilizing IndoBERT highlights the competitive edge of our BERT + CNN model, particularly in terms of recall and F1-score. Additionally, our proposed model demonstrates competitive performance against other state-of-the-art models such as RoBERTa and XLM-RoBERTa. This study contributes valuable insights into the optimization of BERT-based models for specific tasks, emphasizing the efficacy of the BERT + CNN architecture.
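The BERT + CNN architecture described above can be sketched as a convolutional classification head applied to BERT's token-level outputs. The kernel sizes, filter count, and max-over-time pooling below are illustrative assumptions, not the authors' reported configuration; a random tensor stands in for BERT's `last_hidden_state` so the sketch runs without pretrained weights.

```python
# Hedged sketch of a BERT + CNN classifier head. Hyperparameters here
# (kernel sizes 3/4/5, 100 filters, 2 classes) are assumptions for
# illustration only, not the configuration evaluated in the study.
import torch
import torch.nn as nn

class CnnHead(nn.Module):
    def __init__(self, hidden_size=768, num_filters=100,
                 kernel_sizes=(3, 4, 5), num_classes=2):
        super().__init__()
        # One 1-D convolution per kernel size over the token dimension
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden_size, num_filters, k) for k in kernel_sizes
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, hidden_size), e.g. BERT's last_hidden_state
        x = hidden_states.transpose(1, 2)  # -> (batch, hidden_size, seq_len)
        # Max-over-time pooling of each convolution's feature maps
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

# Random tensor standing in for BERT output: 4 sequences, 32 tokens, 768 dims
dummy = torch.randn(4, 32, 768)
logits = CnnHead()(dummy)
print(logits.shape)  # torch.Size([4, 2])
```

In this common design, each convolution detects n-gram-like patterns over contextualized token embeddings, and max pooling keeps the strongest activation per filter before the linear classifier.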
Muhammad Edo Syahputra, Ade Putera Kemala, Dimas Ramdhan