JOURNAL ARTICLE

Textual Adversarial Attacks on Named Entity Recognition in a Hard Label Black Box Setting

Abstract

Named entity recognition (NER) is a core task in natural language processing and underpins many downstream applications. A hard-label black-box adversarial attack generates adversarial examples that cause a model to make wrong predictions when the attacker can observe only the model's final decisions. However, there has so far been little research on hard-label black-box adversarial attacks against named entity recognition. Inspired by hard-label black-box attacks on text classification, we apply a genetic algorithm to adversarial attacks on NER. We first randomly generate initial adversarial examples and reduce the search space, then use a genetic algorithm to iteratively optimize the examples, and finally obtain high-quality adversarial examples. Experiments and analysis show that adversarial examples generated in the hard-label black-box setting effectively reduce the model's accuracy.
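The abstract describes a three-step pipeline: random initialization of adversarial examples, a restricted substitution search space, and genetic refinement guided only by the model's hard-label outputs. The paper provides no code here, so the following is a purely illustrative sketch under strong assumptions: the victim "NER model", the substitution sets, and the fitness function (count of flipped labels) are all hypothetical stand-ins, not the authors' method.

```python
import random

random.seed(0)

# Toy hard-label "NER model": the attacker sees only final label decisions.
# Entirely hypothetical: it tags a gazetteer word as LOC only when it is
# preceded by a trigger word, so changing context words can flip labels.
def ner_model(tokens):
    triggers = {"visited", "toured", "in"}
    gazetteer = {"Paris", "London"}
    return [
        "LOC" if t in gazetteer and i > 0 and tokens[i - 1].lower() in triggers
        else "O"
        for i, t in enumerate(tokens)
    ]

# Hypothetical per-position substitution sets defining the search space
# (in practice these would come from a synonym resource).
SUBSTITUTIONS = {1: ["visited", "saw", "toured"],
                 3: ["last", "previous"],
                 4: ["summer", "autumn"]}

def mutate(cand):
    """Replace one substitutable position with a random alternative."""
    out = list(cand)
    pos = random.choice(list(SUBSTITUTIONS))
    out[pos] = random.choice(SUBSTITUTIONS[pos])
    return out

def crossover(a, b):
    """Uniform crossover: pick each token from either parent."""
    return [random.choice(pair) for pair in zip(a, b)]

def attack(tokens, pop_size=10, generations=50):
    """Genetic search for a perturbation that flips at least one NER label."""
    orig = ner_model(tokens)
    fitness = lambda c: sum(a != b for a, b in zip(ner_model(c), orig))
    population = [mutate(tokens) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) > 0:   # at least one label flipped: success
            return population[0]
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return population[0]
```

For example, `attack(["Alice", "visited", "Paris", "last", "summer"])` can return a sentence such as "Alice saw Paris last summer", in which the toy model no longer labels "Paris" as LOC even though a human reader still recognizes it as a location.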

Keywords:
Adversarial examples; Named entity recognition; Hard-label black-box attack; Genetic algorithm; Natural language processing; Machine learning

Metrics

Cited By: 2
FWCI (Field-Weighted Citation Impact): 0.39
References: 25
Citation Normalized Percentile: 0.62

Topics

Adversarial Robustness in Machine Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Domain Adaptation and Few-Shot Learning
Physical Sciences →  Computer Science →  Artificial Intelligence
Topic Modeling
Physical Sciences →  Computer Science →  Artificial Intelligence


© 2026 ScienceGate Book Chapters — All rights reserved.