Abstract

Deep Neural Networks (DNNs) have a wide range of applications, and software employing them must be thoroughly tested, especially in safety-critical domains. However, traditional software test coverage metrics cannot be applied directly to DNNs. In this paper, inspired by the MC/DC coverage criterion, we propose four novel test criteria that are tailored to structural features of DNNs and their semantics. We validate the criteria by demonstrating that the generated test inputs, guided by our coverage criteria, are able to capture undesirable behaviours in DNNs. Test cases are generated using both a symbolic approach and a gradient-based heuristic. Our experiments are conducted on state-of-the-art DNNs, obtained using the MNIST and ImageNet datasets.
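To make the MC/DC analogy concrete, the following is a minimal sketch of one plausible criterion in this family: a "sign-sign" condition on a pair of neurons in adjacent layers, covered when one upstream neuron flips activation sign between two test inputs (with the rest of its layer stable) and a downstream neuron flips as a result. The function names and the exact condition are illustrative assumptions, not the paper's definitions.

```python
# Illustrative MC/DC-style "sign-sign" coverage check for a neuron pair
# across adjacent layers. Names and the precise condition are hypothetical,
# chosen only to mirror the abstract's description of the criteria.

def sign(v):
    """Sign of a neuron activation: True if positive, False otherwise."""
    return v > 0.0

def ss_covered(layer_k_a, layer_k_b, neuron_j_a, neuron_j_b, i):
    """Pair (neuron i in layer k, neuron j in layer k+1) is covered by test
    inputs a and b if neuron i flips sign, every other neuron in layer k
    keeps its sign, and neuron j flips sign as a result."""
    i_flips = sign(layer_k_a[i]) != sign(layer_k_b[i])
    others_stable = all(
        sign(layer_k_a[m]) == sign(layer_k_b[m])
        for m in range(len(layer_k_a))
        if m != i
    )
    j_flips = sign(neuron_j_a) != sign(neuron_j_b)
    return i_flips and others_stable and j_flips

# Two test inputs' activations at layer k: only neuron 0 flips sign,
# and the downstream neuron j flips too, so the pair (0, j) is covered.
a_k = [0.5, -0.2, 1.1]
b_k = [-0.3, -0.1, 0.9]
print(ss_covered(a_k, b_k, 0.7, -0.4, 0))  # → True
```

A test-generation procedure (symbolic or gradient-based, as in the abstract) would then search for an input pair that makes such a condition true for each uncovered neuron pair.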

Keywords:
MNIST database, Computer science, Deep neural networks, Heuristic, Artificial intelligence, Artificial neural network, Code coverage, Software, Machine learning, Semantics (computer science), Test case, Programming language

Topics

Adversarial Robustness in Machine Learning
Physical Sciences → Computer Science → Artificial Intelligence
Software Testing and Debugging Techniques
Physical Sciences → Computer Science → Software
Anomaly Detection Techniques and Applications
Physical Sciences → Computer Science → Artificial Intelligence