Aoshuang Ye, Lina Wang, Lei Zhao, Jianpeng Ke
Deep learning (DL) defines a data-driven paradigm that differs from conventional software: it uses training data to construct the internal logic of deep neural networks. With its aggressive deployment in security-sensitive domains, deep learning has raised safety concerns in both academia and industry. Numerous studies have shown that even the most advanced deep learning systems contain vulnerabilities that lead to misbehaviors. Against this urgent security threat, fuzz testing remains a practical way to expose such defects. However, the static parameter design of current mainstream neuron coverage criteria prevents fuzzing from working effectively across heterogeneous model architectures. To address this problem, we propose Dynamic Adaptive Neuron Coverage (DANCe) to model neuron behavior; it adapts to diverse models by leveraging their training data. This dynamic adaptation mechanism enhances the ability of coverage-guided fuzzing to generate adversarial examples as test cases. The proposed model is evaluated on two widely adopted image datasets and four well-designed deep neural networks. The experimental results show that DANCe exceeds state-of-the-art coverage criteria by 7% and 33% in adversarial example generation under 1-hour and 6-hour time limits, respectively.
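The abstract contrasts a static-parameter neuron coverage criterion with one whose thresholds adapt to each model via its training data. The sketch below is purely illustrative (DANCe's actual statistics are not given in the abstract): it models "dynamic adaptation" as per-neuron thresholds estimated from training-set activations, versus a single fixed threshold shared by all neurons; the quantile choice and toy data are assumptions.

```python
import numpy as np

def static_coverage(activations, threshold=0.5):
    """Fraction of neurons whose activation on a test input exceeds one
    fixed, architecture-agnostic threshold (the static-parameter design)."""
    return float(np.mean(activations > threshold))

def adaptive_thresholds(train_activations, quantile=0.75):
    """Hypothetical adaptation step: estimate a separate threshold per neuron
    from training-data activations, so the criterion fits each model's own
    activation distribution instead of assuming one global scale."""
    return np.quantile(train_activations, quantile, axis=0)

def adaptive_coverage(activations, thresholds):
    """Fraction of neurons exceeding their own training-derived threshold."""
    return float(np.mean(activations > thresholds))

# Toy activations: 1000 training inputs x 64 neurons, plus one test input.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.2, scale=0.1, size=(1000, 64))
test = rng.normal(loc=0.25, scale=0.1, size=64)

th = adaptive_thresholds(train)
print("static coverage:  ", static_coverage(test))
print("adaptive coverage:", adaptive_coverage(test, th))
```

On this toy data the fixed threshold of 0.5 sits far above the model's activation scale, so the static criterion reports almost no coverage, while the training-derived thresholds discriminate between inputs — the kind of architecture sensitivity the static design lacks.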