Yang Dai, Yunlong Li, Shixian Xuan, Yuheng Dai, Tao Xu, Hu Yu
Flexible acoustic sensors are transforming the way people interact with machines. However, developing a human-machine interaction acoustic sensor that simultaneously offers low cost, high stability, high fidelity, and high sensitivity remains a significant challenge. In this study, a sensor based on a sound-driven triboelectric nanogenerator was proposed. A poly(vinylidene fluoride) (PVDF)/graphene oxide (GO) composite nanofiber film, prepared by electrospinning, served as the dielectric layer, and copper-nickel alloy conductive fabric was used as the electrodes. A ring-shaped structure imitating an embroidery shed was designed to secure the upper and lower electrodes and the dielectric layer as a single assembly. Owing to the porosity of the electrodes, the large contact area of the dielectric layer, and the high stability of the imitation embroidery shed structure, the sensor achieves a sensitivity of 4.76 V·Pa⁻¹ and a frequency response range of 20–2000 Hz. A multilayer attention convolutional network (MLACN) was designed for speech recognition, and the resulting system achieved 99.5% accuracy in recognizing common word pronunciations. The integration of sound-driven triboelectric nanogenerator-based flexible acoustic sensors with deep learning techniques holds great promise for human-machine interaction.
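The abstract does not detail the MLACN architecture, but the core idea of combining convolutional feature extraction with attention over time can be sketched in a few lines. The following NumPy sketch is purely illustrative: the layer sizes, kernel widths, attention query, and 10-word class count are assumptions, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b):
    # x: (T, C_in); w: (K, C_in, C_out); valid 1-D convolution, stride 1, ReLU
    K, C_in, C_out = w.shape
    T_out = x.shape[0] - K + 1
    out = np.zeros((T_out, C_out))
    for t in range(T_out):
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def attention_pool(h, q):
    # h: (T, D) frame features; q: (D,) learned query vector (here random)
    scores = h @ q
    a = np.exp(scores - scores.max())
    a /= a.sum()                      # softmax over time steps
    return a @ h                      # weighted sum -> (D,) utterance vector

# Toy acoustic input: 100 time frames x 8 spectral channels (illustrative)
x = rng.normal(size=(100, 8))
h = conv1d_relu(x, 0.1 * rng.normal(size=(5, 8, 16)), np.zeros(16))
h = conv1d_relu(h, 0.1 * rng.normal(size=(5, 16, 32)), np.zeros(32))
ctx = attention_pool(h, rng.normal(size=32))
logits = ctx @ (0.1 * rng.normal(size=(32, 10)))   # 10 hypothetical word classes
pred = int(np.argmax(logits))
print(h.shape, ctx.shape, pred)
```

In a trained model the convolution kernels, attention query, and classifier weights would be learned from labeled recordings of the sensor's output rather than drawn at random; the attention step lets the network weight the most informative time frames of each utterance before classification.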