Vehicle-assisted bridge damage assessment combining an attention mechanism and a Bi-LSTM network
Abstract: Vehicle-assisted bridge damage identification has great application potential, but it remains difficult to extract damage-sensitive features from multi-source monitoring data and to accurately evaluate the bridge damage state. To address this, an attention-weighted, LSTM-based feature fusion model (ALFF-Net) is proposed. The model uses a preset data reconstruction layer to improve the ability of Bi-LSTM cells to perceive multi-scale feature information in time-series data. By combining an attention mechanism with a feature fusion strategy, it reduces the prediction difficulty of the downstream branches of the deep neural network and further strengthens the modeling of important dependencies in sequence data. Monitoring datasets under different road roughness levels and vehicle speeds are generated through vehicle-bridge interaction simulation, and the bridge damage identification performance of ALFF-Net is comprehensively tested. The results show that, compared with the classical LSTM network, ALFF-Net improves damage identification accuracy by up to 19.30% while significantly reducing computational cost, and the identification errors under all road roughness levels are below 3%. Furthermore, by comparing the identification accuracy of ALFF-Net under different monitoring-data-driven schemes, it is verified that bridge damage detection results obtained by synergizing multi-source monitoring data are more robust.
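To make the described architecture concrete, the following is a minimal illustrative sketch, not the authors' released code: it assumes PyTorch and uses hypothetical module names and hyperparameters to show how a preset data reconstruction step, per-channel Bi-LSTM branches, attention pooling over time, and fusion of the pooled features for a damage estimate could be wired together.

```python
# Illustrative sketch only (assumed PyTorch implementation; names and sizes are hypothetical).
import torch
import torch.nn as nn


class AttentionPool(nn.Module):
    """Score each time step of a Bi-LSTM output and return the attention-weighted sum."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time, hidden_dim)
        weights = torch.softmax(self.score(seq), dim=1)   # attention weights over time
        return (weights * seq).sum(dim=1)                 # (batch, hidden_dim)


class ALFFNetSketch(nn.Module):
    """One possible reading of the ALFF-Net description: data reconstruction,
    per-sensor Bi-LSTM branches, attention pooling, and feature fusion."""

    def __init__(self, n_sensors: int = 3, window: int = 8, hidden: int = 64):
        super().__init__()
        self.window = window  # reconstruction: group `window` samples into one LSTM step
        self.branches = nn.ModuleList(
            nn.LSTM(window, hidden, batch_first=True, bidirectional=True)
            for _ in range(n_sensors)
        )
        self.pools = nn.ModuleList(AttentionPool(2 * hidden) for _ in range(n_sensors))
        self.head = nn.Sequential(                        # regressor on the fused features
            nn.Linear(n_sensors * 2 * hidden, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_sensors, length); length must be divisible by `window`
        feats = []
        for i, (lstm, pool) in enumerate(zip(self.branches, self.pools)):
            seq = x[:, i, :].reshape(x.size(0), -1, self.window)  # data reconstruction layer
            out, _ = lstm(seq)                                    # Bi-LSTM branch
            feats.append(pool(out))                               # attention pooling
        return self.head(torch.cat(feats, dim=-1))                # fused damage estimate


if __name__ == "__main__":
    model = ALFFNetSketch()
    dummy = torch.randn(4, 3, 256)  # 4 samples, 3 monitoring channels, 256 time steps
    print(model(dummy).shape)       # torch.Size([4, 1])
```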