What is it about?
Damage analysis on social media platforms such as Twitter is a comprehensive problem that involves several subtasks for mining damage-related information from tweets, e.g., judging informativeness, assigning humanitarian categories, and assessing severity. The comprehensive information obtained by damage analysis makes it possible to identify breaking events around the world in real time and hence aids emergency response. Recently, with the rapid development of web technologies, multimodal damage analysis has received increasing attention because users prefer to post multimodal content on social media. Multimodal damage analysis leverages the associated image modality to improve the identification of damage-related information. However, existing works on multimodal damage analysis address each damage-related subtask individually and do not consider training them jointly.

In this work, we propose the Bidirectional Multi-task Cascaded multimodal Fusion (BiMCF) approach for joint multimodal damage analysis. We introduce a cascaded multimodal fusion framework that integrates effective visual and textual information separately for each task, since different tasks attend to different information. To exploit the interactions across tasks, the attended image-text interactive information is propagated bidirectionally between tasks, which leads to enhanced multimodal fusion. Comprehensive experiments validate the effectiveness of the proposed approach.
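As a rough illustration of how these two ideas fit together, here is a minimal PyTorch sketch. It is not the authors' released code: the task set, the feature dimensions, and the concrete operators (cross-attention for per-task fusion, a bidirectional GRU for cross-task propagation) are assumptions made for the example.

# Hypothetical sketch of the BiMCF idea described above, not the paper's
# actual implementation. Dimensions, operators, and task heads are
# illustrative assumptions.
import torch
import torch.nn as nn

class TaskFusion(nn.Module):
    """Per-task cross-modal fusion: each task forms its own attended
    mix of text and image features (the cascaded-fusion idea)."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feats, image_feats):
        # Text tokens query the image regions; pool to one vector per sample.
        fused, _ = self.attn(text_feats, image_feats, image_feats)
        return self.norm(fused.mean(dim=1))

class BiMCFSketch(nn.Module):
    """Three damage-analysis heads (informativeness, humanitarian
    category, severity) linked by bidirectional propagation of the
    fused image-text features across tasks."""
    def __init__(self, dim=256, num_classes=(2, 5, 3)):
        super().__init__()
        self.fusions = nn.ModuleList([TaskFusion(dim) for _ in num_classes])
        # A bidirectional GRU carries task-to-task messages both ways.
        self.propagate = nn.GRU(dim, dim // 2, batch_first=True,
                                bidirectional=True)
        self.heads = nn.ModuleList([nn.Linear(dim, c) for c in num_classes])

    def forward(self, text_feats, image_feats):
        # Step 1: separate multimodal fusion per task.
        per_task = [f(text_feats, image_feats) for f in self.fusions]
        # Step 2: bidirectional message passing over the task sequence.
        seq = torch.stack(per_task, dim=1)     # (batch, tasks, dim)
        enriched, _ = self.propagate(seq)      # (batch, tasks, dim)
        # Step 3: one classifier head per task.
        return [head(enriched[:, i]) for i, head in enumerate(self.heads)]

# Toy usage with random encoder outputs: batch of 2 tweets,
# 16 text tokens and 49 image regions, hidden size 256.
model = BiMCFSketch()
logits = model(torch.randn(2, 16, 256), torch.randn(2, 49, 256))
print([l.shape for l in logits])  # [(2, 2), (2, 5), (2, 3)]

The bidirectional GRU here is just one way to pass messages in both directions along the task sequence; any bidirectional operator over the per-task fused vectors would play the same structural role.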
Read the Original
This page is a summary of: Damage Analysis via Bidirectional Multi-Task Cascaded Multimodal Fusion, April 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3696410.3714609.