Abstract:
Natural disasters affect 350 million people annually and cause financial losses amounting to billions of dollars. When these disasters occur, a quick and accurate response is critical: correct information about damage locations enables rescue teams to respond rapidly and effectively, saving the greatest possible number of lives. Rescue teams rely on satellite imagery to determine the affected locations, the severity of the damage, and its causes. However, they need a systematic approach that lets them analyze huge volumes of satellite images accurately and quickly, which represents a major challenge. Deep learning can be used to overcome these challenges and to assist and support rescue efforts. In this research, a Siamese U-Net deep learning model with an attention mechanism was applied to pairs of satellite images (pre- and post-disaster) for semantic segmentation of buildings and damage-level classification. A two-stream U-Net first generates a building segmentation mask; the decoder then extracts high-dimensional feature vectors through a series of operations to generate the damage classification mask. Self-attention modules were included to capture important contextual information, enabling the model to focus on the areas surrounding buildings. The proposed system was evaluated on xBD, a benchmark dataset for building damage assessment, and achieved the best segmentation and classification results in several numerical and visual comparisons with related works that used the same dataset, while also providing a higher degree of generalizability and reliability.
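The architecture described above can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation: the class names, layer widths, fusion strategy (concatenating the two streams at the bottleneck and at skip connections), and the 5-channel damage output (background plus the four xBD damage levels) are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn


def conv_block(cin, cout):
    """Two 3x3 conv layers with batch norm and ReLU (standard U-Net block)."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )


class SpatialSelfAttention(nn.Module):
    """Simple non-local self-attention over spatial positions (illustrative)."""

    def __init__(self, c):
        super().__init__()
        self.q = nn.Conv2d(c, c // 8, 1)
        self.k = nn.Conv2d(c, c // 8, 1)
        self.v = nn.Conv2d(c, c, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.k(x).flatten(2)                   # (B, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)        # (B, HW, HW)
        v = self.v(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return self.gamma * out + x


class SiameseUNet(nn.Module):
    """Two-stream U-Net with shared encoder weights and two output heads:
    a building segmentation mask and a per-pixel damage classification mask."""

    def __init__(self, n_damage_classes=5):  # assumed: background + 4 xBD levels
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(128, 128)     # pre + post features fused here
        self.attn = SpatialSelfAttention(128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(64 + 64 + 64, 64)   # upsampled + both skip streams
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(32 + 32 + 32, 32)
        self.seg_head = nn.Conv2d(32, 1, 1)                 # building mask
        self.dmg_head = nn.Conv2d(32, n_damage_classes, 1)  # damage classes

    def forward(self, pre, post):
        # Shared (Siamese) encoder applied to both images
        p1, o1 = self.enc1(pre), self.enc1(post)
        p2, o2 = self.enc2(self.pool(p1)), self.enc2(self.pool(o1))
        # Fuse the two streams, then apply self-attention at the bottleneck
        b = self.attn(self.bottleneck(torch.cat([self.pool(p2), self.pool(o2)], 1)))
        # Decoder with skip connections from both streams
        d2 = self.dec2(torch.cat([self.up2(b), p2, o2], 1))
        d1 = self.dec1(torch.cat([self.up1(d2), p1, o1], 1))
        return self.seg_head(d1), self.dmg_head(d1)
```

A forward pass on a pre/post image pair of shape `(B, 3, H, W)` yields a `(B, 1, H, W)` segmentation logit map and a `(B, 5, H, W)` damage logit map at full input resolution.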