Transformer-based semantic segmentation for large-scale building footprint extraction from very-high resolution satellite images

Bibliographic Details
Main Authors: Gibril, Mohamed Barakat A., Al-Ruzouq, Rami, Shanableh, Abdallah, Jena, Ratiranjan, Bolcek, Jan, Mohd Shafri, Helmi Zulhaidi, Ghorbanzadeh, Omid
Format: Article
Language: English
Published: Elsevier, 2024
Online Access: http://psasir.upm.edu.my/id/eprint/112078/1/1-s2.0-S0273117724002205-main.pdf
http://psasir.upm.edu.my/id/eprint/112078/
https://www.sciencedirect.com/science/article/pii/S0273117724002205?via%3Dihub
Description
Summary: Extracting building footprints from extensive very-high spatial resolution (VHSR) remote sensing data is crucial for diverse applications, including surveying, urban studies, population estimation, identification of informal settlements, and disaster management. Although convolutional neural networks (CNNs) are commonly utilized for this purpose, their effectiveness is constrained by limitations in capturing long-range relationships and contextual details due to the localized nature of convolution operations. This study introduces the masked-attention mask transformer (Mask2Former), built on the Swin Transformer, for building footprint extraction from large-scale satellite imagery. To enhance the capture of large-scale semantic information and extract multiscale features, a hierarchical vision transformer with shifted windows (Swin Transformer) serves as the backbone network. An extensive analysis compares the efficiency and generalizability of Mask2Former with four CNN-based models (PSPNet, DeepLabV3+, UPerNet-ConvNeXt, and SegNeXt) and two transformer-based models (UPerNet-Swin and SegFormer) of varying complexity. Results reveal superior performance of the transformer-based models over their CNN-based counterparts, demonstrating strong generalization across diverse testing areas with varying building structures, heights, and sizes. Specifically, Mask2Former with the Swin Transformer backbone achieves a mean intersection over union (mIoU) between 88% and 93%, along with a mean F-score (mF-score) ranging from 91% to 96.35% across various urban landscapes. © 2024 COSPAR
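
As a hedged illustration of the architecture named in the abstract, the sketch below runs a Mask2Former model with a Swin Transformer backbone on a single image tile via the Hugging Face transformers library. The checkpoint name (an ADE20K-pretrained model), the file path, and the overall pipeline are assumptions for demonstration only; they are not the authors' trained building-footprint weights or code.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# Assumed public ADE20K checkpoint; NOT the paper's building-footprint model.
checkpoint = "facebook/mask2former-swin-tiny-ade-semantic"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Mask2FormerForUniversalSegmentation.from_pretrained(checkpoint)
model.eval()

# Hypothetical VHSR satellite image tile.
image = Image.open("tile.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Collapse Mask2Former's predicted mask/class pairs into a per-pixel
# semantic map at the original tile resolution.
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(segmentation.shape)  # (height, width) tensor of class indices
```

The reported mIoU and mF-score follow their standard definitions from pixel-level confusion counts; the helper below is an illustrative reconstruction of those formulas, not the paper's evaluation code.

```python
import numpy as np

def mean_iou_and_fscore(pred, target, num_classes=2):
    """Per-class IoU and F-score for integer label maps, averaged over classes."""
    ious, fscores = [], []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (target == c))  # true positives
        fp = np.sum((pred == c) & (target != c))  # false positives
        fn = np.sum((pred != c) & (target == c))  # false negatives
        ious.append(tp / (tp + fp + fn + 1e-9))           # IoU = TP / (TP+FP+FN)
        fscores.append(2 * tp / (2 * tp + fp + fn + 1e-9))  # F1 = 2TP / (2TP+FP+FN)
    return np.mean(ious), np.mean(fscores)
```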