JOURNAL ARTICLE

Structured prediction for urban scene semantic segmentation with geographic context

Abstract

In this work we address the problem of semantically segmenting urban remote sensing images into land cover maps. We propose to tackle this task by learning the geographic context of classes and using it to favor or discourage certain spatial configurations of label assignments. To this end, we learn from training data two spatial priors capturing different key aspects of geographical space: local co-occurrence and relative location of land cover classes. We embed these geographic context potentials into a pairwise conditional random field (CRF), which models them jointly with unary potentials from a random forest (RF) classifier. We train the RF on a large set of descriptors that properly account for the class appearance variations induced by the high spatial resolution. We evaluate our approach through an exhaustive experimental comparison on a set of 20 QuickBird pansharpened multi-spectral images.
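The abstract describes a pairwise CRF whose energy combines unary potentials (negative log-probabilities from the RF classifier) with pairwise geographic context potentials learned from training data. The sketch below is a minimal, hypothetical illustration of such an energy on a 4-connected pixel grid; the function name, the co-occurrence matrix, and the weight `w` are assumptions, not the authors' implementation, and the relative-location prior is omitted for brevity.

```python
import numpy as np

def crf_energy(labels, unary_neg_log, cooccur_prior, w=1.0):
    """Energy of a label map under a simple pairwise CRF (illustrative sketch).

    labels        : (H, W) int array of class indices
    unary_neg_log : (H, W, C) negative log-probabilities, e.g. from a
                    random-forest classifier (hypothetical input)
    cooccur_prior : (C, C) pairwise potential; low values favor class pairs
                    that co-occur locally in training data (assumption)
    w             : weight balancing unary vs. pairwise terms
    """
    H, W = labels.shape
    # Unary term: sum of -log P(class | pixel) at the chosen labels.
    energy = unary_neg_log[np.arange(H)[:, None],
                           np.arange(W)[None, :],
                           labels].sum()
    # Pairwise term over the 4-connected grid: penalize neighboring
    # label pairs that rarely co-occur in the training data.
    energy += w * cooccur_prior[labels[:, :-1], labels[:, 1:]].sum()  # horizontal edges
    energy += w * cooccur_prior[labels[:-1, :], labels[1:, :]].sum()  # vertical edges
    return energy
```

In this toy formulation, inference would seek the label map minimizing the energy, so a co-occurrence prior that assigns low cost to compatible class pairs favors spatially coherent configurations, which is the role the geographic context potentials play in the paper.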

Keywords:
Computer science; Conditional random field; Spatial contextual awareness; Segmentation; Land cover; Artificial intelligence; Random forest; Pairwise comparison; Classifier; Context; Pattern recognition; Machine learning; Geography; Cartography; Land use

Metrics

Cited by: 15
FWCI (Field-Weighted Citation Impact): 2.37
References: 15
Citation Normalized Percentile: 0.91


Topics

Remote-Sensing Image Classification
Physical Sciences →  Engineering →  Media Technology
Automated Road and Building Extraction
Physical Sciences →  Engineering →  Ocean Engineering
Remote Sensing and Land Use
Physical Sciences →  Earth and Planetary Sciences →  Atmospheric Science