RESEARCH ARTICLE (Open Access)

Forest fire progress monitoring using dual-polarisation Synthetic Aperture Radar (SAR) images combined with multi-scale segmentation and unsupervised classification

Age Shama A, Rui Zhang (https://orcid.org/0000-0002-0809-7682) A,*, Ting Wang A, Anmengyun Liu A, Xin Bao A, Jichao Lv A, Yuchun Zhang A and Guoxiang Liu A

A Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu, 611756, China.

* Correspondence to: zhangrui@swjtu.edu.cn

International Journal of Wildland Fire 33, WF23124 https://doi.org/10.1071/WF23124
Submitted: 29 July 2023  Accepted: 23 November 2023  Published: 21 December 2023

© 2024 The Author(s) (or their employer(s)). Published by CSIRO Publishing on behalf of IAWF. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND)

Abstract

Background

The cloud- and fog-penetrating capability of Synthetic Aperture Radar (SAR) gives it potential for application in forest fire progress monitoring; however, SAR remote sensing mapping of burned areas suffers from low extraction accuracy and significant salt-and-pepper noise.

Aims

This paper provides a method for accurately extracting the burned area by fully exploiting the pre- and post-fire changes in multiple feature parameters of dual-polarised SAR images.

Methods

This paper describes forest fire progress monitoring using dual-polarisation SAR images combined with multi-scale segmentation and unsupervised classification. We first constructed polarisation feature and texture feature datasets using multi-scene Sentinel-1 images. A multi-scale segmentation algorithm was then used to generate objects to suppress the salt-and-pepper noise, followed by an unsupervised classification method to extract the burned area.

Key results

The burned area extraction accuracy of our method is 91.67%, an improvement of 33.70 percentage points over the pixel-based classification results.

Conclusions

Compared with the pixel-based method, our method effectively suppresses the salt-and-pepper noise and improves the SAR burned area extraction accuracy.

Implications

The fire monitoring method using SAR images provides a reference for extracting the burned area under continuous cloud or smoke cover.

Keywords: burned areas, forest fire progress monitoring, multi-scale segmentation, polarisation features, Sentinel-1 image, synthetic aperture radar, texture features, unsupervised classification.

Introduction

Forest fires are a sudden, destructive natural disaster that is extremely difficult to prevent and control. According to statistics, more than 260 000 forest fires occur globally each year, an average of more than 700 per day, significantly impacting forest resources and even natural ecosystems (Shiraishi et al. 2021). Severe forest fires can burn down houses, buildings, and production facilities in surrounding villages, jeopardising people’s lives and property (Wei et al. 2018; Dixon et al. 2022). At the same time, combustible materials in the forest emit large amounts of smoke and dust during combustion, polluting the atmosphere. Therefore, real-time monitoring of wildfires and accurate, complete mapping of burned areas are of great significance for assessing the impacts of wildfires on natural and human ecosystems.

Satellite remote sensing periodically provides full-coverage imaging of the ground surface over wide areas, offering an efficient technical approach and data support for forest fire monitoring. Multi-spectral remote sensing was introduced into burned area mapping and loss assessment early (Lasaponara et al. 2020; Pinto et al. 2021), but burned area mapping from optical images is bottlenecked by cloud and smoke obscuration and by the interference of different objects sharing similar spectra (Kalogirou et al. 2014; Stroppiana et al. 2015; Roy et al. 2019). Compared with optical sensors, Polarimetric Synthetic Aperture Radar (PolSAR), as an active microwave imaging system, offers all-weather, day-and-night imaging and strong penetration, making it better suited to remote sensing monitoring of fires in complex climatic environments. Compared with single-polarisation SAR images, multi-polarisation SAR data capture more polarisation information and better characterise ground object details and features (Zhang et al. 2022). There have been many preliminary studies in ground object detection, classification, and identification (West et al. 2019; Dostálová et al. 2021; Mishra et al. 2023). However, burned area extraction still suffers from low accuracy and significant salt-and-pepper noise.

For burned area classification mapping, most current studies adopt pixel-by-pixel processing strategies, but such methods are susceptible to salt-and-pepper noise: the noise causes the brightness values of some pixels to be inconsistent with their actual categories, leading to many misclassifications (Luo et al. 2021; Zhang et al. 2023). In contrast, object-oriented methods, which aggregate similar pixels into objects and recognise them by attributes such as texture and shape, perform better at suppressing salt-and-pepper noise (Chen et al. 2020; Zhang et al. 2021). To improve the extraction accuracy of burned areas, scholars have introduced supervised classification methods such as random forests, training the classification model with many labelled samples (Gibson et al. 2020). However, the distribution and characteristics of burned areas are complex and vary in space and time, making it difficult to construct a complete and accurate sample dataset. Unsupervised classification methods, by contrast, require no pre-labelled samples; they automatically discover latent classes and patterns by clustering on the statistical features of the data alone (Qu et al. 2021; Foroughnia et al. 2022). Therefore, further development of polarised SAR unsupervised classification models and algorithms is of great significance for high-precision extraction of burned areas and monitoring of forest fire progress.

We propose a forest fire progress monitoring method using dual-polarisation SAR images that combines multi-scale segmentation and unsupervised classification to improve the accuracy and reliability of burned area SAR remote sensing mapping. In this study, object-oriented unsupervised burned area extraction was carried out with the Thomas wildfire as the study object. The burned area extracted from optical images was used as the reference standard for accuracy assessment and compared with pixel-based classification results to demonstrate the feasibility of the method.

Materials and methods

To address the low extraction accuracy and significant salt-and-pepper noise in SAR remote sensing mapping of burned areas and forest fire progress monitoring, this paper describes a method for forest fire progress monitoring using dual-polarisation SAR images combined with multi-scale segmentation and unsupervised classification. We use Single Look Complex (SLC) data to compute the backscattering coefficient ratio, the H-A-Alpha decomposition, and grey-level co-occurrence matrix (GLCM) statistics, constructing feature factors of multiple feature types. Polarisation feature vectors were selected based on the Pearson correlation coefficient and feature importance. A suitable segmentation scale is then selected for object-oriented segmentation of the polarised feature factors, and the burned area is mapped using the k-means clustering algorithm. Finally, we use the optically extracted burned area as a reference standard to evaluate the accuracy of the SAR-extracted burned area. The workflow is shown in Fig. 1.

Fig. 1. The workflow of the implementation approach.

Study area

As shown in Fig. 2, the Thomas wildfire occurred on the coast of southern California, USA, a region with a Mediterranean climate and a mosaic of hills, mountains, plains, and terraces, with a high degree of development of shrubs and evergreen forests. The fire began on 5 December 2017, and was fully contained on 12 January 2018. The total burned area was approximately 114 078 ha, making it the largest fire in recent modern California history at that time (Zhang et al. 2019).

Fig. 2. Map of the study area.

Sentinel-1 dataset

The Sentinel-1 mission consists of two satellites (Sentinel-1A and Sentinel-1B), successfully launched on 3 April 2014 and 25 April 2016, respectively. Both carry the same type of C-band SAR system, characterised by a short revisit period, wide coverage, high resolution, and all-weather, day-and-night imaging. We selected five Sentinel-1 IW-mode SLC scenes in VV + VH dual-polarisation mode. For the Sentinel-1 image parameters, see Supplementary Table S1.

Reference data

The spatial resolution of Sentinel-2 multi-spectral remote sensing data is 10, 20, or 60 m, depending on the band, and the temporal resolution is 5 days. Cloud-free Sentinel-2A images acquired before and after the Thomas wildfire were selected as reference data to validate the feasibility and reliability of the burned area extraction method in this paper. The normalised burn ratio (NBR) is a widely used index for estimating burn severity and monitoring burned areas. It is calculated from the near-infrared band (NIR, wavelengths 0.85–0.88 μm) and the short-wave infrared band (SWIR2, wavelengths 2.11–2.29 μm) as (Sismanis et al. 2023):

\[ \mathrm{NBR} = \frac{\mathrm{NIR} - \mathrm{SWIR2}}{\mathrm{NIR} + \mathrm{SWIR2}} \tag{1} \]

The difference in NBR (dNBR) before and after a wildfire provides a better estimate of the burned area. The dNBR is calculated as:

\[ \mathrm{dNBR} = \mathrm{NBR}_{\mathrm{pre}} - \mathrm{NBR}_{\mathrm{post}} \tag{2} \]

where NBRpre and NBRpost are NBR images before and after the wildfire, respectively.
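For illustration, Eqns 1 and 2 reduce to a few lines of NumPy. The sketch below assumes the NIR and SWIR2 bands have already been read as reflectance arrays; the band handling and the small stabilising epsilon are our additions, not part of the paper.

```python
import numpy as np

def nbr(nir: np.ndarray, swir2: np.ndarray) -> np.ndarray:
    """Normalised burn ratio (Eqn 1); inputs are reflectance arrays."""
    return (nir - swir2) / (nir + swir2 + 1e-10)  # epsilon guards against division by zero

def dnbr(nbr_pre: np.ndarray, nbr_post: np.ndarray) -> np.ndarray:
    """dNBR (Eqn 2); higher values indicate a stronger post-fire NBR drop."""
    return nbr_pre - nbr_post
```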

Supplementary Fig. S1 shows the pre-fire vegetation types in the Thomas wildfire area and the normalised difference vegetation index (NDVI) map. Reference perimeters for the Thomas wildfire were obtained by combining California Fire Perimeters vector data with visual interpretation of the 12 January 2018 Sentinel-2 imagery. Land cover maps are from the US National Land Cover Database (NLCD-2011), based on 30-m Landsat data (Homer et al. 2015). The land cover map shows that the primary vegetation type in the burned area is shrubs, followed by evergreen and mixed forests. NDVI was calculated from post-fire Sentinel-2 imagery; the NDVI map indicates that the lower right portion of the burned area has much less vegetation cover than the upper left portion.

Constructing feature datasets

Before a burn, the radar signal mainly originates from the vegetation canopy; a forest fire successively burns the canopy, the understorey, and the surface vegetation. After the fire, the radar signal reaches the ground and interacts with the soil surface, and the main factor affecting the radar backscatter value becomes soil moisture. Therefore, this paper uses the backscatter parameters of the different polarisation channels to construct the Backscatter Burn Ratio (BBR), calculated by:

\[ \mathrm{BBR}_{\gamma} = \frac{1}{2}\left( \frac{\gamma_{\mathrm{post}}^{vv}}{\gamma_{\mathrm{pre}}^{vv}} + \frac{\gamma_{\mathrm{post}}^{vh}}{\gamma_{\mathrm{pre}}^{vh}} \right) \tag{3} \]

where γpre represents the pre-fire sigma (σ⁰) backscattering coefficient or gamma (γ⁰) normalised backscattering coefficient, and γpost represents the corresponding post-fire value.
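A minimal sketch of Eqn 3 follows. It assumes pre- and post-fire VV and VH backscatter arrays in linear (not dB) scale, which is our reading of the ratio form; the function name is ours.

```python
import numpy as np

def backscatter_burn_ratio(pre_vv, pre_vh, post_vv, post_vh):
    """Backscatter Burn Ratio (Eqn 3): the mean of the post/pre ratios of
    the VV and VH backscatter coefficients (linear scale assumed)."""
    return 0.5 * (post_vv / pre_vv + post_vh / pre_vh)
```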

Forest fires destroy pre-existing vegetation structures, altering the backscattering mechanisms of the burned area. Before the fire, the target scattering echo mainly originates from the canopy of the vegetation, characterised by volume scattering. After the fire, the vegetation canopy is significantly reduced, and the target scattered echoes mainly originate from the soil surface, characterised by surface scattering. Polarisation target decomposition, the primary method for extracting polarisation scattering features, can identify different types of scattering mechanisms. First, the H-A-Alpha polarisation decomposition method is used to obtain the polarisation parameters such as entropy, scattering angle, and anisotropy. Second, the polarisation burn difference is constructed based on the pre-fire and post-fire polarisation parameters. The formula for its calculation is:

\[ D_{\delta} = \begin{cases} \delta_{\mathrm{post}} - \delta_{\mathrm{pre}}, & \bar{\delta}_{\mathrm{post}} \ge \bar{\delta}_{\mathrm{pre}} \\ \delta_{\mathrm{pre}} - \delta_{\mathrm{post}}, & \bar{\delta}_{\mathrm{post}} < \bar{\delta}_{\mathrm{pre}} \end{cases} \tag{4} \]

where Dδ denotes the difference between the pre-fire and post-fire polarisation parameters; δpre and δpost are the pre-fire and post-fire entropy, scattering angle, or anisotropy, respectively; and δ̄pre and δ̄post are the average values of the pre-fire and post-fire polarisation parameters in the burned area, respectively.
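A sketch of Eqn 4 follows. Because the burned area mean δ̄ is not known before extraction, the sketch substitutes the scene mean, which is our assumption rather than the paper's procedure.

```python
import numpy as np

def polarisation_burn_difference(pre: np.ndarray, post: np.ndarray) -> np.ndarray:
    """Signed polarisation-parameter difference (Eqn 4). The sign is chosen
    from the average change so the difference is positive where the
    parameter increased most; the scene mean stands in for the burned
    area mean used in the paper."""
    if post.mean() >= pre.mean():
        return post - pre
    return pre - post
```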

The imaging characteristics of SAR show that the grey level of a SAR image reflects the backscattering behaviour of ground objects. Different ground objects with similar backscattering coefficients show the same or similar grey values in SAR images, producing a complex grey-level distribution. Therefore, using only grey-level features to extract the burned area does not give satisfactory classification results. We use the texture information of the target’s interior to compensate for the shortcomings of grey-level features and thus improve accuracy. Texture analysis studies the spatial distribution relationships between pixels. In this paper, eight standard features (mean, contrast, variance, dissimilarity, homogeneity, correlation, information entropy, and angular second moment) are extracted with the Grey Level Co-occurrence Matrix (GLCM) method from the Sentinel-1 images before and after the fire. We used a window size of 9 × 9, an offset distance of 1, and a grey quantisation level of 64 as the GLCM parameters, and averaged the texture features over four directions (0°, 45°, 90°, and 135°). We construct the texture feature variation parameter from the texture features of the two polarisation channels as:

\[ P_c = \frac{\bar{c}_{\mathrm{post}}^{vv} + \bar{c}_{\mathrm{post}}^{vh}}{2} - \frac{\bar{c}_{\mathrm{pre}}^{vv} + \bar{c}_{\mathrm{pre}}^{vh}}{2} \tag{5} \]

where Pc is the difference between the texture feature parameters before and after the fire, and c̄pre and c̄post denote the texture feature values before and after the fire, averaged over the four directions (0°, 45°, 90°, and 135°).
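The per-window texture computation can be sketched with scikit-image using the paper's parameters (9 × 9 window, offset 1, 64 grey levels, four averaged directions). Sliding the window across the whole image is omitted, and the quantisation step and exact feature definitions are our assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window: np.ndarray, levels: int = 64) -> dict:
    """GLCM texture features for one 9 x 9 window scaled to [0, 1]:
    offset distance 1, four directions (0, 45, 90, 135 degrees) averaged."""
    q = np.clip((window * (levels - 1)).astype(np.uint8), 0, levels - 1)
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(q, distances=[1], angles=angles,
                        levels=levels, symmetric=True, normed=True)
    feats = {p: graycoprops(glcm, p).mean()        # average over the 4 angles
             for p in ("contrast", "dissimilarity", "homogeneity",
                       "correlation", "ASM")}
    i = np.arange(levels).reshape(-1, 1, 1, 1)     # row index of the GLCM
    mu = (i * glcm).sum(axis=(0, 1))
    feats["mean"] = mu.mean()
    feats["variance"] = (((i - mu) ** 2) * glcm).sum(axis=(0, 1)).mean()
    feats["entropy"] = (-(glcm * np.log2(glcm + 1e-12)).sum(axis=(0, 1))).mean()
    return feats
```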

In this paper, 13 feature parameters were calculated using dual-polarised Sentinel-1A images. For detailed feature names and feature meanings, see Table S2.

Feature selection

We constructed numerous feature factors from the perspectives of backscatter intensity, scattering mechanism, and SAR texture. These features are correlated with each other to varying degrees, and some constrain one another. Using all of them for burned area extraction would therefore create data redundancy and reduce efficiency, and classification results can become unstable owing to the ‘curse of dimensionality’. Before extracting the burned area, we therefore reduce redundancy by calculating correlation coefficients between the feature factors.

The Pearson correlation coefficient (Dufera et al. 2023) is a statistic that measures the degree of linear correlation between two features and helps reduce potential collinearity. This paper uses it to determine whether pairs of features are highly correlated. The Pearson correlation coefficient is calculated as:

\[ \mathrm{COR} = \frac{\mathrm{cov}(x, y)}{\sqrt{\mathrm{var}(x) \times \mathrm{var}(y)}} \tag{6} \]

where COR is the Pearson correlation coefficient; x and y are the feature factors; cov(x, y) is the covariance of the feature factors x and y; var(x) and var(y) are the variances of the feature factors.
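In practice Eqn 6, applied to every feature pair, is a one-liner; the stacked matrix layout in the sketch below is our assumption.

```python
import numpy as np

def correlation_matrix(features: np.ndarray) -> np.ndarray:
    """Pairwise Pearson correlations (Eqn 6) between feature factors;
    `features` is an (n_features, n_samples) array, one row per factor.
    np.corrcoef computes cov(x, y) / sqrt(var(x) * var(y)) for every pair."""
    return np.corrcoef(features)
```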

Multi-scale segmentation and optimal segmentation scale selection

Multi-scale segmentation (Du et al. 2016; Fu et al. 2022) is a bottom-up region-merging technique. The merging process follows a minimum-heterogeneity criterion that considers the spectral and shape characteristics of the image, where the shape characteristics comprise smoothness and compactness. The heterogeneity criterion is calculated as:

\[ f = (1 - \omega_1) h_{\mathrm{color}} + \omega_1 h_{\mathrm{shape}} \tag{7} \]
\[ h_{\mathrm{color}} = \sum_{i=1}^{n} \lambda_i \sigma_i \tag{8} \]
\[ h_{\mathrm{shape}} = (1 - \omega_2) h_{\mathrm{smooth}} + \omega_2 h_{\mathrm{compact}} \tag{9} \]
\[ h_{\mathrm{smooth}} = \frac{E}{L} \tag{10} \]
\[ h_{\mathrm{compact}} = \frac{E}{\sqrt{N}} \tag{11} \]

where f is the heterogeneity criterion; hcolor is the spectral feature; hshape is the shape feature; hsmooth is the smoothness; and hcompact is the compactness. ω1 and ω2 are weighting factors; λi denotes the weight of the ith band; and σi denotes the standard deviation of the ith band. E denotes the number of image pixels on the boundary contour of the image object; N denotes the number of image pixels within the image object; and L denotes the perimeter of the smallest rectangle enclosing the image object.
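For concreteness, a small sketch of the merge criterion (Eqns 7–11) follows; the weights w1 and w2 are illustrative placeholders, since the paper tunes the shape and compactness factors iteratively.

```python
import numpy as np

def heterogeneity(band_stds, band_weights, E, L, N, w1=0.1, w2=0.5):
    """Merge criterion of Eqns 7-11 for a candidate object.
    band_stds/band_weights: per-band standard deviations and weights;
    E: boundary length in pixels; L: perimeter of the object's bounding
    rectangle; N: pixel count of the object. w1, w2 are illustrative."""
    h_color = float(np.dot(band_weights, band_stds))   # Eqn 8
    h_smooth = E / L                                   # Eqn 10
    h_compact = E / np.sqrt(N)                         # Eqn 11
    h_shape = (1 - w2) * h_smooth + w2 * h_compact     # Eqn 9
    return (1 - w1) * h_color + w1 * h_shape           # Eqn 7
```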

From Eqns 7–11 it can be seen that spectral and shape features, and smoothness and compactness, form complementary pairs: increasing the weight of one correspondingly decreases the weight of the other. Consequently, we used iterative optimisation to determine the shape and compactness factors for the study object, and chose the optimal segmentation scale with the estimation of scale parameters (ESP) method. ESP takes the local variance (LV) of image-object homogeneity as the average standard deviation of the segmented object layer; when the rate of change (ROC) of the LV peaks, the segmentation scale at that point is a candidate optimal scale. ROC is calculated as:

\[ \mathrm{ROC} = \left( \frac{L_i - L_{i-1}}{L_{i-1}} \right) \times 100 \tag{12} \]

where Li and Li−1 denote the average standard deviations of the ith and (i − 1)th object layers, respectively.
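A sketch of Eqn 12 over a sequence of scales follows; the LV array is assumed to be ordered by increasing segmentation scale.

```python
import numpy as np

def rate_of_change(lv: np.ndarray) -> np.ndarray:
    """ROC of local variance across successive segmentation scales
    (Eqn 12); lv[i] is the average standard deviation of the object layer
    at the ith scale. Peaks in the result mark candidate optimal scales."""
    lv = np.asarray(lv, dtype=float)
    return (lv[1:] - lv[:-1]) / lv[:-1] * 100.0
```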

Unsupervised classification algorithms and parameter selection

K-means clustering (Sinaga and Yang 2020) is an unsupervised classification method that is easy to implement and efficient, and is the most widely used of all clustering algorithms. It takes distance as the similarity criterion, assuming that the closer two objects are, the more similar they are (Abid et al. 2021). The k-means algorithm is solved iteratively; the specific steps are as follows (a worked sketch is given after the list).

Assume that the dataset is X = {x1, x2, …, xi, …, xn}, the number of clusters is K, and the cluster centres are C = {c1, c2, …, cj, …, cK}.

  1. K samples are randomly selected from dataset X as the initial clustering centres.

  2. For each sample xi (i = 1, 2, …, n) in the dataset, calculate its distance to each clustering centre cj (j = 1, 2, …, K). The distance is calculated as:

    \[ D(x_i, c_j) = \sqrt{ \sum_{l=1}^{L} \left( x_i^l - c_j^l \right)^2 } \tag{13} \]

    where L is the dimension of the sample.

  3. Based on the calculated distance of each sample to the cluster centre, find the minimum distance and divide that sample into the corresponding cluster.

  4. The clustering centres are recalculated and updated according to Eqn 14, and the results of the objective function are calculated using Eqn 15:

    \[ c_j = \frac{1}{|c_j|} \sum_{x_i \in c_j} x_i \tag{14} \]
    \[ E = \sum_{j=1}^{K} \sum_{x_i \in c_j} d^2(c_j, x_i) \tag{15} \]

  5. If the clustering centres and the objective function meet the convergence requirements, the algorithm ends; otherwise, return to step (2).
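The following is a minimal NumPy sketch of steps 1–5, written against Eqns 13–15; the initialisation, tolerance, and empty-cluster guard are our choices.

```python
import numpy as np

def kmeans(X: np.ndarray, K: int, max_iter: int = 100, tol: float = 1e-6,
           seed: int = 0):
    """Plain k-means over an (n_samples, n_features) matrix X."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=K, replace=False)]            # step 1
    for _ in range(max_iter):
        # steps 2-3: Euclidean distances (Eqn 13), assign nearest centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # step 4: update centres (Eqn 14); keep old centre if a cluster empties
        new_centres = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centres[j]
                                for j in range(K)])
        sse = float((d[np.arange(len(X)), labels] ** 2).sum())        # Eqn 15
        # step 5: stop once the centres stabilise
        if np.linalg.norm(new_centres - centres) < tol:
            break
        centres = new_centres
    return labels, centres, sse
```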

One of the main problems of the k-means algorithm is determining the optimal number of clusters (the K parameter). The silhouette coefficient is a way of assessing the quality of clustering results. For a sample point xi in the dataset, it is calculated as:

\[ S(x_i) = \frac{b - a}{\max(a, b)} \tag{16} \]

where a is the average distance from sample xi to the other samples in the same cluster (the smaller a is, the more firmly xi belongs to that cluster), and b is the minimum over the other clusters of the average distance from xi to their sample points; that is, with K clusters, b = min{b1, b2, …, bk}, 1 ≤ k ≤ K. The larger S(xi) is, the more compact the cluster containing xi. The silhouette coefficient lies in [−1, 1], and the average silhouette coefficient of each cluster is calculated as:

\[ s_k = \frac{1}{n_k} \sum_{i=1}^{n_k} s(x_i) \tag{17} \]

where sk is the mean silhouette coefficient of the sample points of the kth cluster and nk is the number of sample points in that cluster; a larger sk indicates a better clustering result, and vice versa.
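A sketch of the K scan with scikit-learn follows; the range mirrors the K = 2–20 scan used in the experiments, and the KMeans settings are our defaults.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_k(X: np.ndarray, k_range=range(2, 21), seed: int = 0) -> dict:
    """Mean silhouette score (Eqns 16-17) for each candidate K; the K with
    a high score and a sensible burned/unburned partition is retained."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        scores[k] = float(silhouette_score(X, labels))
    return scores
```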

Experimental analysis

Dataset preparation

First, we normalised all feature factors to the range [0, 1] to ensure the comparability and consistency of the data and to improve the accuracy and reliability of the results. Then 100 000 randomly selected sample points within the study area were used to calculate Pearson correlations between the feature factors. Fig. 3 shows the correlation coefficients for each feature factor.
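A sketch of this preparation step follows; the stacked array layout and sampling without replacement are our assumptions.

```python
import numpy as np

def normalise_and_sample(stack: np.ndarray, n: int = 100_000, seed: int = 0):
    """Min-max normalise each feature factor to [0, 1], then draw the
    random pixel sample used for the correlation analysis (cf. Fig. 3).
    `stack` is an (n_features, rows, cols) array; rows of the returned
    matrix can be fed directly to np.corrcoef."""
    flat = stack.reshape(stack.shape[0], -1)
    mins = flat.min(axis=1, keepdims=True)
    maxs = flat.max(axis=1, keepdims=True)
    norm = (flat - mins) / (maxs - mins + 1e-12)
    idx = np.random.default_rng(seed).choice(norm.shape[1], size=n, replace=False)
    return norm[:, idx]
```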

Fig. 3. Correlation coefficients of the feature factors.

Feature importance is a measure used to evaluate the usefulness of a feature in the model classification process. This paper uses feature importance ranking to reveal the influence of different feature factors on the extraction of burned areas. Fig. 4 shows the importance ranking of feature factors.

Fig. 4. Importance ranking of feature factors.

As shown in Fig. 3, the correlation coefficients between feature factors of the same feature type are relatively high. For example, the lowest Pearson correlation coefficient between the Pvariance, Pcontrast, and Pdissimilarity feature factors and the other texture feature factors is 0.52, and the lowest Pearson correlation coefficient among the three polarisation decomposition feature factors is 0.70. The feature factors were then ranked by importance to select the more representative ones. Combining the Pearson correlation coefficients and the importance ranking, the texture parameters Pmean and Pvariance, the polarisation decomposition parameters Dalpha and Dentropy, and the backscattering intensity parameters BBRgamma and BBRsigma were finally selected, and these six features were used for the unsupervised, object-oriented extraction of the burned area.

Fig. 5 illustrates the spatial distribution of the six feature factors at different times for the Thomas wildfire. Because of the complexity of the terrain, among other factors, changes within the burned area are not uniform and show different degrees of variability across the parameters. For some parameters the change in the burned area is pronounced enough to distinguish it from unburned areas, while for others the change is weaker and easily confused with unburned areas. As seen in Fig. 5, Pmean and Pvariance delineate the extent of the burned zone very well, and the other selected feature factors also reveal its approximate extent.

Fig. 5. Characteristic factor maps of the Thomas fire.

Multi-scale segmentation experiment

The optimal segmentation scale of the study area was determined by estimation of scale parameters (ESP) analysis. The experiment used an initial segmentation scale of 20, a step size of 1, and equal band weights of 1 for multi-scale segmentation, producing the ROC-LV graph (Fig. 6).

Fig. 6. Multi-scale segmentation results.

As shown in Fig. 6, the ROC-LV curve has multiple peaks, for example at scales 62, 74, 88, and 97. To avoid subjectivity in segmentation scale selection, we tested candidate scales on local subsets of the study area. Ground object category samples were outlined through visual interpretation as the standard for accuracy evaluation, and the over-segmented and under-segmented objects in the local images were analysed. The optimal segmentation scale for the study area was finally selected as 97. The segmentation results obtained under the optimal parameters, superimposed on the optical image (Fig. S2), show that the multi-scale segmentation algorithm preserves the authenticity of the segmented image and that the segmentation boundaries reflect the difference between burned and unburned regions well.

Determination of the K value by the silhouette coefficient algorithm

Using 100 000 randomly selected sample points from the 15 January 2018 Thomas fire SAR dataset, the optimal number of clusters (K value) was determined with the silhouette coefficient method. Fig. S3a shows the silhouette scores for K from 2 to 20. The highest silhouette scores occur at the lowest K values (Fig. S3a); the next highest score, 0.248, occurs at K = 6. Fig. S3b shows the extraction results with two clusters: a large number of unburned areas are wrongly extracted as burned when K = 2, and the extraction results are poor. Therefore, the number of clusters used in this study was six.

Burned area extraction and accuracy evaluation

Visual comparison of the pixel-based and object-based classifications (Fig. 7) shows severe fragmentation in the pixel-based results. In pixel-level classification, the algorithm assigns each pixel to a category, and salt-and-pepper noise makes the brightness values of some pixels inconsistent with their actual categories, causing misclassification; pixel-based classification is therefore affected by the noise, leading to poor burned area extraction. The object-oriented extraction suppresses the salt-and-pepper noise through multi-scale segmentation. As a result, the burned area extracted by this paper’s method has a more continuous distribution and better integrity, without apparent fragmentation. The experimental results show that our method captures the shape and boundary of the burned area more accurately, providing a reliable basis for subsequent fire monitoring and management.

Fig. 7. Classification results.

This paper uses multi-scale segmentation to combine neighbouring pixels into larger objects and then extracts the burned area from these objects. As seen in panels a1–c3 of Fig. 8, the classification results of our method are complete in shape, without many fragmented patches inside the burned area, and the burned area distribution is more accurate. As shown in panels d1–d3 of Fig. 8, roads, some cultivated land, and bare land have backscattering characteristics similar to the burned area, causing the pixel-based method to misclassify them as burned. In contrast, our method, processed through multi-scale segmentation, avoids large-scale misclassification of these features.

Fig. 8. Detailed maps of the burned area extraction results. a1–d1 are optical images, a2–d2 are pixel-based classification results, and a3–d3 are the burned area extraction results of this paper’s method.

Agreement (burned) in Fig. 9 indicates where the burned area extracted by unsupervised classification of SAR images agrees with the burned area extracted from optical images. ‘S1 only’ indicates areas extracted from the SAR images but not from the optical reference (commission); ‘S2 only’ indicates areas in the optical reference that the SAR extraction missed (omission).

Fig. 9. A comparison of burned area extraction results based on SAR and optical images for the Thomas wildfire, 15 January 2018. (a) Pixel-based classification, and (b) object-oriented classification. S1 refers to Sentinel-1 and S2 to Sentinel-2.

Accuracy assessment with a confusion matrix depends on the selected samples, which vary between analysts. Therefore, to evaluate the reliability of the experimental results more objectively, this paper uses the burned area extracted from Sentinel-2 as the reference data to quantitatively assess the burned area extracted on 15 January 2018; the quantitative evaluation of the Thomas wildfire is given in Table 1.

Table 1. A quantitative assessment of the Thomas wildfire on 15 January 2018.

| Extraction method | Scheme | Agreement (hm²) | S1 only (hm²) | S2 only (hm²) | Extraction accuracy (%) | Commission errors (%) | Omission errors (%) |
| Reference data | #1 | 82 295.12 | – | – | – | – | – |
| Pixel-based classification | #2 | 47 703.88 | 7207.32 | 34 591.24 | 57.97 | 8.76 | 42.03 |
| Object-oriented classification | #3 | 75 453.87 | 6264.33 | 6841.25 | 91.67 | 7.61 | 8.31 |

In this paper, we made full use of the various feature change parameters caused by the forest fire, combining scattering intensity, polarisation decomposition, and texture feature change parameters to realise unsupervised extraction of the burned area. As seen in Table 1, the overall accuracy of the proposed method, combining multi-scale segmentation and unsupervised classification of polarised SAR images, is higher than that of the pixel-based classification method. The extraction accuracy of our method is 91.67%, with omission and commission errors of 8.31 and 7.61%, respectively. Compared with the pixel-based unsupervised classification, the overall accuracy is improved by 33.70 percentage points, and omission and commission errors are reduced by 33.72 and 1.15 percentage points, respectively. These results indicate that our method achieves higher accuracy and stability in burned area extraction, a significant improvement over the traditional pixel-based method.
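The percentages in Tables 1 and 2 can be reproduced from the area columns; the sketch below encodes our reading, verified against the table rows, that each percentage is taken relative to the Sentinel-2 reference area.

```python
def accuracy_metrics(agreement: float, s1_only: float, s2_only: float,
                     reference: float = 82_295.12) -> dict:
    """Extraction accuracy, commission and omission errors (in %), with all
    areas in hm^2 and each term taken relative to the optical reference."""
    return {
        "extraction_accuracy": 100 * agreement / reference,
        "commission_errors": 100 * s1_only / reference,
        "omission_errors": 100 * s2_only / reference,
    }

# Object-oriented scheme #3 reproduces roughly 91.7, 7.61 and 8.31% (Table 1)
print(accuracy_metrics(75_453.87, 6_264.33, 6_841.25))
```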

Discussion

The difference in feature extraction performance between single-temporal and multi-temporal data

Burned area extraction models were constructed for two cases, using multi-temporal and post-fire single-temporal polarised SAR images, respectively; the experimental results are shown in Fig. 10. As seen in Fig. 10, single-temporal feature parameters describe only one aspect of the scattering mechanism under complex burning conditions and thus have a limited ability to characterise forest fires. The forest fire monitoring model constructed from multi-temporal polarised SAR images extracts the burned area accurately.

Fig. 10. Burned area extraction results for different feature combinations.

Burned area extraction results based on multi-temporal features and on post-fire single-temporal features were analysed statistically (Table 2). The results show that extraction accuracy based on multi-temporal polarised data improves by about 30 percentage points over post-fire single-temporal data, while commission and omission errors fall by about 10 and 32 percentage points, respectively. The scattering characteristics of some areas after burning are similar to those of some unburned ground objects, which are difficult to distinguish from single-temporal post-fire data alone. Compared with using only post-fire data, multi-temporal data not only express the difference between the burned area and other ground objects but also exploit the changes in the burned area before and after the fire, providing reliable features for subsequent burned area extraction.

Table 2. Evaluation of the accuracy of burned area extraction.

| Temporal scheme | Feature type | Agreement (hm²) | S1 only (hm²) | S2 only (hm²) | Extraction accuracy (%) | Commission errors (%) | Omission errors (%) |
| Single-temporal after fire | Backscattering intensity | 69 314.32 | 13 269.86 | 12 980.80 | 15.77 | 17.29 | 84.23 |
| | Polarisation decomposition | 13 437.80 | 3526.54 | 68 857.32 | 16.33 | 4.28 | 83.67 |
| | Texture feature | 41 354.47 | 27 682.20 | 40 940.65 | 50.25 | 33.63 | 49.75 |
| | Feature combination | 48 725.45 | 14 167.39 | 33 569.67 | 59.21 | 17.21 | 40.79 |
| Multi-temporal before and after fire | Backscattering intensity | 69 314.32 | 13 269.86 | 12 980.80 | 84.23 | 16.12 | 15.77 |
| | Polarisation decomposition | 62 385.13 | 19 268.68 | 19 909.99 | 75.81 | 23.41 | 24.19 |
| | Texture feature | 67 238.75 | 12 167.39 | 15 056.37 | 81.70 | 14.79 | 18.30 |
| | Feature combination | 75 453.87 | 6264.33 | 6841.25 | 91.67 | 7.61 | 8.31 |

Optimal feature combination for burned area extraction

Existing remote sensing extraction of burned areas mainly focuses on a single SAR feature type; the mining of multiple feature types, such as backscatter intensity, polarisation decomposition, and texture of SAR images, is not yet sufficient (Shama et al. 2023). In this paper, multiple types of fire-induced feature parameters are analysed in depth, and the forest fire monitoring capability of different polarisation feature combinations is compared. The burned area extraction results for the different feature combinations are shown in Fig. 10. The forest fire changed parameters such as soil moisture, biomass, roughness, and dielectric constant, which altered the SAR backscattering intensity (Belenguer-Plomer et al. 2019, 2021). At the same time, the fire burned the forest canopy and destroyed the original vegetation structure, changing the scattering mechanism of the burned area: the volume-scattered component is greatly reduced and the surface-scattered component from the soil surface increases. As the comparison in Fig. 10 shows, extracting the burned area with only a single feature type has limitations. Combining the three types of effective parameters (scattering intensity variation, scattering mechanism variation, and texture feature variation) significantly improves the burned area extraction results.

Statistical analysis of the burned areas extracted with different feature combinations indicated: (1) owing to the complex texture of the study area, it is difficult to distinguish burned from unburned areas with a single feature type; (2) extraction accuracy improves by about 10 percentage points when the multiple types of fire-induced feature changes are fully considered; and (3) commission and omission errors fall by about 7 and 10 percentage points, respectively. Using combinations of features improves the ability to obtain surface information and thus the extraction accuracy of the burned area.

Conclusion

To fully exploit the information in Sentinel-1 dual-polarisation data for burned area extraction, this paper described forest fire progress monitoring using dual-polarisation SAR images combined with multi-scale segmentation and unsupervised classification. We applied the fire-induced variation parameters of backscatter intensity, polarisation decomposition, and texture features as the feature factors for forest fire progress monitoring. On this basis, the estimation of scale parameters algorithm and multi-scale segmentation were applied to the feature dataset to select the optimal segmentation parameters and to generate pixel sets as classification units; object-oriented segmentation suppresses the salt-and-pepper noise. Finally, the silhouette coefficient method determined the optimal number of clusters, enabling unsupervised classification of the SAR feature factors to extract the burned areas. We selected five Sentinel-1 SAR images spanning 22 November 2017 to 15 January 2018 to verify the feasibility of the model and algorithm, conducting experiments over the entire spreading process of the Thomas wildfire. Burned areas extracted from Sentinel-2 optical imagery were used as cross-validation data to assess the accuracy of burned area extraction and forest fire progress monitoring based on dual-polarisation SAR. The following conclusions were reached:

  1. Compared with pixel-based polarised SAR burned area extraction, the object-oriented classification in this paper effectively suppresses the salt-and-pepper phenomenon, improves the burned area extraction accuracy of polarised SAR, and obtains smooth and accurate results. Our extraction results agreed closely with those of Sentinel-2, with an extraction accuracy of 91.67% and omission and commission errors of 8.31 and 7.61%, respectively. Compared with the pixel-based unsupervised classification, accuracy improved by 33.70 percentage points, and omission and commission errors fell by 33.72 and 1.15 percentage points, respectively.

  2. The discussion and analysis showed that multi-temporal data are more effective than post-fire single-temporal data for extracting the burned area, and that combining multiple feature types from pre- and post-fire data yields the highest extraction accuracy.

Given the strong penetration of SAR through smoke, cloud, and fog, the method described in this paper has clear advantages for SAR remote sensing monitoring of forest fire emergency progress, and the results can also serve as a reference for high-precision burned area mapping and post-disaster assessment.

Supplementary material

Supplementary material is available online.

Data availability

Data sharing is not applicable as no new data were generated or analysed during this study.

Conflicts of interest

The authors declare no conflicts of interest.

Declaration of funding

This research was jointly funded by the National Key Research and Development Program of China (Grant No. 2023YFB2604001) and the National Natural Science Foundation of China (Grant Nos. 42371460, U22A20565, and 42171355).

Acknowledgements

We thank the editors and reviewers for insightful and constructive advice.

References

Abid N, Malik MI, Shahzad M, Shafait F, Ali H, Ghaffar MM, Weis C, Wehn N, Liwicki M (2021) Burnt Forest Estimation from Sentinel-2 Imagery of Australia using Unsupervised Deep Learning. In ‘2021 Digit. Image Comput. Tech. Appl. DICTA’, Gold Coast, Australia. pp. 1–8. (IEEE: Gold Coast, Australia) 10.1109/DICTA52665.2021.9647174

Belenguer-Plomer MA, Tanase MA, Fernandez-Carrillo A, Chuvieco E (2019) Burned area detection and mapping using Sentinel-1 backscatter coefficient and thermal anomalies. Remote Sensing of Environment 233, 111345.

Belenguer-Plomer MA, Tanase MA, Chuvieco E, Bovolo F (2021) CNN-based burned area mapping using radar and optical data. Remote Sensing of Environment 260, 112468.

Chen Y, He X, Xu J, Zhang R, Lu Y (2020) Scattering Feature Set Optimization and Polarimetric SAR Classification Using Object-Oriented RF-SFS Algorithm in Coastal Wetlands. Remote Sensing 12, 407.

Dixon DJ, Callow JN, Duncan JMA, Setterfield SA, Pauli N (2022) Regional-scale fire severity mapping of Eucalyptus forests with the Landsat archive. Remote Sensing of Environment 270, 112863.

Dostálová A, Lang M, Ivanovs J, Waser LT, Wagner W (2021) European Wide Forest Classification Based on Sentinel-1 Data. Remote Sensing 13, 337.

Du S, Guo Z, Wang W, Guo L, Nie J (2016) A comparative study of the segmentation of weighted aggregation and multiresolution segmentation. GIScience & Remote Sensing 53, 651-670.

Dufera AG, Liu T, Xu J (2023) Regression models of Pearson correlation coefficient. Statistical Theory and Related Fields 7, 97-106.

Foroughnia F, Alfieri SM, Menenti M, Lindenbergh R (2022) Evaluation of SAR and Optical Data for Flood Delineation Using Supervised and Unsupervised Classification. Remote Sensing 14, 3718.

Fu B, He X, Yao H, Liang Y, Deng T, He H, Fan D, Lan G, He W (2022) Comparison of RFE-DL and stacking ensemble learning algorithms for classifying mangrove species on UAV multispectral images. International Journal of Applied Earth Observation and Geoinformation 112, 102890.

Gibson R, Danaher T, Hehir W, Collins L (2020) A remote sensing approach to mapping fire severity in south-eastern Australia using sentinel 2 and random forest. Remote Sensing of Environment 240, 111702.

Homer C, Dewitz J, Yang L, Jin S, Danielson P, Coulston J, Herold N, Wickham J, Megown K (2015) Completion of the 2011 National Land Cover Database for the Conterminous United States – Representing a Decade of Land Cover Change Information. Photogrammetric Engineering & Remote Sensing 81, 345-354.

Kalogirou V, Ferrazzoli P, Della Vecchia A, Foumelis M (2014) On the SAR Backscatter of Burned Forests: A Model-Based Study in C-Band, Over Burned Pine Canopies. IEEE Transactions on Geoscience and Remote Sensing 52, 6205-6215.

Lasaponara R, Proto AM, Aromando A, Cardettini G, Varela V, Danese M (2020) On the Mapping of Burned Areas and Burn Severity Using Self Organizing Map and Sentinel-2 Data. IEEE Geoscience and Remote Sensing Letters 17, 854-858.

Luo C, Qi B, Liu H, Guo D, Lu L, Fu Q, Shao Y (2021) Using Time Series Sentinel-1 Images for Object-Oriented Crop Classification in Google Earth Engine. Remote Sensing 13, 561.

Mishra D, Pathak G, Singh BP, Mohit, Sihag P, Rajeev, Singh S (2023) Crop classification by using dual-pol SAR vegetation indices derived from Sentinel-1 SAR-C data. Environmental Monitoring and Assessment 195, 115.

Pinto MM, Trigo RM, Trigo IF, DaCamara CC (2021) A Practical Method for High-Resolution Burned Area Monitoring Using Sentinel-2 and VIIRS. Remote Sensing 13, 1608.

Qu J, Qiu X, Ding C, Lei B (2021) Unsupervised Classification of Polarimetric SAR Image Based on Geodesic Distance and Non-Gaussian Distribution Feature. Sensors 21, 1317.

Roy DP, Huang H, Boschetti L, Giglio L, Yan L, Zhang HH, Li Z (2019) Landsat-8 and Sentinel-2 burned area mapping - A combined sensor multi-temporal change detection approach. Remote Sensing of Environment 231, 111254.

Shama A, Zhang R, Zhan R, Wang T, Xie L, Bao X, Lv J (2023) A Burned Area Extracting Method Using Polarization and Texture Feature of Sentinel-1A Images. IEEE Geoscience and Remote Sensing Letters 20, 1-5.

Shiraishi T, Hirata R, Hirano T (2021) New Inventories of Global Carbon Dioxide Emissions through Biomass Burning in 2001–2020. Remote Sensing 13, 1914.

Sinaga KP, Yang M-S (2020) Unsupervised K-Means Clustering Algorithm. IEEE Access 8, 80716-80727.

Sismanis M, Chadoulis R-T, Manakos I, Drosou A (2023) An Unsupervised Burned Area Mapping Approach Using Sentinel-2 Images. Land 12, 379.

Stroppiana D, Azar R, Calò F, Pepe A, Imperatore P, Boschetti M, Silva J, Brivio P, Lanari R (2015) Integration of Optical and SAR Data for Burned Area Mapping in Mediterranean Regions. Remote Sensing 7, 1320-1345.

Wei J, Zhang Y, Wu H, Cui B (2018) The Automatic Detection of Fire Scar in Alaska using Multi-Temporal PALSAR Polarimetric SAR Data. Canadian Journal of Remote Sensing 44, 447-461.

West RD, LaBruyere III TE, Skryzalin J, Simonson KM, Hansen RL, Van Benthem MH (2019) Polarimetric SAR Image Terrain Classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 12, 4467-4485.

Zhang P, Nascetti A, Ban Y, Gong M (2019) An implicit radar convolutional burn index for burnt area mapping with Sentinel-1 C-band SAR data. ISPRS Journal of Photogrammetry and Remote Sensing 158, 50-62.

Zhang X, Xu J, Chen Y, Xu K, Wang D (2021) Coastal Wetland Classification with GF-3 Polarimetric SAR Imagery by Using Object-Oriented Random Forest Algorithm. Sensors 21, 3395.

Zhang C, Gao G, Zhang L, Chen C, Gao S, Yao L, Bai Q, Gou S (2022) A novel full-polarization SAR image ship detector based on scattering mechanisms and wave polarization anisotropy. ISPRS Journal of Photogrammetry and Remote Sensing 190, 129-143.

Zhang D, Ying C, Wu L, Meng Z, Wang X, Ma Y (2023) Using Time Series Sentinel Images for Object-Oriented Crop Extraction of Planting Structure in the Google Earth Engine. Agronomy 13, 2350.