Predicting Residential Structure Burn Status Post-Wildfire from Aerial Images

CSCI 4521 Final Project (University of Minnesota, Fall 2025)

Drew Gjerstad and Arlan Hegenbarth

Contents

  • Project Execution Instructions
  • Introduction
  • About the Data
  • Related Work
  • Models and Model Architecture Details
  • Results
  • References
  • Poster

Project Execution Instructions

First, create the environment using the command below. This will install all of the modules required to run the project scripts. Note that this will install the PyTorch dependencies with CUDA 11.8 support.

conda env create -f environment.yml

This project can be run on either CPUs or GPUs, depending on availability. If you are using GPUs on a system similar to the Minnesota Supercomputing Institute's, which uses the Slurm Workload Manager, see the msi_batch.sh script for an example of running this project on a high-performance computing system. The instructions that follow are for running the project manually (i.e., without Slurm).

Once the environment has been created, activate it.

conda activate main-env

To obtain the images that align with the labels and metadata in the data directory, execute the following commands from the project root in a terminal.

# Download
curl -L \
https://github.com/drewgjerstad/wildfire-structure-damage-detection/releases/download/v1.0/wildfire_structure_damage_images_v1.tar.gz \
-o images.tar.gz

# Extract
tar -xzf images.tar.gz

# Delete the archive
rm images.tar.gz

This will extract image data to data/images/.

Next, still from the project root, run the following commands to execute the main (training and evaluation) and analysis scripts, depending on whether you intend to take advantage of GPUs.

# Run w/ GPUs
CUDA_VISIBLE_DEVICES=0 python3 -m src.main
CUDA_VISIBLE_DEVICES=0 python3 -m src.analysis

# Run w/o GPUs
python3 -m src.main
python3 -m src.analysis

After the scripts finish running, several exports will be created in the exports directory. Analysis plots for each model are saved in the sub-directories cnn_plots, knn_plots, and resnet_plots. You will also find pickle files containing quantitative results and exported models, alongside .pt files for the trained PyTorch models.
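
These exports can then be inspected in Python; a minimal sketch follows, in which the filenames are hypothetical (check the exports directory for the actual names):

import pickle

import torch

# Hypothetical filenames; check the exports directory for the actual names.
with open("exports/cnn_results.pkl", "rb") as f:
    results = pickle.load(f)  # quantitative results saved by the scripts

state_dict = torch.load("exports/cnn_model.pt", map_location="cpu")  # trained weights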

Introduction

During and immediately after wildfires, ground access is dangerous and typically limited to first responders and other public officials. This makes it challenging to assess the damage caused by the fire. Fortunately, once the smoke has cleared, we can take advantage of imagery captured by airplanes and satellites to aid in assessing the damage. In particular, to sufficiently quantify and locate damage, it is necessary to evaluate each individual structure that may have been impacted by the fire. This is of major interest to several groups, including property owners, public officials, utility companies, and insurance companies, who all have different motives for evaluating property damage. Therefore, the goal of our work is to apply machine learning to predict the burn status of residential structures post-wildfire from aerial images, providing quick, automated data to those who can benefit from it.

About the Data

The data used in this project were collected following the Eaton Fire, which occurred outside of Los Angeles, California in January 2025. We combined two datasets that each focused on different aspects of the impacted regions. The first dataset included labeled locations (latitude/longitude) representing structures labeled based on the post-wildfire damage assessment. We cleaned this dataset by removing all non-residential buildings and reducing the labels to two distinct categories: not damaged and damaged. The second dataset contained post-fire aerial imagery provided by the National Oceanic and Atmospheric Administration (NOAA), composed of multiple aerial images of the impacted regions, primarily collected on January 28, 2025. To incorporate the aerial imagery into our machine learning dataset, we used ArcGIS Pro to extract a $500\times 500$ pixel image centered at each point in the structure-location dataset. Keep in mind that not all residential buildings have the same footprint, so some images may not contain the entire structure while others may contain parts of neighboring structures; this is a potential limitation of our work: how to subset the aerial imagery for optimal performance. An example of an image from our dataset is shown below.

Example aerial image of a post-wildfire residential structure
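
We performed the cropping in ArcGIS Pro; for illustration only, a roughly equivalent operation can be sketched with the rasterio library, assuming a single georeferenced post-fire mosaic (the filename and coordinate variables below are hypothetical):

import rasterio
from rasterio.windows import Window

# Hypothetical mosaic filename; lon/lat come from the structure-location dataset.
# Assumes the mosaic's CRS matches the lon/lat coordinates; reproject first if not.
with rasterio.open("noaa_postfire_mosaic.tif") as src:
    row, col = src.index(lon, lat)            # map coordinates to pixel indices
    window = Window(col - 250, row - 250, 500, 500)
    patch = src.read(window=window)           # array of shape (bands, 500, 500)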

Related Work

Using machine learning techniques to detect post-fire damage from aerial imagery is a fairly common application. In 2019, Alten tested random forests and deep neural networks for identifying fire damage. Galanis in 2021 and Alican in 2024 applied convolutional neural networks to the task. In 2023, Kang approached the problem with deep learning, and in 2025, Esparza attempted to use general-purpose large language models (LLMs) to classify fire damage. One of the most common components of these related works is the application of image segmentation to identify structures, often accomplished using pre-fire aerial imagery. We will not incorporate this into our work and will instead proceed with the process described above: using a standard image size centered on the recorded location of each structure. Nonetheless, previous works such as these have achieved accuracy and precision scores above 97%, suggesting that machine learning can be a successful approach to identifying post-wildfire structure damage.

Models and Model Architecture Details

In this project, we trained and evaluated three models: a classical K-Nearest Neighbors (KNN) classifier, a convolutional neural network (CNN), and a fine-tuned pre-trained ResNet18 model.

Nearest Neighbors Classifier

For our KNN model, we first converted the $100\times 100$ images to grayscale and then performed PCA dimensionality reduction, retaining enough components to explain 97% of the variance. Using grid-search cross-validation, we tuned the model's hyperparameters, including the distance metric, number of neighbors, and weighting scheme. The best hyperparameters are as follows:

  • Distance Metric (metric): euclidean
  • Number of Neighbors (n_neighbors): 15
  • Weighting Scheme (weights): uniform

These hyperparameters were used to train our final KNN model on the entire training set, and the model was evaluated on the test set via metrics including accuracy, precision, recall, F1 score, area-under-the-curve (AUC) score, and ROC curves.
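
A minimal scikit-learn sketch of this pipeline follows; the candidate grid values are illustrative, not necessarily the exact grid we searched, and X_train/y_train are hypothetical names:

from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

# X_train: (n_samples, 100 * 100) flattened grayscale images; y_train: binary labels.
pipe = Pipeline([
    ("pca", PCA(n_components=0.97)),  # keep enough components for 97% of the variance
    ("knn", KNeighborsClassifier()),
])
grid = {
    "knn__metric": ["euclidean", "manhattan"],
    "knn__n_neighbors": [5, 10, 15, 20],
    "knn__weights": ["uniform", "distance"],
}
search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)
print(search.best_params_)  # best found: euclidean, 15 neighbors, uniform weights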

Convolutional Neural Network (CNN)

For our CNN model, we use the $100\times 100$ images in their original RGB form (i.e., no grayscale conversion) and without any dimensionality reduction. Thus, each image has shape $3\times 100\times 100$; note that this is PyTorch's channels-first convention and differs from how we often think about image dimensionality (the latter is used in the architecture diagram below). Our CNN model is made up of three convolutional blocks, an adaptive average pooling block, and a classification block. Each convolutional block consists of a convolutional layer, batch normalization, ReLU activation, max pooling, and a dropout layer. Each consecutive convolutional block downsamples the image using a $3\times 3$ kernel while doubling the number of channels from the previous block to extract additional features. After passing through these blocks, we use adaptive average pooling to perform dimensionality reduction before passing the flattened features into the classification block. The network is trained for 20 epochs with a batch size of 32 and a learning rate of $\ell=0.001$. Then, similar to the KNN model, we evaluate its performance using accuracy, precision, recall, F1 score, area-under-the-curve (AUC) score, and ROC curves.

CNN architecture
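
A minimal PyTorch sketch of this architecture follows; the channel counts and dropout rate are illustrative assumptions, not necessarily the exact values used:

import torch.nn as nn

def conv_block(in_ch, out_ch):
    # convolution -> batch norm -> ReLU -> max pool -> dropout, as described above
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),        # downsamples the feature map
        nn.Dropout(0.25),       # dropout rate is an assumption
    )

cnn = nn.Sequential(
    conv_block(3, 32),          # each block doubles the channel count
    conv_block(32, 64),
    conv_block(64, 128),
    nn.AdaptiveAvgPool2d(1),    # reduce each channel map to a single value
    nn.Flatten(),
    nn.Linear(128, 2),          # classification block (binary output)
)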

Fine-Tuned Pre-Trained ResNet18 Model

For our fine-tuned pre-trained model, we follow a similar procedure with regard to image size, again without any grayscale conversion or dimensionality reduction. The pre-trained ResNet18 model is designed as a foundational model for image classification and was pre-trained on the ImageNet dataset (He et al., 2015). Models trained on ImageNet have been shown to be highly transferable to other tasks in the same domain (Kornblith et al., 2019). Furthermore, we selected this model since it has been used in prior work to identify active fires (Alican et al., 2025). For training, we used a two-phase approach: first fine-tuning the final fully-connected layer alone, then fine-tuning the entire network (Yosinski et al., 2014). We used the same training hyperparameters as for the CNN model, and we again evaluate performance using accuracy, precision, recall, F1 score, area-under-the-curve (AUC) score, and ROC curves.
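
A sketch of the two-phase procedure with torchvision (>= 0.13 weights API) follows; the optimizer choice is an illustrative assumption:

import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # new binary classification head

# Phase 1: freeze the backbone and fine-tune only the final fully-connected layer.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
optimizer = optim.Adam(model.fc.parameters(), lr=0.001)  # optimizer is an assumption
# ... train for several epochs ...

# Phase 2: unfreeze everything and fine-tune the entire network.
for p in model.parameters():
    p.requires_grad = True
optimizer = optim.Adam(model.parameters(), lr=0.001)
# ... continue training ...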

Results

In this section, we present test-set results for each of our three models. Based on the tables and figures below, the KNN model performed very poorly while the network models performed very well, with the CNN model performing slightly better than the fine-tuned ResNet18 model. We hypothesize that the two network models were able to extract useful features from each image that aided their predictive power in identifying whether an image showed a residential structure that was not damaged or damaged. We also note that one of our additional analyses below uses the original three-category labels (not damaged, damaged, and destroyed) and shows that the main error point across the models is identifying damaged structures. We further hypothesize that this is because many of the damaged structures incurred damage that is not visible in the aerial imagery (i.e., damage to siding, windows, etc.). Finally, we provide examples of errors and high-confidence predictions made by each model.

The table below provides test-set performance metrics for each of our models.

Model     Accuracy  Precision  Recall  F1 Score  AUC Score
KNN       0.4349    0.5399     0.4349  0.2713    0.5145
CNN       0.9430    0.9465     0.9430  0.9432    0.9755
ResNet18  0.9413    0.9433     0.9413  0.9415    0.9746
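
For reference, metrics like these can be computed with scikit-learn; the identical accuracy and recall values above suggest weighted averaging, which we assume in the sketch below (y_true, y_pred, and y_prob are hypothetical test-set arrays):

from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

# y_true: true labels; y_pred: predicted labels; y_prob: P(damaged) per image.
metrics = {
    "accuracy":  accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred, average="weighted"),
    "recall":    recall_score(y_true, y_pred, average="weighted"),
    "f1":        f1_score(y_true, y_pred, average="weighted"),
    "auc":       roc_auc_score(y_true, y_prob),
}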

The table below provides test-set error counts for our CNN model, comparing performance on the binary labels (not damaged or damaged) versus the original three-category labels (not damaged, damaged, or destroyed). Note that in our binary labels, "damaged" was used for houses originally labeled as either "damaged" or "destroyed". Following the table, we provide a representative image of a damaged residential structure from the dataset, showing that the damage incurred from the wildfire is not visible in the aerial imagery.

Original Label  Predicted: Not Damaged  Predicted: Damaged
Not Damaged     743                     15
Damaged         78                      24
Destroyed       8                       904

Example damaged residential structure

The figure below shows the test-set ROC curves for each of our models.

Test-set ROC curves

The figure below shows the test-set confusion matrices for each of our models.

Test-set confusion matrices

Error Visualization

The figures below visualize errors made by our KNN, CNN, and fine-tuned ResNet models, respectively.

KNN model errors


CNN model errors


ResNet model errors

High-Confidence Predictions

The figures below show high-confidence predictions made by each of our KNN, CNN, and fine-tuned ResNet models, respectively.

KNN model confident predictions


CNN model confident predictions


ResNet model confident predictions

References

Poster

This is our poster from our presentation session on Saturday, December 13, 2025. The full-size (PNG) version can be found here.

Project poster from presentation session
