Satellite imagery is a rich source of information, and the accurate segmentation of water bodies is crucial for understanding environmental patterns and changes over time. This project aims to provide a reliable and efficient tool for extracting water regions from raw satellite images.
This repository supports two workflows:
- Library usage: install with `pip` and run inference (pretrained weights are downloaded on demand).
- Training workflow: train your own models using the included preprocessing + training pipeline.
The dataset for this project is available on kaggle.com. It consists of JPEG satellite images of water bodies and their corresponding masks; more details are provided on the dataset page.
```bash
pip install sat-water
```

To run inference/training you must install the TensorFlow extras:

```bash
pip install "sat-water[tf]"
```

To install from source:

```bash
git clone https://github.com/busayojee/sat-water.git
cd sat-water
pip install -e .
```

Note: `sat-water` sets `TF_USE_LEGACY_KERAS=1` and `SM_FRAMEWORK=tf.keras` by default at import time to keep `segmentation-models` compatible.
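If you need to replicate this setup manually (for example, before importing TensorFlow in your own scripts), the equivalent is just two environment variables; a sketch of the manual equivalent, not package code:

```python
import os

# Set these before importing tensorflow / segmentation_models yourself;
# setdefault leaves any values you have already exported untouched.
os.environ.setdefault("TF_USE_LEGACY_KERAS", "1")
os.environ.setdefault("SM_FRAMEWORK", "tf.keras")
```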
Pretrained weights are hosted on Hugging Face and downloaded at inference time with SHA256 integrity verification.
Default weights repo: `busayojee/sat-water-weights`
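If you ever need to reproduce the integrity check by hand (for example, to debug a corrupted download), a minimal sketch using only `hashlib` (the helper name is hypothetical, not part of the package API):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA256 so large weight files never load fully into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()
```

Compare the resulting hex digest against the expected value published alongside the weights.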
Override weights source:

```bash
export SATWATER_WEIGHTS_REPO="busayojee/sat-water-weights"
export SATWATER_WEIGHTS_REV="main"
```

Two architectures were trained: a UNet with no backbone, and a UNet with a ResNet34 backbone. The backbone variant was trained twice, at different image sizes and with different hyperparameters.
| Model key | Architecture | Input size | Notes |
|---|---|---|---|
| `resnet34_256` | UNet + ResNet34 backbone | 256×256 | Best speed/quality tradeoff |
| `resnet34_512` | UNet + ResNet34 backbone | 512×512 | Higher-res boundaries; slower |
| `unet` | UNet (no backbone) | 128×128 | Currently unavailable in weights repo |
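For quick reference in your own code, the table above can be mirrored as a lookup of model key to input size (a hypothetical constant for illustration, not something the package exports):

```python
# Hypothetical mapping of model keys to expected input sizes (mirrors the table above).
MODEL_INPUT_SIZES = {
    "resnet34_256": (256, 256),
    "resnet34_512": (512, 512),
    "unet": (128, 128),  # currently unavailable in the weights repo
}
```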
```python
from satwater.inference import segment_image

res = segment_image(
    "path/to/image.jpg",
    model="resnet34_512",  # or "resnet34_256"
    return_overlay=True,
    show=False,
)

mask = res.masks["resnet34_512"]        # (H, W, 1)
overlay = res.overlays["resnet34_512"]  # (H, W, 3)
```

`segment_image(...)` is the recommended entrypoint for package users.
- `image_path` (str): path to an input image (`.jpg`, `.png`, etc.)
- `model` (str): one of `resnet34_256`, `resnet34_512` (and `unet` once available)
- `return_overlay` (bool): whether to return an overlay image (original image + blended water mask)
- `show` (bool): whether to display the result via matplotlib (useful in notebooks / local runs)
- `repo_id` (str, optional): Hugging Face repo containing weights (defaults to `SATWATER_WEIGHTS_REPO`)
- `revision` (str, optional): branch / tag / commit (defaults to `SATWATER_WEIGHTS_REV`)
- `save_dir` (str | Path | None, optional): output directory (if supported in your local version)
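As a simple post-processing example, the returned mask can be reduced to a water-area fraction; a sketch assuming the mask is a binary `(H, W, 1)` array as documented above (a toy array stands in for real output):

```python
import numpy as np

# Toy stand-in for a (H, W, 1) binary water mask; the top half is water.
mask = np.zeros((4, 4, 1), dtype=np.uint8)
mask[:2, :, 0] = 1

# Fraction of pixels classified as water.
water_fraction = float(mask.mean())
print(water_fraction)  # 0.5
```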
If you want to save the outputs, you can always do it manually from the returned arrays (example below).
```python
from PIL import Image
import numpy as np

Image.fromarray((mask.squeeze(-1) * 255).astype(np.uint8)).save("mask.png")
Image.fromarray(overlay).save("overlay.png")
```

The plots below are from historical runs in this repository and are provided to show convergence behavior.
Training curves (images not reproduced here): UNet (baseline), ResNet34-UNet (256×256), and ResNet34-UNet (512×512).
Qualitative predictions produced by the three models.
(Prediction images for UNet, ResNet34-UNet 256×256, and ResNet34-UNet 512×512 are not reproduced here.)
Using all models to predict a single test instance.
(Test image and prediction images are not reproduced here.)
Label overlay of the best prediction (ResNet34-UNet 512×512 in that run; image not reproduced here).
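An overlay like the ones shown can be approximated by alpha-blending a color onto the water pixels; a sketch (the color and alpha here are arbitrary choices, not the package's exact rendering):

```python
import numpy as np

def blend_overlay(image: np.ndarray, mask: np.ndarray,
                  color=(0, 120, 255), alpha: float = 0.4) -> np.ndarray:
    """Tint the water pixels of an (H, W, 3) uint8 image with `color` at `alpha`."""
    out = image.astype(np.float32).copy()
    water = mask.astype(bool)
    out[water] = (1 - alpha) * out[water] + alpha * np.asarray(color, np.float32)
    return out.astype(np.uint8)
```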
```python
from satwater.preprocess import Preprocess

train_ds, val_ds, test_ds = Preprocess.data_load(
    dataset_dir="path/to/dataset",
    masks_dir="/Masks",
    images_dir="/Images",
    split=(0.7, 0.2, 0.1),
    shape=(256, 256),
    batch_size=16,
    channels=3,
)
```

```python
from satwater.models import Unet

history = Unet.train(
    train_ds,
    val_ds,
    shape=(128, 128, 3),
    n_classes=2,
    lr=1e-4,
    loss=Unet.loss,
    metrics=Unet.metrics,
    name="unet",
)
```

```python
from satwater.models import BackboneModels

bm = BackboneModels("resnet34", train_ds, val_ds, test_ds, name="resnet34_256")
bm.build_model(n_classes=2, n=1, lr=1e-4)
history = bm.train()
```

For a 512×512 run, load a second dataset with `shape=(512, 512)` and use a different model name (e.g. `resnet34_512`) to keep artifacts separate.
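The `split=(0.7, 0.2, 0.1)` argument above divides the dataset into train/val/test subsets. The exact rounding inside `data_load` may differ, but the arithmetic is roughly this (hypothetical helper for illustration):

```python
def split_counts(n: int, fracs=(0.7, 0.2, 0.1)):
    """Approximate train/val/test sizes for n samples; the remainder goes to test."""
    train = int(n * fracs[0])
    val = int(n * fracs[1])
    return train, val, n - train - val
```

For 1,000 image/mask pairs this gives 700/200/100.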
To run inference with the UNet model:

```python
inference_u = Inference(model="path/to/model", name="unet")
inference_u.predict_ds(test_ds)
```

For ResNet34 models 1 and 2:

```python
inference_r = Inference(model="path/to/model", name="resnet34")
inference_r.predict_ds(test_ds)

inference_r2 = Inference(model="path/to/model", name="resnet34(2)")
inference_r2.predict_ds(test_ds1)
```

For all three models together:

```python
models = {"unet": "path/to/model1", "resnet34": "path/to/model2", "resnet34(2)": "path/to/model3"}
inference_multiple = Inference(model=models)
inference_multiple.predict_ds(test_ds)
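When comparing per-model outputs from `predict_ds`, a standard agreement metric is intersection-over-union between two binary masks; a self-contained sketch (not a package function):

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU of two binary masks; defined as 1.0 when both masks are empty."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(a, b).sum() / union)
```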
If you included the scripts/ folder in your package/repo, you can run the scripts directly.

UNet:

```bash
python scripts/train.py --dataset path/to/dataset --image-folder /Images --mask-folder /Masks --shape 128,128,3 --batch-size 16 --split 0.2,0.1 --channels 3 --model unet --name unet --epochs 100 --lr 1e-4
```

ResNet34-UNet (256):

```bash
python scripts/train.py --dataset path/to/dataset --image-folder /Images --mask-folder /Masks --shape 256,256,3 --batch-size 8 --split 0.2,0.1 --channels 3 --model resnet34 --name resnet34_256 --epochs 100 --lr 1e-4
```

ResNet34-UNet (512):

```bash
python scripts/train.py --dataset path/to/dataset --image-folder /Images --mask-folder /Masks --shape 512,512,3 --batch-size 4 --split 0.2,0.1 --channels 3 --model resnet34(2) --name resnet34_512 --epochs 100 --lr 1e-4
```

Single model:

```bash
python scripts/infer.py --image path/to/image.jpg --model path/to/model.keras --name unet --out prediction
```

Multiple models:

```bash
python scripts/infer.py --image path/to/image.jpg --models "unet=path/to/unet.keras,resnet34=path/to/resnet34.keras,resnet34(2)=path/to/resnet34_2.keras" --out prediction
```

Publishing weights:

```bash
export HF_TOKEN="YOUR_HUGGINGFACE_TOKEN"
python scripts/weights.py --repo-id user/repo --hf-root weights --out-dir dist/weights --model unet=path/to/unet.keras@128,128,3 --model resnet34_256=path/to/resnet34_256.keras@256,256,3 --model resnet34_512=path/to/resnet34_512.keras@512,512,3
```

Contributions are welcome, especially around:
- adding/refreshing pretrained weights (including UNet)
- improving inference UX (CLI, batch inference, better overlays)
- expanding tests and CI matrix
- model evaluation and benchmarking on additional datasets
- Fork the repo
- Create a feature branch: `git checkout -b feat/my-change`
- Run checks locally:

  ```bash
  pytest -q
  ruff check .
  ruff format .
  ```

- Open a pull request with a short summary + screenshots (if changing inference output)
If you’re reporting a bug, please include:
- OS + Python version
- TensorFlow version
- full traceback + a minimal repro snippet