snapbridge is a Rust CLI for managing Proxmox snapshots on NetApp ONTAP-backed storage.
It ports some behavior from pve-ontap-snapshot (https://github.com/credativ/pve-ontap-snapshot) into a single binary with two operator-facing backends:
- `nas` for ONTAP-backed NAS/NFS storage
- `san` for ONTAP-backed iSCSI/LVM storage
Current implementation covers:
- `snapbridge nas vm create`
- `snapbridge nas storage create|restore|delete|list|mount|unmount|show`
- `snapbridge san storage create|restore|delete|list|show`
Not included yet:
- retention cleanup helpers
- cron packaging
- live integration coverage against a real Proxmox/ONTAP environment
Build from source:

```shell
cargo build
```

Run the CLI:

```shell
cargo run -- --help
```

Build a local Debian package when cargo-deb is installed:

```shell
cargo install cargo-deb --version 3.6.3 --locked
cargo deb -- --locked
```

Release builds are automated by GitHub Actions. Pushing a version tag such as `v0.2.0` runs tests, lints, builds amd64 and arm64 Debian packages, generates `SHA256SUMS`, and uploads the files to the GitHub Release for that tag.
Install the latest release on Debian-compatible amd64 or arm64 systems:

```shell
curl -fsSL https://raw.githubusercontent.com/abdoufermat5/snapbridge/main/install.sh | bash
```

Install a specific release:

```shell
curl -fsSL https://raw.githubusercontent.com/abdoufermat5/snapbridge/main/install.sh | SNAPBRIDGE_VERSION=v0.2.0 bash
```

The installer downloads the matching `.deb` package from https://github.com/abdoufermat5/snapbridge/releases, verifies it against the release `SHA256SUMS`, and installs it with `apt-get` or `dpkg`.
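The checksum verification the installer performs can also be done by hand. The sketch below shows the mechanics with a local stand-in file (`snapbridge_demo.deb` is a placeholder, not a real release asset; in practice both the `.deb` and `SHA256SUMS` come from the GitHub Release page):

```shell
# Illustrative only: snapbridge_demo.deb stands in for a downloaded
# release package.
printf 'package-bytes' > snapbridge_demo.deb
sha256sum snapbridge_demo.deb > SHA256SUMS

# `sha256sum -c` exits non-zero on any mismatch, which is what the
# installer relies on before handing the package to apt-get or dpkg.
sha256sum -c SHA256SUMS
```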
Top-level help:

```shell
snapbridge --help
```

Main commands:

```shell
snapbridge nas vm create --vm 100 --suspend
snapbridge nas storage create --storage NAS01 --fsfreeze
snapbridge nas storage list
snapbridge nas storage list --storage NAS01
snapbridge nas storage list --output json
snapbridge nas storage mount --storage NAS01 --snapshot proxmox_snapshot_2026-04-14_02:00:00+0200
snapbridge san storage create --storage SAN01 --fsfreeze
snapbridge san storage list
snapbridge san storage list --storage SAN01
snapbridge san storage list --output json
snapbridge san storage restore --storage SAN01 --snapshot proxmox_snapshot_2026-04-14_02:15:00+0200
snapbridge san storage show --storage SAN01
snapbridge san storage show --storage SAN01 --output json
snapbridge schedule list
snapbridge schedule run daily
snapbridge schedule create daily
snapbridge schedule delete daily
```

Human-readable table output is the default:

```shell
snapbridge nas storage list
snapbridge san storage list
```

Use `--output json` when piping to scripts or other tools:

```shell
snapbridge nas storage list --output json
snapbridge san storage show --storage SAN01 --output json
```

`nas storage list` and `san storage list` show all configured storage entries for their backend when `--storage` is omitted. Add `--storage <id>` to limit the output to one configured storage.
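JSON output is meant to be post-processed with standard tools. The snippet below sketches the idea with a hypothetical payload; the `snapshot` field name is illustrative and not the CLI's documented schema:

```shell
# Hypothetical payload standing in for `snapbridge ... --output json`;
# the real field names may differ.
json='[{"snapshot":"proxmox_snapshot_2026-04-14_02:00:00+0200"}]'

# Count entries with python3 (jq works just as well if installed).
echo "$json" | python3 -c 'import json,sys; print(len(json.load(sys.stdin)))'
```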
The default log level is `info`, so snapshot creation prints progress as it runs. Progress logs use a shared format:

```text
[INFO] [san snapshot:SAN01] start: starting SAN storage snapshot
[INFO] [san snapshot:SAN01] step: discovering VMs that use storage `SAN01` before fsfreeze
[INFO] [san snapshot:SAN01] step: creating ONTAP snapshot `proxmox_snapshot_...` on volume `san_vol1`
[INFO] [san snapshot:SAN01] done: snapshot `proxmox_snapshot_...` created
```

Use `--log-level warn` for quieter output or `--log-level debug` to include HTTP response debug logs:

```shell
snapbridge --log-level warn nas storage create --storage NAS01
snapbridge --log-level debug san storage create --storage SAN01 --fsfreeze
```

The installed package reads `/etc/snapbridge/snapbridge.toml` by default. The Debian package installs a copy of `snapbridge.example.toml` there with mode 600.
Edit it after installation:

```shell
sudo nano /etc/snapbridge/snapbridge.toml
sudo chmod 600 /etc/snapbridge/snapbridge.toml
```

Override the config path with `--config <path>` when needed.
Example:

```toml
[proxmox]
host = "pve.example.local"
user = "root@pam"
token_name = "snapbridge"
token_value = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
verify_ssl = false
timezone = "Europe/Paris"

[storage.NAS01]
backend = "nas"
ontap_host = "ontap-mgmt.example.local"
ontap_user = "admin"
ontap_password = "secret"
verify_ssl = false

[storage.SAN01]
backend = "san"
ontap_host = "ontap-mgmt.example.local"
ontap_user = "admin"
ontap_password = "secret"
verify_ssl = false
volume_name = "san_vol1"
lun_path = "/vol/san_vol1/lun0"
ssh_user = "root"

[schedule.daily]
storages = ["NAS01", "SAN01"]
fsfreeze = true
keep_last = 7
max_age = "30d"
```

Notes:

- `proxmox.host` may be a hostname or full URL. If you pass a hostname, `https://<host>:8006` is assumed.
- NAS entries do not need an ONTAP volume name in config. The volume is derived from the Proxmox storage export path.
- SAN entries require `volume_name` and `lun_path`.
- Schedule entries target explicit storage IDs. `fsfreeze` defaults to `false`.
- Schedule retention supports `keep_last`, `max_age`, or both. `max_age` accepts `s`, `m`, `h`, `d`, and `w` suffixes, for example `30d`.
- SAN restore uses SSH to each Proxmox node and runs:

  ```shell
  iscsiadm -m session --rescan
  pvscan --cache
  ```
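That per-node rescan can be pictured as a simple loop. The dry run below only prints the commands it would run; `pve1` and `pve2` are placeholder node names, not anything the tool discovers for you:

```shell
# Dry run: prints the command each node would receive over SSH.
# Drop the `echo` to actually execute it (requires root SSH access).
for node in pve1 pve2; do
  echo ssh root@"$node" "iscsiadm -m session --rescan && pvscan --cache"
done
```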
Schedules live in `/etc/snapbridge/snapbridge.toml` under `[schedule.<name>]`.

Run a schedule manually:

```shell
snapbridge schedule run daily
```

Run only one phase:

```shell
snapbridge schedule create daily
snapbridge schedule delete daily
```

The Debian package installs reusable systemd units:

- `/lib/systemd/system/snapbridge-schedule@.service`
- `/lib/systemd/system/snapbridge-schedule@.timer`

Enable the packaged daily timer for a schedule named `daily`:

```shell
sudo systemctl enable --now snapbridge-schedule@daily.timer
sudo systemctl status snapbridge-schedule@daily.timer
journalctl -u snapbridge-schedule@daily.service
```

The timer runs at 02:00 by default. Override timing with a systemd drop-in for each schedule:

```shell
sudo systemctl edit snapbridge-schedule@daily.timer
```

Example override:

```ini
[Timer]
OnCalendar=
OnCalendar=*-*-* 03:30:00
```

- NAS VM snapshots use ONTAP file cloning for eligible VM disks.
- NAS storage `create --fsfreeze` creates temporary Proxmox VM snapshots before taking the ONTAP snapshot, then removes them.
- SAN storage `create --fsfreeze` uses the QEMU guest agent directly with `fsfreeze-freeze`/`fsfreeze-thaw`.
- Snapshot creation logs each major phase: config/backend checks, VM discovery, freeze/snapshot/thaw or temporary snapshot cleanup, and final success/failure.
- Scheduled deletion only removes snapshots whose names start with `proxmox_snapshot_` and contain a parseable Snapbridge timestamp.
- NAS `mount` creates a FlexClone volume and registers `<storage>-CLONE` in Proxmox.
- VM disk snapshots still keep the same known limitation as the Python version: Proxmox does not automatically rescan and display them as attached snapshots.
Key folders:

```text
src/
  clients/
    ontap/
    proxmox/
  workflows/
    nas/
    san/
```
Rough responsibilities:

- `clients/` contains the API traits plus the reqwest-backed HTTP implementations
- `display.rs` contains shared table and JSON output rendering
- `logger.rs` contains shared CLI logging and progress messages
- `workflows/` contains the NAS and SAN behavior layers
- `config.rs`, `models.rs`, `util.rs`, and `error.rs` contain shared support code
Debian package metadata lives in `Cargo.toml` under `[package.metadata.deb]`.

The generated package installs:

- `/usr/bin/snapbridge`
- `/etc/snapbridge/snapbridge.toml`
- `/lib/systemd/system/snapbridge-schedule@.service`
- `/lib/systemd/system/snapbridge-schedule@.timer`
- `/usr/share/doc/snapbridge/README.md`
- `/usr/share/doc/snapbridge/examples/snapbridge.toml`
The release workflow is `.github/workflows/debian-release.yml`. It only runs for tags matching `v*`, requires the tag version to match `Cargo.toml`, and builds on Ubuntu 22.04 to keep the generated package compatible with Debian 12 / Proxmox 8 era libc6 versions.

The root `install.sh` script is designed for `curl | bash` installation from GitHub Releases. Set `SNAPBRIDGE_VERSION` to pin a specific tag; omit it to install the latest release.
Run the test suite:

```shell
cargo test
```

Run the same lint gate used by CI:

```shell
cargo clippy --locked --all-targets -- -D warnings
```

At the moment the tests are mock-driven unit/integration-style checks around workflow behavior. They lock in the current command flow and provide refactor safety, but they do not replace real lab validation against your Proxmox and ONTAP APIs.