
Barn Owl Vocal Individuality (VI) Demo

This codebase explores patterns in large collections of barn owl audio. It uses spectrogram analysis and a zero-crossing algorithm to isolate individual owl chirps. Because the calls are naturally spaced (possibly a feeding negotiation tactic), they can be segmented without overlap, yielding a large dataset of distinct chirps suitable for clustering and classification. The repo includes code for data loading, feature extraction, and visualisation of vocalisation patterns. This project was featured at an AI for sustainability conference (CAIREES 2025).
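
For intuition, the sketch below shows a toy version of silence-based segmentation using per-frame energy and zero-crossing rate. It is illustrative only; the repo's actual algorithm, function names, and thresholds differ (everything here is made up for the example).

    # Toy example only: NOT owlnet's segmentation code.
    # Splits a mono signal into chirp candidates by finding frames that are
    # loud (high energy) and tonal (low zero-crossing rate), exploiting the
    # natural silence between calls.
    import numpy as np

    def segment_chirps(signal, sr, frame_ms=20.0, energy_thresh=1e-4, zcr_max=0.4):
        frame_len = int(sr * frame_ms / 1000)
        n_frames = len(signal) // frame_len
        frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)

        energy = (frames ** 2).mean(axis=1)                                # loudness per frame
        zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)  # fraction of sign flips

        active = (energy > energy_thresh) & (zcr < zcr_max)

        # Collect runs of consecutive active frames as (start, end) sample indices.
        segments, start = [], None
        for i, is_active in enumerate(active):
            if is_active and start is None:
                start = i * frame_len
            elif not is_active and start is not None:
                segments.append((start, i * frame_len))
                start = None
        if start is not None:
            segments.append((start, n_frames * frame_len))
        return segments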

Run it yourself

Please ensure that you have Miniconda and git installed on your system before starting.

  1. Getting files: Navigate to your working directory. Then:

    $ git clone https://github.com/jayathungek/owlnet.git
    $ cd owlnet
  2. Data and checkpoints: Create a folder named owl_data in the root directory of the project (i.e. owlnet). This is where your audio files should go. The program looks for *.wav files in this directory to build its dataset (see the first sketch after this list). If you have a model checkpoint, put it in the model_checkpoints folder.

  3. Installing dependencies:

    $ conda env create -f environment.yml
    $ conda activate owlnet
  4. Running the Jupyter notebook:

    (owlnet)$ jupyter notebook

    This will open a browser window from which you can select the notebook you wish to run. If you just want to run the demo, this is owlnet_demo.ipynb. Run all the cells in order.

  5. Exporting to CSV: Navigate to the root directory of the project and run python -m owlnet.cli export, supplying a filename for the CSV to be exported (a quick way to inspect the result is sketched after this list):

    usage: python -m owlnet.cli export [-h] [-c CONFIG] filename

    positional arguments:
      filename: name to use for saving the CSV file; it will be saved to the exports/ directory

    options:
      -h, --help
      -c CONFIG, --config CONFIG: the path to a config.json file
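
As mentioned in step 2, the dataset is built from *.wav files found in owl_data. A hypothetical sketch of that kind of file discovery (owlnet's actual loader may differ):

    # Hypothetical illustration only; owlnet's real dataset loader may differ.
    from pathlib import Path

    wav_files = sorted(Path("owl_data").glob("*.wav"))
    print(f"Found {len(wav_files)} .wav files in owl_data/")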
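
To sanity-check the export from step 5, the resulting file can be loaded with pandas. The path below assumes the CSV lands in exports/ under the name you supplied; adjust the name and extension to match what the CLI actually writes, and note that the column names depend on owlnet's export format.

    # "outfile" is the (hypothetical) name supplied to the export command.
    import pandas as pd

    df = pd.read_csv("exports/outfile.csv")
    print(df.shape)   # number of rows and columns exported
    print(df.head())  # first few rows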

Optional: Training from scratch

If you would like to train your own model with different data or a modified architecture, please navigate to the project's root directory and run python -m owlnet.cli train, supplying a name for the model you are about to train:

    usage: python -m owlnet.cli train [-h] [-c CONFIG] model_name

    positional arguments:
      model_name: name to use for the model being trained

    options:
      -h, --help
      -c CONFIG, --config CONFIG: the path to a config.json file

You may also want to experiment with the version of the model that includes attention layers. To do this, please pass settings/attn_config.json to the training and/or export script:

    (owlnet)$ python -m owlnet.cli train -c settings/attn_config.json my_new_model
    (owlnet)$ python -m owlnet.cli export -c settings/attn_config.json outfile

Video card

Typically, an NVIDIA video card is required for training and inference. If this is not possible on your system, set the device variable in settings/config.json to cpu instead of cuda. This is very much not recommended: the demo will take ages to run on CPU, and training will take even longer.
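
For intuition, here is a hedged sketch of what that setting typically controls in a PyTorch project; the exact config handling in owlnet may differ:

    # Hypothetical illustration of the "device" setting; owlnet's real
    # config handling may differ.
    import json

    import torch

    with open("settings/config.json") as f:
        cfg = json.load(f)

    requested = cfg.get("device", "cuda")
    # Fall back to CPU if CUDA was requested but is not available.
    if requested == "cuda" and not torch.cuda.is_available():
        requested = "cpu"
    device = torch.device(requested)
    print(f"Using device: {device}")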

Data and checkpoints

Follow the links below to get access to the files needed to run the demo.

Description                          Link                                           Notes
Model checkpoint                     model.v1_3584.datapoints_105.epochs.pth       Version presented at CAIREES 2025
Model checkpoint (with attention)    model.attn.v4_3584.datapoints_110.epochs.pth  Experimental
Owl dataset                          Email me to get access                        -
