
Commit

Initial files upload from zip
ctzivana authored Apr 29, 2024
0 parents commit f791723
Showing 22 changed files with 1,730 additions and 0 deletions.
80 changes: 80 additions & 0 deletions README.md
@@ -0,0 +1,80 @@
![LOGO](https://github.com/DeepWave-Kaust/Meta-Processing/blob/main/asset/logo.jpg)

Reproducible material for Meta-Processing: A robust framework for multi-tasks seismic processing - Shijun Cheng, Randy Harsuko, Tariq Alkhalifah.

# Project structure
This repository is organized as follows:

* :open_file_folder: **metaprocessing**: Python code containing the routines for Meta-Processing, organized in two parts: meta-train and meta-test;
* :open_file_folder: **asset**: folder containing the logo;
* :open_file_folder: **data**: folder to store the datasets;
* :open_file_folder: **results**: folder to store the meta-initialization neural network model;
* :open_file_folder: **scripts**: set of Python scripts for reproducing the meta-train and meta-test examples.


## Supplementary files
To ensure reproducibility, we provide the data sets for the meta-train and meta-test stages, as well as the meta-initialization models for the various seismic processing tasks; a quick check of the expected folder layout is sketched after the list below. **Note:** If you wish to train the models from random initialization, please **do not** download and copy the meta-initialization models.

* **Meta-train data set**
Download the meta-train data set [here](https://). Then, use `unzip` to extract the contents to `meta_train_dataset/`.

* **Meta-test data set**
Download the meta-test data set [here](https://). Then, use `unzip` to extract the contents to `meta_test_dataset/`.

* **Meta-initialization model**
Download the meta-initialization neural network model [here](https://). Then, extract the contents to `meta_checkpoints/`.
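
As a quick sanity check, here is a minimal sketch (assuming the archives were extracted into the folder names listed above, relative to the repository root) that verifies the expected folders are in place before running the demos:
```
import os

# Hypothetical check: the folder names follow the extraction targets listed above;
# adjust the paths if you extracted the archives elsewhere.
for folder in ['meta_train_dataset', 'meta_test_dataset', 'meta_checkpoints']:
    print(folder, 'found' if os.path.isdir(folder) else 'MISSING')
```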

## Getting started :space_invader: :robot:
To ensure reproducibility of the results, we suggest using the `environment.yml` file when creating an environment.

Simply run:
```
./install_env.sh
```
This may take some time; if at the end you see the word `Done!` in your terminal, you are ready to go. Activate the environment by typing:
```
conda activate meta-processing
```

After that, you can simply install the package:
```
pip install .
```
or in developer mode:
```
pip install -e .
```

## Scripts :page_facing_up:
Once you have downloaded the supplementary files and installed the environment, you can enter the `scripts` folder and run the demos. We provide two scripts, one for the meta-train example and one for the meta-test example.

For meta-train, you can directly run:
```
sh run_meta_train.sh
```
**Note:** Before running the meta-train demo, open the `metaprocessing/meta_train/train.py` file and set the meta-train dataset folder accordingly.
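
For illustration, a minimal sketch of this edit (the variable name and path below are assumptions; match them to the corresponding assignment in `metaprocessing/meta_train/train.py`):
```
# Hypothetical example: point the meta-train data loader at the extracted archive.
# The actual variable name and path in metaprocessing/meta_train/train.py may differ.
dir_train = 'meta_train_dataset/'
```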

For meta-test, you can directly run:
```
sh run_meta_test.sh
```
**Note:** Before running the meta-test demo, open the `metaprocessing/meta_test/train.py` file and set the meta-test dataset folder according to the seismic processing task you want to test. In the same file, specify the path to the meta-initialization model. A meta-initialization model is provided in the supplementary files, and you can load it directly to perform the meta-test.
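
For illustration, a minimal sketch of the two edits (the dataset-folder variable name, the task folder, and the checkpoint file name are assumptions; `dir_load` is the variable used in the loading lines quoted further below):
```
# Hypothetical example of the two edits inside metaprocessing/meta_test/train.py.
# 1) Point the data loader at the meta-test task you want to evaluate
#    (the folder layout below is a placeholder, not the repository's actual one).
dir_test = 'meta_test_dataset/denoising/'
# 2) Point dir_load at the provided meta-initialization checkpoint
#    (the checkpoint file name is also a placeholder).
dir_load = 'meta_checkpoints/meta_init.pth'
```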

If you want to compare against a randomly initialized network, comment out lines 63 and 64 in the `metaprocessing/meta_test/train.py` file as follows:
```
# net.load_state_dict(torch.load(dir_load, map_location=device))
# print(f'Model loaded from {dir_load}')
```
and then run:
```
sh run_meta_test.sh
```

**Note:** The training logs (for both meta-train and meta-test) are saved in the `runs/` folder. You can run `tensorboard --logdir=./` or extract the logs to view how the metrics change as a function of epoch.
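
If you prefer to extract the scalars programmatically rather than launching TensorBoard, here is a minimal sketch using TensorBoard's `EventAccumulator` (the run folder name is an assumption, and the logged tag names depend on the training script, so list them first):
```
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point this at one of the run folders created under runs/ (the path is an assumption).
ea = EventAccumulator('runs/your_run_folder')
ea.Reload()

tags = ea.Tags()['scalars']        # scalar tags actually written by the training script
print(tags)
for event in ea.Scalars(tags[0]):  # first tag as an example
    print(event.step, event.value)
```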

**Disclaimer:** All experiments have been carried out on an Intel(R) Xeon(R) CPU @ 2.10GHz equipped with a single NVIDIA A100 GPU. Different environment configurations may be required for different combinations of workstation and GPU. Because the meta-training phase has high memory consumption, if your graphics card does not support large-batch training, please reduce the values of `args.k_spt` and `args.k_qry` in the `metaprocessing/meta_train/train.py` file.
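
As an illustration only, a hypothetical sketch of the kind of change this refers to (both the values shown and the way the arguments are set are assumptions; check the actual definitions in `metaprocessing/meta_train/train.py`):
```
# Hypothetical values: fewer support/query samples per task lowers GPU memory use.
# The actual defaults and argument handling live in metaprocessing/meta_train/train.py.
args.k_spt = 5   # support-set size per task (placeholder value)
args.k_qry = 5   # query-set size per task (placeholder value)
```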

## Cite us
DW0017 - Cheng et al. (2023) Meta-Processing: A robust framework for multi-tasks seismic processing.

Binary file added asset/logo.jpg
Empty file added asset/placeholder
Empty file.
Empty file added data/placeholder
Empty file.
23 changes: 23 additions & 0 deletions environment.yml
@@ -0,0 +1,23 @@
name: meta-processing
channels:
  - pytorch
  - conda-forge
  - defaults
dependencies:
  - cudatoolkit=11.6.0
  - numpy=1.23.5
  - python=3.10.9
  - python_abi=3.10=3_cp310
  - pytorch=1.12.0=py3.10_cuda11.6_cudnn8.3.2_0
  - matplotlib
  - scipy=1.10.0
  - torchaudio=0.12.0
  - torchvision=0.13.0
  - tqdm=4.64.1
  - pip=22.3.1
  - pip:
      - tensorboard==2.12.0
      - tensorboard-data-server==0.7.0
      - tensorboard-plugin-wit==1.8.1
prefix: /home/chengs/miniconda3

22 changes: 22 additions & 0 deletions install_env.sh
@@ -0,0 +1,22 @@
#!/bin/bash
#
# Installer for package
#
# Run: ./install_env.sh
#

echo 'Creating Package environment'

# create conda env
conda env create -f environment.yml
source ~/miniconda3/etc/profile.d/conda.sh
conda activate meta-processing
conda env list
echo 'Created and activated environment:' $(which python)

# check torch works as expected
echo 'Checking torch version and running a command...'
python -c 'import torch; print(torch.__version__); print(torch.cuda.get_device_name(torch.cuda.current_device())); print(torch.ones(10).to("cuda:0"))'

echo 'Done!'

43 changes: 43 additions & 0 deletions metaprocessing/meta_test/dataset.py
@@ -0,0 +1,43 @@
import torch.utils.data as data
import os
import os.path
import torch
import re
from glob import glob
from os.path import splitext
from os import listdir
import scipy.io as scio
import numpy as np

class Basicdataset(data.Dataset):
    """Loads .mat files containing paired 'input' and 'label' arrays from a directory."""

    def __init__(self, dir):
        self.dir = dir

        # Collect the file stems (extension stripped) and sort them by the number in the name.
        self.ids = strsort([splitext(file)[0] for file in listdir(self.dir)
                            if not file.startswith('.')])

    def __getitem__(self, index):
        idx_file = self.ids[index]

        file = glob(self.dir + idx_file + '.*')

        dict = scio.loadmat(file[0])
        input = dict['input']
        label = dict['label']

        # Add a channel dimension and return float tensors.
        return {
            'input': torch.from_numpy(input).unsqueeze(0).type(torch.FloatTensor),
            'label': torch.from_numpy(label).unsqueeze(0).type(torch.FloatTensor)
        }

    def __len__(self):
        return len(self.ids)

def sort_key(s):
    # Sort by the first integer that appears in the file name.
    tail = s.split('\\')[-1]
    c = re.findall(r'\d+', tail)[0]
    return int(c)

def strsort(alist):
    alist.sort(key=sort_key)
    return alist
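
A short usage sketch of `Basicdataset` (the directory path is an assumption; since `__getitem__` globs `self.dir + idx_file`, the path should end with a separator):
```
from torch.utils.data import DataLoader

# Hypothetical path: point this at one extracted meta-test task folder.
dataset = Basicdataset('meta_test_dataset/denoising/')
loader = DataLoader(dataset, batch_size=8, shuffle=False)

batch = next(iter(loader))
print(batch['input'].shape, batch['label'].shape)  # e.g. [8, 1, H, W] if the .mat arrays are 2-D
```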
34 changes: 34 additions & 0 deletions metaprocessing/meta_test/eval.py
@@ -0,0 +1,34 @@
import torch
from tqdm import tqdm
import torch.nn as nn
from msssimLoss import MSSSIM

def eval_net(net, loader, device):
    """Run one validation pass and return the mean MSE and MS-SSIM losses per batch."""

    net.eval()

    n_val = len(loader)  # number of batches in the validation loader
    loss1 = 0
    loss2 = 0

    criterion1 = nn.MSELoss()
    criterion2 = MSSSIM()

    with tqdm(total=n_val, desc='Validation round', unit='batch', leave=False) as pbar:
        for batch in loader:
            inputs, labels = batch['input'], batch['label']
            inputs = inputs.to(device=device, dtype=torch.float32)
            labels = labels.to(device=device, dtype=torch.float32)

            with torch.no_grad():
                outputs_pred = net(inputs)

            loss1 += criterion1(outputs_pred, labels).item()
            loss2 += criterion2(outputs_pred, labels).item()

            pbar.set_postfix(**{'loss1 (batch)': loss1, 'loss2_msssim (batch)': loss2})

            # Advance one step per batch, since the bar total is the number of batches.
            pbar.update(1)

    net.train()
    return loss1 / n_val, loss2 / n_val
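
A short usage sketch of `eval_net` (a minimal example assuming `net` is an already-trained model and `val_loader` is a `DataLoader` built from `Basicdataset`; both names are placeholders):
```
import torch

# Hypothetical wiring: `net` and `val_loader` stand in for an existing trained
# model and a validation DataLoader built as in dataset.py.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = net.to(device)
val_mse, val_msssim = eval_net(net, val_loader, device)
print(f'validation MSE: {val_mse:.6f}, MS-SSIM term: {val_msssim:.6f}')
```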
