Commit d465847 (initial commit): update test code
haku-huang committed Jan 16, 2022
Showing 69 changed files with 1,702 additions and 0 deletions.
Binary file added .DS_Store
Binary file not shown.
12 changes: 12 additions & 0 deletions .idea/ICME_model-78.iml


6 changes: 6 additions & 0 deletions .idea/inspectionProfiles/profiles_settings.xml


4 changes: 4 additions & 0 deletions .idea/misc.xml


8 changes: 8 additions & 0 deletions .idea/modules.xml


88 changes: 88 additions & 0 deletions .idea/workspace.xml


32 changes: 32 additions & 0 deletions README.md
@@ -0,0 +1,32 @@
# HALDeR

This is the official code for the paper "HALDeR: Hierarchical Attention-Guided Learning with Detail-Refinement for Multi-Exposure Image Fusion".

## Environment Preparation
```
python 3.6
pytorch 1.7.0
visdom 0.1.8.9
dominate 2.6.0
```

### Testing

We provide some example images for testing in `./test_data/`. To run inference:
```
python predict.py
```


### Reference

If you find our work useful in your research, please consider citing our paper:
```
@INPROCEEDINGS{9428192,
  title={Halder: Hierarchical Attention-Guided Learning with Detail-Refinement for Multi-Exposure Image Fusion},
  author={Liu, Jinyuan and Shang, Jingjie and Liu, Risheng and Fan, Xin},
  booktitle={2021 IEEE International Conference on Multimedia and Expo (ICME)},
  year={2021}
}
```
Binary file added checkpoints/HALDeR/latest_net_G_A.pth
Binary file not shown.
Binary file added checkpoints/HALDeR/latest_net_G_H.pth
Binary file not shown.
Binary file added checkpoints/HALDeR/latest_net_G_V.pth
Binary file not shown.
13 changes: 13 additions & 0 deletions checkpoints/HALDeR/loss_log.txt
@@ -0,0 +1,13 @@
================ Training Loss (Tue Mar 30 15:03:50 2021) ================
================ Training Loss (Tue Mar 30 15:30:10 2021) ================
================ Training Loss (Tue Mar 30 18:57:21 2021) ================
================ Training Loss (Tue Mar 30 18:59:03 2021) ================
================ Training Loss (Tue Mar 30 19:01:14 2021) ================
================ Training Loss (Tue Mar 30 19:11:42 2021) ================
================ Training Loss (Tue Mar 30 19:14:32 2021) ================
================ Training Loss (Tue Mar 30 19:19:10 2021) ================
================ Training Loss (Tue Mar 30 19:22:50 2021) ================
================ Training Loss (Tue Mar 30 19:28:52 2021) ================
================ Training Loss (Tue Mar 30 19:32:07 2021) ================
================ Training Loss (Tue Mar 30 19:35:46 2021) ================
================ Training Loss (Tue Mar 30 19:48:27 2021) ================
83 changes: 83 additions & 0 deletions checkpoints/HALDeR/opt.txt
@@ -0,0 +1,83 @@
------------ Options -------------
D_P_times2: False
IN_vgg: False
aspect_ratio: 1.0
batchSize: 1
checkpoints_dir: ./checkpoints
dataroot: ./test_data
dataset_mode: test
display_id: 1
display_port: 8097
display_single_pane_ncols: 0
display_winsize: 256
fcn: 0
fineSize: 600
gpu_ids: [0]
high_times: 400
how_many: 50
hybrid_loss: False
identity: 0.0
input_linear: False
input_nc: 3
instance_norm: 0
isTrain: False
is_ca: False
is_haze: False
is_re: False
is_toushe: False
l1: 10.0
lambda_A: 10.0
lambda_B: 10.0
latent_norm: False
latent_threshold: False
lighten: False
linear: False
linear_add: False
loadSize: 286
low_times: 200
max_dataset_size: inf
model: single
multiply: False
nThreads: 0
n_layers_D: 3
n_layers_patchD: 3
name: HALDeR
ndf: 64
new_lr: False
ngf: 64
no_dropout: True
no_flip: False
no_vgg_instance: False
noise: 0
norm: instance
norm_attention: False
ntest: inf
output_nc: 3
patchD: False
patchD_3: 0
patchSize: 64
patch_vgg: False
phase: test
resize_or_crop: no
results_dir: ./results/
self_attention: False
serial_batches: False
skip: 1
syn_norm: False
tanh: True
times_residual: True
use_avgpool: 0
use_mse: False
use_norm: 1
use_ragan: False
use_wgan: 0
vary: 1
vgg: 0
vgg_choose: relu5_3
vgg_maxpooling: False
vgg_mean: False
which_direction: AtoB
which_epoch: latest
which_model_netD: basic
which_model_netG: sid_unet_resize
-------------- End ----------------
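The options above are the printed defaults of a command-line parser, in the style of pix2pix/CycleGAN-derived codebases. A minimal sketch of how a few of these flags might be defined and overridden per run; the exact flag set and parser structure in this repository's options module are assumptions, not confirmed:

```python
import argparse

# Sketch of an options parser covering a few of the flags listed in
# opt.txt above; the real options module defines many more, and the
# exact flag names there are assumed, not confirmed.
def build_parser():
    parser = argparse.ArgumentParser(description="HALDeR test options (sketch)")
    parser.add_argument("--dataroot", default="./test_data",
                        help="directory of test images")
    parser.add_argument("--name", default="HALDeR",
                        help="experiment name; checkpoints live in ./checkpoints/<name>")
    parser.add_argument("--batchSize", type=int, default=1)
    parser.add_argument("--which_epoch", default="latest",
                        help="checkpoint tag, e.g. latest -> latest_net_G_A.pth")
    parser.add_argument("--resize_or_crop", default="no",
                        choices=["no", "crop", "resize_and_crop",
                                 "scale_width", "scale_width_and_crop"])
    return parser

# Defaults mirror the opt.txt values; any flag can be overridden per run.
opt = build_parser().parse_args(["--dataroot", "./my_images", "--batchSize", "4"])
print(opt.dataroot, opt.batchSize)  # ./my_images 4
```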
Empty file added data/__init__.py
Empty file.
Binary file added data/__pycache__/__init__.cpython-36.pyc
Binary file not shown.
Binary file added data/__pycache__/base_data_loader.cpython-36.pyc
Binary file not shown.
Binary file added data/__pycache__/base_dataset.cpython-36.pyc
Binary file not shown.
Binary file not shown.
Binary file added data/__pycache__/data_loader.cpython-36.pyc
Binary file not shown.
Binary file added data/__pycache__/image_folder.cpython-36.pyc
Binary file not shown.
Binary file added data/__pycache__/pair_dataset.cpython-36.pyc
Binary file not shown.
Binary file added data/__pycache__/single_dataset.cpython-36.pyc
Binary file not shown.
Binary file added data/__pycache__/test_dataset.cpython-36.pyc
Binary file not shown.
Binary file added data/__pycache__/unaligned_dataset.cpython-36.pyc
Binary file not shown.
14 changes: 14 additions & 0 deletions data/base_data_loader.py
@@ -0,0 +1,14 @@

class BaseDataLoader():
    def __init__(self):
        pass

    def initialize(self, opt):
        self.opt = opt

    def load_data(self):
        return None
48 changes: 48 additions & 0 deletions data/base_dataset.py
@@ -0,0 +1,48 @@
import torch.utils.data as data
from PIL import Image
import torchvision.transforms as transforms
import random

class BaseDataset(data.Dataset):
    def __init__(self):
        super(BaseDataset, self).__init__()

    def name(self):
        return 'BaseDataset'

    def initialize(self, opt):
        pass


def get_transform(opt):
    transform_list = []
    if opt.resize_or_crop == 'resize_and_crop':
        zoom = 1 + 0.1 * random.randint(0, 4)
        osize = [int(400 * zoom), int(600 * zoom)]
        transform_list.append(transforms.Scale(osize, Image.BICUBIC))
        transform_list.append(transforms.RandomCrop(opt.fineSize))
    elif opt.resize_or_crop == 'crop':
        transform_list.append(transforms.RandomCrop(opt.fineSize))
    elif opt.resize_or_crop == 'scale_width':
        transform_list.append(transforms.Lambda(
            lambda img: __scale_width(img, opt.fineSize)))
    elif opt.resize_or_crop == 'scale_width_and_crop':
        transform_list.append(transforms.Lambda(
            lambda img: __scale_width(img, opt.loadSize)))
        transform_list.append(transforms.RandomCrop(opt.fineSize))
    # elif opt.resize_or_crop == 'no':
    #     osize = [384, 512]
    #     transform_list.append(transforms.Scale(osize, Image.BICUBIC))

    if opt.isTrain and not opt.no_flip:
        transform_list.append(transforms.RandomHorizontalFlip())

    transform_list += [transforms.ToTensor()]
    return transforms.Compose(transform_list)


def __scale_width(img, target_width):
    ow, oh = img.size
    if ow == target_width:
        return img
    w = target_width
    h = int(target_width * oh / ow)
    return img.resize((w, h), Image.BICUBIC)
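The `scale_width` branches preserve aspect ratio: the width is pinned to a target and the height scales proportionally, rounding down. The arithmetic from `__scale_width`, isolated on bare dimensions as a dependency-free sketch:

```python
def scale_width_dims(ow, oh, target_width):
    # Same arithmetic as __scale_width above, on bare (width, height):
    # keep the aspect ratio, pin the width, round the height down.
    if ow == target_width:
        return ow, oh
    return target_width, int(target_width * oh / ow)

print(scale_width_dims(1200, 800, 600))  # (600, 400)
```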
50 changes: 50 additions & 0 deletions data/custom_dataset_data_loader.py
@@ -0,0 +1,50 @@
import torch.utils.data
from data.base_data_loader import BaseDataLoader


def CreateDataset(opt):
    dataset = None
    if opt.dataset_mode == 'aligned':
        from data.aligned_dataset import AlignedDataset
        dataset = AlignedDataset()
    elif opt.dataset_mode == 'unaligned':
        from data.unaligned_dataset import UnalignedDataset
        dataset = UnalignedDataset()
    elif opt.dataset_mode == 'unaligned_random_crop':
        from data.unaligned_random_crop import UnalignedDataset
        dataset = UnalignedDataset()
    elif opt.dataset_mode == 'pair':
        from data.pair_dataset import PairDataset
        dataset = PairDataset()
    elif opt.dataset_mode == 'test':
        from data.test_dataset import PairDataset
        dataset = PairDataset()
    elif opt.dataset_mode == 'single':
        from data.single_dataset import SingleDataset
        dataset = SingleDataset()
    else:
        raise ValueError("Dataset [%s] not recognized." % opt.dataset_mode)

    print("dataset [%s] was created" % (dataset.name()))
    dataset.initialize(opt)
    return dataset


class CustomDatasetDataLoader(BaseDataLoader):
    def name(self):
        return 'CustomDatasetDataLoader'

    def initialize(self, opt):
        BaseDataLoader.initialize(self, opt)
        self.dataset = CreateDataset(opt)
        self.dataloader = torch.utils.data.DataLoader(
            self.dataset,
            batch_size=opt.batchSize,
            shuffle=not opt.serial_batches,
            num_workers=int(opt.nThreads))

    def load_data(self):
        return self.dataloader

    def __len__(self):
        return min(len(self.dataset), self.opt.max_dataset_size)
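`CreateDataset` is a string-keyed factory built from an if/elif chain over `opt.dataset_mode`. The same dispatch can be written as a registry; this is a standalone sketch with an illustrative stub class, not the repo's datasets:

```python
# Registry-based version of the dataset_mode dispatch in CreateDataset.
_REGISTRY = {}

def register(mode):
    def wrap(cls):
        _REGISTRY[mode] = cls
        return cls
    return wrap

def create(mode):
    if mode not in _REGISTRY:
        raise ValueError("Dataset [%s] not recognized." % mode)
    dataset = _REGISTRY[mode]()
    print("dataset [%s] was created" % dataset.name())
    return dataset

@register("single")
class SingleDataset:  # stub standing in for data.single_dataset.SingleDataset
    def name(self):
        return "SingleDataset"

create("single")  # prints: dataset [SingleDataset] was created
```

Registering each class next to its definition removes the need to touch the factory when a new dataset mode is added.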
7 changes: 7 additions & 0 deletions data/data_loader.py
@@ -0,0 +1,7 @@

def CreateDataLoader(opt):
    from data.custom_dataset_data_loader import CustomDatasetDataLoader
    data_loader = CustomDatasetDataLoader()
    print(data_loader.name())
    data_loader.initialize(opt)
    return data_loader
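Putting the pieces together, the test-time data flow is: `CreateDataLoader` builds a `CustomDatasetDataLoader`, which builds the dataset via `CreateDataset` and wraps it in a torch `DataLoader` whose length is capped by `max_dataset_size`. A dependency-free sketch of that flow, with stub classes standing in for the torch and dataset parts (all names here are illustrative, not the repo's):

```python
class StubDataset:
    # Stands in for a dataset such as PairDataset / SingleDataset.
    def name(self):
        return "StubDataset"
    def initialize(self, opt):
        self.items = list(range(opt["n"]))
    def __len__(self):
        return len(self.items)

class StubLoader:
    # Mimics CustomDatasetDataLoader: build the dataset, expose load_data(),
    # and cap the reported length at max_dataset_size.
    def initialize(self, opt):
        self.opt = opt
        self.dataset = StubDataset()
        self.dataset.initialize(opt)
    def load_data(self):
        return iter(self.dataset.items)
    def __len__(self):
        return min(len(self.dataset), self.opt["max_dataset_size"])

def create_data_loader(opt):
    # Mirrors CreateDataLoader: construct, initialize, return.
    loader = StubLoader()
    loader.initialize(opt)
    return loader

loader = create_data_loader({"n": 5, "max_dataset_size": 3})
print(len(loader), list(loader.load_data()))  # 3 [0, 1, 2, 3, 4]
```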