- This repo contains the code used in the paper SalGAN360: Visual Saliency Detection on 360° Images with GAN (ICMEW 2018) by Fang-Yi Chao, Lu Zhang, Wassim Hamidouche, and Olivier Deforges.
- Winner of the Prediction of Head+Eye Saliency for Images track of the Salient360! Grand Challenge at ICME'18.
Understanding the visual attention of observers on 360° images has gained interest along with the boom in Virtual Reality applications. Extending existing saliency prediction methods from traditional 2D images to 360° images is not straightforward, due to the lack of a sufficiently large 360° image saliency database. In this paper, we propose to extend SalGAN, a 2D saliency model based on a generative adversarial network, to SalGAN360 by fine-tuning SalGAN with our new loss function to predict both global and local saliency maps. Our experiments show that SalGAN360 outperforms the tested state-of-the-art methods.
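The exact terms and weights of the new loss are defined in the paper. The snippet below is only a minimal sketch of the global-plus-local idea, using a placeholder pixel-wise binary cross-entropy and simple axis-aligned crops; the function names, the crop boxes, and `local_weight` are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Placeholder pixel-wise binary cross-entropy between saliency maps in [0, 1]."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

def global_local_loss(pred_equi, gt_equi, viewport_boxes, local_weight=1.0):
    """Hypothetical combined loss: one term on the full equirectangular map (global)
    plus the average term over viewport crops (local)."""
    global_term = bce(pred_equi, gt_equi)
    local_terms = [
        bce(pred_equi[t:b, l:r], gt_equi[t:b, l:r]) for (t, b, l, r) in viewport_boxes
    ]
    return global_term + local_weight * float(np.mean(local_terms))
```

In the actual pipeline, the local maps would come from the projected viewports produced during preprocessing rather than from flat crops of the equirectangular map.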
- Download SalGAN
- MATLAB
- Replace `01-data_preprocessing.py`, `02-train.py`, `03-predict.py`, `model_salgan.py`, `dataRepresentation.py`, `model.py`, and `utils.py` in SalGAN.
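The replacement can be done by hand; the snippet below is just a convenience sketch, assuming the original SalGAN scripts live in a `salgan/scripts` folder and this repo's modified copies in a `SalGAN360` folder (both paths are hypothetical and should be adjusted to your checkouts).

```python
import shutil
from pathlib import Path

# Hypothetical checkout locations: adjust both paths to your setup.
SALGAN360_DIR = Path("SalGAN360")          # this repo's modified scripts
SALGAN_SCRIPTS = Path("salgan/scripts")    # original SalGAN scripts folder

FILES = [
    "01-data_preprocessing.py", "02-train.py", "03-predict.py",
    "model_salgan.py", "dataRepresentation.py", "model.py", "utils.py",
]

# Overwrite the original SalGAN scripts with the SalGAN360 versions.
for name in FILES:
    shutil.copy2(SALGAN360_DIR / name, SALGAN_SCRIPTS / name)
    print("replaced", SALGAN_SCRIPTS / name)
```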
- Test: To predict saliency maps, run `salgan360.m` after specifying the path to the input images and the path to the output saliency maps.
- Train:
  - Run `preprocessing_trainingdata.m` to transfer the 360° images into multiple viewports (a Python sketch of this projection step follows the list).
  - Run `01-data_preprocessing.py` to make the pickle files.
  - Run `THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32,lib.cnmem=1,optimizer_including=cudnn python 02-train.py` to fine-tune the SalGAN model.
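For reference, `preprocessing_trainingdata.m` performs the viewport projection in MATLAB. The Python snippet below is only a sketch of the underlying idea, assuming a gnomonic (rectilinear) projection from the equirectangular image; the 90° field of view, the grid of viewport centres in the usage comment, and the nearest-neighbour sampling are illustrative choices, not the settings used in the paper.

```python
import numpy as np

def extract_viewport(equi, fov_deg, lon_deg, lat_deg, out_size=256):
    """Render a square rectilinear viewport centred at (lon_deg, lat_deg)
    from an equirectangular image (H x W x C), using gnomonic projection
    and nearest-neighbour sampling to keep the sketch short."""
    H, W = equi.shape[:2]
    half = np.tan(np.radians(fov_deg) / 2.0)
    lon0, lat0 = np.radians(lon_deg), np.radians(lat_deg)

    # Tangent-plane grid of the output viewport (camera looks down +z).
    x, y = np.meshgrid(np.linspace(-half, half, out_size),
                       np.linspace(-half, half, out_size))
    z = np.ones_like(x)
    n = np.sqrt(x * x + y * y + z * z)
    dx, dy, dz = x / n, y / n, z / n

    # Rotate viewing directions: pitch to the target latitude, then yaw to the longitude.
    dy, dz = (dy * np.cos(lat0) + dz * np.sin(lat0),
              -dy * np.sin(lat0) + dz * np.cos(lat0))
    dx, dz = (dx * np.cos(lon0) + dz * np.sin(lon0),
              -dx * np.sin(lon0) + dz * np.cos(lon0))

    # Back to spherical coordinates, then to equirectangular pixel indices.
    lon = np.arctan2(dx, dz)                    # [-pi, pi]
    lat = np.arcsin(np.clip(dy, -1.0, 1.0))     # [-pi/2, pi/2]
    u = np.round((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = np.round((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return equi[v, u]

# Example usage (illustrative centres): cover the sphere with overlapping 90° viewports.
# equi = np.asarray(Image.open("pano.jpg"))     # any equirectangular image
# views = [extract_viewport(equi, 90, lon, lat)
#          for lat in (-45, 0, 45) for lon in range(0, 360, 60)]
```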
@INPROCEEDINGS{8551543,
  author    = {F. Chao and L. Zhang and W. Hamidouche and O. Deforges},
  booktitle = {2018 IEEE International Conference on Multimedia \& Expo Workshops (ICMEW)},
  title     = {SalGAN360: Visual Saliency Prediction On 360 Degree Images With Generative Adversarial Networks},
  year      = {2018},
  month     = {July},
}