Joint Depth Map Super-Resolution Method via Deep Hybrid-Cross Guidance Filter (Pattern Recognition 2023)

Authors: Ke Wang, Lijun Zhao, Jinjing Zhang, Jialong Zhang, Anhong Wang, Huihui Bai

Abstract:

Nowadays, color-guided depth Super-Resolution (SR) methods mainly face three thorny problems: (1) joint depth SR methods suffer serious detail and structure loss at very high sampling rates; (2) existing depth SR networks have high computational complexity; (3) color-depth inconsistency makes it hard to fuse dual-modality features. To resolve these problems, we propose the Joint hybrid-cross Guidance Filter (JGF) method to progressively recover the quality of degraded Low-Resolution (LR) depth maps by exploiting color-depth consistency from multiple perspectives. Specifically, the proposed method leverages a pyramid structure to extract multi-scale features from the High-Resolution (HR) color image. At each scale, a hybrid side window filter block is proposed to achieve high-efficiency color feature extraction after each down-sampling of the HR color image; this block is also used to extract depth features from the LR depth map. Meanwhile, we propose a multi-perspective cross-guided fusion filter block to progressively fuse the high-quality multi-scale structure information of the color image with the corresponding enhanced depth features. In this filter block, two kinds of space-aware group-compensation modules are introduced to capture various spatial features from different perspectives, and a color-depth cross-attention module is proposed to extract color-depth consistency features for effective boundary preservation. Comprehensive qualitative and quantitative experimental results demonstrate that our method achieves superior performance against many state-of-the-art depth SR approaches in terms of mean absolute deviation and root mean square error on the Middlebury, NYU-v2, and RGB-D-D datasets.
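To give a rough, self-contained picture of the data flow described above, here is a minimal PyTorch sketch of a color-guided depth SR pipeline with per-scale color-depth cross-attention fusion. It is an illustrative assumption only, not the released JGF implementation: the module names (JGFSketch, ColorDepthCrossAttention), the bicubic pre-upsampling, and all hyper-parameters are invented for exposition, and the actual network additionally contains hybrid side window filter blocks and space-aware group-compensation modules that are omitted here.

# Illustrative sketch only (not the authors' code): a simplified color-guided
# depth SR network with per-scale color-depth cross-attention fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ColorDepthCrossAttention(nn.Module):
    """Toy cross-attention: depth features query color features (assumed design)."""

    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)  # queries from depth
        self.k = nn.Conv2d(channels, channels, 1)  # keys from color
        self.v = nn.Conv2d(channels, channels, 1)  # values from color

    def forward(self, depth_feat, color_feat):
        b, c, h, w = depth_feat.shape
        q = self.q(depth_feat).flatten(2)          # (b, c, h*w)
        k = self.k(color_feat).flatten(2)
        v = self.v(color_feat).flatten(2)
        # Full spatial attention for clarity; a real implementation would use
        # windowed or downsampled attention to keep memory manageable at HR sizes.
        attn = torch.softmax(q.transpose(1, 2) @ k / c ** 0.5, dim=-1)  # (b, hw, hw)
        fused = (attn @ v.transpose(1, 2)).transpose(1, 2).reshape(b, c, h, w)
        return depth_feat + fused                  # residual fusion


class JGFSketch(nn.Module):
    """Simplified two-branch network: color guidance branch + depth restoration branch."""

    def __init__(self, channels=32, num_stages=2):
        super().__init__()
        self.color_in = nn.Conv2d(3, channels, 3, padding=1)
        self.depth_in = nn.Conv2d(1, channels, 3, padding=1)
        self.color_blocks = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_stages)])
        self.depth_blocks = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_stages)])
        self.fusions = nn.ModuleList(
            [ColorDepthCrossAttention(channels) for _ in range(num_stages)])
        self.out = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, lr_depth, hr_color):
        # Bicubic pre-upsampling of the LR depth map to the HR grid (an assumption).
        depth_up = F.interpolate(lr_depth, size=hr_color.shape[-2:],
                                 mode="bicubic", align_corners=False)
        c_feat = self.color_in(hr_color)
        d_feat = self.depth_in(depth_up)
        for c_blk, d_blk, fuse in zip(self.color_blocks, self.depth_blocks, self.fusions):
            c_feat = F.relu(c_blk(c_feat))
            d_feat = F.relu(d_blk(d_feat))
            d_feat = fuse(d_feat, c_feat)          # cross-guided fusion at this stage
        return depth_up + self.out(d_feat)         # residual depth prediction


if __name__ == "__main__":
    lr = torch.rand(1, 1, 8, 8)                    # toy LR depth map
    rgb = torch.rand(1, 3, 64, 64)                 # toy HR color guidance (x8)
    print(JGFSketch()(lr, rgb).shape)              # torch.Size([1, 1, 64, 64])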

Our training code is publicly available here.

Testing and Training Datasets:

You can download the datasets by clicking here.

JGF Results

Please click here to download the results.

Supplementary Material

Link1, Link2 and Link3

Citation

If you find our work useful for your research, please cite us:

@article{wang2023joint,
  title={Joint depth map super-resolution method via deep hybrid-cross guidance filter},
  author={Wang, Ke and Zhao, Lijun and Zhang, Jinjing and Zhang, Jialong and Wang, Anhong and Bai, Huihui},
  journal={Pattern Recognition},
  volume={136},
  pages={109260},
  year={2023},
  publisher={Elsevier}
}
@inproceedings{zhang2023explainable,
  title={Explainable Unfolding Network For Joint Edge-Preserving Depth Map Super-Resolution},
  author={Zhang, Jialong and Zhao, Lijun and Zhang, Jinjing and Wang, Ke and Wang, Anhong},
  booktitle={2023 IEEE International Conference on Multimedia and Expo (ICME)},
  pages={888--893},
  year={2023},
  organization={IEEE}
}
@article{ye2020pmbanet,
  title={PMBANet: Progressive multi-branch aggregation network for scene depth super-resolution},
  author={Ye, Xinchen and Sun, Baoli and Wang, Zhihui and Yang, Jingyu and Xu, Rui and Li, Haojie and Li, Baopu},
  journal={IEEE Transactions on Image Processing},
  volume={29},
  pages={7427--7442},
  year={2020},
  publisher={IEEE}
}
Our code is built upon the PMBANet project. We thank the authors of PMBANet for sharing their code.
