iDeepwise/Awesome_CV_Research


Awesome_CV_Paper

A curated list of awesome paper resources in deep learning and computer vision.

To add to or correct this list, please send a pull request.


Overview

Review

Segmentation

Detection

Reconstruction

Classification

Registration

Others


Detection -- Different NMS Variants

1. NMS: Non-Maximum Suppression.

Paper: http://arxiv.org/abs/1411.5309

Reference: https://www.coursera.org/lecture/convolutional-neural-networks/non-max-suppression-dvrjH
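Classic greedy NMS is simple enough to sketch in plain Python. A minimal illustration assuming the usual `[x1, y1, x2, y2]` box format, not code from any of the papers listed here:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in [x1, y1, x2, y2] format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop
    every remaining box that overlaps it by more than iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

For two heavily overlapping boxes plus one distant box, `nms` keeps the higher-scoring of the overlapping pair and the distant box.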

2. Soft-NMS: Improving Object Detection with One Line of Code.

Paper: https://arxiv.org/abs/1704.04503

Code: https://github.com/bharatsingh430/soft-nms
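Soft-NMS changes only the suppression step: instead of discarding an overlapping box, it decays the box's score. A self-contained sketch of the Gaussian variant (illustrative only, not the authors' implementation; `sigma` and `score_thresh` follow the paper's notation):

```python
import math

def iou(a, b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay overlapping scores by exp(-iou^2 / sigma)
    instead of removing the boxes outright.
    Returns (index, final_score) pairs in selection order."""
    scores = list(scores)
    pool = list(range(len(boxes)))
    keep = []
    while pool:
        best = max(pool, key=lambda i: scores[i])
        pool.remove(best)
        keep.append((best, scores[best]))
        for i in pool:
            scores[i] *= math.exp(-iou(boxes[best], boxes[i]) ** 2 / sigma)
        pool = [i for i in pool if scores[i] > score_thresh]
    return keep
```

Boxes that hard NMS would delete survive here with reduced scores, which is where the method's recall gain comes from.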

3. Softer-NMS: Rethinking Bounding Box Regression for Accurate Object Detection.

Paper: https://arxiv.org/abs/1809.08545v1

Code: https://github.com/yihui-he/softer-NMS

4. IoU-Guided NMS: Acquisition of Localization Confidence for Accurate Object Detection.

Paper: https://eccv2018.org/openaccess/content_ECCV_2018/papers/Borui_Jiang_Acquisition_of_Localization_ECCV_2018_paper.pdf

Reference: https://blog.csdn.net/qq_41648043/article/details/82716133

Code: https://github.com/vacancy/PreciseRoIPooling

5. ConvNMS: A Convnet for Non-Maximum Suppression.

Paper: https://arxiv.org/abs/1511.06437

6. Pure NMS Network: Learning Non-Maximum Suppression.

Paper: https://arxiv.org/abs/1705.02950

Code: https://github.com/hosang/gossipnet

7. Yes-Net: An Effective Detector Based on Global Information.

Paper: https://arxiv.org/abs/1706.09180

8. Pairwise-NMS: Learning Pairwise Relationship for Multi-object Detection in Crowded Scenes

Paper: https://arxiv.org/abs/1901.03796

9. Relation Module: Relation Networks for Object Detection.

Paper: https://arxiv.org/abs/1711.11575

Reference: https://www.zhihu.com/question/263428989

Code: https://github.com/msracver/Relation-Networks-for-Object-Detection

Detection -- Scale Variation & Feature Concat

1. SNIP: An Analysis of Scale Invariance in Object Detection.

Paper: https://arxiv.org/abs/1711.08189

Code: https://github.com/bharatsingh430/snip

2. SNIPER: Efficient Multi-Scale Training.

Paper: https://arxiv.org/abs/1805.09300

Code: https://github.com/mahyarnajibi/SNIPER

3. HyperNet: Towards Accurate Region Proposal Generation and Joint Object Detection.

Paper: https://arxiv.org/abs/1604.00600

4. PANet: Path Aggregation Network for Instance Segmentation.

Paper: https://arxiv.org/abs/1803.01534

Code: https://github.com/ShuLiu1993/PANet

5. Scale-Aware Face Detection.

Paper: https://arxiv.org/abs/1706.09876

6. Dynamic Zoom-in Network for Fast Object Detection in Large Images.

Paper: https://arxiv.org/abs/1711.05187

7. Zoom Out-and-In Network with Map Attention Decision for Region Proposal and Object Detection.

Paper: https://arxiv.org/abs/1709.04347

8. Scale-Aware Trident Networks for Object Detection.

Paper: https://arxiv.org/abs/1901.01892

Code: https://github.com/TuSimple/simpledet/tree/master/models/tridentnet

Attention Variants -- Detection & Segmentation

1. Attention Is All You Need.

Paper: https://arxiv.org/abs/1706.03762

Reference: https://zhuanlan.zhihu.com/p/48508221

2. Non-local Neural Networks.

Paper: https://arxiv.org/abs/1711.07971

Reference: https://hellozhaozheng.github.io/z_post/计算机视觉-NonLocal-CVPR2018/

Code: https://github.com/facebookresearch/video-nonlocal-net

3. Relation Networks for Object Detection.

Paper: https://arxiv.org/abs/1711.11575

Code: https://github.com/msracver/Relation-Networks-for-Object-Detection

4. Residual Attention Network for Image Classification.

Paper: https://arxiv.org/abs/1704.06904

Reference: https://www.youtube.com/watch?v=Deq1BGTHIPA

Code: https://github.com/fwang91/residual-attention-network

5. OCNet: Object Context Network for Scene Parsing.

Paper: https://arxiv.org/abs/1809.00916

Code: https://github.com/PkuRainBow/OCNet.pytorch

6. Dual Attention Network for Scene Segmentation.

Paper: https://arxiv.org/abs/1809.02983

Code: https://github.com/junfu1115/DANet

7. Self-Attention Generative Adversarial Networks.

Paper: https://arxiv.org/abs/1805.08318

Code: https://github.com/heykeetae/Self-Attention-GAN

8. Context Encoding for Semantic Segmentation

Paper: https://arxiv.org/abs/1803.08904

Reference: https://hangzhang.org/PyTorch-Encoding/experiments/segmentation.html

Code: https://github.com/zhanghang1989/PyTorch-Encoding

9. Squeeze-and-Excitation Networks.

Paper: https://arxiv.org/abs/1709.01507

Reference: https://zhuanlan.zhihu.com/p/32702350

Code: https://github.com/hujie-frank/SENet
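The SE block itself is small: squeeze (global average pooling per channel), excitation (a two-layer gate ending in a sigmoid), then per-channel rescaling. A dependency-free sketch, with `w1`/`w2` as hypothetical weight matrices supplied by the caller:

```python
import math

def se_block(feature_maps, w1, w2):
    """Squeeze-and-Excitation gating, sketched on plain lists.
    feature_maps: list of C channel maps, each a flat list of activations.
    w1: C x (C/r) reduction weights; w2: (C/r) x C expansion weights."""
    # Squeeze: global average pooling per channel.
    z = [sum(m) / len(m) for m in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid.
    h = [max(0.0, sum(z[c] * w1[c][j] for c in range(len(z))))
         for j in range(len(w1[0]))]
    s = [1.0 / (1.0 + math.exp(-sum(h[j] * w2[j][c] for j in range(len(h)))))
         for c in range(len(z))]
    # Scale: reweight each channel map by its learned gate in (0, 1).
    return [[v * s[c] for v in m] for c, m in enumerate(feature_maps)]
```

With zero expansion weights the gates sit at sigmoid(0) = 0.5, so every channel is simply halved; trained weights would instead emphasize informative channels and suppress the rest.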

Detection -- Anchor Free

1. DenseBox: Unifying Landmark Localization with End to End Object Detection.

Paper: https://arxiv.org/pdf/1509.04874.pdf

Reference: https://blog.csdn.net/App_12062011/article/details/77941343

2. CornerNet: Detecting Objects as Paired Keypoints.

Paper: https://arxiv.org/pdf/1808.01244.pdf

Reference: https://zhuanlan.zhihu.com/p/41825737

Code: https://github.com/princeton-vl/CornerNet

3. ExtremeNet: Bottom-up Object Detection by Grouping Extreme and Center Points.

Paper: https://arxiv.org/pdf/1901.08043.pdf

Code: https://github.com/xingyizhou/ExtremeNet

4. CenterNet: Objects as Points.

Paper: https://arxiv.org/pdf/1904.07850.pdf

Reference: https://www.infoq.cn/article/XUDiNPviWhHhvr6x_oMv

Code: https://github.com/xingyizhou/CenterNet

5. CenterNet: Keypoint Triplets for Object Detection.

Paper: https://arxiv.org/pdf/1904.08189.pdf

Reference: https://zhuanlan.zhihu.com/p/66326413

Code: https://github.com/Duankaiwen/CenterNet

6. FCOS: Fully Convolutional One-Stage Object Detection.

Paper: https://arxiv.org/abs/1904.01355

Code: https://github.com/tianzhi0549/FCOS

Lightweight Network Structure

1. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size.

Paper: https://arxiv.org/abs/1602.07360

Code: https://github.com/forresti/SqueezeNet

2. Densely Connected Convolutional Networks.

Paper: https://arxiv.org/pdf/1608.06993.pdf

Reference: https://blog.csdn.net/u014380165/article/details/75142664

Code: https://github.com/liuzhuang13/DenseNet

3. Xception: Deep Learning with Depthwise Separable Convolutions.

Paper: https://arxiv.org/abs/1610.02357

Reference: https://blog.csdn.net/u014380165/article/details/75142710

Code: https://github.com/yihui-he/Xception-caffe

4. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.

Paper: https://arxiv.org/abs/1704.04861

Reference: https://blog.csdn.net/qq_31914683/article/details/79330343

Code: https://github.com/Zehaos/MobileNet https://github.com/shicai/MobileNet-Caffe
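The parameter saving from replacing a standard convolution with a depthwise separable one is simple arithmetic; a minimal sketch (bias terms ignored):

```python
def standard_conv_params(c_in, c_out, k):
    # A standard k x k convolution mixes all input channels
    # for every output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise: one k x k filter per input channel;
    # pointwise: a 1 x 1 convolution projecting to c_out channels.
    return k * k * c_in + c_in * c_out

# A typical MobileNet-style layer: 3x3 convolution, 256 -> 256 channels.
std = standard_conv_params(256, 256, 3)        # 589,824 parameters
sep = depthwise_separable_params(256, 256, 3)  # 67,840 parameters
```

Here the factorized layer uses roughly 8.7x fewer parameters, which is the source of the "k^2-fold" savings argument in the paper.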

5. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices.

Paper: https://arxiv.org/abs/1707.01083

Reference: https://blog.csdn.net/u014380165/article/details/75137111

Code: https://github.com/farmingyard/ShuffleNet
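The channel shuffle operation at the heart of ShuffleNet is just a reshape-transpose-flatten over the channel dimension; viewed as an index permutation it can be sketched in pure Python:

```python
def channel_shuffle(channels, groups):
    """Reorder a channel list: reshape to (groups, channels_per_group),
    transpose, then flatten - so that after the next grouped convolution
    each group sees channels from every input group."""
    n = len(channels)
    assert n % groups == 0, "channel count must be divisible by groups"
    per_group = n // groups
    return [channels[g * per_group + i]
            for i in range(per_group)   # transposed outer axis
            for g in range(groups)]     # transposed inner axis
```

For 6 channels in 2 groups, `[0, 1, 2, 3, 4, 5]` becomes `[0, 3, 1, 4, 2, 5]`, interleaving the two groups.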

6. NASNet: Learning Transferable Architectures for Scalable Image Recognition.

Paper: https://arxiv.org/abs/1707.07012

Reference: https://blog.csdn.net/xjz18298268521/article/details/79079008 https://zhuanlan.zhihu.com/p/52616166

Code: https://github.com/yeephycho/nasnet-tensorflow

7. CondenseNet: An Efficient DenseNet using Learned Group Convolutions.

Paper: https://arxiv.org/abs/1711.09224

Reference: https://blog.csdn.net/u014380165/article/details/78747711

Code: https://github.com/ShichenLiu/CondenseNet

8. MobileNetV2: Inverted Residuals and Linear Bottlenecks.

Paper: https://arxiv.org/abs/1801.04381

Reference: https://www.cnblogs.com/hejunlin1992/p/9395345.html

Code: https://github.com/xiaochus/MobileNetV2

9. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design.

Paper: https://arxiv.org/abs/1807.11164

Reference: https://zhuanlan.zhihu.com/p/48261931

Code: https://github.com/farmingyard/ShuffleNet

10. MnasNet: Platform-Aware Neural Architecture Search for Mobile.

Paper: https://arxiv.org/abs/1807.11626

Reference: https://zhuanlan.zhihu.com/p/42474017

Code: https://github.com/AnjieZheng/MnasNet-PyTorch

11. ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware.

Paper: https://arxiv.org/abs/1812.00332

Reference: https://www.cnblogs.com/wangxiaocvpr/p/10559377.html

Code: https://github.com/MIT-HAN-LAB/ProxylessNAS

12. Searching for MobileNetV3.

Paper: https://arxiv.org/abs/1905.02244v2

Reference: https://blog.csdn.net/sinat_37532065/article/details/90813655

Code: https://github.com/xiaolai-sqlai/mobilenetv3

13. MixConv: Mixed Depthwise Convolutional Kernels.

Paper: https://arxiv.org/abs/1907.09595

Reference: https://zhuanlan.zhihu.com/p/75242090

Code: https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet/mixnet

14. MoGA: Searching Beyond MobileNetV3.

Paper: https://arxiv.org/pdf/1908.01314.pdf

Reference: https://zhuanlan.zhihu.com/p/76909380

Code: https://github.com/xiaomi-automl/MoGA

Network Pruning

1. Learning both Weights and Connections for Efficient Neural Network.

Paper: https://arxiv.org/abs/1506.02626

Reference: https://xmfbit.github.io/2018/03/14/paper-network-prune-hansong/

2. Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures.

Paper: https://arxiv.org/pdf/1607.03250.pdf

Reference: https://blog.csdn.net/hsqyc/article/details/83651795

3. Learning Structured Sparsity in Deep Neural Networks.

Paper: https://arxiv.org/abs/1608.03665

Reference: https://xmfbit.github.io/2018/02/24/paper-ssl-dnn/

4. L1-norm based channel pruning(Pruning Filters for Efficient ConvNets).

Paper: https://arxiv.org/abs/1608.08710

Reference: https://blog.csdn.net/u013082989/article/details/77943240
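The L1-norm criterion from this paper ranks each filter by the sum of its absolute weights and drops the smallest ones. A minimal sketch with filters represented as flat weight lists (illustrative only, not the authors' code):

```python
def prune_filters_l1(filters, keep):
    """Keep the `keep` filters with the largest L1 norm (sum of |w|).
    Returns the indices of the surviving filters, in original order."""
    norms = [sum(abs(w) for w in f) for f in filters]
    ranked = sorted(range(len(filters)), key=lambda i: norms[i], reverse=True)
    return sorted(ranked[:keep])
```

In a real network the surviving indices would then be used to slice the layer's weight tensor and the input channels of the following layer.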

5. Channel Pruning for Accelerating Very Deep Neural Networks.

Paper: https://arxiv.org/abs/1707.06168

Reference: https://www.jianshu.com/p/e4aeba86e14c

Code: https://github.com/yihui-he/channel-pruning

6. ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression.

Paper: https://arxiv.org/pdf/1707.06342.pdf

Reference: https://blog.csdn.net/u014380165/article/details/77763037

7. Learning Efficient Convolutional Networks through Network Slimming.

Paper: https://arxiv.org/pdf/1708.06519.pdf

Reference: https://blog.csdn.net/u011995719/article/details/78788336

8. AutoPruner: An End-to-End Trainable Filter Pruning Method for Efficient Deep Model Inference.

Paper: https://arxiv.org/abs/1805.08941

Reference: https://blog.csdn.net/linlb15/article/details/102711929

9. Rethinking the Value of Network Pruning.

Paper: https://arxiv.org/abs/1810.05270

Reference: https://blog.csdn.net/zhangjunhit/article/details/83506306

Code: https://github.com/Eric-mingjie/rethinking-network-pruning

10. Slimmable Neural Networks.

Paper: https://openreview.net/pdf?id=H1gMCsAqY7

Reference: https://blog.csdn.net/qq_14845119/article/details/89453059

Code: https://github.com/JiahuiYu/slimmable_networks

11. Universally Slimmable Networks and Improved Training Techniques.

Paper: https://arxiv.org/abs/1903.05134v2

Reference: https://www.zhihu.com/question/306865592

Code: https://github.com/JiahuiYu/slimmable_networks

12. AutoSlim: Towards One-Shot Architecture Search for Channel Numbers.

Paper: https://arxiv.org/abs/1903.11728v3

Reference: https://zhuanlan.zhihu.com/p/75518741

Code: https://github.com/JiahuiYu/slimmable_networks

Generative Adversarial Network

1. GAN: Generative Adversarial Nets

Paper: https://arxiv.org/abs/1406.2661

Reference: https://blog.csdn.net/wspba/article/details/54577236

2. CGAN: Conditional Generative Adversarial Nets

Paper: https://arxiv.org/abs/1411.1784

Reference: https://blog.csdn.net/taoyafan/article/details/81229466

Code: https://github.com/eriklindernoren/Keras-GAN/tree/master/cgan

3. Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks

Paper: https://arxiv.org/abs/1506.05751

Reference: https://www.cnblogs.com/wangxiaocvpr/p/5966776.html

Code: http://soumith.ch/eyescream/

4. DCGAN: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

Paper: https://arxiv.org/abs/1511.06434

Reference: https://blog.csdn.net/liuxiao214/article/details/73500737

Code: https://github.com/carpedm20/DCGAN-tensorflow

5. Improved Techniques for Training GANs

Paper: https://arxiv.org/abs/1606.03498

Reference: https://blog.csdn.net/u013972559/article/details/85545339

6. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

Paper: https://arxiv.org/abs/1606.03657

Code: https://github.com/openai/InfoGAN

7. Pixel-Level Domain Transfer

Paper: https://arxiv.org/abs/1603.07442

8. ACGAN: Conditional Image Synthesis with Auxiliary Classifier GAN

Paper: https://arxiv.org/abs/1610.09585

Code: https://github.com/buriburisuri/ac-gan

9. CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

Paper: https://arxiv.org/abs/1703.10593

Reference: https://blog.csdn.net/cassiepython/article/details/80942899

Code: https://github.com/junyanz/CycleGAN

Code: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix

10. FID: GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

Paper: https://arxiv.org/abs/1706.08500

Reference: https://baijiahao.baidu.com/s?id=1647349368499780367&wfr=spider&for=pc

11. LSGAN: Least Squares Generative Adversarial Networks

Paper: https://arxiv.org/abs/1611.04076v2

Reference: https://blog.csdn.net/cuihuijun1hao/article/details/83114145

Code: https://github.com/eriklindernoren/Keras-GAN/tree/master/lsgan

12. Pix2pix: Image-to-Image Translation with Conditional Adversarial Networks

Paper: https://arxiv.org/pdf/1611.07004v3.pdf

Code: https://phillipi.github.io/pix2pix/

13. TripleGAN: Triple Generative Adversarial Net

Paper: https://arxiv.org/abs/1703.02291v2

Reference: https://blog.csdn.net/Forlogen/article/details/89415400

Code: https://github.com/zhenxuan00/triple-gan

14. WGAN: Wasserstein Generative Adversarial Networks

Paper: https://arxiv.org/abs/1701.07875

Reference: https://zhuanlan.zhihu.com/p/25071913

Code: https://github.com/eriklindernoren/Keras-GAN/tree/master/wgan

15. WGAN-GP: Improved Training of Wasserstein GANs

Paper: http://papers.nips.cc/paper/7159-improved-training-of-wasserstein-gans.pdf

Code: https://github.com/eriklindernoren/Keras-GAN/tree/master/wgan_gp

16. BSGAN: Boundary-Seeking Generative Adversarial Networks

Paper: https://arxiv.org/abs/1702.08431v2

Code: https://github.com/eriklindernoren/Keras-GAN

17. How good is my GAN?

Paper: https://arxiv.org/abs/1807.09499

Reference: https://zhuanlan.zhihu.com/p/43617017

18. MUNIT: Multimodal Unsupervised Image-to-Image Translation

Paper: https://arxiv.org/abs/1804.04732

Reference: https://blog.csdn.net/MajorDong100/article/details/84335653

Code: https://github.com/nvlabs/MUNIT

19. PacGAN: The power of two samples in generative adversarial networks

Paper: http://papers.nips.cc/paper/7423-pacgan-the-power-of-two-samples-in-generative-adversarial-networks.pdf

Code: https://github.com/fjxmlzn/PacGAN

20. PGAN: Progressive Growing of GANs for Improved Quality, Stability, and Variation

Paper: https://arxiv.org/abs/1710.10196

Reference: https://blog.csdn.net/weixin_42360095/article/details/89521849

Code: https://github.com/tkarras/progressive_growing_of_gans

21. Pix2pixHD: High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs

Paper: https://arxiv.org/abs/1711.11585

Reference: https://research.nvidia.com/publication/2017-12_High-Resolution-Image-Synthesis

Code: https://github.com/NVIDIA/pix2pixHD

22. cGANs with Projection Discriminator

Paper: https://arxiv.org/abs/1802.05637

Reference: https://zhuanlan.zhihu.com/p/63353147

Code: https://github.com/pfnet-research/sngan_projection

23. SNGAN: Spectral Normalization for Generative Adversarial Networks

Paper: https://arxiv.org/abs/1802.05957

Code: https://github.com/pfnet-research/sngan_projection

24. StyleGAN: A Style-Based Generator Architecture for Generative Adversarial Networks

Paper: https://arxiv.org/abs/1812.04948

Reference: http://www.sohu.com/a/282014920_129720

25. StyleGANv2: Analyzing and Improving the Image Quality of StyleGAN

Paper: http://arxiv.org/abs/1912.04958

Reference: https://blog.csdn.net/WinerChopin/article/details/103538073

Code: https://github.com/NVlabs/stylegan2

26. BigGAN: Large Scale GAN Training for High Fidelity Natural Image Synthesis

Paper: https://arxiv.org/abs/1809.11096

Code: https://github.com/ajbrock/BigGAN-PyTorch
