- 👋 Hi, I'm Qingyun Li (李青云).
- 👀 I'm interested in Large Multimodal Models/Data, Object Detection/Segmentation, Weakly Supervised Learning, and Remote Sensing Image Interpretation.
- 🌱 I'm currently a PhD candidate at Harbin Institute of Technology (HIT), supervised by Prof. Yushi Chen. I have also participated in research projects at OpenGVLab, Shanghai AI Laboratory, collaborating with Xue Yang and Wenhai Wang.
- 💞️ I've been an active contributor to MMDetection, collaborating with Shilong Zhang.
- 🎮 Hail Zelda!
Pinned repositories:
- OpenGVLab/OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text
- OpenGVLab/all-seeing: [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of the Open World
- sam-mmrotate: SAM (Segment Anything Model) for generating rotated bounding boxes with MMRotate, used as a comparison method for H2RBox-v2
- open-mmlab/mmdetection: OpenMMLab Detection Toolbox and Benchmark
- Spa-Spe-TR: [GRSL 2022] Official implementation of "Two-Branch Pure Transformer for Hyperspectral Image Classification"