The code is messy and will be cleaned up later.
This write-up is still in draft form; I will tidy it up bit by bit when I have time. The usage instructions below are also not very clear, so please explore a little on your own; if you ask a question, I will answer when I am free.
search_hyperparams.py is the script for opening multiple windows and running several experiments in one batch; it calls train.py.
train.py can also be edited and run on its own, to launch a single experiment or to restart an interrupted one.
evaluate.py is used separately to run evaluation.
The codebase contains many scripts unrelated to this experiment; I will clean them up later.
The experiments were not under git version control and the code changed a great deal, so the weights from the original experiments may no longer load. I suggest simply retraining; it is quick. With width set to 0.125 and depth = 1/3 in the parameter configuration, training runs extremely fast. For the lightweight setting, a learning rate of 0.01 or 0.001 converges quickly; for the non-lightweight setting it can be set lower, to 0.0001. A sketch of these fields follows.
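For reference, this is roughly what those fields look like as a config (a minimal sketch; the actual key names live in experiments/params.json and may differ):

```python
# a minimal sketch of the lightweight configuration described above;
# key names are assumptions -- check experiments/params.json for the real ones
lightweight_params = {
    "width": 0.125,        # channel-width multiplier: very fast to train
    "depth": 1 / 3,        # depth (block-repeat) multiplier
    "learning_rate": 0.01, # 0.01 or 0.001 converges quickly at this scale
}
# for non-lightweight models, drop the learning rate to ~0.0001
```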
In the fullimg branch, a weight file is provided; the code changes there are relatively small.
When benchmarking, I relaxed the matching constraints, which makes the reported AP very high; with strict constraints the score would be somewhat lower. Since this is neither a leaderboard entry nor a paper, 90+ felt good enough. This code version and the experiment records accumulated along the way while tuning. After this problem (parking slot detection) landed on my desk, I spent only half a month working out the model; most of the remaining time went into fighting the dataset and the C++ engineering side (tracking, plus the usual add/delete/query/update work), so the experiment records were added bit by bit. I did not plan them carefully at the start, so they are in draft form and will be organized when I have time. Even the benchmark and evaluation-metric code was added later. Please make do with it for now.
For the ultra-lightweight models (tens of MFLOPs), the inference results leave something to be desired; increase the FLOPs a bit and the results become very good. The tens-of-MFLOPs variants were built on a modified PP-LCNet backbone, which is fast, the fastest here, and has many other advantages. The default YOLOX DarkNet backbone performs a bit better at the same FLOPs level. Also note: the model is end-to-end with no post-processing, which is exactly what we want, and the structure is simple.
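To make the width/depth scaling concrete, here is a hedged sketch of how YOLOX-style multipliers shrink a model; the base numbers are illustrative, not the actual values in this repo:

```python
import math

# illustrative sketch of YOLOX-style width/depth scaling; the base numbers
# below are examples, not the actual values used in this repo
def scaled_channels(base_channels: int, width: float = 0.125) -> int:
    # round up to a multiple of 8, a common channel-alignment convention
    return max(8, math.ceil(base_channels * width / 8) * 8)

def scaled_repeats(base_repeats: int, depth: float = 1 / 3) -> int:
    # never scale a stage below one block
    return max(1, round(base_repeats * depth))

# e.g. a 256-channel stage with 3 blocks shrinks to 32 channels and 1 block
print(scaled_channels(256), scaled_repeats(3))  # -> 32 1
```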
Links:
- supplementary material: videos (code: gpsd)
- experiment records: 200 exps
- pretrained weights for some experiments: https://pan.baidu.com/s/1CdqcPhMfPQMat9m3i51-Vw (extraction code: gpsd)
or
The mAP is calculated with the confidence threshold set to 0.
However, the strict matching condition is not enabled by default. When I later enabled it in dataset/process.py -> match_marking_points(), the mAP dropped noticeably. Alternatively, the primary metric can be switched from mAP to recall; a sketch below illustrates both ideas.
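A hedged sketch of what loose vs. strict matching and a recall metric might look like (the real logic and thresholds are in dataset/process.py -> match_marking_points() and may differ):

```python
import numpy as np

# illustrative only: the real logic and thresholds live in
# dataset/process.py -> match_marking_points() and may differ
def match_marking_points(pred, gt, loose=True):
    # pred, gt: (x, y, direction) with x, y in normalized image coordinates
    dist = np.hypot(pred[0] - gt[0], pred[1] - gt[1])
    if loose:
        return dist < 0.12                       # relaxed: position only
    angle = abs(pred[2] - gt[2])
    angle = min(angle, 2 * np.pi - angle)        # wrap the difference to [0, pi]
    return dist < 0.0625 and angle < np.pi / 12  # strict: position + direction

def recall(preds, gts, loose=True):
    # recall = matched ground-truth points / all ground-truth points
    matched = sum(any(match_marking_points(p, g, loose) for p in preds) for g in gts)
    return matched / max(1, len(gts))
```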
master is the sub-image version, and fullimg is the full-image version.
search_hyperparams.py has run 100+ experiments through configuration alone, which makes them very convenient to manage; the sketch below shows the idea.
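A minimal sketch of the dispatch idea, assuming tmux and one JSON config per experiment (the real search_hyperparams.py is more elaborate, and the `--exp_dir` flag is an assumption, not the repo's actual interface):

```python
import json
import os
import subprocess

# sketch of the dispatch idea: one tmux window per experiment, each window
# running train.py on its own params.json; the '--exp_dir' flag and function
# names are assumptions, not the real interface of this repo
def launch(session, experiments):
    for name, overrides in experiments.items():
        exp_dir = os.path.join("experiments", name)
        os.makedirs(exp_dir, exist_ok=True)
        with open(os.path.join(exp_dir, "params.json"), "w") as f:
            json.dump(overrides, f, indent=2)
        subprocess.run(
            ["tmux", "new-window", "-t", session, "-n", name,
             f"python train.py --exp_dir {exp_dir}"],
            check=True,
        )

launch("exp", {"exp_w0125": {"width": 0.125, "depth": 1 / 3, "learning_rate": 0.01}})
```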
Baidu netdisk:
- tongji dataset
  - ps2.0 (code: gpsd)
  - ps2.0_convert (code: gpsd)
- seoul dataset
  - PIL_park (code: gpsd)
- diy
  - benchmark (code: gpsd)

All in one place (the old link is no longer available; this link works, updated 2023-09): gpsd_datasets (code: gpsd)
- numpy
- opencv
- torch
As of now, this project uses the pre-processing below; previous versions did not.
- You need to download the raw datasets 'PIL-park' and 'ps2.0_convert' from https://github.com/dohoseok/context-based-parking-slot-detect/
- Please refer to the file './dataset/pairable/pairable_dataset_maker_v2.py' in this project.
- Modify the data paths in the file and enable the entry function; a quick check of the resulting layout is sketched after this list.
- python ./dataset/pairable/pairable_dataset_maker_v2.py
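After the maker script finishes, something like this can sanity-check the output layout (the root and split folder names are inferred from the example data paths further down in this README; adjust to your machine):

```python
import os

# hedged sketch: the output root and split names are assumptions based on the
# example paths later in this README (e.g. .../pairable_parking_slot/seoul)
root = "/home/data/lwb/pairable_parking_slot"
for split in ("seoul", "tongji"):
    path = os.path.join(root, split)
    print(path, "->", "ok" if os.path.isdir(path) else "missing")
```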
You should create a tmux session called 'exp' (the same as the 'session_name' you set in 'search_hyperparams.py'):
tmux new -s session_name
Then run:
python search_hyperparams.py
- All the experiments run inside tmux windows. After training or debugging, you should close the sub-windows created by search_hyperparams.py.
- The parameter settings in search_hyperparams.py are saved to JSON files, and train.py, as the actual runner, reads the corresponding JSON file to start training (see the sketch after this list).
- TensorBoard log files are saved under 'experiments/xxxx', where 'xxxx' corresponds to the 'name' set in the 'experiment' function in search_hyperparams.py.
- 'experiments/params.json' is the template configuration file; you can change the data paths and other hyperparameters in it.
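Conceptually, the hand-off works like this (a sketch, assuming one params.json per experiment directory; the real field names are whatever experiments/params.json defines):

```python
import json

# sketch of the runner side: search_hyperparams.py writes the JSON,
# train.py reads it back; the path and key names here are assumptions
with open("experiments/exp_001/params.json") as f:
    params = json.load(f)

width = params.get("width", 1.0)           # e.g. 0.125 for the lightweight setup
depth = params.get("depth", 1.0)           # e.g. 1/3
lr    = params.get("learning_rate", 1e-3)  # ~1e-4 for the larger models
print(f"training with width={width}, depth={depth}, lr={lr}")
```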
If you want to run train.py directly, you need to modify some parameters inside it.
python evaluate.py
Use one of these data paths:
- /home/data/lwb/pairable_parking_slot/seoul
- /home/data/lwb/pairable_parking_slot/tongji
or
- /home/data/lwb/data/ParkingSlot/public (includes seoul + tongji, with train and test merged)
- /home/data/lwb/data/ParkingSlot/in99
Use this command to start the TensorBoard service:
tensorboard --logdir_spec exp_1:add_1,exp_2:add_2 --bind_all --port xxxx
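For example, to compare two runs under experiments/ (the run names, paths, and port are placeholders, not runs that ship with this repo):

tensorboard --logdir_spec exp_a:experiments/exp_a,exp_b:experiments/exp_b --bind_all --port 6006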