Implement Deep Learning-based rPPG Models using PyTorch

model list

file list

  • dataset  :  dataset-related code

    • dataset_loader.py  :  torch.utils.data.Dataset that loads the stored, preprocessed dataset file (.hpy); see the sketch after this file list
    • __NetworkName__Dataset.py  :  dataset customized to fit each model
  • nets  :  code related to the network architectures
    ( funcs < layers < blocks < modules <= sub_models <= models )

    • blocks
    • funcs
    • layers
    • models
      • sub_models
    • modules
  • pyVHR : cloned from phuselab/pyVHR

  • log.py  :  custom logging functions

  • loss.py  :  list of available losses & custom loss functions

  • optim.py  :  list of available optimizers & custom optimizer functions

  • main.py  :  entry point for training & inference (see Usage below)

  • params.json  :  list of training options
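For reference, a minimal sketch of what a dataset class loading the preprocessed file could look like. It assumes the .hpy file is HDF5-formatted; the array names "video" and "label" are hypothetical, not taken from the repository.

import h5py
import numpy as np
import torch
from torch.utils.data import Dataset

class PreprocessedDataset(Dataset):
    def __init__(self, hpy_path):
        # load everything once; the array names "video"/"label" are assumptions
        with h5py.File(hpy_path, "r") as f:
            self.video = np.array(f["video"])   # e.g. (N, C, H, W) frames
            self.label = np.array(f["label"])   # e.g. (N,) PPG targets

    def __len__(self):
        return len(self.label)

    def __getitem__(self, idx):
        x = torch.from_numpy(self.video[idx]).float()
        y = torch.tensor(self.label[idx]).float()
        return x, y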

preprocessor list

  • __TIME__  :  measure the running time of each stage (see the sketch after this list)

    • preprocessing time
    • model init time
    • setting loss func time
    • setting optimizer time
    • training time per epoch
    • inference time per batch
  • __PREPROCESSING__  :  perform preprocessing before training & generate the preprocessed file (.hpy)

  • __MODEL_SUMMARY__  :  print model architecture summary using torchsummary
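A rough sketch of how flags such as __TIME__ and __MODEL_SUMMARY__ could gate these measurements; the flag handling and stage names below are assumptions for illustration, not the repository's actual code.

import time
from contextlib import contextmanager

__TIME__ = True            # assumed module-level flags
__MODEL_SUMMARY__ = False

@contextmanager
def timed(stage):
    # print the wall-clock time of a stage only when __TIME__ is enabled
    start = time.time()
    yield
    if __TIME__:
        print(f"[{stage}] {time.time() - start:.3f} s")

# usage inside a hypothetical training script:
# with timed("model init"):
#     model = build_model(params)
# if __MODEL_SUMMARY__:
#     torchsummary.summary(model, input_size=(3, 36, 36))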

Usage

  1. modify params.json
example
  "model_params":
    {
        "name": "DeepPhys",
        "name_comment":
                [
                    "DeepPhys",
                    "PhysNet"
                ]
    }
  2. run main.py
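A minimal sketch of the dispatch main.py presumably performs on model_params["name"]; the import paths used here are hypothetical.

import json

with open("params.json") as f:
    params = json.load(f)

model_name = params["model_params"]["name"]   # "DeepPhys" or "PhysNet"

if model_name == "DeepPhys":
    from nets.models.DeepPhys import DeepPhys  # hypothetical import path
    model = DeepPhys()
elif model_name == "PhysNet":
    from nets.models.PhysNet import PhysNet    # hypothetical import path
    model = PhysNet()
else:
    raise ValueError(f"unknown model: {model_name}")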

Additional info

* How to test (Assessment of ROI selection for facial video-based rPPG)
  • before testing, modify sample2.cfg (./pyVHR/analysis/sample2.cfg):
[DEFAULT]
methods         = ['POS','CHROM','ICA','SSR','LGI','PBV','GREEN'] # change methods

[VIDEO]
dataset     = LGI_PPGI # change dataset
videodataDIR= /media/hdd1/LGGI/ # change dataset path
BVPdataDIR  = /media/hdd1/LGGI/
;videoIdx    = all
videoIdx    = [1,2,5,6] # change test video idx
detector    = media-pipe # use media-pipe, the proposed ROI option
  • before testing, modify the test suite file (./pyVHR/analysis/testsuite.py); ROI regions are selected through a one-hot (binary string) mapping:
    '''
    test for all regions
    '''
    # tmp = bin(test)
    # binary = ''
    # for i in range(mask_num - len(tmp[2:])):
    #     binary += '0'
    # binary += tmp[2:]
    '''
    test for top-5 & bot-5
    '''
    if test_case == 0:
        binary = '0011000000000000000100000001001'
    else:
        binary = '0000000001100001011000000000000'
  • run _1_rppg_assesment.py

  • all mask information can be found in the make_mask function of video.py (./pyVHR/signals/video.py)
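For clarity, the commented-out "test for all regions" branch above pads the binary representation of the test index to mask_num bits, so each bit selects one ROI mask. A compact equivalent sketch, assuming mask_num is 31 (the length of the hard-coded top-5/bot-5 strings):

def region_mask(test_case, mask_num=31):
    # zero-padded binary string; a '1' at position i enables the i-th ROI mask
    return format(test_case, f"0{mask_num}b")

# example: region_mask(5, 8) -> '00000101'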

Contacts

Dae Yeol Kim, wagon0004@tvstorm.com

Jin Soo Kim, wlstn25092303@tvstorm.com

Kwangkee Lee, kwangkeelee@gmail.com

Funding

This work was supported by the ICT R&D program of MSIP/IITP. [2021(2021-0-00900), Adaptive Federated Learning in Dynamic Heterogeneous Environment]

References

  1. ZitongYu/PhysNet
  2. phuselab/pyVHR
