Releases: Shankar203/Microsoft-Engage-FaceRecognition
YOLOv5TF v1.1.0
This YOLOv5 model, pretrained on the COCO dataset by Ultralytics (available here) and fine-tuned on the CrowdHuman dataset (available here), has been exported to TensorFlow.js to run inference on a Node backend. It generates bounding boxes for both faces and persons at a high mAP, and runs inference in around 224 ms on CPU (batch size 1) and 8.2 ms on a V100 GPU (batch size 1).
Run Model Inference on TensorFlow.js
The model is loaded from the path (or URL) pointing to its model.json file; it was trained with an input shape of [640, 640]. Do the basic preprocessing: cast to float32, resize, and divide by 255. Remember that the aspect ratio must not change during resizing. Since this is a graph model, use tf.loadGraphModel to load it and run inference; the output bounding boxes are in [xmin, ymin, xmax, ymax] format.
const tf = require("@tensorflow/tfjs-node");
const YOLOv5_MODEL_PTH = "https://github.com/Shankar203/Microsoft-Engage-FaceRecognition/releases/download/YOLOv5/model.json";
const YOLOv5_INPUT_SHAPE = [640, 640];

// Cast to float32, resize (keeping aspect ratio), add a batch dimension,
// and scale pixel values to [0, 1]
const processImg = (img, tarSize) => {
  img = tf.cast(img, "float32");
  img = resizeImg(img, tarSize);
  img = tf.expandDims(img, 0);
  img = tf.div(img, 255);
  return img;
};
// Resize image to tarSize without changing its aspect ratio (imp):
// pad to the target aspect ratio first, then resize
const resizeImg = (img, tarSize) => {
  var [h, w] = img.shape;
  var [h_tar, w_tar] = tarSize;
  var ratio = Math.max(h / h_tar, w / w_tar);
  var padh = Math.floor((h_tar * ratio - h) / 2);
  var padw = Math.floor((w_tar * ratio - w) / 2);
  img = tf.pad(img, [[padh, padh], [padw, padw], [0, 0]]);
  img = tf.image.resizeBilinear(img, tarSize);
  return img;
};
// Run YOLOv5 inference on a decoded [h, w, 3] image tensor
const getBBox = async (img) => {
  var img = processImg(img, YOLOv5_INPUT_SHAPE);
  var model = await tf.loadGraphModel(YOLOv5_MODEL_PTH);
  var preds = await model.executeAsync(img);
  // bboxes are in [xmin, ymin, xmax, ymax] format
  var [bboxes, scores, classes, num_valid_detections] = preds;
  console.log(bboxes, scores, classes, num_valid_detections);
  return [bboxes, scores, classes, num_valid_detections];
};
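A minimal usage sketch: read an image file into a buffer, decode it with tf.node.decodeImage, and pass the resulting [h, w, 3] tensor to getBBox. The path selfie.jpg below is only an assumed example.

const fs = require("fs");

// Example only: "selfie.jpg" is an assumed local image path
const run = async () => {
  const buffer = fs.readFileSync("selfie.jpg");
  const img = tf.node.decodeImage(buffer, 3); // [h, w, 3] uint8 tensor
  const [bboxes, scores] = await getBBox(img);
  bboxes.print();
  scores.print();
};
run();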
A YOLOv5 tfjs demo web app is available here. You can drag a file into the center box to detect objects with your custom pretrained YOLOv5 tfjs model. To run the demo on a local machine:
$ cd .. && git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example
$ npm install
$ ln -s ../../yolov5/yolov5s_web_model public/yolov5s_web_model
$ npm start
Convert the YOLOv5 Model from PyTorch to TensorFlow.js
First clone the repo and install its dependencies. Then download the PyTorch model from the link below and run export.py, pointing it to the downloaded model.
- YOLOv5 PyTorch Model finetuned on CrowdHuman, available here
- YOLOv5 TensorFlow.js Model finetuned on CrowdHuman, available here
# clone repo, install dependencies
$ git clone https://github.com/ultralytics/yolov5 && cd yolov5
$ pip install -r requirements.txt
# now convert to tfjs graph model
$ python path/to/export.py --weights path/to/yolov5m-pytorch.pt --include tfjs
Test
Inference of this finetuned model on the world's largest selfie.
References
- YOLOv5 Ultralytics Repository (pretrained on COCO Dataset), https://ultralytics.com/yolov5
- YOLOv5 Repository (finetuned on CrowdHuman Dataset), https://github.com/deepakcrk/yolov5-crowdhuman
- YOLOv5 CrowdHuman PyTorch weights, https://drive.google.com/file/d/1gglIwqxaH2iTvy6lZlXuAcMpd_U0GCUb
- YOLOv5 CrowdHuman TensorFlow.js weights, https://github.com/Shankar203/Microsoft-Engage-FaceRecognition/releases/download/YOLOv5/yolov5-tfjs.zip
- YOLOv5 tfjs demo webapp, https://github.com/zldrobit/tfjs-yolov5-example
ArcFaceTF v1.1.0
TensorFlow.js implementation of ArcFace
ArcFace (Additive Angular Margin Loss for Deep Face Recognition) was published at CVPR 2019 and officially implemented by DeepInsight (available here). The model was pretrained and benchmarked on the MS1M, VGG2, and CASIA-WebFace datasets, and has been exported to TensorFlow.js to run inference on a Node backend.
Additive Angular Margin Loss (ArcFace) has a clear geometric interpretation due to the exact correspondence to the geodesic distance on the hypersphere, and consistently outperforms the state-of-the-art and can be easily implemented with negligible computational overhead.
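For reference, this is the additive angular margin loss from the paper, where N is the batch size, theta_{y_i} is the angle between the normalized feature of sample i and the weight of its ground-truth class, m is the additive angular margin, and s is the feature scale:

L = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{s \cos(\theta_{y_i} + m)}}{e^{s \cos(\theta_{y_i} + m)} + \sum_{j \neq y_i} e^{s \cos \theta_j}}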
Run Model Inference on TensorFlow.js
The model is loaded from the path (or URL) pointing to its model.json file; it was trained with an input shape of [112, 112] and outputs embeddings of shape (1, 512). Do the basic preprocessing: cast to float32, resize, and divide by 255. Remember that the aspect ratio must not change during resizing. Since this is a Keras model, use tf.loadLayersModel to load it and run inference, and don't forget to normalize the embeddings before passing them to the cosineDistance function.
const tf = require("@tensorflow/tfjs-node");
const ArcFace_MODEL_PTH = "https://github.com/Shankar203/Microsoft-Engage-FaceRecognition/releases/download/ArcFace/model.json";
const ArcFace_INPUT_SHAPE = [112, 112];
const ArcFace_THRESHOLD = 0.68;

// Cast to float32, resize (keeping aspect ratio), add a batch dimension,
// and scale pixel values to [0, 1]
const processImg = (img, tarSize) => {
  img = tf.cast(img, "float32");
  img = resizeImg(img, tarSize);
  img = tf.expandDims(img, 0);
  img = tf.div(img, 255);
  return img;
};
// Resize image to tarSize without changing its aspect ratio (imp):
// pad to the target aspect ratio first, then resize
const resizeImg = (img, tarSize) => {
  var [h, w] = img.shape;
  var [h_tar, w_tar] = tarSize;
  var ratio = Math.max(h / h_tar, w / w_tar);
  var padh = Math.floor((h_tar * ratio - h) / 2);
  var padw = Math.floor((w_tar * ratio - w) / 2);
  img = tf.pad(img, [[padh, padh], [padw, padw], [0, 0]]);
  img = tf.image.resizeBilinear(img, tarSize);
  return img;
};
// Compute a 512-d face embedding for a decoded [h, w, 3] image tensor
const getEmbeddings = async (img) => {
  var img = processImg(img, ArcFace_INPUT_SHAPE);
  var model = await tf.loadLayersModel(ArcFace_MODEL_PTH);
  var ebd = model.predict(img);
  // L2-normalize the embedding and drop the batch dimension
  ebd = ebd.div(ebd.norm()).squeeze();
  return ebd;
};
// Compare two image buffers; the faces match when the cosine distance
// between their embeddings is below the threshold
const compare = async (imgBuffer1, imgBuffer2, threshold = ArcFace_THRESHOLD) => {
  var img1 = tf.node.decodeImage(imgBuffer1, 3);
  var img2 = tf.node.decodeImage(imgBuffer2, 3);
  var ebd1 = await getEmbeddings(img1);
  var ebd2 = await getEmbeddings(img2);
  var cosDist = tf.losses.cosineDistance(ebd1, ebd2);
  var similar = cosDist.arraySync() <= threshold;
  return similar;
};
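A minimal usage sketch of compare, where face1.jpg and face2.jpg are only assumed example paths to two face crops:

const fs = require("fs");

// Example only: face1.jpg and face2.jpg are assumed local face images
const run = async () => {
  const buf1 = fs.readFileSync("face1.jpg");
  const buf2 = fs.readFileSync("face2.jpg");
  const samePerson = await compare(buf1, buf2);
  console.log("same person:", samePerson);
};
run();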
Convert the ArcFace Model from TensorFlow to TensorFlow.js
First install the tensorflowjs Python library using pip or conda. Then run the tensorflowjs_converter command with --input_format keras to convert the weights to a TensorFlow.js layers model.
- ArcFace TensorFlow Model Weights, available here
- ArcFace TensorFlow.js Model Weights, available here
$ pip install -q tensorflowjs
$ tensorflowjs_converter --input_format keras \
./arcface/arcface.h5 \
./ArcFaceJS
Model Benchmarking
| Backbone | Head    | LFW   | AgeDB-30 | CFP-FP |
| -------- | ------- | ----- | -------- | ------ |
| ResNet50 | ArcFace | 99.42 | 95.32    | 92.56  |
Model Architecture
References
- ArcFace arXiv Official Paper, https://arxiv.org/abs/1801.07698
- InsightFace Repository from DeepInsight (ArcFace official release), https://github.com/deepinsight/insightface
- ArcFace implementation in TensorFlow, https://github.com/peteryuX/arcface-tf2
- ArcFace TensorFlow weights, https://github.com/serengil/deepface_models/releases/download/v1.0/arcface_weights.h5
- DeepFace Repository, https://github.com/serengil/deepface