
anti spoofing
serengil committed Jun 7, 2024
1 parent d26a981 commit f35abd1
Showing 18 changed files with 991 additions and 79 deletions.
24 changes: 22 additions & 2 deletions README.md
@@ -304,6 +304,27 @@ user
│ │ ├── Bob.jpg
```

**Face Anti Spoofing** - `Demo`

DeepFace also includes an anti-spoofing analysis module to determine whether a given image is real or fake. To activate this feature, set the `anti_spoofing` argument to True in any DeepFace task.

<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/face-anti-spoofing.jpg" width="40%" height="40%"></p>

```python
from deepface import DeepFace

# anti spoofing test in face detection
face_objs = DeepFace.extract_faces(
    img_path="dataset/img1.jpg",
    anti_spoofing=True,
)
assert face_objs[0]["is_real"] is True

# anti spoofing test in real time analysis
DeepFace.stream(
    db_path="C:/User/Sefik/Desktop/database",
    anti_spoofing=True,
)
```
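The same flag can be passed to the other tasks as well. Below is a minimal sketch for verification and facial attribute analysis; the assumption that a detected spoof surfaces as a `ValueError` reflects current behaviour and may differ between releases.

```python
from deepface import DeepFace

# anti spoofing test in face verification
try:
    result = DeepFace.verify(
        img1_path="dataset/img1.jpg",
        img2_path="dataset/img2.jpg",
        anti_spoofing=True,
    )
    print("verified:", result["verified"])
except ValueError as err:
    # assumption: a spoofed input is rejected with an exception
    print("spoof suspected:", err)

# anti spoofing test in facial attribute analysis
demographies = DeepFace.analyze(
    img_path="dataset/img1.jpg",
    actions=["age", "gender", "emotion", "race"],
    anti_spoofing=True,
)
```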

**API** - [`Demo`](https://youtu.be/HeKCQ6U9XmI)

DeepFace also serves an API - see the [`api folder`](https://github.com/serengil/deepface/tree/master/deepface/api/src) for more details. You can clone the deepface source code and run the API with the following command. It uses a gunicorn server to spin up a REST service, so you can call DeepFace from an external system such as a mobile app or a web client; an example request passing the new `anti_spoofing` flag is sketched after the routes.py changes below.
@@ -418,7 +439,6 @@ Also, if you use deepface in your GitHub projects, please add `deepface` in the

DeepFace is licensed under the MIT License - see [`LICENSE`](https://github.com/serengil/deepface/blob/master/LICENSE) for more details.


DeepFace wraps some external face recognition models: [VGG-Face](http://www.robots.ox.ac.uk/~vgg/software/vgg_face/), [Facenet](https://github.com/davidsandberg/facenet/blob/master/LICENSE.md) (both 128d and 512d), [OpenFace](https://github.com/iwantooxxoox/Keras-OpenFace/blob/master/LICENSE), [DeepFace](https://github.com/swghosh/DeepFace), [DeepID](https://github.com/Ruoyiran/DeepID/blob/master/LICENSE.md), [ArcFace](https://github.com/leondgarse/Keras_insightface/blob/master/LICENSE), [Dlib](https://github.com/davisking/dlib/blob/master/dlib/LICENSE.txt), [SFace](https://github.com/opencv/opencv_zoo/blob/master/models/face_recognition_sface/LICENSE) and [GhostFaceNet](https://github.com/HamadYA/GhostFaceNets/blob/main/LICENSE). Besides, age, gender and race / ethnicity models were trained on the backbone of VGG-Face with transfer learning. Similarly, DeepFace wraps many face detectors: [OpenCv](https://github.com/opencv/opencv/blob/4.x/LICENSE), [Ssd](https://github.com/opencv/opencv/blob/master/LICENSE), [Dlib](https://github.com/davisking/dlib/blob/master/LICENSE.txt), [MtCnn](https://github.com/ipazc/mtcnn/blob/master/LICENSE), [Fast MtCnn](https://github.com/timesler/facenet-pytorch/blob/master/LICENSE.md), [RetinaFace](https://github.com/serengil/retinaface/blob/master/LICENSE), [MediaPipe](https://github.com/google/mediapipe/blob/master/LICENSE), [YuNet](https://github.com/ShiqiYu/libfacedetection/blob/master/LICENSE), [Yolo](https://github.com/derronqi/yolov8-face/blob/main/LICENSE) and [CenterFace](https://github.com/Star-Clouds/CenterFace/blob/master/LICENSE). Finally, DeepFace optionally uses [face anti spoofing](https://github.com/minivision-ai/Silent-Face-Anti-Spoofing/blob/master/LICENSE) to determine whether given images are real or fake. License types will be inherited when you intend to utilize those models. Please check the license types of those models for production purposes.

DeepFace [logo](https://thenounproject.com/term/face-recognition/2965879/) is created by [Adrien Coquet](https://thenounproject.com/coquet_adrien/) and it is licensed under [Creative Commons: By Attribution 3.0 License](https://creativecommons.org/licenses/by/3.0/).
34 changes: 32 additions & 2 deletions deepface/DeepFace.py
@@ -73,6 +73,7 @@ def verify(
normalization: str = "base",
silent: bool = False,
threshold: Optional[float] = None,
anti_spoofing: bool = False,
) -> Dict[str, Any]:
"""
Verify if an image pair represents the same person or different persons.
@@ -113,6 +114,8 @@ def verify(
If left unset, default pre-tuned threshold values will be applied based on the specified
model name and distance metric (default is None).
anti_spoofing (boolean): Flag to enable anti spoofing (default is False).
Returns:
result (dict): A dictionary containing verification results with following keys.
@@ -150,6 +153,7 @@ def verify(
normalization=normalization,
silent=silent,
threshold=threshold,
anti_spoofing=anti_spoofing,
)


@@ -161,6 +165,7 @@ def analyze(
align: bool = True,
expand_percentage: int = 0,
silent: bool = False,
anti_spoofing: bool = False,
) -> List[Dict[str, Any]]:
"""
Analyze facial attributes such as age, gender, emotion, and race in the provided image.
@@ -189,6 +194,8 @@ def analyze(
silent (boolean): Suppress or allow some log messages for a quieter analysis process
(default is False).
anti_spoofing (boolean): Flag to enable anti spoofing (default is False).
Returns:
results (List[Dict[str, Any]]): A list of dictionaries, where each dictionary represents
the analysis results for a detected face. Each dictionary in the list contains the
@@ -245,6 +252,7 @@ def analyze(
align=align,
expand_percentage=expand_percentage,
silent=silent,
anti_spoofing=anti_spoofing,
)


@@ -261,6 +269,7 @@ def find(
normalization: str = "base",
silent: bool = False,
refresh_database: bool = True,
anti_spoofing: bool = False,
) -> List[pd.DataFrame]:
"""
Identify individuals in a database
@@ -301,8 +310,10 @@ def find(
(default is False).
refresh_database (boolean): Synchronizes the images representation (pkl) file with the
directory/db files, if set to false, it will ignore any file changes inside the db_path
(default is True).
anti_spoofing (boolean): Flag to enable anti spoofing (default is False).
Returns:
results (List[pd.DataFrame]): A list of pandas dataframes. Each dataframe corresponds
@@ -335,6 +346,7 @@ def find(
normalization=normalization,
silent=silent,
refresh_database=refresh_database,
anti_spoofing=anti_spoofing,
)
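As a usage sketch for the flag on the identification path: the call below enables the spoof check for the query image; the `db_path` is illustrative (reused from the README example) and not part of this diff.

```python
from deepface import DeepFace

# identification with the anti spoofing check enabled for the query image
dfs = DeepFace.find(
    img_path="dataset/img1.jpg",
    db_path="C:/User/Sefik/Desktop/database",  # illustrative folder of known faces
    anti_spoofing=True,
)
print(f"{len(dfs)} detected face(s) matched against the database")
```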


@@ -346,6 +358,7 @@ def represent(
align: bool = True,
expand_percentage: int = 0,
normalization: str = "base",
anti_spoofing: bool = False,
) -> List[Dict[str, Any]]:
"""
Represent facial images as multi-dimensional vector embeddings.
@@ -375,6 +388,8 @@ def represent(
Default is base. Options: base, raw, Facenet, Facenet2018, VGGFace, VGGFace2, ArcFace
(default is base).
anti_spoofing (boolean): Flag to enable anti spoofing (default is False).
Returns:
results (List[Dict[str, Any]]): A list of dictionaries, each containing the
following fields:
@@ -399,6 +414,7 @@ def represent(
align=align,
expand_percentage=expand_percentage,
normalization=normalization,
anti_spoofing=anti_spoofing,
)


@@ -411,6 +427,7 @@ def stream(
source: Any = 0,
time_threshold: int = 5,
frame_threshold: int = 5,
anti_spoofing: bool = False,
) -> None:
"""
Run real time face recognition and facial attribute analysis
@@ -437,6 +454,8 @@ def stream(
time_threshold (int): The time threshold (in seconds) for face recognition (default is 5).
frame_threshold (int): The frame threshold for face recognition (default is 5).
anti_spoofing (boolean): Flag to enable anti spoofing (default is False).
Returns:
None
"""
@@ -453,6 +472,7 @@ def stream(
source=source,
time_threshold=time_threshold,
frame_threshold=frame_threshold,
anti_spoofing=anti_spoofing,
)


@@ -463,6 +483,7 @@ def extract_faces(
align: bool = True,
expand_percentage: int = 0,
grayscale: bool = False,
anti_spoofing: bool = False,
) -> List[Dict[str, Any]]:
"""
Extract faces from a given image
@@ -485,6 +506,8 @@ def extract_faces(
grayscale (boolean): Flag to convert the image to grayscale before
processing (default is False).
anti_spoofing (boolean): Flag to enable anti spoofing (default is False).
Returns:
results (List[Dict[str, Any]]): A list of dictionaries, where each dictionary contains:
@@ -497,6 +520,12 @@ def extract_faces(
instead of observer.
- "confidence" (float): The confidence score associated with the detected face.
- "is_real" (boolean): antispoofing analyze result. this key is just available in the
result only if anti_spoofing is set to True in input arguments.
- "antispoof_score" (float): score of antispoofing analyze result. this key is
just available in the result only if anti_spoofing is set to True in input arguments.
"""

return detection.extract_faces(
@@ -506,6 +535,7 @@ def extract_faces(
align=align,
expand_percentage=expand_percentage,
grayscale=grayscale,
anti_spoofing=anti_spoofing,
)
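A short sketch of how the two optional keys documented in the docstring above can be consumed; the image path is illustrative.

```python
from deepface import DeepFace

face_objs = DeepFace.extract_faces(
    img_path="dataset/img1.jpg",
    anti_spoofing=True,
)

for face_obj in face_objs:
    # "is_real" and "antispoof_score" are only present because anti_spoofing=True
    print(
        f"confidence={face_obj['confidence']:.2f}, "
        f"is_real={face_obj['is_real']}, "
        f"antispoof_score={face_obj['antispoof_score']:.2f}"
    )
```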


45 changes: 16 additions & 29 deletions deepface/api/src/modules/core/routes.py
@@ -24,17 +24,13 @@ def represent():
if img_path is None:
return {"message": "you must pass img_path input"}

model_name = input_args.get("model_name", "VGG-Face")
detector_backend = input_args.get("detector_backend", "opencv")
enforce_detection = input_args.get("enforce_detection", True)
align = input_args.get("align", True)

obj = service.represent(
img_path=img_path,
model_name=model_name,
detector_backend=detector_backend,
enforce_detection=enforce_detection,
align=align,
model_name=input_args.get("model_name", "VGG-Face"),
detector_backend=input_args.get("detector_backend", "opencv"),
enforce_detection=input_args.get("enforce_detection", True),
align=input_args.get("align", True),
anti_spoofing=input_args.get("anti_spoofing", False),
)

logger.debug(obj)
@@ -58,20 +54,15 @@ def verify():
if img2_path is None:
return {"message": "you must pass img2_path input"}

model_name = input_args.get("model_name", "VGG-Face")
detector_backend = input_args.get("detector_backend", "opencv")
enforce_detection = input_args.get("enforce_detection", True)
distance_metric = input_args.get("distance_metric", "cosine")
align = input_args.get("align", True)

verification = service.verify(
img1_path=img1_path,
img2_path=img2_path,
model_name=model_name,
detector_backend=detector_backend,
distance_metric=distance_metric,
align=align,
enforce_detection=enforce_detection,
model_name=input_args.get("model_name", "VGG-Face"),
detector_backend=input_args.get("detector_backend", "opencv"),
distance_metric=input_args.get("distance_metric", "cosine"),
align=input_args.get("align", True),
enforce_detection=input_args.get("enforce_detection", True),
anti_spoofing=input_args.get("anti_spoofing", False),
)

logger.debug(verification)
@@ -90,17 +81,13 @@ def analyze():
if img_path is None:
return {"message": "you must pass img_path input"}

detector_backend = input_args.get("detector_backend", "opencv")
enforce_detection = input_args.get("enforce_detection", True)
align = input_args.get("align", True)
actions = input_args.get("actions", ["age", "gender", "emotion", "race"])

demographies = service.analyze(
img_path=img_path,
actions=actions,
detector_backend=detector_backend,
enforce_detection=enforce_detection,
align=align,
actions=input_args.get("actions", ["age", "gender", "emotion", "race"]),
detector_backend=input_args.get("detector_backend", "opencv"),
enforce_detection=input_args.get("enforce_detection", True),
align=input_args.get("align", True),
anti_spoofing=input_args.get("anti_spoofing", False),
)

logger.debug(demographies)
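With the handlers above simply forwarding the optional `anti_spoofing` field, an external client can enable the check per request. A minimal sketch; the endpoint path, port, and the use of a server-visible image path are assumptions based on the default deepface API setup rather than on this diff.

```python
import requests

# assumption: the deepface REST service is running locally on port 5005
resp = requests.post(
    "http://localhost:5005/analyze",
    json={
        "img_path": "dataset/img1.jpg",  # a path visible to the server (or a base64-encoded image)
        "actions": ["age", "gender", "emotion", "race"],
        "anti_spoofing": True,
    },
    timeout=60,
)
print(resp.status_code, resp.json())
```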
30 changes: 27 additions & 3 deletions deepface/api/src/modules/core/service.py
@@ -3,7 +3,14 @@
# pylint: disable=broad-except


def represent(
img_path: str,
model_name: str,
detector_backend: str,
enforce_detection: bool,
align: bool,
anti_spoofing: bool,
):
try:
result = {}
embedding_objs = DeepFace.represent(
@@ -12,6 +19,7 @@ def represent(img_path, model_name, detector_backend, enforce_detection, align):
detector_backend=detector_backend,
enforce_detection=enforce_detection,
align=align,
anti_spoofing=anti_spoofing,
)
result["results"] = embedding_objs
return result
@@ -20,7 +28,14 @@ def represent(img_path, model_name, detector_backend, enforce_detection, align):


def verify(
img1_path: str,
img2_path: str,
model_name: str,
detector_backend: str,
distance_metric: str,
enforce_detection: bool,
align: bool,
anti_spoofing: bool,
):
try:
obj = DeepFace.verify(
@@ -31,13 +46,21 @@ def verify(
distance_metric=distance_metric,
align=align,
enforce_detection=enforce_detection,
anti_spoofing=anti_spoofing,
)
return obj
except Exception as err:
return {"error": f"Exception while verifying: {str(err)}"}, 400


def analyze(
img_path: str,
actions: list,
detector_backend: str,
enforce_detection: bool,
align: bool,
anti_spoofing: bool,
):
try:
result = {}
demographies = DeepFace.analyze(
@@ -47,6 +70,7 @@ def analyze(img_path, actions, detector_backend, enforce_detection, align):
enforce_detection=enforce_detection,
align=align,
silent=True,
anti_spoofing=anti_spoofing,
)
result["results"] = demographies
return result
31 changes: 31 additions & 0 deletions deepface/commons/file_utils.py
@@ -0,0 +1,31 @@
# built-in dependencies
import os

# 3rd party dependencies
import gdown

# project dependencies
from deepface.commons import logger as log

logger = log.get_singletonish_logger()


def download_external_file(file_name: str, exact_file_path: str, url: str) -> None:
    """
    Download an external file if it is not already available locally
    Args:
        file_name (str): file name with extension
        exact_file_path (str): exact location of the file, including the file name
        url (str): url of the file to be downloaded
    Returns:
        None
    """
    if not os.path.exists(exact_file_path):
        logger.info(f"Downloading MiniFASNetV2 weights to {exact_file_path}")
        try:
            gdown.download(url, exact_file_path, quiet=False)
        except Exception as err:
            raise ValueError(
                f"Exception while downloading {file_name} from {url} to {exact_file_path}. "
                "You may consider downloading it manually and copying it to the target destination."
            ) from err
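For illustration, the helper could be used to fetch anti-spoofing weights like this; the file name, target directory, and URL below are placeholders rather than values taken from this commit.

```python
import os

from deepface.commons import file_utils

# placeholder values for illustration only
file_name = "2.7_80x80_MiniFASNetV2.pth"
weights_dir = os.path.join(os.path.expanduser("~"), ".deepface", "weights")
os.makedirs(weights_dir, exist_ok=True)

file_utils.download_external_file(
    file_name=file_name,
    exact_file_path=os.path.join(weights_dir, file_name),
    url="https://example.com/path/to/MiniFASNetV2-weights.pth",  # placeholder URL
)
```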
