OralGuard is a mobile-friendly deep learning application designed to assist in the early detection of oral lesions (benign vs malignant) using computer vision.
Built on MobileNetV3 with PyTorch transfer learning, the model is optimized for real-time predictions on mobile devices and integrates with a Flask REST API for deployment.
This repository contains two main subdirectories:

- `mobile` - The mobile app, which performs classification by sending images of oral lesions to the API server.
- `server` - An API server that serves a MobileNetV3 model for cancer classification of oral lesions.
- Oral lesion classification (malignant vs benign) from mouth images
- Lightweight & efficient model powered by MobileNetV3
- REST API built with Flask for real-time predictions
- Trained on the Oral Lesions: Malignancy Detection Dataset
- Model exported as a static PyTorch file (`.pt`) for cross-platform usage
- Local Authentication
- PyTorch & TorchVision – Transfer learning with MobileNetV3
- Flask – REST API for serving predictions
- Python – Data preprocessing, training, evaluation
- NumPy / Matplotlib – Data handling & visualization
- Data Preparation
  - Dataset split into training, validation, and testing sets
  - Augmentation applied (rotation, zoom, flip, etc.)
- Model Training
  - MobileNetV3 backbone + custom classifier head
  - Trained using transfer learning in PyTorch
  - Best weights saved as a `.pt` file
- Deployment
  - Flask REST API wraps the trained model
  - Mobile app sends an image → API preprocesses → model predicts → returns JSON (`benign`/`malignant`)
- Capture or upload an image of oral lesions.
- The app sends the image to the AI server for analysis.
- Receive instant prediction results.
- Support multiple cancer types.
- Offline predictions for areas with poor connectivity
The following screenshots show the basic UI of the mobile application.
Clone the repository:

```shell
git clone https://github.com/crispengari/OralGuard.git
cd OralGuard
```

Then navigate to the server and activate the virtual environment.
```shell
cd server
python -m venv venv
source venv/bin/activate # On Linux/Mac
venv\Scripts\activate # On Windows
```

Then install the packages:

```shell
pip install -r requirements.txt
```

You can then run the server as follows:

```shell
python app.py
```

First, navigate to the mobile folder:
```shell
cd mobile
```

Install the packages:

```shell
yarn
```

Then you can start the Expo Go dev server:

```shell
yarn start
```

The following commands can be used to test the API using cURL.
```shell
# benign
curl -X POST -F image=@benign.jpg http://127.0.0.1:8000/api/v1/oral-cancer/predict

# malignant
curl -X POST -F image=@malignant.jpg http://127.0.0.1:8000/api/v1/oral-cancer/predict
```
The following is the expected API response:
```json
{
  "time": 0.09074282646179199,
  "ok": true,
  "status": "ok",
  "prediction": { "label": 1, "class_label": "malignant", "probability": 1.0 }
}
```

The notebooks that were used to train the model can be found in the folder 13_ORAL_CANCER_LESIONS.
This project uses the MIT license.