A powerful CNN model that classifies brain tumors into four categories with 98.32% accuracy:
- Glioma - Irregular tumors originating from glial cells
- Meningioma - Well-defined tumors of the meninges
- Pituitary - Hormone-affecting glandular tumors
- No-tumor - Healthy brain scans
- 98.32% Test Accuracy - State-of-the-art performance
- VGG-Inspired Architecture - Optimized 3x3 kernel design
- Advanced Training:
- Early stopping & model checkpointing
- Dynamic learning rate scheduling
- Real-time data augmentation (rotations/flips)
- Production-Ready:
- Flask web interface
- Docker container support
- REST API endpoints
Web interface features:
- Drag-and-drop MRI upload
- Real-time visualization
- Detailed confidence reports
- Mobile-responsive design
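To make the "VGG-inspired" and "advanced training" descriptions concrete, here is a minimal Keras sketch of a stack of 3x3 convolution blocks with a 4-way softmax head, plus the early-stopping, checkpointing, and learning-rate callbacks mentioned above. The layer counts, filter sizes, and callback settings are illustrative guesses, not the trained configuration; see the repository's training code for the real model.

```python
from tensorflow.keras import callbacks, layers, models

def build_vgg_style_cnn(input_shape=(224, 224, 3), num_classes=4):
    """Illustrative VGG-style stack: repeated 3x3 conv blocks + max pooling."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),  # normalize inside the model
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_vgg_style_cnn()

# Training callbacks of the kind listed above (hyperparameters are examples)
train_callbacks = [
    callbacks.EarlyStopping(monitor="val_loss", patience=5,
                            restore_best_weights=True),
    callbacks.ModelCheckpoint("models/best_model.keras",
                              save_best_only=True),
    callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2),
]
```

These callbacks would be passed to `model.fit(..., callbacks=train_callbacks)` after compiling with a categorical cross-entropy loss.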
```bash
# Clone repository
git clone https://github.com/AminRezaeeyan/NeuroScan.git
cd NeuroScan

# Create and activate virtual environment (recommended)
python -m venv venv
source venv/bin/activate   # Linux/Mac
# venv\Scripts\activate    # Windows

# Install dependencies
pip install --upgrade pip
pip install -r requirements.txt

# Run development server
flask run --host=0.0.0.0 --port=5000
```

```bash
# Clone repository
git clone https://github.com/AminRezaeeyan/NeuroScan.git
cd NeuroScan

# Build and run containers (detached mode)
docker-compose up -d --build

# View logs (optional)
docker-compose logs -f
```
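The exact REST API surface isn't documented here; as a rough sketch, a Flask prediction endpoint could look like the following. The `/predict` route, the `file` upload field, and the response fields are assumptions for illustration, not the project's actual API — check the repository's Flask app for the real routes.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Hypothetical endpoint: expects an MRI upload under the "file" field
    if "file" not in request.files:
        return jsonify({"error": "no file uploaded"}), 400
    upload = request.files["file"]
    # In the real app the image would be preprocessed and passed to the
    # model; here a placeholder payload illustrates the response shape.
    return jsonify({"class": "notumor", "confidence": 1.0,
                    "filename": upload.filename})
```

A client would then POST a multipart form with the MRI image and read the JSON response.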
The pre-trained model (`best_model.keras`) is available in the `models/` directory and can be integrated into your applications:
```python
from PIL import Image
import numpy as np
from tensorflow.keras.models import load_model

# Load the pre-trained model
model = load_model('models/best_model.keras')

def predict_tumor(image_path):
    """
    Predicts the tumor class from an MRI image.

    Args:
        image_path: Path to the MRI image (JPEG/PNG)

    Returns:
        dict: {'class': 'glioma/meningioma/pituitary/notumor',
               'confidence': float,
               'probabilities': dict}
    """
    # Load and preprocess the image using Pillow
    img = Image.open(image_path)
    img = img.convert('RGB')                       # Ensure RGB format
    img = img.resize((224, 224))                   # Resize to 224x224
    img_array = np.array(img)                      # Convert to (224, 224, 3) array
    img_array = np.expand_dims(img_array, axis=0)  # Add batch dimension

    # Make prediction
    pred = model.predict(img_array)
    classes = ['Glioma', 'Meningioma', 'No Tumor', 'Pituitary']
    return {
        'class': classes[np.argmax(pred)],
        'confidence': float(np.max(pred)),
        'probabilities': {cls: float(prob) for cls, prob in zip(classes, pred[0])}
    }

# Example usage
result = predict_tumor("path/to/mri.jpg")
print(result)
```

Sample output:
```python
{
    'class': 'Meningioma',
    'confidence': 0.9743,
    'probabilities': {
        'Glioma': 0.0121,
        'Meningioma': 0.9743,
        'No Tumor': 0.0049,
        'Pituitary': 0.0087
    }
}
```
The model normalizes input images in its first layer. Do not divide by 255 yourself: the input would then be scaled twice, producing incorrect predictions. Simply pass the raw image array (0-255 values).
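To see why manual normalization breaks predictions, here is a small sketch assuming the first layer applies a Rescaling-style division by 255 (the README states only that the first layer normalizes): a pre-divided array gets scaled twice and collapses toward zero, far outside the range the network was trained on.

```python
import numpy as np

# A raw pixel row as the model expects it (0-255)
raw = np.array([0.0, 128.0, 255.0])

# What the model's first layer effectively does internally
inside_model = raw / 255.0             # values in [0, 1]

# If you divide by 255 yourself first, the layer divides again
double_scaled = (raw / 255.0) / 255.0  # values in [0, ~0.004]

print(inside_model.max())   # 1.0
print(double_scaled.max())  # ~0.0039
```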
| Metric | Score |
|---|---|
| Accuracy | 98.32% |
| Precision | 98.32% |
| Recall | 98.32% |
| F1 Score | 98.32% |
Class-wise Breakdown
| Class | Precision | Recall | F1 Score |
|---|---|---|---|
| Glioma | 98.99% | 98.33% | 98.66% |
| Meningioma | 97.67% | 96.08% | 96.87% |
| Pituitary | 98.66% | 98.33% | 98.50% |
| No-tumor | 98.06% | 100% | 99.02% |
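As a reminder of how the per-class numbers above are derived, precision, recall, and F1 can be computed directly from a confusion matrix. The matrix below is made up for illustration; it is not the project's actual evaluation data.

```python
import numpy as np

# Hypothetical 2-class confusion matrix: rows = true class, cols = predicted
cm = np.array([[90, 10],
               [ 5, 95]])

tp = np.diag(cm).astype(float)
precision = tp / cm.sum(axis=0)  # TP / (TP + FP), per predicted class
recall    = tp / cm.sum(axis=1)  # TP / (TP + FN), per true class
f1 = 2 * precision * recall / (precision + recall)

print(precision)  # [0.947..., 0.904...]
print(recall)     # [0.9, 0.95]
```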
- Continuous Accuracy Improvements: currently targeting 99%+ through:
  - Advanced attention mechanisms
  - Transformer-based hybrid architectures
  - Improved data augmentation pipelines
- Tumor Segmentation: developing pixel-level detection
- Clinical Integration:
  - DICOM/PACS support
  - HL7/FHIR compatibility
  - Multi-modal analysis (MRI + CT)
- Edge Deployment:
  - ONNX runtime optimization
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
- Improved model interpretability
- Additional medical imaging formats (DICOM, NIfTI)
- Performance optimizations
- UI/UX enhancements
- Documentation improvements
Distributed under the MIT License. See LICENSE for more information.