A machine learning application that predicts steering angles from F1 onboard camera footage.
https://f1-steering-angle-model.streamlit.app/
Steering input is one of the most fundamental insights into driving behavior, performance, and style. However, there is no straightforward public source, tool, or API for steering angle data. The only available source is onboard camera footage, which comes with its own limitations, such as camera position, shadows, weather conditions, and lighting.
The F1 Steering Angle Prediction Model is a Convolutional Neural Network (CNN) based on EfficientNet-B0 with a regression head that predicts steering angles from -180° to 180° from F1 onboard camera footage (current-generation F1 cars). It was trained on over 1,500 manually annotated images.
- Video Processing: Frames are extracted from the onboard camera video at your selected FPS
- Image Preprocessing:
- Cropping the image to focus on the track area
- Applying CLAHE (Contrast Limited Adaptive Histogram Equalization) to enhance visibility
- Edge detection to highlight track boundaries
- Neural Network Prediction: A CNN model processes the edge image to predict the steering angle
- Postprocessing: Apply a local trend-based outlier correction algorithm to detect and correct outliers
- Results Visualization: Angles are displayed as a line chart with statistical analysis
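The local trend-based outlier correction in the postprocessing step can be sketched as follows. This is a hypothetical illustration, not the repository's exact algorithm: each predicted angle is compared against the median of its neighbors, and replaced by that median when the deviation exceeds a threshold (the `window` and `threshold` values here are assumed).

```python
import numpy as np

def correct_outliers(angles, window=5, threshold=15.0):
    """Replace angles that deviate sharply from the local trend.

    Sketch of a local trend-based correction: each angle is compared
    against the median of its neighbours (excluding itself) and
    replaced by that median if the deviation exceeds `threshold`.
    """
    angles = np.asarray(angles, dtype=float)
    corrected = angles.copy()
    half = window // 2
    for i in range(len(angles)):
        lo, hi = max(0, i - half), min(len(angles), i + half + 1)
        neighbours = np.delete(angles[lo:hi], i - lo)  # exclude the point itself
        local_trend = np.median(neighbours)
        if abs(angles[i] - local_trend) > threshold:
            corrected[i] = local_trend
    return corrected

# 90.0 is a spurious spike in an otherwise smooth trace
print(correct_outliers([0.0, 2.0, 3.0, 90.0, 5.0, 6.0, 7.0, 8.0]))
```

A median (rather than a mean) keeps the correction itself robust to the very outliers it is trying to remove.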
CNN based on EfficientNet-B0 with a regression head for angles from -180° to 180°
- Input: 224x224px grayscale edge-detected images
- Backbone: EfficientNet-B0 with a regression head
- Output: Steering angle prediction between -180° and +180°, refined by a local trend-based outlier correction algorithm
- Training data: Over 1,500 manually annotated frames from F1 onboard footage
The preprocessing pipeline is critical for model performance:
- Grayscale Conversion: Reduces input size and complexity
- Cropping: Focuses on the track area for better predictions
- Adaptive CLAHE: Dynamically adjusts contrast to maximize the visibility of track features
- Edge Detection: Uses adaptive Canny edge detection targeting ~6% edge pixels per image
- Model Format: ONNX format for cross-platform compatibility and faster inference
- Batch Processing: Inference is done in batches for improved performance
Original Onboard Frame
Preprocessed Images
Left to right: Cropped image, CLAHE enhanced image, Edge detection result
After extensive development, the model has achieved the following performance metrics:
- From 0° to ±90°: predictions within 6° of ground truth
- From ±90° to ±180°: predictions within 13° of ground truth
Limitations: Performance may decrease in:
- Low visibility conditions (rain, extreme shadows)
- Low quality videos (low resolution, high compression)
- Changed camera positions (different angle, height)
- Python 3.11
- requirements.txt dependencies
The simplest way to use the application is through the hosted Streamlit app.
- Clone the repository

  ```bash
  git clone https://github.com/danielsaed/F1-machine-learning-webapp.git
  cd F1-machine-learning-webapp
  ```
- Install dependencies

  ```bash
  # On Debian/Ubuntu, install OpenCV's system dependency first
  sudo apt update
  sudo apt install -y libgl1-mesa-glx

  # Install Python dependencies
  pip install -r requirements.txt
  ```
- Run the application

  ```bash
  streamlit run streamlit_app.py
  ```
- Open in browser

  The application will be available at http://localhost:8501
Any contributions to improve the model or application are welcome.
This project is licensed under the MIT License - see the LICENSE file for details.
Downloading or recording F1 onboard videos potentially violates F1/F1TV's terms of service.
This model is for research/educational purposes only. It is not related to F1 or any other organization.