I work primarily with Python, backend systems, and applied machine learning.
My experience spans model development, deployment, and performance-aware workflows in production settings.
- Backend services using Python
- ML model integration and inference pipelines
- Automation and workflow orchestration
- Improving reliability and performance of existing systems
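Reliability work like the above often reduces to small, reusable patterns. A minimal sketch of one such pattern, a retry decorator with exponential backoff (the names, attempt counts, and delays are illustrative, not from any specific project):

```python
import time
import functools

def retry(max_attempts=3, base_delay=0.1):
    """Retry a function on exception, doubling the delay each attempt."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(base_delay * (2 ** attempt))
        return wrapper
    return decorator

@retry(max_attempts=3, base_delay=0.01)
def flaky():
    # Simulates a transient failure that succeeds on the third call.
    flaky.calls += 1
    if flaky.calls < 3:
        raise ConnectionError("transient failure")
    return "ok"

flaky.calls = 0
print(flaky())  # succeeds on the third attempt: ok
```

In production this usually also caps total wait time and retries only specific exception types, but the decorator shape stays the same.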
- Advanced Python design patterns
- CUDA fundamentals and kernel-level thinking
- GPU-based image processing (CuPy)
- Performance optimization using C++, Cython, and Numba
- Efficient ML inference and system-level tradeoffs
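As a concrete instance of the Cython/Numba item above, here is the typical Numba usage pattern: a plain Python hot loop compiled with `@njit`. The `ImportError` fallback is my addition so the sketch also runs where Numba is not installed:

```python
# With Numba available, @njit JIT-compiles the loop to machine code;
# without it, the no-op fallback keeps the code runnable (just slower).
try:
    from numba import njit  # optional dependency
except ImportError:
    def njit(func):
        return func

@njit
def dot(xs, ys):
    # Explicit loop: slow in CPython, fast once JIT-compiled.
    total = 0.0
    for i in range(len(xs)):
        total += xs[i] * ys[i]
    return total

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```

The same function body works in both modes, which is what makes Numba attractive for incrementally optimizing existing Python numeric code.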
- Python-based tooling or libraries
- ML inference or deployment-focused projects
- Automation-heavy systems or data workflows
- Performance and optimization-oriented work
- CUDA and GPU performance debugging
- Low-level optimization strategies
- Python performance improvement with C++/Rust
- Python backend development
- Image processing
- Async task systems
- ML model training, fine-tuning, and deployment
- Data pipelines for image-based ML systems
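A data pipeline for image work is essentially a composition of per-image transforms. A toy sketch of that shape, using nested lists as a stand-in for NumPy/OpenCV arrays (all function names here are illustrative):

```python
from functools import reduce

# An "image" here is a nested list of pixels; each stage maps image -> image.
def to_grayscale(img):
    # Average the channels of each pixel.
    return [[sum(px) / len(px) for px in row] for row in img]

def normalize(img):
    # Scale 0-255 values into the 0-1 range.
    return [[v / 255.0 for v in row] for row in img]

def pipeline(*stages):
    # Compose stages left to right into a single callable.
    return lambda img: reduce(lambda acc, stage: stage(acc), stages, img)

preprocess = pipeline(to_grayscale, normalize)
img = [[(255, 255, 255), (0, 0, 0)]]  # one row: a white and a black pixel
print(preprocess(img))  # [[1.0, 0.0]]
```

Real pipelines swap in vectorized NumPy/OpenCV operations per stage, but keeping each stage a plain function makes them easy to reorder and test.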
- Frameworks: Flask, FastAPI
- Databases: MongoDB, SQLite
- Async / Queues: Celery, Redis
- Cloud (GCP):
  - Cloud Run
- Cloud (AWS):
  - EC2
  - Lambda
  - ECR
  - SageMaker (training & serverless inference)
- Containers: Docker (image size & runtime optimization)
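The Celery/Redis entry above is, at its core, a producer/worker queue. A standard-library miniature of that pattern (Celery and Redis replace the in-process queue with a broker, but the code shape is the same; the sentinel shutdown trick here is one common convention, not Celery's API):

```python
import queue
import threading

tasks = queue.Queue()
results = []

def worker():
    # Consume (function, args) pairs until a sentinel arrives.
    while True:
        func, args = tasks.get()
        if func is None:  # sentinel: shut down the worker
            break
        results.append(func(*args))
        tasks.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

tasks.put((pow, (2, 10)))       # enqueue work
tasks.put((len, ("hello",)))
tasks.put((None, ()))           # tell the worker to stop
t.join()
print(results)  # [1024, 5]
```

Moving the queue out of process (Redis as broker, Celery as worker framework) buys persistence and horizontal scaling, at the cost of serialization and deployment complexity.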
- GitHub issues or discussions on relevant repositories
- Python + Computer Vision
- Pandas, NumPy, OpenCV
- PyTorch (model creation, training, fine-tuning, debugging, custom loss, optimizations)
- ONNX conversion and inference workflows
- Image data collection, labeling, and cleaning
- Automation and complex workflows
- Performance optimization (time & memory)
- CUDA & GPU programming concepts
- TorchScript / TensorRT
- Triton Inference Server
- C++ performance optimization
- Python acceleration with Cython and Numba
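One small, standard-library example of the time-and-memory tradeoffs listed above: streaming results with a generator instead of materializing a list. Both produce the same values; only the memory profile differs:

```python
import sys

def squares_list(n):
    # Eager: allocates all n results up front.
    return [i * i for i in range(n)]

def squares_gen(n):
    # Lazy: yields one result at a time, constant memory.
    return (i * i for i in range(n))

n = 100_000
eager = squares_list(n)
print(sum(eager) == sum(squares_gen(n)))  # True: same result either way
print(sys.getsizeof(eager))               # grows with n (hundreds of KB here)
print(sys.getsizeof(squares_gen(n)))      # small and roughly constant in n
```

The same idea underlies streaming data loaders in ML pipelines: process batches lazily rather than holding an entire dataset in memory.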
- Some production ML work cannot be shared publicly due to confidentiality.
- My work reflects active learning and hands-on experimentation rather than formal specialization.

