Vancouver, BC, Canada | Open to MLOps & ML Infrastructure opportunities
```python
class MLOpsEngineer:
    def __init__(self):
        self.name = "Li-Yu Chu"
        self.role = "MLOps Engineer"
        self.location = "Vancouver, BC, Canada"
        self.experience = "2+ years production ML infrastructure"
        self.core_skills = [
            "Serverless ML Inference (<100ms latency)",
            "Distributed Systems & Fault Tolerance",
            "Infrastructure as Code (Terraform)",
            "Production AWS Architecture",
        ]
        self.certifications = [
            "AWS Solutions Architect Associate",
            "HashiCorp Terraform Associate",
        ]

    def current_focus(self):
        return [
            "Building end-to-end MLOps pipelines",
            "Optimizing model serving infrastructure",
            "Implementing production monitoring systems",
        ]

    def say_hi(self):
        print("Building scalable ML infrastructure that powers real-world applications!")


engineer = MLOpsEngineer()
engineer.say_hi()
```

## Why work with me?
- 2+ years building production ML systems at scale
- Live projects with measurable impact (<100ms inference, $0 hosting costs)
- Strong foundation in distributed systems and cloud architecture
- Full-stack MLOps: from training orchestration to model monitoring
## MLOps Portfolio - Complete ML Infrastructure
Live Demo: luichu.dev | Production-grade MLOps showcase
Highlights:
- Comprehensive portfolio demonstrating end-to-end ML infrastructure
- Real production systems with quantifiable metrics
- Cost-optimized architecture ($0/month hosting)
- Multi-environment setup with CI/CD automation
Impact: a portfolio built to hold up in both technical and HR interviews for MLOps roles
## Chainy Backend - Serverless ML Infrastructure
Production AWS Lambda architecture for ML model serving
MLOps Features:
- Sub-100ms latency - optimized for real-time inference
- Auto-scaling - 0 to 1000+ req/s with no manual intervention
- Cost-optimized - 90% cheaper than traditional EC2 hosting
- Enterprise security - WAF, IAM, JWT authentication
- Full observability - CloudWatch metrics, dashboards, alerts
Tech Stack: AWS Lambda, DynamoDB, Terraform, TypeScript, API Gateway
Use Cases: Model serving APIs, feature stores, real-time predictions
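The serving pattern above can be sketched as a minimal inference handler (illustrative Python only; Chainy's production stack is TypeScript, and the model, event shape, and field names here are assumptions, not its actual code):

```python
import json
import time

# Hypothetical model cache: load once per container, outside the handler,
# so warm invocations skip initialization and stay inside the latency budget.
_MODEL = None


def _load_model():
    # Placeholder for real model loading (e.g. from S3); an assumption for
    # illustration, not Chainy's loader. Returns a toy "model" callable.
    return lambda features: sum(features) / max(len(features), 1)


def handler(event, context=None):
    """Lambda-style entry point: parse the request, predict, report latency."""
    global _MODEL
    if _MODEL is None:
        _MODEL = _load_model()

    start = time.perf_counter()
    body = json.loads(event.get("body") or "{}")
    features = body.get("features", [])
    prediction = _MODEL(features)
    latency_ms = (time.perf_counter() - start) * 1000

    return {
        "statusCode": 200,
        "body": json.dumps({
            "prediction": prediction,
            "latency_ms": round(latency_ms, 2),
        }),
    }
```

Keeping model initialization outside the handler is the standard serverless trick for low p95 latency: only cold starts pay the loading cost.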
## Raft-Recovery - Distributed Job Orchestrator
Fault-tolerant job queue for mission-critical workloads
Production Features:
- Zero data loss - Write-Ahead Log ensures durability
- High throughput - 250+ jobs/second with concurrent processing
- Crash recovery - sub-3s recovery time with snapshots
- Raft consensus - distributed coordination and leader election
- Prometheus metrics - production monitoring built-in
Tech Stack: Go, Raft Consensus, Write-Ahead Log, Distributed Systems
Use Cases: ML training orchestration, ETL pipelines, batch processing
MLOps Expertise:
- Model serving & deployment (Lambda, SageMaker, custom APIs)
- Experiment tracking (MLflow, DVC, model registry)
- CI/CD pipelines (GitHub Actions, automated testing)
- Monitoring & observability (CloudWatch, Prometheus, drift detection)
- Cost optimization (serverless, right-sizing, budget alerts)
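Of the items above, drift detection is the easiest to illustrate: compare a live feature distribution against its training baseline and alert when the shift crosses a threshold. A toy z-score check in pure Python (the threshold and the statistic are illustrative choices, not taken from any of the projects above):

```python
import math


def mean_shift_zscore(baseline, live):
    """Z-score of the live-sample mean against the baseline distribution."""
    n = len(baseline)
    mu = sum(baseline) / n
    var = sum((x - mu) ** 2 for x in baseline) / (n - 1)
    std = math.sqrt(var)
    live_mu = sum(live) / len(live)
    # Distance of the live mean in standard errors of a sample of this size.
    return abs(live_mu - mu) / (std / math.sqrt(len(live)))


def drifted(baseline, live, threshold=3.0):
    # Flag drift when the live mean sits more than `threshold` standard
    # errors from the baseline mean (illustrative cutoff).
    return mean_shift_zscore(baseline, live) > threshold
```

Production systems typically use richer statistics (PSI, KS tests) per feature, but the alerting shape is the same: a scalar score compared against a tuned threshold.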
Cloud Services:
- AWS: Lambda, DynamoDB, S3, SageMaker, CloudWatch, Step Functions
- IaC: Terraform (multi-env), CloudFormation
- Security: IAM, WAF, Secrets Manager, KMS
Software Engineer @ HiTrust, Inc. (Jan 2023 – Dec 2024)
- Developed secure microservices handling millions of financial requests
- Optimized data pipelines, improving performance by 30%
- Managed Kubernetes deployments for production services
- Implemented monitoring infrastructure with comprehensive logging
Product Planner (Data & ML) @ Astra Technology (Oct 2018 – Dec 2019)
- Built time-series prediction models using the Python data stack
- Led a computer-vision PoC in collaboration with NTT Japan
- Defined technical requirements for production deployment of ML models
| Project | Metric | Result |
|---|---|---|
| Chainy | Inference Latency | <100ms (p95) |
| Chainy | Cost Reduction | 90% vs EC2 hosting |
| Raft-Recovery | Job Throughput | 250+ jobs/s |
| Raft-Recovery | Recovery Time | <3s with zero data loss |
| HiTrust | Pipeline Performance | 30% improvement |
| Portfolio | Monthly Cost | $0 infrastructure |
Master of Science - Applied Computer Science
Fairleigh Dickinson University (2025-2027)
Focus: Artificial Intelligence, Advanced Operating Systems, Systems Programming
## Professional Certifications
- AWS Certified Solutions Architect – Associate
- HashiCorp Terraform Associate
## Specialized Training
- Big Data Analytics Bootcamp - Institute for Information Industry (2017-2018)
- Focus: Data analytics, machine learning, big data technologies
- Advanced model monitoring and drift detection algorithms
- MLflow & DVC for complete ML lifecycle management
- AWS SageMaker for enterprise ML at scale
- Feature stores and data versioning best practices
- Kubeflow and ML on Kubernetes
I'm actively seeking MLOps Engineer and ML Infrastructure Engineer roles where I can:
| Design & Build | Deploy & Scale | Optimize & Monitor |
|---|---|---|
| Scalable ML infrastructure | Production ML systems | ML workflows & costs |
| Distributed training systems | High-reliability services | Model performance |
| Feature stores & pipelines | CI/CD automation | System observability |
Email: liyu.chu.work@gmail.com
Portfolio: luichu.dev
LinkedIn: linkedin.com/in/chuliyu
Location: Vancouver, BC, Canada
"Building reliable ML infrastructure that scales from prototype to production with zero downtime and measurable business impact."
Open to opportunities in:
- MLOps Engineering
- ML Infrastructure Engineering
- Production ML Systems
- Cloud ML Architecture
- DevOps for ML
