👋 Hi, I'm Li-Yu Chu (Liyu)

🚀 MLOps Engineer | Production ML Infrastructure Specialist

Portfolio LinkedIn Email

📍 Vancouver, BC, Canada 🇨🇦 | 💼 Open to MLOps & ML Infrastructure opportunities


🎯 About Me

class MLOpsEngineer:
    def __init__(self):
        self.name = "Li-Yu Chu"
        self.role = "MLOps Engineer"
        self.location = "Vancouver, BC 🇨🇦"
        self.experience = "2+ years production ML infrastructure"
        
        self.core_skills = [
            "Serverless ML Inference (<100ms latency)",
            "Distributed Systems & Fault Tolerance",
            "Infrastructure as Code (Terraform)",
            "Production AWS Architecture"
        ]
        
        self.certifications = [
            "βœ… AWS Solutions Architect Associate",
            "βœ… HashiCorp Terraform Associate"
        ]
    
    def current_focus(self):
        return [
            "Building end-to-end MLOps pipelines",
            "Optimizing model serving infrastructure",
            "Implementing production monitoring systems"
        ]
    
    def say_hi(self):
        print("πŸš€ Building scalable ML infrastructure that powers real-world applications!")

engineer = MLOpsEngineer()
engineer.say_hi()

Why work with me?

  • ✅ 2+ years building production ML systems at scale
  • ✅ Live projects with measurable impact (<100ms inference, $0 hosting costs)
  • ✅ Strong foundation in distributed systems and cloud architecture
  • ✅ Full-stack MLOps: from training orchestration to model monitoring

🚀 Featured Projects

MLOps Portfolio - Complete ML Infrastructure

Live Demo: luichu.dev | Production-grade MLOps showcase

Highlights:

  • 📊 Comprehensive portfolio demonstrating end-to-end ML infrastructure
  • 🎯 Real production systems with quantifiable metrics
  • 📈 Cost-optimized architecture ($0/month hosting)
  • 🔧 Multi-environment setup with CI/CD automation

Impact: A portfolio built to demonstrate production-grade MLOps skills to both technical and HR reviewers


⚡ Chainy Backend - Serverless ML Infrastructure

Production AWS Lambda architecture for ML model serving

MLOps Features:

  • 🚀 Sub-100ms latency - Optimized for real-time inference (see the sketch below)
  • 📈 Auto-scaling - 0 to 1000+ req/s with no manual intervention
  • 💰 Cost-optimized - 90% cheaper than traditional EC2 hosting
  • 🔒 Enterprise security - WAF, IAM, JWT authentication
  • 📊 Full observability - CloudWatch metrics, dashboards, alerts

Tech Stack: AWS Lambda, DynamoDB, Terraform, TypeScript, API Gateway
Use Cases: Model serving APIs, feature stores, real-time predictions
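
The sub-100ms figure above mostly comes down to keeping model initialization out of the request path. Chainy itself is TypeScript, but a minimal Python sketch of the same Lambda pattern looks roughly like this (load_model and the scoring logic are illustrative placeholders, not Chainy's actual code):

import json
import time

def load_model():
    # Placeholder: a real service would deserialize a trained model from
    # the deployment package or an S3-backed cache.
    return lambda features: sum(features) / max(len(features), 1)

# Loaded once at module import time (the cold start); warm invocations
# reuse this object instead of re-initializing it on every request.
MODEL = load_model()

def handler(event, context):
    # API Gateway-style entry point for real-time inference.
    start = time.perf_counter()
    body = json.loads(event.get("body") or "{}")
    features = body.get("features", [])

    prediction = MODEL(features)

    latency_ms = (time.perf_counter() - start) * 1000
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": prediction,
                            "latency_ms": round(latency_ms, 2)}),
    }

Because the module-level load runs only on cold starts, a warm invocation pays just the request parsing and scoring cost, which is what keeps p95 latency low.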


🔄 Raft-Recovery - Distributed Job Orchestrator

Fault-tolerant job queue for mission-critical workloads

Production Features:

  • 💾 Zero data loss - Write-Ahead Log ensures durability (sketched below)
  • ⚡ High throughput - 250+ jobs/second with concurrent processing
  • 🛡️ Crash recovery - Sub-3s recovery time with snapshots
  • 🔧 Raft consensus - Distributed coordination and leader election
  • 📊 Prometheus metrics - Production monitoring built-in

Tech Stack: Go, Raft Consensus, Write-Ahead Log, Distributed Systems
Use Cases: ML training orchestration, ETL pipelines, batch processing
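
The zero-data-loss guarantee rests on the write-ahead log: a job is only acknowledged after its record has been flushed and fsync'd to disk, so a crash cannot lose acknowledged work. Raft-Recovery is written in Go; the snippet below is a deliberately tiny Python illustration of that idea (the file name and record format are made up for the example):

import json
import os

class WriteAheadLog:
    # Toy WAL: persist each job before acknowledging it, replay on restart.

    def __init__(self, path="jobs.wal"):
        self.path = path

    def append(self, job):
        # Write the record and fsync before returning, so an acknowledged
        # job survives a process crash.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(job) + "\n")
            f.flush()
            os.fsync(f.fileno())

    def replay(self):
        # On recovery, rebuild pending-job state from the log.
        if not os.path.exists(self.path):
            return []
        with open(self.path, encoding="utf-8") as f:
            return [json.loads(line) for line in f if line.strip()]

# Usage: append jobs, then pretend the process crashed and replay.
wal = WriteAheadLog()
wal.append({"id": 1, "task": "train-model"})
print(wal.replay())

In the actual project, snapshots and Raft replication sit on top of this basic durability step, which is how recovery stays under a few seconds.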


🔧 Technical Skills

MLOps & Cloud

AWS Terraform Docker Kubernetes

ML & Data

Python PyTorch MLflow FastAPI

Systems & DevOps

Go GitHub Actions PostgreSQL Git

MLOps Expertise:

  • 🎯 Model serving & deployment (Lambda, SageMaker, custom APIs)
  • 📊 Experiment tracking (MLflow, DVC, model registry)
  • 🔄 CI/CD pipelines (GitHub Actions, automated testing)
  • 📈 Monitoring & observability (CloudWatch, Prometheus, drift detection - see the example below)
  • 💰 Cost optimization (serverless, right-sizing, budget alerts)
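
As a concrete example of the drift-detection item above, a population stability index (PSI) check is one simple way to compare live feature values against a training baseline. This is a generic pure-Python sketch (the bucket count and the ~0.2 alert threshold are common rules of thumb, not values taken from any project here):

import math
import random

def psi(baseline, current, buckets=10):
    # Population Stability Index between a training baseline and live data.
    qs = sorted(baseline)
    # Bucket edges taken at baseline quantiles.
    edges = [qs[int(len(qs) * i / buckets)] for i in range(1, buckets)]

    def proportions(values):
        counts = [0] * buckets
        for v in values:
            idx = sum(v > e for e in edges)  # which bucket v falls into
            counts[idx] += 1
        # Small floor avoids log(0) when a bucket is empty.
        return [max(c / len(values), 1e-6) for c in counts]

    base_p, cur_p = proportions(baseline), proportions(current)
    return sum((c - b) * math.log(c / b) for b, c in zip(base_p, cur_p))

# Usage: compare training-time feature values with recent production traffic.
random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]
live = [random.gauss(0.5, 1.2) for _ in range(5000)]  # shifted distribution
print(f"PSI = {psi(train, live):.3f}")  # values above ~0.2 usually warrant an alert

In production this statistic would typically be emitted as a CloudWatch or Prometheus metric and alerted on rather than printed.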

Cloud Services:

  • ☁️ AWS: Lambda, DynamoDB, S3, SageMaker, CloudWatch, Step Functions
  • 🏗️ IaC: Terraform (multi-env), CloudFormation
  • 🔐 Security: IAM, WAF, Secrets Manager, KMS

📊 GitHub Stats

GitHub Stats Top Languages

💼 Professional Experience Highlights

🏢 Software Engineer @ HiTrust, Inc. (Jan 2023 – Dec 2024)

  • Developed secure microservices handling millions of financial requests
  • Optimized data pipelines → 30% performance improvement
  • Managed Kubernetes deployments for production services
  • Implemented monitoring infrastructure with comprehensive logging

🔬 Product Planner (Data & ML) @ Astra Technology (Oct 2018 – Dec 2019)

  • Built time-series prediction models using Python stack
  • Led Computer Vision PoC in collaboration with NTT Japan
  • Defined technical requirements for ML model production deployment

📈 Impact & Achievements

Project | Metric | Result
Chainy | Inference Latency | <100ms (p95)
Chainy | Cost Reduction | 90% vs EC2 hosting
Raft-Recovery | Job Throughput | 250+ jobs/s
Raft-Recovery | Recovery Time | <3s with zero data loss
HiTrust | Pipeline Performance | 30% improvement
Portfolio | Monthly Cost | $0 infrastructure

🎓 Education & Certifications

🎓 Master of Science - Applied Computer Science
Fairleigh Dickinson University (2025-2027)
Focus: Artificial Intelligence, Advanced Operating Systems, Systems Programming

🏆 Professional Certifications

  • ☁️ AWS Certified Solutions Architect – Associate
  • 🏗️ HashiCorp Terraform Associate

📚 Specialized Training

  • Big Data Analytics Bootcamp - Institute for Information Industry (2017-2018)
  • Focus: Data analytics, machine learning, big data technologies

πŸ” Currently Learning

  • πŸš€ Advanced model monitoring and drift detection algorithms
  • πŸ“Š MLflow & DVC for complete ML lifecycle management
  • ☁️ AWS SageMaker for enterprise ML at scale
  • πŸ—„οΈ Feature stores and data versioning best practices
  • πŸ”§ Kubeflow and ML on Kubernetes

πŸ“« Let's Connect!

I'm actively seeking MLOps Engineer and ML Infrastructure Engineer roles where I can:

πŸ—οΈ Design & Build πŸš€ Deploy & Scale πŸ“ˆ Optimize & Monitor
Scalable ML infrastructure Production ML systems ML workflows & costs
Distributed training systems High-reliability services Model performance
Feature stores & pipelines CI/CD automation System observability

πŸ“§ Email: liyu.chu.work@gmail.com
πŸ”— Portfolio: luichu.dev
πŸ’Ό LinkedIn: linkedin.com/in/chuliyu
πŸ“ Location: Vancouver, BC, Canada πŸ‡¨πŸ‡¦


💡 Engineering Philosophy

"Building reliable ML infrastructure that scales from prototype to production with zero downtime and measurable business impact."

Open to opportunities in:

  • MLOps Engineering
  • ML Infrastructure Engineering
  • Production ML Systems
  • Cloud ML Architecture
  • DevOps for ML

Profile Views GitHub followers GitHub stars

Thanks for visiting! ⭐️ Star my repos if you find them useful!
