# Deep Learning Compiler

{% hint style="info" %} This list is no longer actively maintained. {% endhint %}

## System Architecture

- MLIR: Scaling Compiler Infrastructure for Domain Specific Computation (CGO 2021) [Paper] [Homepage]
  - Google
- TVM: An Automated End-to-End Optimizing Compiler for Deep Learning (OSDI 2018) [Paper] [Code] [Homepage]
  - UW & AWS & SJTU & UC Davis & Cornell
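
TVM is the baseline system for most of the papers in this list: the developer declares a tensor computation once, and the compiler applies schedule transformations (tiling, vectorization, fusion) before lowering to machine code. Below is a minimal sketch of that workflow using TVM's classic tensor expression (`te`) API, as documented in older TVM tutorials; newer releases favor TensorIR, so module paths and schedule primitives may differ. Treat it as illustrative rather than exact.

```python
import numpy as np
import tvm
from tvm import te

# Declare the computation once: C[i] = A[i] + B[i].
n = 1024
A = te.placeholder((n,), name="A", dtype="float32")
B = te.placeholder((n,), name="B", dtype="float32")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# Build a schedule: split the loop and vectorize the inner part.
s = te.create_schedule(C.op)
outer, inner = s[C].split(C.op.axis[0], factor=64)
s[C].vectorize(inner)

# Lower and compile for the local CPU.
func = tvm.build(s, [A, B, C], target="llvm")

# Execute and check the result.
dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
b = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
c = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
func(a, b, c)
np.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy(), rtol=1e-5)
```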

## Tensor Program Generation

- Cocktailer: Analyzing and Optimizing Dynamic Control Flow in Deep Learning (OSDI 2023) [Paper]
  - THU & MSRA
  - Co-optimize the execution of control flow and data flow.
- Welder: Scheduling Deep Learning Memory Access via Tile-graph (OSDI 2023) [Paper]
  - PKU & MSRA
  - Optimize memory access by scheduling operators on a tile-graph.
- Effectively Scheduling Computational Graphs of Deep Neural Networks toward Their Domain-Specific Accelerators (OSDI 2023) [Paper]
  - Stream Computing
  - GraphTurbo: a computational-graph scheduler for DSAs.
- EINNET: Optimizing Tensor Programs with Derivation-Based Transformations (OSDI 2023) [Paper]
  - THU & CMU
  - Leverage transformations between general tensor algebra expressions.
- AStitch: Enabling a New Multi-dimensional Optimization Space for Memory-Intensive ML Training and Inference on Modern SIMT Architectures (ASPLOS 2022) [Paper]
  - Alibaba
  - Optimize memory-intensive operators via operator stitching (fusion).
- Ansor: Generating High-Performance Tensor Programs for Deep Learning (OSDI 2020) [Paper]
  - UC Berkeley
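
Ansor ships in TVM as the `auto_scheduler` module: instead of hand-written schedule templates, it searches a hierarchically generated space of tensor programs and measures candidates on real hardware. A minimal sketch of the tuning flow is below, assuming a TVM build with `auto_scheduler` available; the workload name, trial count, and log-file name are illustrative choices, not values from the paper.

```python
import tvm
from tvm import te, auto_scheduler

# Register the workload: a plain matrix multiply C = A @ B.
@auto_scheduler.register_workload
def dense_layer(M, N, K):
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((M, N), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    return [A, B, C]

target = tvm.target.Target("llvm")
task = auto_scheduler.SearchTask(func=dense_layer, args=(1024, 1024, 1024), target=target)

# Search for a good schedule; measured candidates are logged to a file.
log_file = "dense_1024.json"
tune_option = auto_scheduler.TuningOptions(
    num_measure_trials=64,  # small for a quick demo; real tuning uses many more trials
    measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
    verbose=0,
)
task.tune(tune_option)

# Replay the best measured schedule and build an executable function.
sch, args = task.apply_best(log_file)
func = tvm.build(sch, args, target)
```

The tuning cost is paid once: `apply_best` replays the best record from the log, so the compiled function can be rebuilt later without re-searching.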

## Acronyms

- DSA: Domain-Specific Architecture