---
title: "Post from Nov 05, 2025"
date: 2025-11-05T09:47:33
slug: "1762336053"
tags:
- easydiffusion
- sdkit
---
Following up on the [deep-dive on ML compilers](https://cmdr2.github.io/notes/2025/11/1762335811/):

sdkit v3 won't use general-purpose ML compilers. They aren't yet ready for sdkit's target platforms, and need a lot of work (well beyond sdkit v3's scope). But I'm quite certain that sdkit v4 will use them, and sdkit v3 will start taking steps in that direction.

For sdkit v3, I see two possible paths:
1. Use an array of vendor-specific compilers (like TensorRT-RTX, MIGraphX, OpenVINO, etc.), one for each target platform.
2. Auto-generate ggml code from ONNX (or PyTorch), and beat it on the head until it meets sdkit v3's [performance goals](https://cmdr2.github.io/notes/2025/10/1760085894/). Hand-tune kernels, contribute to ggml, and take advantage of ggml's multi-backend kernels. (Rough sketches of both paths follow this list.)
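
To make the two paths a bit more concrete, here are two rough sketches. They are illustrations only, not sdkit code; the model and file names are placeholders. The first shows per-platform dispatch using ONNX Runtime's execution providers for the vendor compilers mentioned above (TensorRT-RTX in particular ships its own newer integration, so the mapping is approximate):

```python
# Illustrative only: per-platform dispatch to vendor compilers via ONNX Runtime
# execution providers. "model.onnx" is a placeholder path.
import onnxruntime as ort

PREFERRED = [
    "TensorrtExecutionProvider",   # NVIDIA (TensorRT)
    "MIGraphXExecutionProvider",   # AMD (MIGraphX)
    "OpenVINOExecutionProvider",   # Intel (OpenVINO)
    "CPUExecutionProvider",        # fallback
]

available = ort.get_available_providers()
providers = [p for p in PREFERRED if p in available]
session = ort.InferenceSession("model.onnx", providers=providers)
print("Running with:", session.get_providers())
```

The second shows the starting point of path 2: walking an exported ONNX graph to list the ops a ggml code generator would have to cover.

```python
# Illustrative only: the first step of "auto-generate ggml code from ONNX" is
# walking the exported graph to see which ops a generator must map onto ggml
# kernels (e.g. MatMul -> ggml_mul_mat). The tiny model is a stand-in.
import torch
import onnx

model = torch.nn.Sequential(
    torch.nn.Linear(8, 16),
    torch.nn.SiLU(),
    torch.nn.Linear(16, 8),
)
torch.onnx.export(model, torch.randn(1, 8), "tiny.onnx")

graph = onnx.load("tiny.onnx").graph
for node in graph.node:
    print(node.op_type, list(node.input), "->", list(node.output))
```

From there, a generator would emit ggml graph-building calls (ggml_mul_mat, ggml_silu, and so on) for each node, which is where the hand-tuning and upstream ggml contributions would come in.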
Both approaches provide a big step-up from sdkit v2 in terms of install size and performance. So it makes sense to tap into these first, and leave ML compilers for v4 (as another leap forward).
