Commit aa3468e

committed: Update blog
1 parent 7df337c commit aa3468e

File tree

1 file changed: +2 -2 lines changed


content/blog/2025-10-27-1761560082.md

Lines changed: 2 additions & 2 deletions
@@ -8,13 +8,13 @@ tags:
   - sdkit
 ---

-As a note to myself, a possible intuition for understanding GPU memory hierarchy (and the performance penalty for data transfer between various layers) is to think of it like a manufacturing logistics problem:
+A possible intuition for understanding GPU memory hierarchy (and the performance penalty for data transfer between various layers) is to think of it like a manufacturing logistics problem:
 1. CPU (host) to GPU (device) is like travelling overnight between two cities. The CPU city is like the "headquarters", and contains a mega-sized warehouse of parts (think football field sizes), also known as 'Host memory'.
 2. Each GPU is like a different city, containing its own warehouse outside the city, also known as 'Global Memory'. This warehouse stockpiles whatever it needs from the headquarters city (CPU).
 3. Each SM/Core/Tile is a factory located in different areas of the city. Each factory contains a small warehouse (shed) for stockpiling whatever inventory it needs, also known as 'Shared Memory'.
 4. Each warp is a bulk stamping machine inside the factory, producing 32 items in one shot. There's a tray next to each machine, also known as 'Registers'. This tray is used for keeping stuff temporarily for each stamping process.

-This analogy helps me understand the scale and performance penalty for data transfers.
+This analogy can help understand the scale and performance penalty for data transfers.

 For example, reading constantly from the Global Memory is like driving between the factory and the warehouse outside the city each time (with the traffic of city roads). This is much slower than going to the shed inside the factory (i.e. Shared Memory), and much much slower than just sticking your hand into the tray next to your stamping machine (i.e. Registers). And reading from the Host Memory (CPU) is like taking an overnight trip to another city.
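To make the four layers in the post's analogy concrete, here is a minimal CUDA sketch (not part of the post; the kernel name `stamp`, the array names, and the sizes are illustrative) that touches each one: host memory at headquarters, global memory via `cudaMalloc`/`cudaMemcpy`, shared memory as the factory shed, and a local variable that the compiler will normally keep in a register.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Each thread block is a "factory"; each warp stamps 32 items at a time.
__global__ void stamp(const float *parts, float *out, int n) {
    // The shed inside the factory: shared memory, visible to the whole block.
    __shared__ float shed[256];

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // One trip to the city warehouse (global memory) to stock the shed,
    // then a barrier so the whole block sees the stocked shed.
    shed[threadIdx.x] = parts[i];
    __syncthreads();

    // The tray next to the machine: a local variable, normally held in a register.
    float item = shed[threadIdx.x];
    out[i] = item * 2.0f;   // "stamp" the part
}

int main(void) {
    const int n = 1 << 20;                 // divisible by the block size of 256
    const size_t bytes = n * sizeof(float);

    // Headquarters warehouse: host memory.
    float *h_parts = (float *)malloc(bytes);
    float *h_out   = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) h_parts[i] = (float)i;

    // City warehouse: global memory on the device.
    float *d_parts, *d_out;
    cudaMalloc(&d_parts, bytes);
    cudaMalloc(&d_out, bytes);

    // The overnight trip between cities: host-to-device transfer.
    cudaMemcpy(d_parts, h_parts, bytes, cudaMemcpyHostToDevice);

    stamp<<<n / 256, 256>>>(d_parts, d_out, n);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[3] = %.1f\n", h_out[3]);   // expect 6.0

    cudaFree(d_parts); cudaFree(d_out);
    free(h_parts); free(h_out);
    return 0;
}
```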

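The closing point about repeated trips to the warehouse can also be sketched in code. Below is a hedged example (kernel names `blur_global`/`blur_shared` are illustrative, and the tiled version assumes a block size of 256 threads): both kernels compute the same 3-point average, but the first re-reads global memory for every tap, while the second makes one global read per thread and then works out of shared memory and registers.

```cuda
// Naive version: every thread drives to the city warehouse (global memory)
// three times per output element.
__global__ void blur_global(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < 1 || i >= n - 1) return;
    out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0f;   // three global reads
}

// Tiled version: one trip to the warehouse per thread, then work out of the
// factory shed (shared memory) and the tray (registers). Assumes blockDim.x == 256.
__global__ void blur_shared(const float *in, float *out, int n) {
    __shared__ float shed[256 + 2];                  // tile plus one halo element per side
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int t = threadIdx.x + 1;

    if (i < n) shed[t] = in[i];                       // one global read per thread
    if (threadIdx.x == 0 && i > 0) shed[0] = in[i - 1];
    if (threadIdx.x == blockDim.x - 1 && i < n - 1) shed[t + 1] = in[i + 1];
    __syncthreads();

    if (i < 1 || i >= n - 1) return;
    float left = shed[t - 1], mid = shed[t], right = shed[t + 1];  // shed and register reads
    out[i] = (left + mid + right) / 3.0f;
}
```

The arithmetic is identical in both kernels; the only change is which layer of the hierarchy absorbs the repeated reads.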