content/blog/2025-10-27-1761560082.md (1 addition & 1 deletion)
@@ -19,7 +19,7 @@ This analogy can help understand the scale and performance penalty for data transfers
 For example, reading constantly from Global Memory is like driving between the factory and the warehouse outside the city each time (with the traffic of city roads). This is much slower than going to the shed inside the factory (i.e. Shared Memory), and much, much slower than just sticking your hand into the tray next to your stamping machine (i.e. Registers). And reading from Host Memory (the CPU) is like taking an overnight trip to another city.
 
 Therefore, the job of running a computation graph (like ONNX) efficiently on GPU(s) is like planning the logistics of a manufacturing company. You've got raw materials in the main warehouse that you need to transfer between cities, and store/process/transfer artifacts across different factories and machines. You need to make sure that:
-- the production process follows the chart laid out in the computation graph.
+- the production process follows the chart laid out in the computation graph
 - every machine in each factory is being utilized optimally
 - the time it takes to move things between cities/factories/machines is accounted for
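To ground the analogy, here is a minimal CUDA sketch (mine, not from the post; the kernel, names, and sizes are illustrative) touching all three tiers: each value is fetched once from Global Memory (the out-of-town warehouse), staged in Shared Memory (the shed inside the factory), and reused from there, while scalars are held in Registers (the tray next to the machine).

```cuda
// Illustrative sketch: mapping the factory analogy onto CUDA's memory tiers.
#include <cstdio>
#include <cuda_runtime.h>

constexpr int TILE = 256;

__global__ void tileSum(const float* in, float* out, int n) {
    __shared__ float tile[TILE];       // Shared Memory: the shed inside the factory
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    float x = (i < n) ? in[i] : 0.0f;  // Global Memory read: the drive out to the
                                       // warehouse; do it once, not in a loop
    tile[threadIdx.x] = x;             // stage the part in the shed...
    __syncthreads();

    float acc = 0.0f;                  // 'acc' lives in a Register: the tray
    for (int j = 0; j < TILE; ++j)     // next to the stamping machine
        acc += tile[j];                // ...then reuse it cheaply many times

    if (i < n) out[i] = acc;           // one write back to Global Memory
}

int main() {
    const int n = 1 << 20;
    float* h = new float[n];
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));

    // Host -> device copy: the overnight trip to another city.
    cudaMemcpy(d_in, h, n * sizeof(float), cudaMemcpyHostToDevice);
    tileSum<<<n / TILE, TILE>>>(d_in, d_out, n);
    cudaMemcpy(h, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("out[0] = %.1f\n", h[0]);   // each block sums 256 ones -> 256.0
    cudaFree(d_in); cudaFree(d_out); delete[] h;
    return 0;
}
```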
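The third point in the list, accounting for transfer time, is usually handled by overlapping "shipping" with "production". Below is a hedged sketch (the names, sizes, and chunking scheme are my own assumptions, not the post's code) that splits the work into chunks and gives each chunk its own CUDA stream, so the copy for one chunk can run while another chunk is being processed; pinned host memory via cudaMallocHost is what lets the async copies actually overlap.

```cuda
// Illustrative sketch: overlap host<->device "shipping" with kernel "production".
#include <cuda_runtime.h>

__global__ void process(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;        // stand-in for one graph node's work
}

int main() {
    const int nChunks = 4, chunk = 1 << 18;

    float* h;                          // pinned host memory: async copies need it
    cudaMallocHost(&h, nChunks * chunk * sizeof(float));
    for (int i = 0; i < nChunks * chunk; ++i) h[i] = 1.0f;

    float* d;
    cudaMalloc(&d, nChunks * chunk * sizeof(float));

    cudaStream_t s[nChunks];           // one "shipping lane" per chunk
    for (int c = 0; c < nChunks; ++c) cudaStreamCreate(&s[c]);

    for (int c = 0; c < nChunks; ++c) {
        size_t off = (size_t)c * chunk;
        // Ship chunk c in while earlier chunks are still on the machines:
        cudaMemcpyAsync(d + off, h + off, chunk * sizeof(float),
                        cudaMemcpyHostToDevice, s[c]);
        process<<<chunk / 256, 256, 0, s[c]>>>(d + off, chunk);
        cudaMemcpyAsync(h + off, d + off, chunk * sizeof(float),
                        cudaMemcpyDeviceToHost, s[c]);
    }
    cudaDeviceSynchronize();           // wait for every stream to drain

    for (int c = 0; c < nChunks; ++c) cudaStreamDestroy(s[c]);
    cudaFree(d); cudaFreeHost(h);
    return 0;
}
```

On GPUs with separate copy engines this pipelines transfers behind compute; the same idea, scheduling data movement so the machines never sit idle waiting on a truck, is what graph runtimes automate.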