It might be useful for this library to have some documentation about how it works internally.
I may have missed such documentation somewhere, so I went ahead and debugged the inference example using GPT-2 117M.
Some initial notes are put here: https://gist.github.com/chunhualiao/8610c8a3afa3ef76c0174c57ff6e5339
They may be useful for beginners, though of course they may contain errors.
A snapshot of my notes is copied below:
I have taken quite a few machine learning courses and have done a few projects already. I think I know the math formulas involved in transformers and GPT models. However, I have always wondered how they work in practice. The best way for me to find out is to read and understand the source code implementing these models.
I am mostly a C/C++ programmer and am more comfortable reading C/C++ programs. So I recently started to read, run, and debug ggml's GPT-2 inference example, since ggml is written entirely in C and can run many transformer models on a laptop: https://github.com/ggerganov/ggml/tree/master/examples/gpt-2 . The famous llama.cpp is closely connected to this library. My experiment environment is a MacBook Pro laptop + Visual Studio Code + CMake + CodeLLDB (gdb does not work with my M2 chip), with the GPT-2 117M model.
Here is what I have learned so far.
The high-level main function has the following structure (https://github.com/ggerganov/ggml/blob/master/examples/gpt-2/main-backend.cpp):
1. Load the model: a ggml-specific file format, optionally with quantization.
2. Create a compute graph from the loaded model. I will explain this graph later.
3. Tokenize the prompt.
4. Use a loop to feed the prompt into the model and generate a new token each iteration. Inside the loop:
   - the prompt is fed into the model's compute graph;
   - once the compute graph has been walked through entirely, the last node stores the result (the logits) used to choose the next token;
   - a new token is generated using the top-k and top-p sampling algorithms (a sketch of this step follows the list);
   - the new token is appended to the prompt, and the updated prompt is used in the next iteration.
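To make the sampling step concrete, here is a minimal, self-contained C++ sketch of top-k + top-p sampling over a vector of logits. This is my own illustration rather than the example's actual code (the example has its own sampling routine in its shared common code), so the function name and parameter choices here are assumptions:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <utility>
#include <vector>

// Keep the top_k most likely tokens, shrink that set further until its
// cumulative probability reaches top_p, then draw from the renormalized
// distribution. (Illustrative only - not the ggml example's code.)
static int sample_top_k_top_p(const std::vector<float> & logits,
                              int top_k, float top_p, std::mt19937 & rng) {
    const int n_vocab = (int) logits.size();
    std::vector<std::pair<float, int>> items(n_vocab); // (logit, token id)
    for (int i = 0; i < n_vocab; ++i) {
        items[i] = { logits[i], i };
    }

    // top-k: keep only the k largest logits
    top_k = std::min(top_k, n_vocab);
    std::partial_sort(items.begin(), items.begin() + top_k, items.end(),
        [](const std::pair<float, int> & x, const std::pair<float, int> & y) {
            return x.first > y.first;
        });
    items.resize(top_k);

    // softmax over the surviving logits (subtract the max for numerical stability)
    const float max_logit = items[0].first;
    std::vector<float> probs(top_k);
    float sum = 0.0f;
    for (int i = 0; i < top_k; ++i) {
        probs[i] = std::exp(items[i].first - max_logit);
        sum += probs[i];
    }
    for (float & p : probs) {
        p /= sum;
    }

    // top-p: cut off the low-probability tail once the cumulative
    // probability of the (sorted) candidates exceeds top_p
    int n_keep = top_k;
    float cum = 0.0f;
    for (int i = 0; i < top_k; ++i) {
        cum += probs[i];
        if (cum >= top_p) { n_keep = i + 1; break; }
    }

    // draw from the truncated distribution (weights are renormalized internally)
    std::discrete_distribution<int> dist(probs.begin(), probs.begin() + n_keep);
    return items[dist(rng)].second;
}

int main() {
    std::mt19937 rng(42);
    const std::vector<float> logits = { 2.0f, 1.0f, 0.5f, -1.0f, -3.0f }; // toy 5-token vocabulary
    for (int i = 0; i < 5; ++i) {
        std::printf("sampled token id: %d\n", sample_top_k_top_p(logits, 3, 0.9f, rng));
    }
    return 0;
}
```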
The core computation is done using the compute graph:
- All computation involved in a neural network/model's inference can be modeled as taking input vectors/matrices and computing resulting vectors/matrices.
- If we focus on each vector and matrix, we can model the computation as a forward walk over a directed graph: each node of the graph is a tensor, representing a vector or matrix.
- Each node/tensor stores its value, plus pointers to its input nodes/tensors and the operation that produces it. The result of the operation is written back into the current tensor.
- Inference then becomes a walk of the graph from beginning to end, following the edges from one tensor to the next and updating each tensor's value based on its inputs and operation. A tiny runnable sketch of this idea follows.
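Here is a minimal sketch of the node/edge idea, building and walking a two-leaf, one-result graph with ggml's C API. The function names match recent ggml at the time of writing (older revisions spell some of these differently, e.g. ggml_build_forward instead of ggml_new_graph + ggml_build_forward_expand), so treat it as an illustration rather than version-exact code:

```cpp
#include "ggml.h"
#include <cstdio>

int main() {
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16*1024*1024, // one arena holds all tensors and graph metadata
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // leaf nodes: input tensors (no op, no inputs)
    struct ggml_tensor * a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 2, 3); // 2 columns x 3 rows
    struct ggml_tensor * b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 2, 3);

    // non-leaf node: records only the op and pointers to its inputs;
    // no arithmetic happens here
    struct ggml_tensor * c = ggml_mul_mat(ctx, a, b); // 3x3 result

    // collect the nodes reachable from c into a compute graph
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);

    // fill the leaves with data
    float * ad = ggml_get_data_f32(a);
    float * bd = ggml_get_data_f32(b);
    for (int i = 0; i < 2*3; ++i) { ad[i] = 1.0f; bd[i] = 2.0f; }

    // walk the graph: each node is evaluated in dependency order and the
    // result is written back into that node's own buffer
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads =*/ 1);

    // expect 4.0 = dot product of a length-2 row of ones with a row of twos
    std::printf("c[0,0] = %f\n", ggml_get_data_f32(c)[0]);

    ggml_free(ctx);
    return 0;
}
```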
ggml provides quite a few tools to dump or visualize the compute graph, which help with debugging the inference process. https://netron.app/ can also visualize common model files hosted on Hugging Face. I uploaded the Hugging Face GPT-2 model to Netron; it is fascinating to view the compute graph of a transformer model.
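For example, continuing the sketch above (same API caveat), the graph can be printed or dumped to Graphviz right after it is built:

```cpp
ggml_graph_print(gf);                       // text summary: one line per node (op, shape, timing)
ggml_graph_dump_dot(gf, NULL, "demo.dot");  // Graphviz file; render with: dot -Tpng demo.dot -o demo.png
```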
ggml has many other advanced features, including running computation on GPUs, multi-threaded programming, and so on.
Even for a small model like GPT-2 117M, the compute graph is quite large (188 leaf nodes + 487 non-leaf nodes). I will need more time to go through the graph to gain a deeper understanding of how all the math formulas of transformers are implemented in a programming language.
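(Those counts come straight from the graph object. In the sketch above, and assuming a ggml revision where struct ggml_cgraph is exposed in ggml.h with these fields, they can be printed directly; newer revisions may hide the struct behind accessor functions.)

```cpp
// n_leafs counts the input/weight tensors, n_nodes the computed ones
std::printf("n_leafs = %d, n_nodes = %d\n", gf->n_leafs, gf->n_nodes);
```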
I have tremendous respect for ggml/llama.cpp's author, Georgi Gerganov. What a genius, to pull off projects like these!
@chunhualiao I think the best place for something like documentation or explanations for beginners would be Discussions. You can follow my simple example of how to multiply two matrices in #563; it is super easy to understand and will help you understand how ggml works.