
Qwen2-VL-2.2B NPU


User:<image>Describe the image.

Answer: The image depicts an astronaut on the moon, holding a green beer can in his right hand and relaxing with his feet up on a green box. The background shows the Earth and the vastness of space, with stars visible in the sky. The astronaut is wearing a white spacesuit and has a helmet on his head.


Qwen2-2.2B VLM for RK3588 NPU (Rock 5, Orange Pi 5).


Paper: Qwen2 Technical Report

Hugging face: https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct


Introduction

LLMs (Large Language Models) are neural networks trained on large text datasets to understand and generate language.
VLMs (Vision-Language Models) add a visual encoder so the model can process images and text together.
A combined VLM+LLM system is often referred to as a multimodal model.

These models can be large—hundreds of millions to billions of parameters—which impacts accuracy, memory use, and runtime speed.
On edge devices like the RK3588, available RAM and compute are limited, and even the NPU has strict constraints on supported operations.
Because of this, models typically need to be quantised or simplified to fit.

Performance is usually expressed in tokens per second, where a token is a sub-word unit of text, roughly a short word or word fragment.
Once converted to RKNN, parts of the model can run on the NPU, improving speed.
Despite these limits, models like Qwen2-2.2B run well on the RK3588 because the NPU efficiently accelerates the heavy math, and the vision encoder can be optimised. This makes advanced multimodal AI feasible on small, power-efficient devices.


Model performance benchmark (tokens/s)

All models, with C++ examples, can be found on the Q-engineering GitHub.

All LLM models are quantized to w8a8, while the VLM vision encoders use fp16.

model            RAM (GB)¹   llm cold (s)²   llm warm (s)³   vlm cold (s)²   vlm warm (s)³   Resolution   Tokens/s
Qwen3-2B         3.1         21.9            2.6             10.0            0.9             448 x 448    11.5
Qwen3-4B         8.7         49.6            5.6             10.6            1.1             448 x 448    5.7
Qwen2.5-3B       4.8         48.3            4.0             17.9            1.8             392 x 392    7.0
Qwen2-7B         8.7         86.6            34.5            37.1            20.7            392 x 392    3.7
Qwen2-2.2B       3.3         29.1            2.5             17.1            1.7             392 x 392    12.5
InternVL3-1B     1.3         6.8             1.1             7.8             0.75            448 x 448    30
SmolVLM2-2.2B    3.4         21.2            2.6             10.5            0.9             384 x 384    11
SmolVLM2-500M    0.8         4.8             0.7             2.5             0.25            384 x 384    31
SmolVLM2-256M    0.5         1.1             0.4             2.5             0.25            384 x 384    54

¹ The total memory used: LLM plus VLM.
² When an LLM/VLM model is loaded for the first time from disk into RAM or the NPU, it is called a cold start.
The duration depends on your OS, I/O transfer rate, and memory mapping.
³ Subsequent loads (warm starts) take advantage of the data already mapped in RAM; mostly, only a few pointers need to be restored.



Dependencies.

To run the application, you need:

  • OpenCV (64-bit) installed.
  • The rkllm runtime library.
  • The rknn runtime library.
  • Optional: Code::Blocks. ($ sudo apt-get install codeblocks)

Installing the dependencies.

Start with the usual

$ sudo apt-get update 
$ sudo apt-get upgrade
$ sudo apt-get install cmake wget curl

OpenCV

To install OpenCV on your SBC, follow the Raspberry Pi 4 guide.

Or, if you don't intend to write code yourself:

$ sudo apt-get install libopencv-dev 

Installing the app.

$ git clone https://github.com/Qengineering/Qwen2-VL-2B-NPU

RKLLM, RKNN

To run Qwen2-VL, you need the rkllm-runtime library version 1.2.3 (or higher) installed, as well as the rknpu driver version 0.9.8.
If you don't have these on your machine, or if you have a lower version, you need to install them.
We have provided the correct versions in the repo.

$ cd ./Qwen2-VL-2B-NPU/aarch64/library
$ sudo cp ./*.so /usr/local/lib
$ cd ../include
$ sudo cp ./*.h /usr/local/include

Download the LLM and VLM model.

The next step is downloading the models.
This time, we used the original model supplied by Rockchip's rkllm_model_zoo (44 GB!), fetch code: rkllm.

It is better to download just the two needed files (3.2 GB) from our Sync.com server:
Qwen2-VL-2B_llm_w8a8_rk3588.rkllm and Qwen2-VL-2B_vision_rk3588.rknn.
Copy both into the models folder.

Building the app.

Once you have the two models, it is time to build your application.
You can use Code::Blocks.

  • Load the project file *.cbp in Code::Blocks.
  • Select Release, not Debug.
  • Compile and run with F9.
  • You can alter command line arguments with Project -> Set programs arguments...

Or use CMake.

$ mkdir build
$ cd build
$ cmake ..
$ make -j4

Running the app.

The app has the following arguments:

VLM_NPU Picture RKNN_model RKLLM_model [NewTokens] [ContextLength]

Argument        Comment
Picture         The image. Provide a dummy if you don't want to use an image.
RKNN_model      The visual encoder model (VLM).
RKLLM_model     The large language model (LLM).
NewTokens       The maximum number of new tokens to generate. Optional, default 2048.
ContextLength   The maximum total number of tokens the model can process. Optional, default 4096.


In the context of the Rockchip RK3588 LLM (Large Language Model) library, the parameters NewTokens and ContextLength both control different limits for text generation, and they're typical in LLM workflows.
NewTokens
This sets the maximum number of tokens (pieces of text, typically sub-word units) that the model is allowed to generate in response to a prompt during a single inference round. For example, if set to 300, the model will not return more than 300 tokens as output, regardless of the prompt length. It's important for controlling generation length to avoid too-short or too-long responses, helping manage resource use and output size.
ContextLength
This specifies the maximum total number of tokens the model can process in one go, which includes both the prompt (input) tokens and all generated tokens. For example, if set to 2048 and your prompt already uses 500 tokens, the model can generate up to 2048-500 = 1548 new tokens. This is a hardware and architecture constraint set during model conversion and deployment, as the context window cannot exceed the model's design limit (for instance, 4096 or 8192 tokens depending on the model variant).

A typical command line can be:

VLM_NPU ./Moon.jpg ./models/Qwen2-VL-2B_vision_rk3588.rknn ./models/Qwen2-VL-2B_llm_w8a8_rk3588.rkllm 2048 4096

The NewTokens (2048) and ContextLength (4096) are optional and can be omitted.

Using the app.

Using the application is simple. Once you provide the image and the models, you can ask anything you want.
Remember, we are on a bare Rock 5C, so don't expect the same quality of answers as ChatGPT provides.
On the other hand, as the examples below show, the app performs remarkably well.

If you want to talk about the picture, you need to include the token <image> in your prompt once.
The app remembers the dialogue until you give the token <clear>.
With <exit>, you leave the application.

C++ code.

Below you find the surprisingly small amount of code in main.cpp.

#include <opencv2/opencv.hpp>
#include <cstdlib>
#include <iostream>
#include <string>

#include "RK35llm.h"

int main(int argc, char** argv)
{
    std::string input_str;
    std::string output_str;
    RK35llm RKLLM;

    RKLLM.SetInfo(true);            //yes, you may give me additional model information
    RKLLM.SetSilence(false);        //you may print the incremental text chunks on the terminal

    if     (argc <  4) {std::cerr << "Usage: " << argv[0] << " image vlm_model llm_model [NewTokens] [ContextLength]\n"; return -1;}
    else if(argc == 4) RKLLM.LoadModel(argv[2], argv[3]);
    else if(argc == 5) RKLLM.LoadModel(argv[2], argv[3], std::atoi(argv[4]));
    else               RKLLM.LoadModel(argv[2], argv[3], std::atoi(argv[4]), std::atoi(argv[5]));

    cv::Mat Pic = cv::imread(argv[1]);
    if(Pic.empty()) {std::cerr << "Unable to load the image " << argv[1] << "\n"; return -1;}
    RKLLM.LoadImage(Pic);

    while(true) {
        printf("\n");
        printf("User: ");

        std::getline(std::cin, input_str);
        if (input_str == "exit") break;

        output_str = RKLLM.Ask(input_str);
//        std::cout << "\nLLM Answer: " << output_str << std::endl;
    }
    }

    return 0;
}

Most of the code speaks for itself; one remark.
The LLM generates the answer to your request in little pieces of text. You see them appear as if the LLM were typing on your terminal.
RKLLM.SetSilence controls this behaviour: when set to true, it suppresses the streamed output. RKLLM.Ask() also returns the complete answer.
To print it, uncomment this line in main.cpp:

std::cout << "\nLLM Answer: " << output_str << std::endl;

VLM examples.

Image Dialog
Singapore_S User: <image>Which city is this?
Answer: This image depicts the skyline of Singapore, featuring the Marina Bay Sands and the Singapore Flyer Ferris wheel. The Marina Bay Sands is one of the largest hotels in the world, while the Singapore Flyer is a popular tourist attraction.
ChineseWall_S User: <image>Tell me something about the history of this place.
Answer: The Great Wall of China, also known as the Great Wall of China or simply the Great Wall, is an ancient fortification located in northern China. It was built over several centuries by various Chinese dynasties and empires to protect against invasions from nomadic tribes.

The construction of the Great Wall began during the Qin Dynasty (221-206 BC) and continued through the Han Dynasty (206 BC - 220 AD). The wall stretches for approximately 3,400 miles (5,478 km), with its length varying slightly depending on which section of the wall is considered. It was originally built as a defensive barrier against nomadic tribes from the north and east.

The Great Wall has been an important symbol of Chinese culture and history, and it remains one of the most iconic structures in the world. The wall's construction was not only a military effort but also a significant engineering feat that required massive human labor and resources. It played a crucial role in shaping China's political, economic, and cultural landscape for centuries.

Today, the Great Wall is a UNESCO World Heritage Site and attracts millions of visitors each year who come to admire its historical significance and beauty. Despite being largely abandoned and in disrepair during much of its history, the Great Wall continues to serve as a symbol of Chinese pride and resilience.
Pizza_S User: <image>What are the ingredients I need to make this?
Answer: To make a classic Margherita pizza, you will need the following ingredients:

1. Pizza dough: A good quality pizza dough is essential for a successful pizza.
2. Tomato sauce: Use fresh tomatoes and tomato paste or tomato sauce to create the tomato base of your pizza.
3. Mozzarella cheese: This is the main ingredient in a Margherita pizza, so make sure you have plenty of it.
4. Fresh basil leaves: These are used as a topping on the pizza for flavor and color.
5. Olive oil: A drizzle of olive oil can add extra flavor to your pizza.

You will also need some other ingredients such as salt, pepper, and any additional toppings like onions or bell peppers if you want them.

User: What is the first step?
Answer: The first step in making a Margherita pizza is to prepare the dough. You can use store-bought pizza dough or make your own from scratch. To do this, follow these steps:

1. Preheat your oven to 450°F (232°C).
2. Roll out the pizza dough on a floured surface until it's about 1/8 inch thick.
3. Place the dough onto a baking sheet or pizza stone and place in the preheated oven.
4. Bake for 10-12 minutes, or until the crust is golden brown and crispy.

Once the crust is baked, you can add your toppings and cook the pizza as desired.
Titanic_S User: <image>Which movie is this?
Answer: This image appears to be from the 2015 film "Titanic." The scene shows Jack (played by Leonardo DiCaprio) and Rose (played by Kate Winslet), who are on board the Titanic at sunset.

LLM example.

Notice the nice Markdown formatting of the answer!

User: Can you make me a C++ call that embeds the Pythagorean theorem?

Answer: Yes, I can certainly help with that. Here's an example of how to use the Pythagorean Theorem in C++:

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    double side1 = 3;
    double side2 = 4;
    
    // Calculate the hypotenuse using the Pythagorean theorem
    double hypotenuse = sqrt(side1 * side1 + side2 * side2);
    
    cout << "The length of the hypotenuse is: " << hypotenuse << endl;
    
    return 0;
}

This code calculates the length of the hypotenuse using the Pythagorean theorem, which states that in a right triangle, the square of the length of the hypotenuse (the side opposite the right angle) is equal to the sum of the squares of the lengths of the other two sides. In this case, we're calculating the hypotenuse for a triangle with sides of 3 and 4 units long. I hope that helps! Let me know if you have any questions or need further assistance.

