diff --git a/README.md b/README.md
index 6939439..87de854 100644
--- a/README.md
+++ b/README.md
@@ -54,6 +54,7 @@ We have open-sourced Qwen2-VL models, including Qwen2-VL-2B and Qwen2-VL-7B unde
| Benchmark | Previous SoTA (Open-source LVLM) | Claude-3.5 Sonnet | GPT-4o | **Qwen2-VL-72B** ([🤗](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct) [🤖](https://modelscope.cn/models/qwen/Qwen2-VL-72B-Instruct)) | **Qwen2-VL-7B** ([🤗](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) [🤖](https://modelscope.cn/models/qwen/Qwen2-VL-7B-Instruct)) | **Qwen2-VL-2B** ([🤗](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) [🤖](https://modelscope.cn/models/qwen/Qwen2-VL-2B-Instruct)) |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| MMMU<sub>val</sub> | 58.3 | 68.3 | **69.1** | 64.5 | 54.1 | 41.1 |
+| MMMU-Pro | 46.9 | 51.5 | **51.9** | 46.2 | 43.5 | 37.6 |
| DocVQA<sub>test</sub> | 94.1 | 95.2 | 92.8 | **96.5** | 94.5 | 90.1 |
| InfoVQA<sub>test</sub> | 82.0 | - | - | **84.5** | 76.5 | 65.5 |
| ChartQA<sub>test</sub> | 88.4 | **90.8** | 85.7 | 88.3 | 83.0 | 73.5 |
@@ -74,7 +75,6 @@ We have open-sourced Qwen2-VL models, including Qwen2-VL-2B and Qwen2-VL-7B unde
| MathVista<sub>testmini</sub> | 67.5 | 67.7 | 63.8 | **70.5** | 58.2 | 43.0 |
| MathVision | 16.97 | - | **30.4** | 25.9 | 16.3 | 12.4 |
-
### Video Benchmarks
| Benchmark | Previous SoTA (Open-source LVLM) | Gemini 1.5-Pro | GPT-4o | **Qwen2-VL-72B** ([🤗](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct) [🤖](https://modelscope.cn/models/qwen/Qwen2-VL-72B-Instruct)) | **Qwen2-VL-7B** ([🤗](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) [🤖](https://modelscope.cn/models/qwen/Qwen2-VL-7B-Instruct)) | **Qwen2-VL-2B** ([🤗](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) [🤖](https://modelscope.cn/models/qwen/Qwen2-VL-2B-Instruct)) |