Commit ef2b191

update readme
1 parent 96b9bca commit ef2b191

2 files changed, +5 −3 lines


README.md (+4 −3)
@@ -25,6 +25,7 @@
 
 ## News <!-- omit in toc -->
 
+<!-- * [2024.05.22] We further improved the inference efficiency on edge-side devices, providing a speed of 6-8 tokens/s, try it now! -->
 * [2024.05.20] We open-source MiniCPM-Llama3-V 2.5, which has improved OCR capability and supports 30+ languages, representing the first edge-side multimodal LLM achieving GPT-4V level performance! We provide [efficient inference](#deployment-on-mobile-phone) and [simple fine-tuning](./finetune/readme.md), try it now!
 * [2024.04.23] MiniCPM-V 2.0 supports vLLM now! Click [here](#vllm) to view more details.
 * [2024.04.18] We created a HuggingFace Space to host the demo of MiniCPM-V 2.0 [here](https://huggingface.co/spaces/openbmb/MiniCPM-V-2)!
@@ -462,7 +463,7 @@ pip install -r requirements.txt
 | MiniCPM-V 1.0 | Lightest version, achieving the fastest inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) &nbsp;&nbsp; [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
 
 ### Multi-turn Conversation
-Please refer to the following codes to run `MiniCPM-V` and `OmniLMM`.
+Please refer to the following code to run `MiniCPM-V`.
 
 <div align="center">
 <img src="assets/airplane.jpeg" width="500px">
</div>
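The multi-turn conversation flow that this hunk points to can be sketched as below. The `AutoModel.from_pretrained(..., trust_remote_code=True)` loading pattern is the standard HuggingFace one, but the exact `model.chat(...)` signature and its `(answer, context, _)` return value are assumptions inferred from this README's context rather than a verified API, so check the model card before relying on them. The `append_turn` helper is a hypothetical name introduced here for illustration.

```python
def append_turn(msgs, role, content):
    """Append one conversation turn to the running message history."""
    msgs.append({'role': role, 'content': content})
    return msgs


def run_chat():
    # Heavy imports and the model download are kept inside this function
    # so the history helper above stays usable without the weights.
    import torch
    from PIL import Image
    from transformers import AutoModel, AutoTokenizer

    model = AutoModel.from_pretrained(
        'openbmb/MiniCPM-V', trust_remote_code=True,
        torch_dtype=torch.bfloat16).eval()
    tokenizer = AutoTokenizer.from_pretrained(
        'openbmb/MiniCPM-V', trust_remote_code=True)

    image = Image.open('assets/airplane.jpeg').convert('RGB')

    # First turn: a single user message about the image.
    msgs = append_turn([], 'user', 'Tell me the model of this aircraft.')
    # Assumed chat interface; see the model card for the real signature.
    answer, context, _ = model.chat(
        image=image, msgs=msgs, context=None,
        tokenizer=tokenizer, sampling=True)

    # Second turn: carry the assistant reply forward in the history so the
    # model sees the full conversation, then ask a follow-up question.
    msgs = append_turn(msgs, 'assistant', answer)
    msgs = append_turn(msgs, 'user', 'Introduce something about Airbus A380.')
    answer, context, _ = model.chat(
        image=image, msgs=msgs, context=context,
        tokenizer=tokenizer, sampling=True)
    return answer
```

The key point of the sketch is that multi-turn chat is just the same call repeated with an accumulated `msgs` list: each assistant reply is appended to the history before the next user turn.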
@@ -618,9 +619,9 @@ Please contact cpm@modelbest.cn to obtain written authorization for commercial u
 
 ## Statement <!-- omit in toc -->
 
-As LMMs, OmniLMMs generate contents by learning a large amount of multimodal corpora, but they cannot comprehend, express personal opinions or make value judgement. Anything generated by OmniLMMs does not represent the views and positions of the model developers
+As LMMs, MiniCPM-V models (including OmniLMM) generate content by learning from a large amount of multimodal corpora, but they cannot comprehend, express personal opinions, or make value judgements. Anything generated by MiniCPM-V models does not represent the views and positions of the model developers.
 
-We will not be liable for any problems arising from the use of OmniLMM open source models, including but not limited to data security issues, risk of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or misuse of the model.
+We will not be liable for any problems arising from the use of MiniCPM-V models, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, or dissemination of the model.
 
 
 ## Institutions <!-- omit in toc -->

README_zh.md (+1)
@@ -28,6 +28,7 @@
 
 ## Update Log <!-- omit in toc -->
 
+<!-- * [2024.05.22] We further improved edge-side inference speed, achieving a smooth 6-8 tokens/s experience, try it now! -->
 * [2024.05.20] We open-sourced MiniCPM-Llama3-V 2.5, with enhanced OCR capability, support for 30+ languages, and GPT-4V level multimodal capability on edge devices for the first time! We provide [efficient inference](#手机端部署) and [simple fine-tuning](./finetune/readme.md) support, try it now!
 * [2024.04.23] We added support for [vLLM](#vllm), try it now!
 * [2024.04.18] We added a [demo](https://huggingface.co/spaces/openbmb/MiniCPM-V-2) of MiniCPM-V 2.0 on HuggingFace Space, try it now!
