
Commit 6e63eef

Improve language and use collapsable toggle to avoid displaying long code snippets in readme
haotian-liu authored Nov 4, 2023
1 parent 4944a7d commit 6e63eef
Showing 1 changed file with 12 additions and 9 deletions.
README.md (21 changes: 12 additions & 9 deletions)

@@ -35,7 +35,6 @@

## Contents
- [Install](#install)
-- [Quick Start With HuggingFace](#quick-start-with-huggingface)
- [LLaVA Weights](#llava-weights)
- [Demo](#Demo)
- [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)
@@ -73,15 +72,16 @@
pip uninstall transformers
pip install -e .
```

-## Quick Start With HuggingFace
-You can make a quick start with our model on HugginFace.
-Look the detail in "llava/model/builder.py" and "load_pretrained_model" function.
-Here is a example code:
+### Quick Start With HuggingFace
+
+<details>
+<summary>Example Code</summary>
+
```Python
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "liuhaotian/LLaVA-Lightning-MPT-7B-preview"
model_path = "liuhaotian/llava-v1.5-7b"
model_name = get_model_name_from_path(model_path)
model_base = None

@@ -92,12 +92,14 @@
tokenizer, model, image_processor, context_len = load_pretrained_model(
)
```
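For readers who prefer not to expand the collapsed rows above, here is a rough sketch of how the loader call hidden by this hunk is typically written; the keyword names are assumptions based on `load_pretrained_model`'s parameters, not text from this commit.

```python
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "liuhaotian/llava-v1.5-7b"

# Keyword names below are assumptions based on the loader's parameters; the
# exact call sits in the rows collapsed out of this diff.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```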

-You can also use our "eval_model" function in "llava > eval > run_llava.py" to get the output easily. By doing so, you can use this code on Colab directly after downloading this repository.
+Check out the details with the `load_pretrained_model` function in `llava/model/builder.py`.
+
+You can also use the `eval_model` function in `llava/eval/run_llava.py` to get the output easily. By doing so, you can use this code on Colab directly after downloading this repository.

``` python
# import the file

model_path = "liuhaotian/LLaVA-Lightning-MPT-7B-preview"
model_path = "liuhaotian/llava-v1.5-7b"
model_name = get_model_name_from_path(model_path)
model_base = None
prompt = "Give me a short description of this image."
@@ -114,7 +116,8 @@
args = type('Args', (), {

output = eval_model(args)
print(output)
-```
+```
+</details>
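The body of the `args` namespace is hidden in the rows collapsed above. As a rough, self-contained illustration only, the sketch below shows how `eval_model` is typically driven; the import path, the field names, and the example image URL are assumptions modeled on `run_llava.py`'s command-line arguments, not text from this commit.

```python
# Import path is an assumption; the README only says the function lives in
# llava/eval/run_llava.py.
from llava.eval.run_llava import eval_model
from llava.mm_utils import get_model_name_from_path

model_path = "liuhaotian/llava-v1.5-7b"
prompt = "Give me a short description of this image."
image_file = "https://llava-vl.github.io/static/images/view.jpg"  # example image, not from this diff

# Field names below mirror run_llava.py's CLI arguments; treat them as
# assumptions rather than the exact dict elided from this diff.
args = type('Args', (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": None,
    "image_file": image_file,
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

output = eval_model(args)
print(output)
```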

## LLaVA Weights
Please check out our [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md) for all public LLaVA checkpoints, and the instructions of how to use the weights.
