
MultiModal.HuggingFaceMultiModal: fix errors and README, add stream_complete #16376

Merged: 2 commits, Oct 8, 2024
Changes from 1 commit
Commit: linting
logan-markewich committed Oct 8, 2024
commit adbd47a4b0c87c03d77d5532af3243228a850ba4
# @@ -51,6 +51,7 @@
print(response.text)
```

### Streaming

```python
from llama_index.multi_modal_llms.huggingface import HuggingFaceMultiModal
from llama_index.core.schema import ImageDocument
# @@ -64,13 +65,18 @@ (intervening lines elided in this diff view)
prompt = "Describe this image in detail."

import nest_asyncio
import asyncio

nest_asyncio.apply()


async def stream_output():
    for chunk in model.stream_complete(
        prompt, image_documents=[image_document]
    ):
        print(chunk.delta, end="", flush=True)
        await asyncio.sleep(0)


asyncio.run(stream_output())
```
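Beyond printing each delta as it arrives, callers often accumulate the deltas into the full completion string. A minimal sketch of that pattern, using a hypothetical `CompletionChunk` stand-in and a fake generator instead of the real `HuggingFaceMultiModal.stream_complete` (which requires a loaded model):

```python
from dataclasses import dataclass
from typing import Iterator


@dataclass
class CompletionChunk:
    # Stand-in for the chunk objects yielded by stream_complete;
    # only the per-step `delta` field is modeled here.
    delta: str


def fake_stream(text: str) -> Iterator[CompletionChunk]:
    # Hypothetical generator emitting word-sized deltas, mimicking
    # how a streaming endpoint yields partial output.
    for word in text.split(" "):
        yield CompletionChunk(delta=word + " ")


def collect(chunks: Iterator[CompletionChunk]) -> str:
    # Accumulate the per-chunk deltas into the full completion.
    return "".join(chunk.delta for chunk in chunks)


full = collect(fake_stream("a red bicycle leaning on a wall"))
print(full.strip())  # → a red bicycle leaning on a wall
```

The same `collect` loop works unchanged over a real `stream_complete` iterator as long as its chunks expose a `delta` attribute.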
