docs/integrations/chat/google_vertex_ai_palm/ #29619
-
What format should I use to invoke the model for multimodal use cases? I used the following, but it did not work:

```python
model = primary_assistant_prompt | llm.bind_tools(tools)
message = HumanMessage(
```
-
For images: if it's an image URL, you can pass the URL in the message content. If you want to process a local image, do:

```python
message = HumanMessage(
    content=[
        {
            "type": "media",
            "mime_type": "image/png",  # or "image/jpeg"
            "data": image_as_bytes,
        },
        {
            "type": "text",
            "text": "Please summarise this document",
        },
    ],
)
```
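To make both cases concrete, here is a small self-contained sketch of the content payloads. The helper names are illustrative (not from this thread), and the `image_url` block shape is the standard LangChain content-block format, which I'm assuming applies here; pass the returned list as `HumanMessage(content=...)`.

```python
from pathlib import Path


def local_image_content(image_path: str, prompt: str) -> list:
    """Build the content list from the answer above for a local image.

    The "media" dict shape is the one quoted in the answer; reading the
    file with pathlib is how you'd obtain `image_as_bytes`.
    """
    suffix = Path(image_path).suffix.lower()
    mime_type = "image/png" if suffix == ".png" else "image/jpeg"
    image_as_bytes = Path(image_path).read_bytes()
    return [
        {"type": "media", "mime_type": mime_type, "data": image_as_bytes},
        {"type": "text", "text": prompt},
    ]


def url_image_content(url: str, prompt: str) -> list:
    """Build a content list for a remote image URL (assumed standard
    LangChain "image_url" block shape, not quoted from this thread)."""
    return [
        {"type": "image_url", "image_url": {"url": url}},
        {"type": "text", "text": prompt},
    ]
```

Either list can then be wrapped as `HumanMessage(content=local_image_content("chart.png", "Please summarise this document"))` and sent to the model.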
-
docs/integrations/chat/google_vertex_ai_palm/
This page provides a quick overview for getting started with VertexAI chat models. For detailed documentation of all ChatVertexAI features and configurations head to the API reference.
https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm/