Allow images in the response #2045
Replies: 3 comments 3 replies
-
Currently we use
-
The answer is there, but I'm going to leave this open because I think the general chains in Flowise should be able to return images as well, which I don't think they can do today, so it's still an open idea.
-
Just wanted to add my support for this, as I would also like to achieve it. I run websites designed to help people find answers to questions on various topics. My web pages contain text and images that help explain those topics, so rather than making a user read and search through the website, I have built a RAG chatbot that is great at answering questions and citing the most relevant pages on my site.

However, I don't know how to get the chatbot to include the images that sit next to the text the LLM used for its answer. I am thinking I need to give the LLM a description of each image and the chunks of text it relates to, but how do I add that information to the vector knowledge base, and then get the chatbot to include the image in its answers? I am not asking it to interpret images or generate images, only to return the relevant existing images from my website as part of its answers. People then wouldn't need to use the website at all; the chatbot would give them the information they want in text and picture form, and the website would essentially become a structured knowledge store used for grounding the chatbot.
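To make the idea concrete, here is a rough sketch of the shape I have in mind. It is not Flowise code, and every name in it (the `Chunk` type, `retrieve`, `buildPrompt`, the example URLs) is made up; the keyword lookup is just a stand-in for a real vector store. The point is only that each chunk carries its image URL and description as metadata, and the prompt tells the model to echo that markdown image tag back when it uses the chunk.

```ts
// Sketch only: chunks carry image metadata alongside the text, and the prompt
// instructs the model to embed the image as markdown when it uses that chunk.

interface Chunk {
  text: string;               // the chunk that gets embedded / searched
  pageUrl: string;            // citation target
  imageUrl?: string;          // image that illustrates this chunk, if any
  imageDescription?: string;  // short alt text for the image
}

const knowledgeBase: Chunk[] = [
  {
    text: "A solar panel converts sunlight into electricity using photovoltaic cells...",
    pageUrl: "https://example.com/solar-basics",
    imageUrl: "https://example.com/images/pv-cell-diagram.png",
    imageDescription: "Diagram of a photovoltaic cell",
  },
];

// Stand-in for a vector-store lookup (keyword overlap instead of embeddings).
function retrieve(question: string, k = 3): Chunk[] {
  const words = question.toLowerCase().split(/\W+/);
  return knowledgeBase
    .map(chunk => ({
      chunk,
      score: words.filter(w => w && chunk.text.toLowerCase().includes(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(({ chunk }) => chunk);
}

// Each chunk's image URL travels next to its text in the context, and the
// instructions tell the model to repeat the markdown image tag in its answer.
function buildPrompt(question: string): string {
  const context = retrieve(question)
    .map(c =>
      [
        `TEXT: ${c.text}`,
        `SOURCE: ${c.pageUrl}`,
        c.imageUrl ? `IMAGE: ![${c.imageDescription ?? "image"}](${c.imageUrl})` : "IMAGE: none",
      ].join("\n"),
    )
    .join("\n---\n");

  return [
    "Answer using only the context below. Cite the SOURCE urls you used.",
    "If a context block you used has an IMAGE line, include that exact markdown image tag in your answer.",
    "",
    context,
    "",
    `Question: ${question}`,
  ].join("\n");
}

console.log(buildPrompt("How does a solar panel work?"));
```

A chat UI that renders markdown would then show the image inline under the answer, which is all I'm really after.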
-
Currently the response of all LLMs is text-only. It would be good if there were a way to include images in the response when the URL is known; the image could then simply be embedded in the response.
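Even without multimodal output, this could be approximated by post-processing the text answer. A minimal sketch, assuming the model mentions image URLs in plain text and the chat widget renders markdown (the regex and extension list are my assumptions, not anything Flowise does today):

```ts
// Sketch: turn any bare image URL in a text-only answer into a markdown image
// tag so a markdown-rendering chat UI displays it inline.

const IMAGE_URL = /https?:\/\/\S+\.(?:png|jpe?g|gif|webp|svg)/gi;

function embedImages(answer: string): string {
  return answer.replace(IMAGE_URL, url => `![image](${url})`);
}

// A plain-text answer that merely mentions an image URL...
const raw = "See the wiring diagram at https://example.com/img/wiring.png for details.";
// ...becomes markdown the chat widget can render as an inline image.
console.log(embedImages(raw));
```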