community: update documentation and model IDs for FriendliAI provider #28984
Conversation
Commits:
- Remove unused models, fix streaming, update docs
- Remove `llama-2-13b-chat`, `mixtral-8x7b-instruct-v0-1`; fix llm friendli streaming; update docs examples
- Delete duplicate documents
- Remove `llama-2-13b-chat`, `mixtral-8x7b-instruct-v0-1`; fix llm friendli streaming; update docs examples
- Add example
- fix: update model version to meta-llama-3.1-8b-instruct in Friendli integration
- fix: simplify ValueError messages in Friendli model parameters
- refactor: streamline formatting in `_stream_response_to_generation_chunk`
- test: Refactor async stream test for Friendli to use mock choices
- test: Update Friendli async stream test to use correct mock choices
- test: Update Friendli stream tests to use choices for assertions
- test: Update Friendli async stream test to use AsyncMock for better async handling
- test: Update Friendli stream tests to use mock choices for consistency
- test: Update Friendli stream tests to assert chunk values directly for clarity
- test: Refactor Friendli stream response handling and update async test mocks
- refactor: Change type hint for `stream_response` parameter to `Any` for flexibility
- docs: Update Friendli notebook to correct execution counts and links
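Several of these commits converge on one test pattern: build mock stream responses whose `choices` carry the token text and assert the chunk values directly. A minimal sketch of that pattern, assuming the module-level helper `_stream_response_to_generation_chunk` reads `choices[0].text` and returns an empty chunk when `choices` is empty (both assumptions, inferred from the review diff further down):

```python
from types import SimpleNamespace

from langchain_community.llms.friendli import _stream_response_to_generation_chunk


def test_stream_response_to_generation_chunk() -> None:
    # Mock a Friendli stream response: the token text lives on choices[0].
    response = SimpleNamespace(choices=[SimpleNamespace(text="Hello")])
    chunk = _stream_response_to_generation_chunk(response)
    # Assert the chunk value directly, as the updated tests do.
    assert chunk.text == "Hello"

    # Assumed behavior: an empty choices list yields an empty chunk.
    empty = SimpleNamespace(choices=[])
    assert _stream_response_to_generation_chunk(empty).text == ""
```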
Review comment (a suggested change to delete the commented-out `generation_info` block):

```python
# generation_info=dict(
#     finish_reason=stream_response.choices[0].get("finish_reason", None),
#     logprobs=stream_response.choices[0].get("logprobs", None),
# ),
```
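For context, a minimal sketch of what the helper might look like once the dead block above is removed (the signature follows the refactor commits; the body is an assumption, not the merged code):

```python
from typing import Any

from langchain_core.outputs import GenerationChunk


def _stream_response_to_generation_chunk(stream_response: Any) -> GenerationChunk:
    """Convert a Friendli stream response into a GenerationChunk.

    ``stream_response`` is typed as ``Any`` per the refactor commit, since
    the sync and async clients yield response objects of different classes
    with the same shape.
    """
    if stream_response.choices:
        return GenerationChunk(text=stream_response.choices[0].text or "")
    return GenerationChunk(text="")
```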
Review comment on the docs notebook cell (the one-line `source` array was reflowed):

```json
"source": [
    "llm.generate([\"Tell me a joke.\", \"Tell me a joke.\"])"
]
```

Don't think we need to show `.generate`.
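If `.generate` is dropped, a possible replacement cell for the notebook (the model ID comes from the commits above; `invoke`/`batch` are the standard Runnable methods):

```python
from langchain_community.llms.friendli import Friendli

# Requires the FRIENDLI_TOKEN environment variable to be set.
llm = Friendli(model="meta-llama-3.1-8b-instruct")

# Single prompt via the standard Runnable interface.
print(llm.invoke("Tell me a joke."))

# Multiple prompts, without exposing the lower-level .generate API.
print(llm.batch(["Tell me a joke.", "Tell me a joke."]))
```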
Review comment (a suggested change fixing a typo in the docs):

```diff
  Sign in to [Friendli Suite](https://suite.friendli.ai/) to create a Personal Access Token,
- and set it as the `FRIENDLI_TOKEN` environment variabzle.
+ and set it as the `FRIENDLI_TOKEN` environment variable.
```
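In code, the documented setup step amounts to something like this (a generic sketch of the usual LangChain docs pattern, not taken from the notebook):

```python
import getpass
import os

# Paste the Personal Access Token created in Friendli Suite.
if "FRIENDLI_TOKEN" not in os.environ:
    os.environ["FRIENDLI_TOKEN"] = getpass.getpass("Friendli Personal Access Token: ")
```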
Would you be interested in publishing an OSS integration package (e.g., following https://python.langchain.com/docs/contributing/how_to/integrations/)? We are encouraging contributors of LangChain integrations to go this route. This way we don't have to be in the loop for reviews, you're able to properly integration-test the model, and you have control over versioning. Docs would continue to be maintained in the LangChain repo. Let me know what you think!
Description
- In the example, remove `llama-2-13b-chat` and `mixtral-8x7b-instruct-v0-1`.
- Fix the Friendli LLM streaming implementation.
- Update examples in the documentation and remove duplicates.

Issue
N/A
Dependencies
None
Twitter handle
@friendliai
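Since the PR's code change targets LLM streaming, a quick usage sketch of the fixed path (assuming the `Friendli` LLM class and the model ID from the commits):

```python
from langchain_community.llms.friendli import Friendli

llm = Friendli(model="meta-llama-3.1-8b-instruct")

# Stream tokens as they arrive; each piece comes from
# _stream_response_to_generation_chunk under the hood.
for chunk in llm.stream("Tell me a joke."):
    print(chunk, end="", flush=True)
```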