Conversation

@minpeter (Contributor) commented Jan 2, 2025

Description

  • In the example, remove `llama-2-13b-chat` and `mixtral-8x7b-instruct-v0-1`.
  • Fix the Friendli LLM streaming implementation.
  • Update examples in the documentation and remove duplicates.

Issue

N/A

Dependencies

None

Twitter handle

@friendliai

Commits

- Remove unused models, fix streaming, update docs (remove llama-2-13b-chat, mixtral-8x7b-instruct-v0-1; fix llm friendli streaming; update docs examples)
- Delete duplicate documents
- Add example
- fix: update model version to meta-llama-3.1-8b-instruct in Friendli integration
- fix: simplify ValueError messages in Friendli model parameters
- refactor: streamline formatting in _stream_response_to_generation_chunk
- test: Refactor async stream test for Friendli to use mock choices
- test: Update Friendli async stream test to use correct mock choices
- test: Update Friendli stream tests to use choices for assertions
- test: Update Friendli async stream test to use AsyncMock for better async handling
- test: Update Friendli stream tests to use mock choices for consistency
- test: Update Friendli stream tests to assert chunk values directly for clarity
- test: Refactor Friendli stream response handling and update async test mocks
- refactor: Change type hint for stream_response parameter to Any for flexibility
- docs: Update Friendli notebook to correct execution counts and links
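The test commits above revolve around mocking streamed `choices`. A minimal sketch of that mocking pattern, using `unittest.mock.MagicMock` and a hand-rolled async generator (the names and the three-part stream here are illustrative, not the repo's actual test code):

```python
import asyncio
from unittest.mock import MagicMock

def make_mock_chunk(text: str) -> MagicMock:
    # Build a mock streaming chunk shaped like an OpenAI-compatible event:
    # chunk.choices[0].delta.content holds the incremental text.
    chunk = MagicMock()
    chunk.choices = [MagicMock()]
    chunk.choices[0].delta.content = text
    return chunk

async def fake_astream():
    # Stands in for the Friendli client's async streaming response.
    for text in ["Hello", ", ", "world"]:
        yield make_mock_chunk(text)

async def collect_texts() -> list:
    # Consume the async stream the way a streaming test would.
    return [c.choices[0].delta.content async for c in fake_astream()]

parts = asyncio.run(collect_texts())
```

Asserting on the mock `choices` directly, as the commits describe, keeps the test independent of the real Friendli client.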
@vercel bot commented Jan 2, 2025

langchain — ✅ Ready — Updated (UTC): Jan 2, 2025 6:50am
@dosubot dosubot bot added the size:L label Jan 2, 2025
Comment on lines +27 to +30
# generation_info=dict(
# finish_reason=stream_response.choices[0].get("finish_reason", None),
# logprobs=stream_response.choices[0].get("logprobs", None),
# ),
Collaborator
Suggested change (remove these commented-out lines):

# generation_info=dict(
#     finish_reason=stream_response.choices[0].get("finish_reason", None),
#     logprobs=stream_response.choices[0].get("logprobs", None),
# ),

    "source": [
        "llm.generate([\"Tell me a joke.\", \"Tell me a joke.\"])"
    ],
Collaborator
Don't think we need to show .generate
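To illustrate the distinction the reviewer is pointing at, a toy stand-in (hypothetical class, not the Friendli integration) contrasting the single-prompt `invoke` with the batch-style `generate`:

```python
class FakeLLM:
    # Toy stand-in for a LangChain LLM, purely for illustration.
    def invoke(self, prompt: str) -> str:
        # Single prompt in, single completion out.
        return f"echo: {prompt}"

    def generate(self, prompts: list) -> list:
        # Batch API: a list of prompts, a list of completions.
        return [self.invoke(p) for p in prompts]

llm = FakeLLM()
single = llm.invoke("Tell me a joke.")
batch = llm.generate(["Tell me a joke.", "Tell me a joke."])
```

For a docs notebook, the single-prompt form is usually enough, which is the reviewer's point.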


Sign in to [Friendli Suite](https://suite.friendli.ai/) to create a Personal Access Token,
and set it as the `FRIENDLI_TOKEN` environment variable.
and set it as the `FRIENDLI_TOKEN` environment variabzle.
Collaborator
Suggested change
and set it as the `FRIENDLI_TOKEN` environment variabzle.
and set it as the `FRIENDLI_TOKEN` environment variable.
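Once the token is created in Friendli Suite, the setup step the docs describe can be done in a shell (placeholder value shown, not a real token):

```shell
# Set the Personal Access Token as an environment variable.
export FRIENDLI_TOKEN="<your-personal-access-token>"
```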

@dosubot dosubot bot added the lgtm label Jan 2, 2025
@ccurme (Collaborator) commented Jan 2, 2025

Would you be interested in publishing an OSS integration package (e.g., langchain-friendli)? We've written a walkthrough on this process here:

https://python.langchain.com/docs/contributing/how_to/integrations/

We are encouraging contributors of LangChain integrations to go this route. This way we don't have to be in the loop for reviews, you can properly integration-test the model, and you retain control over versioning.

Docs would continue to be maintained in the langchain repo.

Let me know what you think!

@ccurme ccurme merged commit a873e0f into langchain-ai:master Jan 2, 2025
21 checks passed
pprados pushed a commit to pprados/langchain that referenced this pull request Jan 3, 2025
…langchain-ai#28984)
