
chore(deps): Bump the pip group across 4 directories with 3 updates #8

Merged

Conversation

dependabot[bot]

@dependabot dependabot bot commented on behalf of github Aug 1, 2024

Bumps the pip group with 1 update in the /examples/chainlit directory: llama-index.
Bumps the pip group with 1 update in the /examples/functions directory: langchain.
Bumps the pip group with 2 updates in the /examples/langchain-chroma directory: llama-index and langchain.
Bumps the pip group with 1 update in the /examples/langchain/langchainpy-localai-example directory: aiohttp.
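Applied by hand, these bumps amount to updating the version pins in each example's requirements file. An illustrative fragment (file names assumed to follow the usual `requirements.txt` convention):

```text
# examples/langchain-chroma/requirements.txt  (2 updates in this directory)
llama-index==0.10.58   # was 0.10.56
langchain==0.2.11      # was 0.2.10

# examples/langchain/langchainpy-localai-example/requirements.txt
aiohttp==3.10.0        # was 3.9.5
```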

Updates llama-index from 0.10.56 to 0.10.58

Release notes

Sourced from llama-index's releases.

v0.10.58

No release notes provided.

2024-07-22 (v0.10.57)

llama-index-core [v0.10.57]

  • Add an optional parameter similarity_score to VectorContextRetrieve… (#14831)
  • add property extraction (using property names and optional descriptions) for KGs (#14707)
  • able to attach output classes to LLMs (#14747)
  • Add streaming for tool calling / structured extraction (#14759)
  • fix from removing private variables when copying/pickling (#14860)
  • Fix empty array being sent to vector store in ingestion pipeline (#14859)
  • optimize ingestion pipeline deduping (#14858)
  • Add an optional parameter similarity_score to VectorContextRetriever (#14831)

llama-index-llms-azure-openai [0.1.10]

  • Bugfix: AzureOpenAI may fail with custom azure_ad_token_provider (#14869)

llama-index-llms-bedrock-converse [0.1.5]

  • feat: ✨ Implement async functionality in BedrockConverse (#14326)

llama-index-llms-langchain [0.3.0]

  • make some dependencies optional
  • bump langchain version in integration (#14879)

llama-index-llms-ollama [0.1.6]

  • Bugfix: ollama streaming response (#14830)

llama-index-multi-modal-llms-anthropic [0.1.5]

llama-index-readers-notion [0.1.10]

  • update notion reader to handle duplicate pages, database+page ids (#14861)

llama-index-vector-stores-milvus [0.1.21]

  • Implements delete_nodes() and clear() for Weaviate, Opensearch, Milvus, Postgres, and Pinecone Vector Stores (#14800)

llama-index-vector-stores-mongodb [0.1.8]

  • MongoDB Atlas Vector Search: Enhanced Metadata Filtering (#14856)

llama-index-vector-stores-opensearch [0.1.13]

... (truncated)

Changelog

Sourced from llama-index's changelog.

llama-index-core [0.10.58]

  • Fix: Token counter expecting response.raw as dict, got ChatCompletionChunk (#14937)
  • Return proper tool outputs per agent step instead of all (#14885)
  • Minor bug fixes to async structured streaming (#14925)

llama-index-llms-fireworks [0.1.6]

  • fireworks ai llama3.1 support (#14914)

llama-index-multi-modal-llms-anthropic [0.1.6]

  • Add claude 3.5 sonnet to multi modal llms (#14932)

llama-index-retrievers-bm25 [0.2.1]

  • 🐞 fix(integrations): BM25Retriever persist missing arg similarity_top_k (#14933)

llama-index-retrievers-vertexai-search [0.1.0]

  • Llamaindex retriever for Vertex AI Search (#14913)

llama-index-vector-stores-deeplake [0.1.5]

  • Improved deeplake.get_nodes() performance (#14920)

llama-index-vector-stores-elasticsearch [0.2.3]

  • Bugfix: Don't pass empty list of embeddings to elasticsearch store when using sparse strategy (#14918)

llama-index-vector-stores-lindorm [0.1.0]

  • Add vector store integration of lindorm (#14623)

llama-index-vector-stores-qdrant [0.2.14]

  • feat: allow to limit how many elements retrieve (qdrant) (#14904)

[2024-07-22]

llama-index-core [0.10.57]

  • Add an optional parameter similarity_score to VectorContextRetrieve… (#14831)
  • add property extraction (using property names and optional descriptions) for KGs (#14707)
  • able to attach output classes to LLMs (#14747)
  • Add streaming for tool calling / structured extraction (#14759)
  • fix from removing private variables when copying/pickling (#14860)
  • Fix empty array being send to vector store in ingestion pipeline (#14859)
  • optimize ingestion pipeline deduping (#14858)
  • Add an optional parameter similarity_score to VectorContextRetriever (#14831)

... (truncated)

Commits
  • d94e0b0 v0.10.58 (#14944)
  • f2a64e5 structured extraction docs + bug fixes (#14925)
  • 7e1b77f 🐞 fix(integrations): BM25Retriever persist missing arg similarity_top_k (#14933)
  • 3be95a2 Fix: Token counter expecting response.raw as dict, got ChatCompletionChunk (...
  • 3a35a6e Llamaindex retriever for Vertex AI Search (#14913)
  • 94e137e Add vector store integration of lindorm, including knn search, … (#14623)
  • 1d7f303 Add claude 3.5 sonnet to multi modal llms (#14932)
  • 479d72a feat: allow to limit how many elements retrieve (qdrant) (#14904)
  • 34dec27 Bugfix: Don't pass empty list of embeddings to elasticsearch store when using...
  • 3eea9f9 Improved deeplake.get_nodes() performance (#14920)
  • Additional commits viewable in compare view

Updates langchain from 0.2.10 to 0.2.11

Commits

Updates aiohttp from 3.9.5 to 3.10.0

Release notes

Sourced from aiohttp's releases.

3.10.0

Bug fixes

  • Fixed server response headers for Content-Type and Content-Encoding for static compressed files -- by @steverep.

    Server will now respond with a Content-Type appropriate for the compressed file (e.g. "application/gzip"), and omit the Content-Encoding header. Users should expect that most clients will no longer decompress such responses by default.

    Related issues and pull requests on GitHub: #4462.

  • Fixed duplicate cookie expiration calls in the CookieJar implementation

    Related issues and pull requests on GitHub: #7784.

  • Adjusted FileResponse to check file existence and access when preparing the response -- by @steverep.

    The aiohttp.web.FileResponse class was modified to respond with 403 Forbidden or 404 Not Found as appropriate. Previously, it would cause a server error if the path did not exist or could not be accessed. Checks for existence, non-regular files, and permissions were expected to be done in the route handler. For static routes, this now permits a compressed file to exist without its uncompressed variant and still be served. In addition, this changes the response status for files without read permission to 403, and for non-regular files from 404 to 403 for consistency.

    Related issues and pull requests on GitHub: #8182.

  • Fixed AsyncResolver to match ThreadedResolver behavior -- by @bdraco.

    On systems with IPv6 support, the aiohttp.resolver.AsyncResolver would not fall back to providing A records when AAAA records were not available. Additionally, unlike the aiohttp.resolver.ThreadedResolver, the aiohttp.resolver.AsyncResolver did not handle link-local addresses correctly.

... (truncated)
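The FileResponse bullet above describes a mapping from filesystem state to HTTP status. A minimal stdlib sketch of that decision logic, assuming POSIX-style permissions; `static_file_status` is a hypothetical helper for illustration, not aiohttp's API:

```python
import os
import stat

def static_file_status(path: str) -> int:
    """Sketch of the status mapping described for aiohttp 3.10's
    FileResponse: 404 when the path does not exist, 403 for files
    without read permission and for non-regular files (previously a
    server error or 404), 200 when the file can be served."""
    try:
        st = os.stat(path)
    except OSError:
        return 404            # missing path -> Not Found
    if not stat.S_ISREG(st.st_mode):
        return 403            # directory, socket, ... -> Forbidden
    if not os.access(path, os.R_OK):
        return 403            # unreadable file -> Forbidden
    return 200                # serve the file
```

The real implementation performs these checks while preparing the response rather than in the route handler, which is what lets a compressed file be served without its uncompressed variant.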

Changelog

Sourced from aiohttp's changelog.

3.10.0 (2024-07-30)

Bug fixes

  • Fixed server response headers for Content-Type and Content-Encoding for static compressed files -- by @steverep.

    Server will now respond with a Content-Type appropriate for the compressed file (e.g. "application/gzip"), and omit the Content-Encoding header. Users should expect that most clients will no longer decompress such responses by default.

    Related issues and pull requests on GitHub: #4462.

  • Fixed duplicate cookie expiration calls in the CookieJar implementation

    Related issues and pull requests on GitHub: #7784.

  • Adjusted FileResponse to check file existence and access when preparing the response -- by @steverep.

    The aiohttp.web.FileResponse class was modified to respond with 403 Forbidden or 404 Not Found as appropriate. Previously, it would cause a server error if the path did not exist or could not be accessed. Checks for existence, non-regular files, and permissions were expected to be done in the route handler. For static routes, this now permits a compressed file to exist without its uncompressed variant and still be served. In addition, this changes the response status for files without read permission to 403, and for non-regular files from 404 to 403 for consistency.

    Related issues and pull requests on GitHub: #8182.

  • Fixed AsyncResolver to match ThreadedResolver behavior -- by @bdraco.

    On systems with IPv6 support, the aiohttp.resolver.AsyncResolver would not fall back to providing A records when AAAA records were not available.

... (truncated)

Commits

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore <dependency name> major version will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • @dependabot ignore <dependency name> minor version will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • @dependabot ignore <dependency name> will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • @dependabot unignore <dependency name> will remove all of the ignore conditions of the specified dependency
  • @dependabot unignore <dependency name> <ignore condition> will remove the ignore condition of the specified dependency and ignore conditions
    You can disable automated security fix PRs for this repo from the Security Alerts page.
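PRs titled "Bump the pip group" come from dependency grouping in `.github/dependabot.yml`. A hypothetical stanza that would produce a grouped pip PR for one of the example directories (the group name, schedule, and patterns are illustrative, not taken from this repository):

```yaml
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/examples/chainlit"
    schedule:
      interval: "weekly"
    groups:
      pip:                  # group name surfaced in the PR title
        patterns:
          - "*"             # bundle every pip dependency into one PR
```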

Bumps the pip group with 1 update in the /examples/chainlit directory: [llama-index](https://github.com/run-llama/llama_index).
Bumps the pip group with 1 update in the /examples/functions directory: [langchain](https://github.com/langchain-ai/langchain).
Bumps the pip group with 2 updates in the /examples/langchain-chroma directory: [llama-index](https://github.com/run-llama/llama_index) and [langchain](https://github.com/langchain-ai/langchain).
Bumps the pip group with 1 update in the /examples/langchain/langchainpy-localai-example directory: [aiohttp](https://github.com/aio-libs/aiohttp).


Updates `llama-index` from 0.10.56 to 0.10.58
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](run-llama/llama_index@v0.10.56...v0.10.58)

Updates `langchain` from 0.2.10 to 0.2.11
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](langchain-ai/langchain@langchain==0.2.10...langchain==0.2.11)

Updates `llama-index` from 0.10.56 to 0.10.58
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](run-llama/llama_index@v0.10.56...v0.10.58)

Updates `langchain` from 0.2.10 to 0.2.11
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](langchain-ai/langchain@langchain==0.2.10...langchain==0.2.11)

Updates `aiohttp` from 3.9.5 to 3.10.0
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](aio-libs/aiohttp@v3.9.5...v3.10.0)

---
updated-dependencies:
- dependency-name: llama-index
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: langchain
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: llama-index
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: langchain
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: aiohttp
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added the dependencies and python labels Aug 1, 2024

squash-labs bot commented Aug 1, 2024

Manage this branch in Squash

Test this branch here: https://dependabotpipexampleschainlitp-ue0xh.squash.io

@github-actions github-actions bot merged commit c15cf5c into master Aug 1, 2024
23 of 25 checks passed
@dependabot dependabot bot deleted the dependabot/pip/examples/chainlit/pip-8b87fe4081 branch August 1, 2024 04:50