
Update npm package llamaindex to v0.6.22 #5154

Open
wants to merge 1 commit into base: main

Conversation


hash-worker bot commented on Sep 15, 2024

This PR contains the following updates:

Package | Type | Update | Change | Pending
llamaindex (source) | dependencies | minor | 0.2.10 -> 0.6.22 | 0.7.0

Release Notes

run-llama/LlamaIndexTS (llamaindex)

v0.6.22

Compare Source

Patch Changes
  • 5729bd9: Fix LlamaCloud API calls for ensuring an index and for file uploads

v0.6.21

Compare Source

Patch Changes
  • 6f75306: feat: support metadata filters for AstraDB
  • 94cb4ad: feat: Add metadata filters to ChromaDB and update it to 1.9.2 (see the sketch below)
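
For readers unfamiliar with these metadata filters, here is a minimal sketch of how they are typically passed when querying an index. The filter key/value ("category" / "finance") and the preFilters option name are illustrative assumptions; the exact shape may differ slightly between llamaindex versions and vector stores.

```ts
import { VectorStoreIndex } from "llamaindex";

// Hedged sketch: "category" and "finance" are made-up metadata, and the exact
// option name (preFilters) and operator strings may vary by version.
async function queryWithFilters(index: VectorStoreIndex) {
  const queryEngine = index.asQueryEngine({
    preFilters: {
      filters: [{ key: "category", value: "finance", operator: "==" }],
    },
  });
  return queryEngine.query({ query: "Summarize the finance notes." });
}
```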

v0.6.20

Compare Source

Patch Changes

v0.6.19

Compare Source

Patch Changes
  • 62cba52: Add ensureIndex function to LlamaCloudIndex
  • d265e96: fix: ignore resolving unpdf for nextjs
  • d30bbf7: Convert undefined values to null in LlamaCloud filters
  • 53fd00a: Fix getPipelineId in LlamaCloudIndex

v0.6.18

Compare Source

Patch Changes

v0.6.17

Compare Source

Patch Changes

v0.6.16

Compare Source

Patch Changes

v0.6.15

Compare Source

Patch Changes

v0.6.14

Compare Source

Patch Changes

v0.6.13

Compare Source

Patch Changes

v0.6.12

Compare Source

Patch Changes
  • f7b4e94: feat: add filters for pinecone
  • 78037a6: fix: bypass service context embed model
  • 1d9e3b1: fix: export llama reader in non-nodejs runtime

v0.6.11

Compare Source

Patch Changes

v0.6.10

Compare Source

Patch Changes

v0.6.9

Compare Source

Patch Changes

v0.6.8

Compare Source

Patch Changes

v0.6.7

Compare Source

Patch Changes
  • 23bcc37: fix: add serializer in doc store

    PostgresDocumentStore no longer uses JSON.stringify, for better performance

v0.6.6

Compare Source

Patch Changes

v0.6.5

Compare Source

Patch Changes
  • e9714db: feat: update PGVectorStore

    • moves the constructor parameters config.user, config.database, config.password, and config.connectionString into config.clientConfig
    • if you pass a pg.Client or pg.Pool instance to PGVectorStore, move it to config.client and set config.shouldConnect to false if it is already connected
    • the default value of PGVectorStore.collection is now "data" instead of "" (empty string)
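
A minimal sketch of the new configuration shape, assuming the store is imported from the top-level package (in some runtimes the import path is llamaindex/vector-store, per the v0.5.25 note further down); connection details are placeholders.

```ts
import pg from "pg";
import { PGVectorStore } from "llamaindex";

// Plain connection settings now live under clientConfig (placeholder values).
const store = new PGVectorStore({
  clientConfig: { connectionString: process.env.PG_CONNECTION_STRING },
});

// Reusing an already-connected pg.Client: pass it as `client` and set
// shouldConnect to false so the store does not connect a second time.
const client = new pg.Client({ connectionString: process.env.PG_CONNECTION_STRING });
await client.connect();
const storeFromClient = new PGVectorStore({ client, shouldConnect: false });

// PGVectorStore.collection now defaults to "data" unless overridden.
```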

v0.6.4

Compare Source

Patch Changes

v0.6.3

Compare Source

Patch Changes

v0.6.2

Compare Source

Patch Changes
  • 5729bd9: Fix LlamaCloud API calls for ensuring an index and for file uploads

v0.6.1

Compare Source

Patch Changes
  • 62cba52: Add ensureIndex function to LlamaCloudIndex
  • d265e96: fix: ignore resolving unpdf for nextjs
  • d30bbf7: Convert undefined values to null in LlamaCloud filters
  • 53fd00a: Fix getPipelineId in LlamaCloudIndex

v0.6.0

Compare Source

Minor Changes
Patch Changes

v0.5.27

Compare Source

Patch Changes
  • 7edeb1c: feat: decouple openai from llamaindex module

    This should be a non-breaking change; it just means you can now install only @llamaindex/openai to reduce the bundle size in the future (see the sketch after this list)

  • Updated dependencies [7edeb1c]
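
A hedged sketch of what the decoupling enables: installing the standalone @llamaindex/openai package and wiring its classes through Settings. The model names are examples only.

```ts
import { OpenAI, OpenAIEmbedding } from "@llamaindex/openai";
import { Settings } from "llamaindex";

// Use the standalone OpenAI package for both the LLM and the embedding model.
Settings.llm = new OpenAI({ model: "gpt-4o-mini", temperature: 0 });
Settings.embedModel = new OpenAIEmbedding({ model: "text-embedding-3-small" });
```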

v0.5.26

Compare Source

Patch Changes
  • ffe0cd1: feat: add OpenAI o1 support
  • ffe0cd1: feat: add PostgreSQL storage

v0.5.25

Compare Source

Patch Changes
  • 4810364: fix: handle RouterQueryEngine with string query

  • d3bc663: refactor: export vector store only in nodejs environment on top level

    If you see missing-module errors, change vector-store-related imports to llamaindex/vector-store (see the sketch after this list)

  • Updated dependencies [4810364]
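
In practice the fix described above is usually just an import-path change, roughly as below (PGVectorStore is only an example):

```ts
// Before: top-level import, which may fail outside a Node.js runtime.
// import { PGVectorStore } from "llamaindex";

// After: import vector stores from the dedicated subpath.
import { PGVectorStore } from "llamaindex/vector-store";
```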

v0.5.24

Compare Source

Patch Changes

v0.5.23

Compare Source

Patch Changes

v0.5.22

Compare Source

Patch Changes

v0.5.21

Compare Source

Patch Changes
  • ae1149f: feat: add JSON streaming to JSONReader

  • 2411c9f: Auto-create index for MongoDB vector store (if not exists)

  • e8f229c: Remove logging from MongoDB Atlas Vector Store

  • 11b3856: implement filters for MongoDBAtlasVectorSearch

  • 83d7f41: Fix database insertion for PGVectorStore

    It now:

    • throws an error if an insertion fails,
    • upserts documents with the same id,
    • adds all documents to the database in a single INSERT call (inside a transaction).
  • 0148354: refactor: prompt system

    Adds a PromptTemplate module with strong type checking (see the sketch after this list).

  • 1711f6d: Export imageToDataUrl for using images in chat

  • Updated dependencies [0148354]
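
A hedged sketch of the new prompt module; the constructor options shown (template, templateVars) are assumptions based on this entry and may not match the released API exactly.

```ts
import { PromptTemplate } from "llamaindex";

// Assumed API: templateVars declares the variables the template expects, so
// format() calls can be type-checked against them.
const qaPrompt = new PromptTemplate({
  templateVars: ["context", "query"],
  template: `Context information is below.
---------------------
{context}
---------------------
Given the context information, answer the query: {query}`,
});

const rendered = qaPrompt.format({
  context: "PGVectorStore inserts now run as a single transactional INSERT.",
  query: "What changed for PGVectorStore?",
});
```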

v0.5.20

Compare Source

Patch Changes
  • d9d6c56: Add support for MetadataFilters for PostgreSQL
  • 22ff486: Add tiktoken WASM to withLlamaIndex
  • eed0b04: fix: use LLM metadata mode for generating context of ContextChatEngine

v0.5.19

Compare Source

Patch Changes
  • fcbf183: implement llamacloud file service

v0.5.18

Compare Source

Patch Changes

v0.5.17

Compare Source

Patch Changes
  • c654398: Implement Weaviate Vector Store in TS

v0.5.16

Compare Source

Patch Changes

v0.5.15

Compare Source

Patch Changes
  • 01c184c: Add is_empty operator for filtering vector store
  • 07a275f: chore: bump openai

v0.5.14

Compare Source

Patch Changes
  • c825a2f: Add gpt-4o-mini to Azure. Add 2024-06-01 API version for Azure

v0.5.13

Compare Source

Patch Changes

v0.5.12

Compare Source

Patch Changes

v0.5.11

Compare Source

Patch Changes

v0.5.10

Compare Source

Patch Changes

v0.5.9

Compare Source

Patch Changes
  • 15962b3: feat: node parser refactor

    Aligns the text splitter logic with Python: it now follows almost the same logic, with Zod validation of inputs, better error messages, and an event system.

    This is not considered a breaking change since the output does not differ significantly from the previous version, but some edge cases change, such as the page separator and the constructor parameters.

  • Updated dependencies [15962b3]

v0.5.8

Compare Source

Patch Changes
  • 3d5ba08: fix: update user agent in AssemblyAI
  • d917cdc: Add azure interpreter tool to tool factory

v0.5.7

Compare Source

Patch Changes
  • ec59acd: fix: bundling issue with pnpm

v0.5.6

Compare Source

Patch Changes

v0.5.5

Compare Source

Patch Changes

v0.5.4

Compare Source

Patch Changes
  • 1a65ead: feat: add vendorMultimodal params to LlamaParseReader

v0.5.3

Compare Source

Patch Changes

v0.5.2

Compare Source

Patch Changes
  • 7edeb1c: feat: decouple openai from llamaindex module

    This should be a non-breaking change; it just means you can now install only @llamaindex/openai to reduce the bundle size in the future

  • Updated dependencies [7edeb1c]

v0.5.1

Compare Source

Patch Changes
  • fcbf183: implement llamacloud file service

v0.5.0

Compare Source

Minor Changes
  • 16ef5dd: refactor: simplify callback manager

    Change event.detail.payload to event.detail (see the sketch below)
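
A hedged sketch of the migration; the event name ("llm-start") and the fields read from it are illustrative.

```ts
import { Settings } from "llamaindex";

Settings.callbackManager.on("llm-start", (event) => {
  // Before v0.5.0 the data lived under event.detail.payload;
  // from v0.5.0 on it is directly on event.detail.
  const { messages } = event.detail;
  console.log(`LLM called with ${messages.length} messages`);
});
```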

Patch Changes

v0.4.14

Compare Source

Patch Changes

v0.4.13

Compare Source

Patch Changes
  • e8f8bea: feat: add boundingBox and targetPages to LlamaParseReader
  • 304484b: feat: add ignoreErrors flag to LlamaParseReader
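
A hedged sketch of how these LlamaParseReader flags might be used together; the option spellings follow the changelog entries and may differ slightly in the released API, and the file path is a placeholder.

```ts
import { LlamaParseReader } from "llamaindex";

// Option names mirror the flags mentioned in these entries (assumed spellings).
const reader = new LlamaParseReader({
  resultType: "markdown",
  targetPages: "0,2,5", // parse only selected pages
  ignoreErrors: true,   // skip files that fail to parse instead of throwing
});

const documents = await reader.loadData("./report.pdf");
```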

v0.4.12

Compare Source

Patch Changes

v0.4.11

Compare Source

Patch Changes
  • 8bf5b4a: fix: llama parse input spreadsheet

v0.4.10

Compare Source

Patch Changes
  • 7dce3d2: fix: disable External Filters for Gemini

v0.4.9

Compare Source

Patch Changes
  • 3a96a48: fix: Anthropic image input

v0.4.8

Compare Source

Patch Changes
  • 83ebdfb: fix: next.js build error

v0.4.6

Compare Source

Patch Changes
  • 1feb23b: feat: Gemini tool calling for agent support
  • 08c55ec: Add metadata to PDFs and use Uint8Array for readers content

v0.4.5

Compare Source

Patch Changes
  • 6c3e5d0: fix: switch to correct reference for a static function

v0.4.4

Compare Source

Patch Changes
  • 42eb73a: Fix IngestionPipeline not working without vectorStores

v0.4.3

Compare Source

Patch Changes

v0.4.1

Compare Source

Patch Changes

v0.4.0

Compare Source

Minor Changes
  • 436bc41: Unify chat engine response and agent response
Patch Changes
  • a44e54f: Truncate text to embed for OpenAI if it exceeds maxTokens
  • a51ed8d: feat: add support for managed identity for Azure OpenAI
  • d3b635b: fix: agents to use chat history

v0.3.17

Compare Source

Patch Changes
  • 6bc5bdd: feat: add cache disabling, fast mode, a do-not-unroll-columns mode, and a custom page separator to LlamaParseReader
  • bf25ff6: fix: polyfill for cloudflare worker
  • e6d6576: chore: use unpdf

v0.3.16

Compare Source

Patch Changes
  • 11ae926: feat: add numCandidates setting to MongoDBAtlasVectorStore for tuning queries
  • 631f000: feat: DeepInfra LLM implementation
  • 1378ec4: feat: set default model to gpt-4o
  • 6b1ded4: add gpt-4o mode, invalidate-cache, and skip-diagonal-text options to LlamaParseReader
  • 4d4bd85: Show error message if agent tool is called with partial JSON
  • 24a9d1e: add json mode and image retrieval to LlamaParseReader
  • 45952de: add concurrency management for SimpleDirectoryReader
  • 54230f0: feat: Gemini GA release models
  • a29d835: setDocumentHash should be async
  • 73819bf: Unify metadata and ID handling of documents, allow files to be read by Buffer

v0.3.15

Compare Source

Patch Changes
  • 6e156ed: Use images in context chat engine
  • 265976d: fix bug with node decorator
  • 8e26f75: Add retrieval for images using multi-modal messages

v0.3.14

Compare Source

Patch Changes
  • 6ff7576: Added GPT-4o for Azure
  • 94543de: Added the latest preview Gemini models and take multi-modal images into account

v0.3.13

Compare Source

Patch Changes
  • 1b1081b: Add vectorStores to storage context to define vector store per modality
  • 37525df: Added support for accessing Gemini via Vertex AI
  • 660a2b3: Fix text before heading in markdown reader
  • a1f2475: Add system prompt to ContextChatEngine

v0.3.12

Compare Source

Patch Changes

v0.3.11

Compare Source

Patch Changes
  • e072c45: fix: remove non-standard API pipeline

  • 9e133ac: refactor: remove defaultFS from parameters

    Passing fs as a parameter is no longer accepted, since it is unnecessary once the JS environment is determined.

    This was a polyfill mechanism for non-Node.js environments; APIs are now polyfilled in a different way.

  • 447105a: Improve Gemini message and context preparation

  • 320be3f: Force ChromaDB version to 1.7.3 (to prevent NextJS issues)

  • Updated dependencies [e072c45]

  • Updated dependencies [9e133ac]

v0.3.10

Compare Source

Patch Changes

v0.3.9

Compare Source

Patch Changes
  • c3747d0: fix: import @xenova/transformers

    For now, if you use llamaindex in Next.js, you need to add a plugin from llamaindex/next to ensure some module resolutions are correct (see the sketch below).
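
A hedged sketch of wiring that plugin into a Next.js config; the default-export name and option handling are assumptions, so check the llamaindex docs for your version.

```ts
// next.config.mjs (sketch)
import withLlamaIndex from "llamaindex/next";

/** @type {import('next').NextConfig} */
const nextConfig = {
  // your existing Next.js options
};

export default withLlamaIndex(nextConfig);
```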

v0.3.8

Compare Source

Patch Changes
  • ce94780: Add page number to read PDFs and use generated IDs for PDF and markdown content

v0.3.7

Compare Source

Patch Changes
  • b6a6606: feat: allow changing the host of Ollama
  • b6a6606: chore: export Ollama in the default JS runtime

v0.3.6

Compare Source

Patch Changes

v0.3.5

Compare Source

Patch Changes

v0.3.4

Compare Source

Patch Changes
  • 1dce275: fix: export StorageContext on edge runtime
  • d10533e: feat: add hugging face llm
  • 2008efe: feat: add verbose mode to Agent
  • 5e61934: fix: remove clone object in CallbackManager.dispatchEvent
  • 9e74a43: feat: add top k to asQueryEngine
  • ee719a1: fix: streaming for ReAct Agent

v0.3.3

Compare Source

Patch Changes
  • e8c41c5: fix: wrong gemini streaming chat response

v0.3.2

Compare Source

Patch Changes
  • 61103b6: fix: streaming for Agent.createTask API

v0.3.1

Compare Source

Patch Changes
  • 6bc5bdd: feat: add cache disabling, fast mode, a do-not-unroll-columns mode, and a custom page separator to LlamaParseReader
  • bf25ff6: fix: polyfill for cloudflare worker
  • e6d6576: chore: use unpdf

v0.3.0

Compare Source

Minor Changes
  • 5016f21: feat: improve next.js/cloudflare/vite support
Patch Changes
  • Updated dependencies [5016f21]
    • [@​lla

Configuration

📅 Schedule: Branch creation - "before 4am every weekday,every weekend" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever the PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

hashdotai previously approved these changes Sep 15, 2024
hash-worker bot changed the title from "Update npm package llamaindex to v0.5.27" to "Update npm package llamaindex to v0.6.0" on Sep 16, 2024
hashdotai previously approved these changes Sep 16, 2024
hash-worker bot changed the title from "Update npm package llamaindex to v0.6.18" to "Update npm package llamaindex to v0.6.19" on Oct 13, 2024
hashdotai previously approved these changes Oct 13, 2024
hashdotai previously approved these changes Oct 14, 2024
hashdotai previously approved these changes Oct 19, 2024

Benchmark results

@rust/graph-benches – Integrations

representative_read_entity_type

Function | Value | Mean | Flame graphs
get_entity_type_by_id | Account ID: d4e16033-c281-4cde-aa35-9085bf2e7579 | 1.40 ms ± 4.08 μs (0.325 %) | Flame Graph

scaling_read_entity_complete_zero_depth

Function | Value | Mean | Flame graphs
entity_by_id | 10 entities | 2.01 ms ± 11.0 μs (-1.902 %) | Flame Graph
entity_by_id | 50 entities | 4.07 ms ± 32.6 μs (2.22 %) | Flame Graph
entity_by_id | 1 entity | 1.84 ms ± 12.2 μs (-0.891 %) | Flame Graph
entity_by_id | 5 entities | 1.89 ms ± 10.3 μs (0.972 %) | Flame Graph
entity_by_id | 25 entities | 2.44 ms ± 11.3 μs (-19.502 %, improvement) | Flame Graph

scaling_read_entity_linkless

Function | Value | Mean | Flame graphs
entity_by_id | 10 entities | 1.84 ms ± 8.18 μs (0.369 %) | Flame Graph
entity_by_id | 1000 entities | 2.83 ms ± 31.9 μs (-1.633 %) | Flame Graph
entity_by_id | 1 entity | 1.87 ms ± 6.74 μs (0.407 %) | Flame Graph
entity_by_id | 10000 entities | 12.4 ms ± 140 μs (-1.135 %) | Flame Graph
entity_by_id | 100 entities | 2.04 ms ± 11.5 μs (1.96 %) | Flame Graph

representative_read_multiple_entities

Function | Value | Mean | Flame graphs
entity_by_property | depths: DT=255, PT=255, ET=255, E=255 | 69.3 ms ± 426 μs (1.59 %) | Flame Graph
entity_by_property | depths: DT=0, PT=2, ET=2, E=2 | 54.9 ms ± 202 μs (0.329 %) | Flame Graph
entity_by_property | depths: DT=0, PT=0, ET=0, E=0 | 40.1 ms ± 242 μs (0.057 %) | Flame Graph
entity_by_property | depths: DT=0, PT=0, ET=2, E=2 | 50.8 ms ± 341 μs (1.04 %) | Flame Graph
entity_by_property | depths: DT=0, PT=0, ET=0, E=2 | 43.8 ms ± 248 μs (-1.414 %) | Flame Graph
entity_by_property | depths: DT=2, PT=2, ET=2, E=2 | 60.1 ms ± 348 μs (1.49 %) | Flame Graph
link_by_source_by_property | depths: DT=255, PT=255, ET=255, E=255 | 108 ms ± 754 μs (0.834 %) | Flame Graph
link_by_source_by_property | depths: DT=0, PT=2, ET=2, E=2 | 94.5 ms ± 460 μs (0.438 %) | Flame Graph
link_by_source_by_property | depths: DT=0, PT=0, ET=0, E=0 | 42.8 ms ± 268 μs (0.632 %) | Flame Graph
link_by_source_by_property | depths: DT=0, PT=0, ET=2, E=2 | 90.6 ms ± 547 μs (0.551 %) | Flame Graph
link_by_source_by_property | depths: DT=0, PT=0, ET=0, E=2 | 80.7 ms ± 519 μs (0.923 %) | Flame Graph
link_by_source_by_property | depths: DT=2, PT=2, ET=2, E=2 | 98.7 ms ± 456 μs (0.094 %) | Flame Graph

scaling_read_entity_complete_one_depth

Function | Value | Mean | Flame graphs
entity_by_id | 10 entities | 51.1 ms ± 228 μs (-1.298 %) | Flame Graph
entity_by_id | 50 entities | 271 ms ± 1.51 ms (-0.070 %) | Flame Graph
entity_by_id | 1 entity | 19.8 ms ± 93.0 μs (-0.942 %) | Flame Graph
entity_by_id | 5 entities | 24.8 ms ± 201 μs (1.82 %) | Flame Graph
entity_by_id | 25 entities | 72.2 ms ± 368 μs (-0.114 %) | Flame Graph

representative_read_entity

Function | Value | Mean | Flame graphs
entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/uk-address/v/1 | 16.3 ms ± 241 μs (0.453 %) | Flame Graph
entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/page/v/2 | 15.8 ms ± 199 μs (-8.445 %, improvement) | Flame Graph
entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/organization/v/1 | 15.3 ms ± 183 μs (-7.095 %, improvement) | Flame Graph
entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/building/v/1 | 15.8 ms ± 178 μs (-1.783 %) | Flame Graph
entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/person/v/1 | 16.7 ms ± 177 μs (-31.059 %, improvement) | Flame Graph
entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/song/v/1 | 16.4 ms ± 193 μs (1.13 %) | Flame Graph
entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/block/v/1 | 16.2 ms ± 182 μs (21.3 %, regression) | Flame Graph
entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/book/v/1 | 16.2 ms ± 172 μs (-10.573 %, improvement) | Flame Graph
entity_by_id | entity type ID: https://blockprotocol.org/@alice/types/entity-type/playlist/v/1 | 16.0 ms ± 174 μs (3.46 %) | Flame Graph

Labels
  • area/apps > hash*: Affects HASH (a `hash-*` app)
  • area/apps
  • area/deps: Relates to third-party dependencies (area)

2 participants