forked from run-llama/llama_index
Update for GPT4o #10
Merged
Conversation
…lama#12967) * fix sparse query pinecone * none checking
* lats agent worker package * pants * add README * rm lock file * update example nb
* Adding Claude 3 Opus to Bedrock integration * Increment version
…ss in OpenAIPydanticProgram (run-llama#13021)
* added function to parse partial JSON
* Added stream_partial_objects method in OpenAIPydanticProgram to stream intermediate objects as soon as they're available
* updated documentation to add a section on how to stream intermediate objects using OpenAIPydanticProgram
* version bump for llama-index-program-openai package
* Importing ValidationError from llama_index.core.bridge.pydantic instead
* Moved attribution inside docstring
* Raising ValueError instead of returning None for malformed partial JSON string
* Added unit tests for parse_partial_json function
* removed commented code
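As a rough illustration of the streaming change above, here is a minimal sketch of how stream_partial_objects might be used. Only the method name and the OpenAIPydanticProgram class come from the commit message; the Album schema, prompt string, and the keyword passed at call time are assumptions for the sake of the example.

```python
from typing import List

from pydantic import BaseModel

from llama_index.llms.openai import OpenAI
from llama_index.program.openai import OpenAIPydanticProgram


class Album(BaseModel):
    """Hypothetical output schema used only for this sketch."""

    name: str
    artist: str
    songs: List[str]


program = OpenAIPydanticProgram.from_defaults(
    output_cls=Album,
    prompt_template_str="Generate an example album inspired by {topic}.",
    llm=OpenAI(model="gpt-4o"),
)

# Each yielded value is a partially populated Album, emitted as soon as the
# streamed JSON can be parsed into a valid partial object.
for partial_album in program.stream_partial_objects(topic="the ocean"):
    print(partial_album)
```

Streaming the intermediate objects lets a caller render fields as they arrive instead of waiting for the full completion, which is the behavior the commit describes.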
* Add phi-3 benchmarks * Update with prompt template
* fix: qdrant sparse vector backwards compatibility * bump version
Fix NoneType not iterable
Was missed in previous PR.
…ama#13448) * fix: Corrected connection parameters in connections.connect() * bump the version for bug fix
…ain evaluation schema and dashboard (run-llama#13479)
…ostgres or lantern vector-store integration run-llama#9522 (run-llama#13476)
* add nb for gpt4o mm program * rm image documents from construction * actually rm image documents from construction
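Since the PR title is "Update for GPT4o", a brief sketch of what the new gpt4o multimodal program notebook presumably demonstrates may help. The ReceiptInfo schema, prompt, file path, and the call-time image_documents keyword are assumptions inferred from the commit message ("rm image documents from construction"), not the notebook's exact contents.

```python
from pydantic import BaseModel

from llama_index.core.output_parsers import PydanticOutputParser
from llama_index.core.program import MultiModalLLMCompletionProgram
from llama_index.core.schema import ImageDocument
from llama_index.multi_modal_llms.openai import OpenAIMultiModal


class ReceiptInfo(BaseModel):
    """Hypothetical schema used only to illustrate structured output."""

    merchant: str
    total: float


program = MultiModalLLMCompletionProgram.from_defaults(
    output_parser=PydanticOutputParser(output_cls=ReceiptInfo),
    prompt_template_str="Extract the merchant name and total from the receipt image.",
    multi_modal_llm=OpenAIMultiModal(model="gpt-4o"),
)

# Per the commit above, image documents are supplied at call time rather than
# at program construction (the keyword name here is an assumption).
receipt = program(image_documents=[ImageDocument(image_path="receipt.png")])
print(receipt)
```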
Fix non-yielding stream chat bug
Co-authored-by: Logan Markewich <logan.markewich@live.com>
* fix annoying grammar errors * add colab badge
* Update kuzu version for new storage layer * bump version --------- Co-authored-by: Andrei Fajardo <92402603+nerdai@users.noreply.github.com>
…llm` with Intel GPU supports (run-llama#13511)
Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
Fixes # (issue)
New Package?
Did I fill in the tool.llamahub section in the pyproject.toml and provide a detailed README.md for my new integration or package?
Version Bump?
Did I bump the version in the pyproject.toml file of the package I am updating? (Except for the llama-index-core package)
Type of Change
Please delete options that are not relevant.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.
Suggested Checklist:
make format; make lint to appease the lint gods