Comparing changes
base repository: langchain-ai/langchain
base: v0.0.337
head repository: langchain-ai/langchain
compare: v0.0.338
- 12 commits
- 27 files changed
- 10 contributors
Commits on Nov 17, 2023
- 5a28dc3 Should be able to override the global key if you want to evaluate different outputs in a single run.
- d2335d0 IMPROVEMENT: Neptune graph updates (#13491)

  This PR adds an option to allow unsigned requests to the Neptune database when using the `NeptuneGraph` class:

  ```python
  graph = NeptuneGraph(
      host='<my-cluster>',
      port=8182,
      sign=False,
  )
  ```

  It also adds an option in `NeptuneOpenCypherQAChain` to provide additional domain instructions for the graph query generation prompt. These are injected into the prompt as-is, so you should include any provider-specific tags, for example `<instructions>` or `<INSTR>`:

  ```python
  chain = NeptuneOpenCypherQAChain.from_llm(
      llm=llm,
      graph=graph,
      extra_instructions="""
      Follow these instructions to build the query:
      1. Countries contain airports, not the other way around
      2. Use the airport code for identifying airports
      """,
  )
  ```
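To make the "injected as-is" behavior concrete, here is a minimal, self-contained sketch of how extra instructions might be spliced into a query-generation prompt. The template and function names are hypothetical illustrations, not LangChain's internals:

```python
# Hypothetical prompt template; the real NeptuneOpenCypherQAChain prompt differs.
CYPHER_PROMPT_TEMPLATE = (
    "Task: generate an openCypher query for the graph below.\n"
    "Schema: {schema}\n"
    "{extra_instructions}\n"
    "Question: {question}\n"
)

def build_cypher_prompt(schema: str, question: str, extra_instructions: str = "") -> str:
    # The extra instructions are dropped into the prompt verbatim,
    # which is why provider-specific tags must be supplied by the caller.
    return CYPHER_PROMPT_TEMPLATE.format(
        schema=schema,
        extra_instructions=extra_instructions,
        question=question,
    )

prompt = build_cypher_prompt(
    schema="(:Country)-[:CONTAINS]->(:Airport)",
    question="Which airports are in France?",
    extra_instructions="1. Countries contain airports, not the other way around",
)
```

Because the injection is verbatim, malformed instructions pass through unchanged, so callers are responsible for formatting.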
- 0fb5f85 IMPROVEMENT: WebResearchRetriever error handling for URLs with connection errors (#13401)

  - **Description:** Added a method `fetch_valid_documents` to the `WebResearchRetriever` class that tests the connection for every URL in `new_urls` and removes those that raise a `ConnectionError`.
  - **Issue:** [Previous PR](#13353)
  - **Dependencies:** None
  - **Tag maintainer:** @efriis
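The filtering idea behind `fetch_valid_documents` can be sketched without network access by injecting the fetch function. This is an illustrative stand-in, not the actual `WebResearchRetriever` implementation:

```python
from typing import Callable, List

def filter_valid_urls(urls: List[str], fetch: Callable[[str], object]) -> List[str]:
    """Keep only URLs whose fetch does not raise ConnectionError."""
    valid = []
    for url in urls:
        try:
            fetch(url)  # probe the connection; unreachable hosts raise
        except ConnectionError:
            continue  # drop the bad URL instead of failing the whole retrieval
        valid.append(url)
    return valid

# Fake fetcher for demonstration: hosts containing "bad" are unreachable.
def fake_fetch(url: str) -> str:
    if "bad" in url:
        raise ConnectionError(url)
    return "<html></html>"

good = filter_valid_urls(["https://good.example", "https://bad.example"], fake_fetch)
```

Injecting the fetcher keeps the logic unit-testable, which matches the contribution guideline of preferring tests that do not rely on network access.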
Commits on Nov 18, 2023
- c56faa6 Warn instead of raising an error, since the chain API is too inconsistent.
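The "warn instead of raise" pattern this commit describes can be sketched with the standard `warnings` module. The function and key names below are hypothetical, chosen only to illustrate the fallback behavior:

```python
import warnings

def resolve_output_key(outputs: dict, expected_key: str = "output"):
    """Return the expected output, falling back with a warning if the key is missing."""
    if expected_key not in outputs:
        # A hard error here would break chains with inconsistent output keys,
        # so we warn and fall back to the first available key instead.
        warnings.warn(
            f"Expected key {expected_key!r} not found; using first output key."
        )
        expected_key = next(iter(outputs))
    return outputs[expected_key]

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = resolve_output_key({"text": "hi"})  # "output" is absent, so warn
```

The trade-off is that silent-ish degradation replaces a crash; callers who need strictness can still turn the warning into an error with a warnings filter.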
- 79ed66f EXPERIMENTAL: Generic LLM wrapper to support the chat model interface with a configurable chat prompt format (#8295)

  ## Update 2023-09-08

  This PR now supports further models in addition to Llama-2 chat models. See [this comment](#issuecomment-1668988543) for details. The title of this PR has been updated accordingly.

  ## Original PR description

  This PR adds a generic `Llama2Chat` model, a wrapper for LLMs able to serve Llama-2 chat models (like `LlamaCPP` and `HuggingFaceTextGenInference`). It implements `BaseChatModel`, converts a list of chat messages into the [required Llama-2 chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and forwards the formatted prompt as a `str` to the wrapped `LLM`. Usage example:

  ```python
  # uses a locally hosted Llama2 chat model
  llm = HuggingFaceTextGenInference(
      inference_server_url="http://127.0.0.1:8080/",
      max_new_tokens=512,
      top_k=50,
      temperature=0.1,
      repetition_penalty=1.03,
  )

  # Wrap llm to support the Llama2 chat prompt format.
  # The resulting model is a chat model.
  model = Llama2Chat(llm=llm)

  messages = [
      SystemMessage(content="You are a helpful assistant."),
      MessagesPlaceholder(variable_name="chat_history"),
      HumanMessagePromptTemplate.from_template("{text}"),
  ]
  prompt = ChatPromptTemplate.from_messages(messages)
  memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
  chain = LLMChain(llm=model, prompt=prompt, memory=memory)

  # use the chat model in a conversation
  # ...
  ```

  Tests and a demo notebook are also part of this PR.

  - Tag maintainer: @hwchase17
  - Twitter handle: `@mrt1nz`

  Co-authored-by: Erick Friis <erick@langchain.dev>
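The message-to-prompt conversion that `Llama2Chat` performs can be sketched as plain string formatting. This is a simplified illustration of the documented Llama-2 chat format (`<s>[INST] ... [/INST] ... </s>` with a `<<SYS>>` block in the first turn), not LangChain's actual implementation:

```python
from typing import List, Tuple

def format_llama2_prompt(system: str, turns: List[Tuple[str, str]]) -> str:
    """Render (human, assistant) turns in the Llama-2 chat prompt format.

    The last assistant reply may be "" when prompting for a new completion.
    """
    prompt = ""
    for i, (human, assistant) in enumerate(turns):
        if i == 0:
            # The system prompt is embedded in the first user turn.
            human = f"<<SYS>>\n{system}\n<</SYS>>\n\n{human}"
        prompt += f"<s>[INST] {human} [/INST] {assistant}"
        if assistant:
            prompt += " </s>"
    return prompt

p = format_llama2_prompt(
    "You are a helpful assistant.",
    [("Hello!", "Hi! How can I help?"), ("Tell me a joke.", "")],
)
```

A wrapper like this is what lets a plain text-completion `LLM` be driven through the chat-message interface: the chat history is serialized into one formatted string per call.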
- cac849a
- cda1b33 Fix typo/line break in the middle of a word (#13314)

  - **Description:** a simple typo/extra line break fix
  - **Dependencies:** none
- ff382b7 IMPROVEMENT: Adds support for new OctoAI endpoints (#13521)

  Small fix to add support for new OctoAI LLM endpoints.
- 43dad6c BUG: fixed `openai_assistant` namespace (#13543)

  `langchain.agents.openai_assistant` has a reference `from langchain_experimental.openai_assistant.base import OpenAIAssistantRunnable`; it should be `from langchain.agents.openai_assistant.base import OpenAIAssistantRunnable`. This was preventing the API Reference docs from building.
- f4c0e3c
- 790ed8b
- 78a1f4b
Note: GitHub can't render this full comparison right now; it might be too big, or there might be something weird with the repository. You can run this command locally to see the comparison on your machine:
git diff v0.0.337...v0.0.338