238 changes: 238 additions & 0 deletions docs/source/notebooks/getting_started.ipynb
@@ -0,0 +1,238 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "988a34b4-f94d-4704-98c6-31bf63f9adde",
"metadata": {},
"source": [
"# Getting Started\n",
"\n",
"LLMs are powerful but can be hard to steer and prone to errors when deployed. At the same time, new models and techniques are being developed all the time. We want to make it easy for you to experiment with different techniques, understand their tradeoffs, and make informed decisions for your specific use case.\n",
"\n",
"The package is organized to make it easy to test architectures around higher level \"functionality\". This includes:\n",
"\n",
"- Retrieval-augmented generation\n",
"- Agent tool use\n",
"- Extraction\n",
"\n",
"They all share a same \"Task\" interface to provide some abstractions to create and evaluate different models in-context, including different \"environments\" and shared evaluators.\n",
"\n",
"This notebook shows how to get started with the package. For any given task, the main steps are:\n",
"\n",
"1. Install the package\n",
"2. Select a task\n",
"3. Download the dataset\n",
"4. Define your architecture\n",
"5. Run the evaluation\n",
"\n",
"## Setup\n",
"\n",
"The evaluations use [LangSmith](https://smith.langchain.com) (see: [docs](https://docs.smith.langchain.com/)) to host the benchmark datasets and track your architecture's traces and evaluation metrics. \n",
"\n",
"Create a LangSmith account and set your [API key](https://smith.langchain.com/settings) below."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "b504eaf4-0712-4ae5-a28c-f8def9a1e566",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\n",
" \"LANGCHAIN_API_KEY\"\n",
"] = \"sk-...\" # Get from https://smith.langchain.com/settings"
]
},
{
"cell_type": "markdown",
"id": "3718844e-89b5-45dc-829d-8b8e79adfe72",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"Next, install the required packages."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0757905b-7761-4dca-b98b-2d8dc136eb94",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# %pip install -U --quiet langchain_benchmarks langchain langsmith"
]
},
{
"cell_type": "markdown",
"id": "d8203aca-7e62-415f-a4d5-deccca22f937",
"metadata": {},
"source": [
"## Select a task\n",
"\n",
"Each benchmark has a corresponding description, dataset, and other \"environment\" information. You can view the available tasks by checking the registry."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "c516c725-c968-422b-aedf-e360d4f7774c",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/html": [
"<table>\n",
"<thead>\n",
"<tr><th>Name </th><th>Type </th><th>Dataset ID </th><th>Description </th></tr>\n",
"</thead>\n",
"<tbody>\n",
"<tr><td>Tool Usage - Typewriter (1 func)</td><td>ToolUsageTask </td><td>placeholder </td><td>Environment with a single function that accepts a single letter as input, and &quot;prints&quot; it on a piece of paper.\n",
"\n",
"The objective of this task is to evaluate the ability to use the provided tools to repeat a given input string.\n",
"\n",
"For example, if the string is &#x27;abc&#x27;, the tools &#x27;a&#x27;, &#x27;b&#x27;, and &#x27;c&#x27; must be invoked in that order.\n",
"\n",
"The dataset includes examples of varying difficulty. The difficulty is measured by the length of the string. </td></tr>\n",
"<tr><td>Tool Usage - Typewriter </td><td>ToolUsageTask </td><td>placeholder </td><td>Environment with 26 functions each representing a letter of the alphabet.\n",
"\n",
"In this variation of the typewriter task, there are 26 parameterless functions, where each function represents a letter of the alphabet (instead of a single function that takes a letter as an argument).\n",
"\n",
"The object is to evaluate the ability of use the functions to repeat the given string.\n",
"\n",
"For example, if the string is &#x27;abc&#x27;, the tools &#x27;a&#x27;, &#x27;b&#x27;, and &#x27;c&#x27; must be invoked in that order.\n",
"\n",
"The dataset includes examples of varying difficulty. The difficulty is measured by the length of the string. </td></tr>\n",
"<tr><td>Tool Usage - Relational Data </td><td>ToolUsageTask </td><td>e95d45da-aaa3-44b3-ba2b-7c15ff6e46f5 </td><td>Environment with fake data about users and their locations and favorite foods.\n",
"\n",
"The environment provides a set of tools that can be used to query the data.\n",
"\n",
"The objective of this task is to evaluate the ability to use the provided tools to answer questions about relational data.\n",
"\n",
"The dataset contains 21 examples of varying difficulty. The difficulty is measured by the number of tools that need to be used to answer the question.\n",
"\n",
"Each example is composed of a question, a reference answer, and information about the sequence in which tools should be used to answer the question.\n",
"\n",
"Success is measured by the ability to answer the question correctly, and efficiently. </td></tr>\n",
"<tr><td>Multiverse Math </td><td>ToolUsageTask </td><td>placeholder </td><td>An environment that contains a few basic math operations, but with altered results.\n",
"\n",
"For example, multiplication of 5*3 will be re-interpreted as 5*3*1.1. The basic operations retain some basic properties, such as commutativity, associativity, and distributivity; however, the results are different than expected.\n",
"\n",
"The objective of this task is to evaluate the ability to use the provided tools to solve simple math questions and ignore any innate knowledge about math. </td></tr>\n",
"<tr><td>Email Extraction </td><td>ExtractionTask</td><td>https://smith.langchain.com/public/36bdfe7d-3cd1-4b36-b957-d12d95810a2b/d</td><td>A dataset of 42 real emails deduped from a spam folder, with semantic HTML tags removed, as well as a script for initial extraction and formatting of other emails from an arbitrary .mbox file like the one exported by Gmail.\n",
"\n",
"Some additional cleanup of the data was done by hand after the initial pass.\n",
"\n",
"See https://github.com/jacoblee93/oss-model-extraction-evals. </td></tr>\n",
"<tr><td>LangChain Docs Q&amp;A </td><td>RetrievalTask </td><td>452ccafc-18e1-4314-885b-edd735f17b9d </td><td>Questions and answers based on a snapshot of the LangChain python docs.\n",
"\n",
"The environment provides the documents and the retriever information.\n",
"\n",
"Each example is composed of a question and reference answer.\n",
"\n",
"Success is measured based on the accuracy of the answer relative to the reference answer.\n",
"We also measure the faithfulness of the model&#x27;s response relative to the retrieved documents (if any). </td></tr>\n",
"<tr><td>Semi-structured Earnings </td><td>RetrievalTask </td><td>c47d9617-ab99-4d6e-a6e6-92b8daf85a7d </td><td>Questions and answers based on PDFs containing tables and charts.\n",
"\n",
"The task provides the raw documents as well as factory methods to easily index them\n",
"and create a retriever.\n",
"\n",
"Each example is composed of a question and reference answer.\n",
"\n",
"Success is measured based on the accuracy of the answer relative to the reference answer.\n",
"We also measure the faithfulness of the model&#x27;s response relative to the retrieved documents (if any). </td></tr>\n",
"</tbody>\n",
"</table>"
],
"text/plain": [
"Registry(tasks=[ToolUsageTask(name='Tool Usage - Typewriter (1 func)', dataset_id='placeholder', description='Environment with a single function that accepts a single letter as input, and \"prints\" it on a piece of paper.\\n\\nThe objective of this task is to evaluate the ability to use the provided tools to repeat a given input string.\\n\\nFor example, if the string is \\'abc\\', the tools \\'a\\', \\'b\\', and \\'c\\' must be invoked in that order.\\n\\nThe dataset includes examples of varying difficulty. The difficulty is measured by the length of the string.\\n', create_environment=<function get_environment at 0x137749da0>, instructions=\"Repeat the given string by using the provided tools. Do not write anything else or provide any explanations. For example, if the string is 'abc', you must invoke the tools 'a', 'b', and 'c' in that order. Please invoke the function with a single letter at a time.\"), ToolUsageTask(name='Tool Usage - Typewriter', dataset_id='placeholder', description=\"Environment with 26 functions each representing a letter of the alphabet.\\n\\nIn this variation of the typewriter task, there are 26 parameterless functions, where each function represents a letter of the alphabet (instead of a single function that takes a letter as an argument).\\n\\nThe object is to evaluate the ability of use the functions to repeat the given string.\\n\\nFor example, if the string is 'abc', the tools 'a', 'b', and 'c' must be invoked in that order.\\n\\nThe dataset includes examples of varying difficulty. The difficulty is measured by the length of the string.\\n\", create_environment=<function get_environment at 0x13774a160>, instructions=\"Repeat the given string by using the provided tools. Do not write anything else or provide any explanations. For example, if the string is 'abc', you must invoke the tools 'a', 'b', and 'c' in that order. Please invoke the functions without any arguments.\"), ToolUsageTask(name='Tool Usage - Relational Data', dataset_id='e95d45da-aaa3-44b3-ba2b-7c15ff6e46f5', description='Environment with fake data about users and their locations and favorite foods.\\n\\nThe environment provides a set of tools that can be used to query the data.\\n\\nThe objective of this task is to evaluate the ability to use the provided tools to answer questions about relational data.\\n\\nThe dataset contains 21 examples of varying difficulty. The difficulty is measured by the number of tools that need to be used to answer the question.\\n\\nEach example is composed of a question, a reference answer, and information about the sequence in which tools should be used to answer the question.\\n\\nSuccess is measured by the ability to answer the question correctly, and efficiently.\\n', create_environment=<function get_environment at 0x1377498a0>, instructions=\"Please answer the user's question by using the tools provided. Do not guess the answer. Keep in mind that entities like users,foods and locations have both a name and an ID, which are not the same.\"), ToolUsageTask(name='Multiverse Math', dataset_id='placeholder', description='An environment that contains a few basic math operations, but with altered results.\\n\\nFor example, multiplication of 5*3 will be re-interpreted as 5*3*1.1. 
The basic operations retain some basic properties, such as commutativity, associativity, and distributivity; however, the results are different than expected.\\n\\nThe objective of this task is to evaluate the ability to use the provided tools to solve simple math questions and ignore any innate knowledge about math.\\n', create_environment=<function get_environment at 0x137749260>, instructions='You are requested to solve math questions in an alternate mathematical universe. The rules of association, commutativity, and distributivity still apply, but the operations have been altered to yield different results than expected. Solve the given math questions using the provided tools. Do not guess the answer.'), ExtractionTask(name='Email Extraction', dataset_id='https://smith.langchain.com/public/36bdfe7d-3cd1-4b36-b957-d12d95810a2b/d', description='A dataset of 42 real emails deduped from a spam folder, with semantic HTML tags removed, as well as a script for initial extraction and formatting of other emails from an arbitrary .mbox file like the one exported by Gmail.\\n\\nSome additional cleanup of the data was done by hand after the initial pass.\\n\\nSee https://github.com/jacoblee93/oss-model-extraction-evals.\\n ', schema=<class 'langchain_benchmarks.extraction.tasks.email_task.Email'>, instructions=ChatPromptTemplate(input_variables=['email'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are an expert researcher.')), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['email'], template='What can you tell me about the following email? Make sure to extract the question in the correct format. Here is the email:\\n ```\\n{email}\\n```'))])), RetrievalTask(name='LangChain Docs Q&A', dataset_id='452ccafc-18e1-4314-885b-edd735f17b9d', description=\"Questions and answers based on a snapshot of the LangChain python docs.\\n\\nThe environment provides the documents and the retriever information.\\n\\nEach example is composed of a question and reference answer.\\n\\nSuccess is measured based on the accuracy of the answer relative to the reference answer.\\nWe also measure the faithfulness of the model's response relative to the retrieved documents (if any).\\n\", retriever_factories={'basic': <function _chroma_retriever_factory at 0x137748900>, 'parent-doc': <function _chroma_parent_document_retriever_factory at 0x1377489a0>, 'hyde': <function _chroma_hyde_retriever_factory at 0x137748a40>}, architecture_factories={'conversational-retrieval-qa': <function default_response_chain at 0x133e11940>}, get_docs=<function load_cached_docs at 0x133e11260>), RetrievalTask(name='Semi-structured Earnings', dataset_id='c47d9617-ab99-4d6e-a6e6-92b8daf85a7d', description=\"Questions and answers based on PDFs containing tables and charts.\\n\\nThe task provides the raw documents as well as factory methods to easily index them\\nand create a retriever.\\n\\nEach example is composed of a question and reference answer.\\n\\nSuccess is measured based on the accuracy of the answer relative to the reference answer.\\nWe also measure the faithfulness of the model's response relative to the retrieved documents (if any).\\n\", retriever_factories={'basic': <function _chroma_retriever_factory at 0x137748fe0>, 'parent-doc': <function _chroma_parent_document_retriever_factory at 0x137749080>, 'hyde': <function _chroma_hyde_retriever_factory at 0x137749120>}, architecture_factories={}, get_docs=<function load_docs at 0x137748f40>)])"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_benchmarks import registry\n",
"\n",
"registry"
]
},
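{
"cell_type": "markdown",
"id": "b2f5c8d1-3a7e-4c96-8f04-5d21e9a7b3c6",
"metadata": {},
"source": [
"You can also index into the registry by name to inspect a single task. As a quick check (the `name` and `description` attributes correspond to the fields shown in the registry output above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e4a91c37-6b2d-4f58-9c10-8d7f3e5a2b49",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"task = registry[\"Tool Usage - Relational Data\"]\n",
"\n",
"print(task.name)\n",
"print(task.description)"
]
},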
{
"cell_type": "markdown",
"id": "1f7399c3-857b-4b20-a858-db6c802347b1",
"metadata": {},
"source": [
"## Download the dataset\n",
"\n",
"Each benchmark task has a corresponding dataset. To run evals on the specified benchmark, you can use our download function. For more details on working with datasets within the LangChain Benchmarks package, check out the [datasets](./datasets.ipynb) notebook."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "a5f0e42e-4b3d-4700-9607-c4c00d176c1c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_benchmarks import clone_public_dataset\n",
"\n",
"task = registry[\"Tool Usage - Relational Data\"]\n",
"\n",
"clone_public_dataset(task.dataset_id)"
]
},
{
"cell_type": "markdown",
"id": "0c0533b0-936e-434e-a771-41a5768e720a",
"metadata": {},
"source": [
"## Define architecture and evaluate\n",
"\n",
"After fetching the dataset, you can create your architecture and evaluate it using the task's \n",
"evaluation parameters. This differs by task. For more information, check the examples for your task."
]
},
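{
"cell_type": "markdown",
"id": "1c6e4f8a-2d9b-4e73-a510-3f8b6c2d9e04",
"metadata": {},
"source": [
"As an illustration, below is a minimal sketch for the \"Tool Usage - Relational Data\" task cloned above. It is only a sketch: it assumes an OpenAI API key is set, that the environment exposes its tools as `env.tools`, and that the cloned dataset kept the task's name. The agent shown is one possible architecture, not the recommended setup for every task."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8f3b7d25-9c1a-4e60-b4d8-6a2e5c7f1d93",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# A minimal sketch only; see the per-task examples for the recommended\n",
"# evaluators and agent configurations.\n",
"from langchain.agents import AgentType, initialize_agent\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langsmith import Client\n",
"\n",
"env = task.create_environment()  # assumed to expose the task's tools as `env.tools`\n",
"\n",
"\n",
"def agent_factory():\n",
"    # Any architecture that accepts the dataset inputs works here; an\n",
"    # OpenAI functions agent is just one choice. Depending on the dataset's\n",
"    # input keys, you may need to adapt the inputs to your agent's schema.\n",
"    llm = ChatOpenAI(model=\"gpt-3.5-turbo-16k\", temperature=0)\n",
"    return initialize_agent(env.tools, llm, agent=AgentType.OPENAI_FUNCTIONS)\n",
"\n",
"\n",
"client = Client()\n",
"client.run_on_dataset(\n",
"    dataset_name=task.name,  # assumes the cloned dataset kept the task's name\n",
"    llm_or_chain_factory=agent_factory,\n",
")"
]
},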
{
"cell_type": "code",
"execution_count": null,
"id": "6fd3c480-ce0d-4624-8cf7-bc43546f74f4",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}