diff --git a/11-integrating-with-function-calling/README.md b/11-integrating-with-function-calling/README.md
index bb012498a..796afdf9f 100644
--- a/11-integrating-with-function-calling/README.md
+++ b/11-integrating-with-function-calling/README.md
@@ -16,7 +16,7 @@ This lesson will cover:
 
 ## Learning Goals
 
-After completing this lesson you will be able to:
+By the end of this lesson, you will be able to:
 
 - Explain the purpose of using function calling.
 - Setup Function Call using the Azure OpenAI Service.
diff --git a/11-integrating-with-function-calling/aoai-assignment.ipynb b/11-integrating-with-function-calling/python/aoai-assignment.ipynb
similarity index 99%
rename from 11-integrating-with-function-calling/aoai-assignment.ipynb
rename to 11-integrating-with-function-calling/python/aoai-assignment.ipynb
index b7315e397..06e4ab798 100644
--- a/11-integrating-with-function-calling/aoai-assignment.ipynb
+++ b/11-integrating-with-function-calling/python/aoai-assignment.ipynb
@@ -246,7 +246,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "![Function Calling Flow Diagram](./images/Function-Flow.png?WT.mc_id=academic-105485-koreyst)"
+    "![Function Calling Flow Diagram](../images/Function-Flow.png?WT.mc_id=academic-105485-koreyst)"
    ]
   },
   {
@@ -290,7 +290,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "![Flow of a Function Call](./images/LLM-Flow.png?WT.mc_id=academic-105485-koreyst)"
+    "![Flow of a Function Call](../images/LLM-Flow.png?WT.mc_id=academic-105485-koreyst)"
    ]
   },
   {
diff --git a/11-integrating-with-function-calling/python/oai-assignment.ipynb b/11-integrating-with-function-calling/python/oai-assignment.ipynb
new file mode 100644
index 000000000..c59c7eb65
--- /dev/null
+++ b/11-integrating-with-function-calling/python/oai-assignment.ipynb
@@ -0,0 +1,613 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Introduction \n",
+    "\n",
+    "This lesson will cover: \n",
+    "- What function calling is and its use cases \n",
+    "- How to create a function call using OpenAI \n",
+    "- How to integrate a function call into an application \n",
+    "\n",
+    "## Learning Goals \n",
+    "\n",
+    "By the end of this lesson, you will be able to: \n",
+    "\n",
+    "- Explain the purpose of using function calling \n",
+    "- Set up a function call using the OpenAI service \n",
+    "- Design effective function calls for your application's use case "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Understanding Function Calls \n",
+    "\n",
+    "For this lesson, we want to build a feature for our education startup that allows users to use a chatbot to find technical courses. We will recommend courses that fit their skill level, current role, and technology of interest. \n",
+    "\n",
+    "To complete this we will use a combination of: \n",
+    " - `OpenAI` to create a chat experience for the user\n",
+    " - `Microsoft Learn Catalog API` to help users find courses based on the user's request \n",
+    " - `Function Calling` to take the user's query and send it to a function to make the API request. \n",
+    "\n",
+    "To get started, let's look at why we would want to use function calling in the first place: "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Why Function Calling \n",
+    "\n",
+    "If you have completed any other lesson in this course, you probably understand the power of using Large Language Models (LLMs). Hopefully you can also see some of their limitations as well. \n",
+    "\n",
+    "Function Calling is a feature of the OpenAI service that addresses the following limitations: \n",
+    "1) Inconsistent response formats \n",
+    "2) The inability to use data from other sources of an application in a chat context \n",
+    "\n",
+    "Before function calling, responses from an LLM were unstructured and inconsistent. Developers were required to write complex validation code to make sure they could handle each variation of a response. \n",
+    "\n",
+    "Users also could not get answers to questions like \"What is the current weather in Stockholm?\" because models were limited to the data they were trained on. \n",
+    "\n",
+    "Let's look at an example that illustrates this problem. \n",
+    "\n",
+    "Say we want to create a database of student data so we can suggest the right course to each student. Below we have two descriptions of students that contain very similar data."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "student_1_description = \"Emily Johnson is a sophomore majoring in computer science at Duke University. She has a 3.7 GPA. Emily is an active member of the university's Chess Club and Debate Team. She hopes to pursue a career in software engineering after graduating.\"\n",
+    "\n",
+    "student_2_description = \"Michael Lee is a sophomore majoring in computer science at Stanford University. He has a 3.8 GPA. Michael is known for his programming skills and is an active member of the university's Robotics Club. He hopes to pursue a career in artificial intelligence after finishing his studies.\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We want to send this to an LLM to parse the parts that are important to our product. The parsed data can later be sent to an API or stored in a database. \n",
+    "\n",
+    "Let's create two identical prompts that instruct the LLM on what information we are interested in: "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "prompt1 = f'''\n",
+    "Please extract the following information from the given text and return it as a JSON object:\n",
+    "\n",
+    "name\n",
+    "major\n",
+    "school\n",
+    "grades\n",
+    "club\n",
+    "\n",
+    "This is the body of text to extract the information from:\n",
+    "{student_1_description}\n",
+    "'''\n",
+    "\n",
+    "prompt2 = f'''\n",
+    "Please extract the following information from the given text and return it as a JSON object:\n",
+    "\n",
+    "name\n",
+    "major\n",
+    "school\n",
+    "grades\n",
+    "club\n",
+    "\n",
+    "This is the body of text to extract the information from:\n",
+    "{student_2_description}\n",
+    "'''\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "After creating these two prompts, we will send them to the LLM by using `client.chat.completions.create`. We store the prompt in the `messages` variable and assign the role of `user`. This mimics a message from a user being written to a chatbot. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "import json\n",
+    "from openai import OpenAI\n",
+    "from dotenv import load_dotenv\n",
+    "\n",
+    "# Load the OPENAI_API_KEY environment variable from a .env file\n",
+    "load_dotenv()\n",
+    "\n",
+    "client = OpenAI()\n",
+    "\n",
+    "deployment = \"gpt-3.5-turbo\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we can send both requests to the LLM and examine the response we receive. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "openai_response1 = client.chat.completions.create(\n",
+    "    model=deployment,\n",
+    "    messages=[{'role': 'user', 'content': prompt1}]\n",
+    ")\n",
+    "openai_response1.choices[0].message.content"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "openai_response2 = client.chat.completions.create(\n",
+    "    model=deployment,\n",
+    "    messages=[{'role': 'user', 'content': prompt2}]\n",
+    ")\n",
+    "openai_response2.choices[0].message.content"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Loading the response as a JSON object\n",
+    "json_response1 = json.loads(openai_response1.choices[0].message.content)\n",
+    "json_response1"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Loading the response as a JSON object\n",
+    "json_response2 = json.loads(openai_response2.choices[0].message.content)\n",
+    "json_response2"
+   ]
+  },
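+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To make the inconsistency concrete, we can compare the parsed `grades` fields side by side. This is a small sketch: the exact keys and value shapes depend on what the model happened to return, so we read them defensively with `.get()`. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# The two 'grades' values may come back in different shapes (e.g. 3.7 vs \"3.7 GPA\"),\n",
+    "# which is exactly the inconsistency that function calling is meant to remove\n",
+    "print(json_response1.get(\"grades\"), type(json_response1.get(\"grades\")))\n",
+    "print(json_response2.get(\"grades\"), type(json_response2.get(\"grades\")))"
+   ]
+  },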
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\n",
+    "Even though the prompts are the same and the descriptions are similar, we can get different formats of the `grades` property. \n",
+    "\n",
+    "If you run the above cells multiple times, the format can be `3.7` or `3.7 GPA`. \n",
+    "\n",
+    "This is because the LLM takes unstructured data in the form of the written prompt and also returns unstructured data. We need a structured format so that we know what to expect when storing or using this data. \n",
+    "\n",
+    "By using function calling, we can make sure that we receive structured data back. When using function calling, the LLM does not actually call or run any functions. Instead, we create a structure for the LLM to follow for its responses. We then use those structured responses to know what function to run in our applications. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "![Function Calling Flow Diagram](../images/Function-Flow.png?WT.mc_id=academic-105485-koreyst)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can then take what is returned from the function and send this back to the LLM. The LLM will then respond using natural language to answer the user's query. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Use Cases for using function calls \n",
+    "\n",
+    "**Calling External Tools** \n",
+    "Chatbots are great at providing answers to questions from users. By using function calling, chatbots can use messages from users to complete certain tasks. For example, a student can ask the chatbot to \"Send an email to my instructor saying I need more assistance with this subject\". This can make a function call to `send_email(to: string, body: string)`. \n",
+    "\n",
+    "**Creating API or Database Queries** \n",
+    "Users can find information using natural language that gets converted into a formatted query or API request. An example of this could be a teacher who asks \"Who are the students that completed the last assignment?\", which could call a function named `get_completed(student_name: string, assignment: int, current_status: string)`. \n",
+    "\n",
+    "**Creating Structured Data** \n",
+    "Users can take a block of text or a CSV and use the LLM to extract the important information from it. For example, a student can convert a Wikipedia article about peace agreements into AI flash cards. This can be done by using a function called `get_important_facts(agreement_name: string, date_signed: string, parties_involved: list)`. "
+   ]
+  },
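+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The function names and signatures above are illustrative rather than functions that already exist. To make the idea concrete, below is a sketch of how the hypothetical `send_email` use case could be described to the model as a function definition. We will build a real definition for our course-search scenario in the next section. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# A sketch of how the hypothetical send_email use case could be described to the model.\n",
+    "# The name and parameters are illustrative, not part of this lesson's application.\n",
+    "send_email_function = {\n",
+    "    \"name\": \"send_email\",\n",
+    "    \"description\": \"Sends an email on behalf of the student\",\n",
+    "    \"parameters\": {\n",
+    "        \"type\": \"object\",\n",
+    "        \"properties\": {\n",
+    "            \"to\": {\"type\": \"string\", \"description\": \"The recipient's email address\"},\n",
+    "            \"body\": {\"type\": \"string\", \"description\": \"The content of the email\"}\n",
+    "        },\n",
+    "        \"required\": [\"to\", \"body\"]\n",
+    "    }\n",
+    "}"
+   ]
+  },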
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 2. Creating Your First Function Call \n",
+    "\n",
+    "The process of creating a function call includes 3 main steps: \n",
+    "1. Calling the Chat Completions API with a list of your functions and a user message. \n",
+    "2. Reading the model's response to perform an action, i.e. execute a function or API call. \n",
+    "3. Making another call to the Chat Completions API with the response from your function, so the model can use that information to create a response to the user. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "![Flow of a Function Call](../images/LLM-Flow.png?WT.mc_id=academic-105485-koreyst)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Elements of a function call \n",
+    "\n",
+    "#### User's Input \n",
+    "\n",
+    "The first step is to create a user message. This can be dynamically assigned by taking the value of a text input, or you can assign a value here. If this is your first time working with the Chat Completions API, you need to define the `role` and the `content` of the message. \n",
+    "\n",
+    "The `role` can be either `system` (creating rules), `assistant` (the model), or `user` (the end-user). For function calling, we will assign this as `user` and provide an example question. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "messages = [{\"role\": \"user\", \"content\": \"Find me a good course for a beginner student to learn Azure.\"}]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Creating functions \n",
+    "\n",
+    "Next we will define a function and its parameters. We will use just one function here, called `search_courses`, but you can create multiple functions. \n",
+    "\n",
+    "**Important**: Functions are included in the system message to the LLM, so they count against your available tokens. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "functions = [\n",
+    "   {\n",
+    "      \"name\":\"search_courses\",\n",
+    "      \"description\":\"Retrieves courses from the search index based on the parameters provided\",\n",
+    "      \"parameters\":{\n",
+    "         \"type\":\"object\",\n",
+    "         \"properties\":{\n",
+    "            \"role\":{\n",
+    "               \"type\":\"string\",\n",
+    "               \"description\":\"The role of the learner (i.e. developer, data scientist, student, etc.)\"\n",
+    "            },\n",
+    "            \"product\":{\n",
+    "               \"type\":\"string\",\n",
+    "               \"description\":\"The product that the lesson is covering (i.e. Azure, Power BI, etc.)\"\n",
+    "            },\n",
+    "            \"level\":{\n",
+    "               \"type\":\"string\",\n",
+    "               \"description\":\"The level of experience the learner has prior to taking the course (i.e. beginner, intermediate, advanced)\"\n",
+    "            }\n",
+    "         },\n",
+    "         \"required\":[\n",
+    "            \"role\"\n",
+    "         ]\n",
+    "      }\n",
+    "   }\n",
+    "]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "**Definitions** \n",
+    "\n",
+    "`name` - The name of the function that we want to have called. \n",
+    "\n",
+    "`description` - A description of how the function works. Here it's important to be specific and clear. \n",
+    "\n",
+    "`parameters` - A list of the values and formats that you want the model to produce in its response. \n",
+    "\n",
+    "`type` - The data type that the properties will be stored in. \n",
+    "\n",
+    "`properties` - The list of the specific values that the model will use for its response. \n",
+    "\n",
+    "`name` - The name of the property that the model will use in its formatted response. \n",
+    "\n",
+    "`type` - The data type of this property. \n",
+    "\n",
+    "`description` - A description of the specific property. \n",
+    "\n",
+    "**Optional** \n",
+    "\n",
+    "`required` - The properties that are required for the function call to be completed. \n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Making the function call \n",
+    "\n",
+    "After defining a function, we now need to include it in the call to the Chat Completions API. We do this by adding `functions` to the request, in this case `functions=functions`. \n",
+    "\n",
+    "There is also an option to set `function_call` to `auto`. This means we will let the LLM decide which function should be called based on the user message rather than assigning it ourselves."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "response = client.chat.completions.create(\n",
+    "    model=deployment,\n",
+    "    messages=messages,\n",
+    "    functions=functions,\n",
+    "    function_call=\"auto\"\n",
+    ")\n",
+    "\n",
+    "print(response.choices[0].message)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now let's look at the response and see how it is formatted: \n",
+    "\n",
+    "```json\n",
+    "{\n",
+    "  \"role\": \"assistant\",\n",
+    "  \"function_call\": {\n",
+    "    \"name\": \"search_courses\",\n",
+    "    \"arguments\": \"{\\n  \\\"role\\\": \\\"student\\\",\\n  \\\"product\\\": \\\"Azure\\\",\\n  \\\"level\\\": \\\"beginner\\\"\\n}\"\n",
+    "  }\n",
+    "}\n",
+    "```\n",
+    "\n",
+    "You can see that the function's name is returned and that, from the user message, the LLM was able to find the data to fit the function's arguments. "
+   ]
+  },
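+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Before wiring this into an application, it helps to pull the structured pieces out of the response by hand. The cell below is a minimal sketch; note that `function_call` can be `None` when the model decides no function is needed, so we check it first. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Pull the structured pieces out of the response (sketch).\n",
+    "# function_call can be None if the model chose to answer directly, so check it first.\n",
+    "if response.choices[0].message.function_call:\n",
+    "    called_name = response.choices[0].message.function_call.name\n",
+    "    called_args = json.loads(response.choices[0].message.function_call.arguments)\n",
+    "    print(called_name, called_args)"
+   ]
+  },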
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 3. Integrating Function Calls into an Application \n",
+    "\n",
+    "After we have tested the formatted response from the LLM, we can integrate it into an application. \n",
+    "\n",
+    "### Managing the flow \n",
+    "\n",
+    "To integrate this into our application, let's take the following steps. \n",
+    "\n",
+    "First, let's make the call to the OpenAI service and store the message in a variable called `response_message`. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "response_message = response.choices[0].message"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we will define the function that will call the Microsoft Learn API to get a list of courses: "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import requests\n",
+    "\n",
+    "# product and level are optional in the function schema, so default them to None;\n",
+    "# requests drops None params automatically\n",
+    "def search_courses(role, product=None, level=None):\n",
+    "    # Query the Microsoft Learn Catalog API with the parameters the model extracted\n",
+    "    url = \"https://learn.microsoft.com/api/catalog/\"\n",
+    "    params = {\n",
+    "        \"role\": role,\n",
+    "        \"product\": product,\n",
+    "        \"level\": level\n",
+    "    }\n",
+    "    response = requests.get(url, params=params)\n",
+    "    modules = response.json()[\"modules\"]\n",
+    "    # Keep only the first five results, with their titles and URLs\n",
+    "    results = []\n",
+    "    for module in modules[:5]:\n",
+    "        title = module[\"title\"]\n",
+    "        url = module[\"url\"]\n",
+    "        results.append({\"title\": title, \"url\": url})\n",
+    "    return str(results)"
+   ]
+  },
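+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can sanity-check this helper on its own before handing it to the model. The parameter values below are just example values, and the call assumes you have network access to learn.microsoft.com: "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Try the helper directly with example values before involving the LLM\n",
+    "print(search_courses(role=\"student\", product=\"azure\", level=\"beginner\"))"
+   ]
+  },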
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a best practice, we will first check whether the model wants to call a function. After that, we will create a mapping of the available functions and match it to the function that is being called. \n",
+    "We will then take the arguments of the function call from the LLM and map them to the arguments of our function. \n",
+    "\n",
+    "Lastly, we will append the function call message and the values that were returned by the `search_courses` function. This gives the LLM all the information it needs to\n",
+    "respond to the user using natural language. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Check if the model wants to call a function (function_call is None otherwise)\n",
+    "if response_message.function_call:\n",
+    "    print(\"Recommended Function call:\")\n",
+    "    print(response_message.function_call.name)\n",
+    "    print()\n",
+    "\n",
+    "    # Call the function\n",
+    "    function_name = response_message.function_call.name\n",
+    "\n",
+    "    available_functions = {\n",
+    "        \"search_courses\": search_courses,\n",
+    "    }\n",
+    "    function_to_call = available_functions[function_name]\n",
+    "\n",
+    "    function_args = json.loads(response_message.function_call.arguments)\n",
+    "    function_response = function_to_call(**function_args)\n",
+    "\n",
+    "    print(\"Output of function call:\")\n",
+    "    print(function_response)\n",
+    "    print(type(function_response))\n",
+    "\n",
+    "    # Add the assistant response and function response to the messages\n",
+    "    messages.append(  # adding assistant response to messages\n",
+    "        {\n",
+    "            \"role\": response_message.role,\n",
+    "            \"function_call\": {\n",
+    "                \"name\": function_name,\n",
+    "                \"arguments\": response_message.function_call.arguments,\n",
+    "            },\n",
+    "            \"content\": None\n",
+    "        }\n",
+    "    )\n",
+    "    messages.append(  # adding function response to messages\n",
+    "        {\n",
+    "            \"role\": \"function\",\n",
+    "            \"name\": function_name,\n",
+    "            \"content\": function_response,\n",
+    "        }\n",
+    "    )"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we will send the updated message to the LLM so we can receive a natural language response instead of an API JSON formatted response. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "print(\"Messages in next request:\")\n",
+    "print(messages)\n",
+    "print()\n",
+    "\n",
+    "# Get a new response from the model, now that it can see the function response\n",
+    "second_response = client.chat.completions.create(\n",
+    "    messages=messages,\n",
+    "    model=deployment,\n",
+    "    function_call=\"auto\",\n",
+    "    functions=functions,\n",
+    "    temperature=0\n",
+    ")\n",
+    "\n",
+    "print(second_response.choices[0].message)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Code Challenge \n",
+    "\n",
+    "Great work! To continue your learning of OpenAI Function Calling you can build: \n",
+    " - More parameters of the function that might help learners find more courses. You can find the available API parameters here: https://learn.microsoft.com/training/support/catalog-api-developer-reference?WT.mc_id=academic-105485-koreyst \n",
+    " - Another function call that takes more information from the learner, like their native language \n",
+    " - Error handling for when the function call and/or API call does not return any suitable courses "
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.12.2"
+  },
+  "orig_nbformat": 4
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}