This repository provides an AI-powered agent for managing personal LinkedIn accounts. The agent uses Pydantic AI for LLM-based decisions and Playwright for browser automation. It can process LinkedIn invitations, deciding whether to accept or ignore them based on customizable criteria. See the demo video for the agent in action.
- Getting started
- Configuring GitHub Models
- Configuring Azure AI models
- Running the invitation manager
- Cost estimate
- Resources
You have a few options for getting started with this repository. The quickest way to get started is GitHub Codespaces, since it will set up everything for you, but you can also set it up locally.
You can run this repository virtually by using GitHub Codespaces, which will open a web-based VS Code instance in your browser:

- Open the repository (this may take several minutes)
- Open a terminal window
- Continue with the steps to run the examples
A related option is VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:
- Start Docker Desktop (install it if not already installed)
- Open the project in Dev Containers
- In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.
- Continue with the steps to run the examples
- Make sure the following tools are installed:
  - Python 3.10+
  - Git
- Clone the repository:

  ```shell
  git clone https://github.com/Azure-Samples/python-ai-agent-frameworks-demos
  cd python-ai-agent-frameworks-demos
  ```
- Set up a virtual environment:

  ```shell
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install the requirements:

  ```shell
  pip install -r requirements.txt
  ```
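Since the agent drives a real browser through Playwright, you may also need to download the Playwright browser binaries after installing the requirements, for example with `playwright install chromium` (this is an assumption about the repository's setup and the browser it targets; skip it if the requirements already take care of it).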
If you open this repository in GitHub Codespaces, you can run the scripts for free using GitHub Models without any additional steps, as your `GITHUB_TOKEN` is already configured in the Codespaces environment.

If you want to run the scripts locally, you need to set up the `GITHUB_TOKEN` environment variable with a GitHub personal access token (PAT). You can create a PAT by following these steps:
- Go to your GitHub account settings.
- Click on "Developer settings" in the left sidebar.
- Click on "Personal access tokens" in the left sidebar.
- Click on "Tokens (classic)" or "Fine-grained tokens", depending on your preference.
- Click on "Generate new token".
- Give your token a name and select the scopes you want to grant. For this project, you don't need any specific scopes.
- Click on "Generate token".
- Copy the generated token.
- Set the `GITHUB_TOKEN` environment variable in your terminal or IDE:

  ```shell
  export GITHUB_TOKEN=your_personal_access_token
  ```

- Optionally, you can use a model other than "gpt-4o" by setting the `GITHUB_MODEL` environment variable (see the sketch after this list). Use a model that supports function calling, such as: `gpt-4o`, `gpt-4o-mini`, `o3-mini`, `AI21-Jamba-1.5-Large`, `AI21-Jamba-1.5-Mini`, `Codestral-2501`, `Cohere-command-r`, `Ministral-3B`, `Mistral-Large-2411`, `Mistral-Nemo`, `Mistral-small`.
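As a rough sketch of how these two variables might be wired up with Pydantic AI (the exact setup in this repository may differ, and the provider classes and GitHub Models endpoint below are assumptions based on their public documentation; on Windows, set the variables with `set` or `$env:` instead of `export`):

```python
import os

from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

# GitHub Models exposes an OpenAI-compatible endpoint that accepts a GitHub PAT.
model = OpenAIModel(
    os.environ.get("GITHUB_MODEL", "gpt-4o"),
    provider=OpenAIProvider(
        base_url="https://models.inference.ai.azure.com",
        api_key=os.environ["GITHUB_TOKEN"],
    ),
)
```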
You can run all examples in this repository using GitHub Models. If you want to run the examples using models from Azure OpenAI instead, you need to provision the Azure AI resources, which will incur costs.
This project includes infrastructure as code (IaC) to provision an Azure OpenAI deployment of "gpt-4o". The IaC is defined in the `infra` directory and uses the Azure Developer CLI to provision the resources.
- Make sure the Azure Developer CLI (`azd`) is installed.
- Log in to Azure:

  ```shell
  azd auth login
  ```

  For GitHub Codespaces users, if the previous command fails, try:

  ```shell
  azd auth login --use-device-code
  ```

- Provision the OpenAI account:

  ```shell
  azd provision
  ```

  It will prompt you to provide an `azd` environment name (like "agents-demos"), select a subscription from your Azure account, and select a location. Then it will provision the resources in your account.

- Once the resources are provisioned, you should now see a local `.env` file with all the environment variables needed to run the scripts.
- To delete the resources, run:

  ```shell
  azd down
  ```
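Once the `.env` file exists, the scripts can talk to your Azure OpenAI deployment instead of GitHub Models. The snippet below is a minimal sketch of that connection using keyless Entra ID authentication; the environment variable names are assumptions, so check the generated `.env` for the actual ones, and the repository itself may wire the client through Pydantic AI rather than calling the OpenAI SDK directly:

```python
import os

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Assumed variable names; check the generated .env file for the real ones.
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]
deployment = os.environ.get("AZURE_OPENAI_DEPLOYMENT", "gpt-4o")

# Keyless auth: exchange your Azure credential for a token on each request.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint=endpoint,
    azure_ad_token_provider=token_provider,
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model=deployment,
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```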
You can run the LinkedIn agent by executing the `invitations_manager.py` script. The agent will process LinkedIn invitations based on the decision logic defined in the code.
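Conceptually, that decision logic pairs a structured output type with a Pydantic AI agent, roughly along these lines (a hypothetical sketch, not the repository's actual code: `InvitationDecision`, the prompts, and the model string are made up, and exact Pydantic AI parameter names vary between versions):

```python
from typing import Literal

from pydantic import BaseModel
from pydantic_ai import Agent


class InvitationDecision(BaseModel):
    """Structured verdict for a single LinkedIn invitation."""
    action: Literal["accept", "ignore"]
    reason: str


# Hypothetical agent; the real script builds its model from the GitHub Models
# or Azure OpenAI configuration described above.
invitation_agent = Agent(
    "openai:gpt-4o",
    output_type=InvitationDecision,
    system_prompt=(
        "You review LinkedIn invitations. Decide whether to accept or ignore "
        "each one based on the sender's headline, mutual connections, and note."
    ),
)

result = invitation_agent.run_sync(
    "Invitation from Jane Doe, 'Python developer at Contoso', 12 mutual connections"
)
print(result.output.action, "-", result.output.reason)
```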
This project includes evaluations using Pydantic AI evals to measure the agent's performance. You can run the evaluations by executing the `evals.py` script.
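A minimal sketch of what such an evaluation could look like with the `pydantic-evals` package (the cases, task, and evaluators below are invented for illustration, and the API shown follows the pydantic-evals documentation, which may differ from how `evals.py` is organized):

```python
from pydantic_evals import Case, Dataset
from pydantic_evals.evaluators import EqualsExpected


async def decide(invitation: str) -> str:
    """Stand-in for the agent's accept/ignore decision."""
    return "accept" if "mutual connections" in invitation else "ignore"


# Hypothetical cases; the real dataset would describe actual invitation scenarios.
dataset = Dataset(
    cases=[
        Case(
            name="recruiter_with_note",
            inputs="Recruiter at Contoso, 12 mutual connections",
            expected_output="accept",
        ),
        Case(
            name="empty_profile",
            inputs="No headline, no note",
            expected_output="ignore",
        ),
    ],
    evaluators=[EqualsExpected()],
)

report = dataset.evaluate_sync(decide)
report.print()
```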
On average, each LinkedIn invitation processed by the agent requires approximately 200 tokens. If the agent decides it needs to open the full profile page to gather more information, it requires an additional 400 tokens on average.
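As a rough worked example: processing 100 invitations where the agent opens the full profile for, say, a quarter of them would use about 100 × 200 + 25 × 400 = 30,000 tokens.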
If you use GitHub Models, usage is free as long as you stay under the rate limits. Since rate limits vary by model, you can switch models by setting the `GITHUB_MODEL` environment variable in `.env` to a different model name.
If you use Azure OpenAI, the cost depends on the model and the number of tokens processed. You can find the pricing details on the Azure OpenAI pricing page.