This repository provides a LangGraph-based agentic system that proposes closing stale GitHub issues, with a human-in-the-loop review step using Agent Inbox. It selects stale issues from a target repository, investigates with repository-aware tools, and then interrupts for you to approve, edit, respond, or ignore before it posts a closing comment and closes the issue. The agent uses Azure OpenAI for the LLM.
- Graph ID: `agent` (see `langgraph.json`)
- Default target repo: `Azure-samples/azure-search-openai-demo` (configurable via `TARGET_REPO`)
## Contents

- Getting started
  - GitHub Codespaces
  - VS Code Dev Containers
  - Local environment
- Configuring Azure AI models
- GitHub authentication (required)
- LangSmith tracing
- Running the stale issue closer
- Agent Inbox setup
- Cost estimate
- Developer tasks
- Resources
## Getting started

### Local environment

1. Make sure the following are installed:

   - Python 3.11+
   - Git
   - uv (for dependency management)

2. Clone the repository:

   ```shell
   git clone https://github.com/pamelafox/stale-issue-closer-agent
   cd stale-issue-closer-agent
   ```

3. Create the virtual environment and install dependencies (this will create `.venv` and `uv.lock`):

   ```shell
   uv sync
   ```
## Configuring Azure AI models

This project uses Azure OpenAI for the LLM. The repository includes IaC (Bicep) to provision an Azure OpenAI deployment and write a ready-to-use `.env` file.

1. Install the Azure Developer CLI (azd).

2. Sign in to Azure:

   ```shell
   azd auth login
   ```

3. Provision Azure resources (this deploys Azure OpenAI and writes `.env` via a post-provision hook):

   ```shell
   azd provision
   ```

   After provisioning, your `.env` will include values like `AZURE_TENANT_ID`, `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_CHAT_DEPLOYMENT`, `AZURE_OPENAI_VERSION`, and `AZURE_OPENAI_CHAT_MODEL`.
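Not the repository's actual code, but as a reference for how values like these are typically consumed, here is a minimal sketch that builds a LangChain chat model from the `.env` values, assuming keyless (Microsoft Entra ID) authentication via `azure-identity`:

```python
import os

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from langchain_openai import AzureChatOpenAI

# Keyless auth: exchange your Azure credential for Azure OpenAI-scoped tokens.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
    api_version=os.environ["AZURE_OPENAI_VERSION"],
    azure_ad_token_provider=token_provider,
)
```

Keyless auth is why `azd provision` records `AZURE_TENANT_ID` rather than an API key: `DefaultAzureCredential` picks up your `azd auth login` session locally.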
## GitHub authentication (required)

This project requires a GitHub personal access token to call the GitHub GraphQL and REST APIs (for searching issues/code and closing issues). It does not use GitHub Models for the LLM calls.

1. In GitHub Developer Settings, create a personal access token with `repo` scope.

2. Set `GITHUB_TOKEN` in your shell or in the `.env` file:

   ```shell
   export GITHUB_TOKEN=your_personal_access_token
   ```

3. Set the target repository (optional; defaults to `Azure-samples/azure-search-openai-demo`):

   ```shell
   # Full name, e.g., owner/name. Default is Azure-samples/azure-search-openai-demo
   export TARGET_REPO=owner/name
   ```
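To illustrate what the token enables, here is a hedged sketch (not the repository's actual code) of a GraphQL search for stale open issues; the staleness cutoff and result count are placeholder values:

```python
import os

import requests

GRAPHQL_URL = "https://api.github.com/graphql"

# Example: open issues not updated since the cutoff date (cutoff is illustrative).
query = """
query($searchQuery: String!) {
  search(query: $searchQuery, type: ISSUE, first: 10) {
    nodes {
      ... on Issue { number title updatedAt url }
    }
  }
}
"""
variables = {
    "searchQuery": f"repo:{os.environ.get('TARGET_REPO', 'Azure-samples/azure-search-openai-demo')} "
    "is:issue is:open updated:<2024-01-01"
}

response = requests.post(
    GRAPHQL_URL,
    json={"query": query, "variables": variables},
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    timeout=30,
)
response.raise_for_status()
for issue in response.json()["data"]["search"]["nodes"]:
    print(issue["number"], issue["title"], issue["updatedAt"])
```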
## LangSmith tracing

This project uses LangSmith for tracing and for integration with Agent Inbox. To enable LangSmith tracing, set the following environment variables in your shell or in the `.env` file:

```shell
LANGSMITH_TRACING="true"
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY="<your_langsmith_api_key>"
LANGSMITH_PROJECT="<your_project_name>"
```
## Running the stale issue closer

This project uses `uv` and LangGraph's dev server.

1. Ensure dependencies are installed:

   ```shell
   uv sync
   ```

2. Start the LangGraph dev server (inside the uv-managed virtualenv):

   ```shell
   uvx --from "langgraph-cli[inmem]" --with-editable . langgraph dev --allow-blocking
   ```

   When it's running, a browser tab should open to LangGraph Studio via LangSmith. If it doesn't, open this URL: https://smith.langchain.com/studio/thread?baseUrl=http%3A%2F%2F127.0.0.1%3A2024

3. In LangGraph Studio, start a new run of the `agent` graph. The agent will select a stale issue from `TARGET_REPO`, investigate with tools, then interrupt for review.
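If you prefer to start a run from code rather than Studio, something like the following sketch should work against the local dev server via `langgraph_sdk`; the empty input is an assumption, so adjust it to the graph's actual input schema:

```python
import asyncio

from langgraph_sdk import get_client


async def main() -> None:
    # Point the SDK at the local LangGraph dev server.
    client = get_client(url="http://127.0.0.1:2024")
    thread = await client.threads.create()

    # Stream a run of the "agent" graph; it will pause at the review interrupt.
    async for chunk in client.runs.stream(
        thread["thread_id"], "agent", input={}, stream_mode="updates"
    ):
        print(chunk.event, chunk.data)


asyncio.run(main())
```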
## Agent Inbox setup

This repository includes the LangChain Agent Inbox UI as a git submodule at `agent-inbox` (tracking the `ui-improve` branch of https://github.com/pamelafox/agent-inbox). Use it to review and act on issue proposals produced by the triager.

1. If you cloned this repo without `--recurse-submodules`, initialize/update the submodule first:

   ```shell
   git submodule update --init --recursive
   ```

2. Start the local Agent Inbox dev server:

   ```shell
   cd agent-inbox
   yarn install  # first time only (or when dependencies change)
   yarn dev
   ```

3. Navigate to the server running at http://localhost:3000

4. Configure the inbox:

   - LangSmith API key: click the "Settings" button in the sidebar and enter your LangSmith API key.
   - Graph/Assistant ID: `agent` (matches `langgraph.json`)
   - Deployment URL: `http://127.0.0.1:2024`

5. When the triager makes a proposal, it should show up as a thread in the inbox. Accept or edit the proposal to let the triager continue; once approved, it applies the actions (comment / labels / close).
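For context, Agent Inbox displays LangGraph interrupts that follow its `HumanInterrupt` schema. The sketch below shows roughly how a graph node can raise such an interrupt; the `close_issue` action name and its args are hypothetical, not this repository's actual schema:

```python
from langgraph.types import interrupt


def review_node(state: dict) -> dict:
    """Hypothetical node that pauses the graph for human review via Agent Inbox."""
    request = {
        "action_request": {
            "action": "close_issue",  # hypothetical action name
            "args": {
                "issue_number": 123,
                "comment": "Closing as stale; please reopen if still relevant.",
            },
        },
        # Which buttons Agent Inbox offers the reviewer.
        "config": {
            "allow_ignore": True,
            "allow_respond": True,
            "allow_edit": True,
            "allow_accept": True,
        },
        "description": "Proposed closing comment for issue #123",
    }
    # interrupt() pauses the run; Agent Inbox resumes it with the human's decision.
    response = interrupt([request])[0]
    if response["type"] == "accept":
        ...  # post the comment and close the issue
    elif response["type"] == "edit":
        ...  # use the edited args in response["args"]
    return state
```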
## Cost estimate

LLM usage varies per issue, depending on the number of tool calls and the length of the issue and its comments. With Azure OpenAI, cost depends on the model and the number of tokens processed. See pricing: https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/
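For a rough back-of-envelope estimate, multiply token counts by your model's per-token rates; the prices in this sketch are placeholders, so substitute the current rates from the pricing page:

```python
# Placeholder prices per 1M tokens; look up the real rates for your model.
INPUT_PRICE_PER_M = 2.50    # USD, hypothetical
OUTPUT_PRICE_PER_M = 10.00  # USD, hypothetical


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the LLM cost of one triage run."""
    return (
        input_tokens / 1_000_000 * INPUT_PRICE_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M
    )


# Example: a run that used ~20k input tokens and ~2k output tokens.
print(f"${estimate_cost(20_000, 2_000):.4f}")
```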
## Developer tasks

Common dev tasks are available via `uv` and the included Makefile targets:

- Run tests:

  ```shell
  uv run -- python -m pytest
  ```

- Lint / format:

  ```shell
  uv run -- ruff check .
  uv run -- ruff format .
  ```

- Type checking:

  ```shell
  uv run -- mypy src
  ```
## Resources

- Agent Inbox: https://github.com/langchain-ai/agent-inbox
- LangGraph docs: https://langchain-ai.github.io/langgraph/
- LangGraph Studio (via LangSmith): https://smith.langchain.com/