The Agent Stack Showcase Agent is a research prototype built with the BeeAI Framework and Agent Stack SDK.
It demonstrates how to combine tool orchestration, memory, file analysis, and platform extensions into a general-purpose conversational assistant. The agent can handle chat, process uploaded files, search the web, and provide structured outputs with citations and trajectory logs for debugging and UI replay.
- Multi-turn chat with persistent per-session memory (`UnconstrainedMemory`)
- Tool orchestration via the experimental `RequirementAgent`, with rules like:
  - `ThinkTool` - invoked first and after every tool for reasoning
  - `DuckDuckGoSearchTool` - used up to 2 times per query, skipped for casual greetings
- File processing - supports PDF, CSV, JSON, and plain text uploads
- Citation extraction - converts `[text](url)` markdown links into structured citation objects
- Trajectory tracking - logs each reasoning step, tool invocation, and output for replay/debugging
- Configurable settings - users can toggle thinking/search behaviors and select a response style (concise, standard, detailed)
- Basic error handling - user-facing messages and detailed logs
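The orchestration rules above can be sketched as a plain-Python mock. This is an illustrative sketch only, not the BeeAI `RequirementAgent` API; the helper names (`is_casual`, `run_turn`) and the greeting heuristic are assumptions made for the example:

```python
# Minimal mock of the tool-orchestration rules described above.
# Not the real RequirementAgent; it only demonstrates the rule logic.

MAX_SEARCHES = 2  # DuckDuckGoSearchTool is capped at 2 calls per query


def is_casual(text: str) -> bool:
    """Heuristic: short greetings skip tool invocation entirely."""
    return text.strip().lower() in {"hi", "hello", "hey", "thanks"}


def run_turn(query: str) -> list[str]:
    """Return the ordered list of tool invocations for one query."""
    steps: list[str] = []
    if is_casual(query):
        return steps                   # rule: no tools for casual input
    steps.append("ThinkTool")          # rule: ThinkTool is invoked first
    for _ in range(MAX_SEARCHES):      # rule: at most 2 searches per query
        steps.append("DuckDuckGoSearchTool")
        steps.append("ThinkTool")      # rule: ThinkTool after every tool
    return steps
```

For example, `run_turn("hi")` produces no tool calls, while a substantive query yields an alternating Think/Search sequence that never exceeds two searches.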
- Install Agent Stack: follow the Quickstart Guide to install and set up Agent Stack. This is required before running the agent.
- Start the server: once the platform is installed, launch the agent server with `uv run server`. The server runs on the configured `HOST` and `PORT` environment variables (defaults: `127.0.0.1:8000`).
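A minimal sketch of how the bind address could be resolved, assuming the server simply reads the two environment variables with the documented defaults (the exact mechanism inside the server is not shown in this README):

```python
import os

# Resolve the bind address as described above: HOST and PORT environment
# variables, falling back to 127.0.0.1:8000 when they are unset.
host = os.environ.get("HOST", "127.0.0.1")
port = int(os.environ.get("PORT", "8000"))

print(f"Serving on http://{host}:{port}")
```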
- `agentstack_showcase(...)` - main async entrypoint handling chat, file uploads, memory, and tool orchestration
- `RequirementAgent(...)` - experimental agent that enforces `ConditionalRequirement` rules for tool usage
- `ThinkTool` - provides structured reasoning and analysis
- `DuckDuckGoSearchTool` - performs real-time web search (with constraints)
- `extract_citations(...)` - converts markdown links into structured citation objects
- `is_casual(...)` - skips tool invocation for short greetings or casual input
- `get_memory(...)` - provides per-session `UnconstrainedMemory`
- `run()` - starts the Agent Stack server
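Two of these helpers are simple enough to sketch in plain Python. The signatures and return shapes below are assumptions for illustration; the real `extract_citations(...)` and `get_memory(...)` may differ (in particular, the real memory is a BeeAI `UnconstrainedMemory`, mocked here as a list):

```python
import re

# Hypothetical sketch of extract_citations(...): pull [text](url) markdown
# links out of an answer and return them as structured citation objects.
LINK_RE = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")


def extract_citations(answer: str) -> tuple[str, list[dict]]:
    """Replace markdown links with their labels; collect citations."""
    citations: list[dict] = []

    def _replace(match: re.Match) -> str:
        label, url = match.group(1), match.group(2)
        citations.append({"text": label, "url": url})
        return label  # keep the label inline, move the URL to metadata

    return LINK_RE.sub(_replace, answer), citations


# Hypothetical sketch of get_memory(...): one memory object per session,
# created on first access. A list stands in for UnconstrainedMemory here.
_sessions: dict[str, list] = {}


def get_memory(session_id: str) -> list:
    """Return (creating if needed) the message store for a session."""
    return _sessions.setdefault(session_id, [])
```

For example, `extract_citations("See [BeeAI](https://example.com).")` yields the plain text `"See BeeAI."` plus one citation object, and repeated `get_memory("abc")` calls return the same session store.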
- CitationExtensionServer — renders citations into structured previews
- TrajectoryExtensionServer — captures reasoning/tool usage for UI replay & debugging
- LLMServiceExtensionServer — manages LLM fulfillment through Agent Stack
- SettingsExtensionServer — allows user configuration of agent behaviors
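The trajectory extension's job - capturing each reasoning step and tool invocation in order so a UI can replay them - can be illustrated with a small recorder. The class and field names here are invented for the sketch; the real `TrajectoryExtensionServer` interface may look quite different:

```python
from dataclasses import dataclass, field


@dataclass
class TrajectoryStep:
    kind: str    # e.g. "thought" or "tool"
    name: str    # e.g. "ThinkTool", "DuckDuckGoSearchTool"
    output: str  # what the step produced


@dataclass
class Trajectory:
    """Ordered record of an agent turn, replayable for debugging/UI."""
    steps: list[TrajectoryStep] = field(default_factory=list)

    def record(self, kind: str, name: str, output: str) -> None:
        self.steps.append(TrajectoryStep(kind, name, output))

    def replay(self) -> list[str]:
        """Flatten the steps into lines a UI could render in order."""
        return [f"{s.kind}:{s.name} -> {s.output}" for s in self.steps]
```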
User input:
What are the latest advancements in AI research from 2025?
Agent flow:
- `ThinkTool` invoked for reasoning
- `DuckDuckGoSearchTool` called (unless skipped for casual input)
- Response returned with proper `[label](url)` citations
- Citations extracted and sent to UI
- Steps logged in trajectory extension
- Conversation context persisted for future turns
- If a file is uploaded, it’s analyzed and summarized
The agent supports both chat and file analysis, such as:
- "What are the latest advancements in AI research from 2025?"
- "Can you help me write a Slack announcement for [topic/team update]?"
- "Analyze this CSV file and tell me the key trends."
- "Summarize the main points from this PDF document."