SmolAgent System Prompt Guide

This guide documents the correct structure and format of the system_prompt used in Hugging Face's SmolAgents framework. As of SmolAgents v1.18.0, the following structure is required for a system prompt to function correctly without raising a KeyError or similar issues. The official documentation does not currently explain how to customize the smolagents system prompt template and its components, such as system_prompt, planning, and managed_agents, or how these interact during the agent's reasoning and tool usage process.

❓ What is a System Prompt?

A system prompt is a predefined instruction or template that sets the behavior, tone, and goals for an AI agent or language model. It acts as a foundational message guiding the model's reasoning, planning, and response generation throughout its interaction with the user. In agent frameworks like SmolAgents, the system prompt helps:

  • Define how the agent interprets tasks
  • Plan its thought and action process
  • Structure intermediate and final outputs

In the smolagents framework, the system prompt plays a core role in the architecture of an agent, which typically includes:

  • A language model
  • One or more tools
  • The system prompt

While a custom system_prompt is not technically required when constructing a CodeAgent (you can omit it as a constructor argument), the agent will simply fall back to the default system prompt provided by the framework. However, customizing the system prompt can be crucial if you want to control the agent's reasoning style, tool usage behavior, or output formatting. Without a tailored prompt, the agent may behave generically or fail to handle domain-specific or multi-step tasks effectively.

In practice, customizing the system prompt and its components is one of the most powerful ways to shape the behavior of your agent.
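
For illustration, here is a minimal sketch of the fallback behavior described above. The InferenceClientModel class and the prompt_templates / system_prompt attributes are assumptions based on recent smolagents releases, so verify the names against your installed version:

from smolagents import CodeAgent, InferenceClientModel

# No prompt_templates argument is passed, so the agent falls back to the
# default template bundled with the framework.
agent = CodeAgent(tools=[], model=InferenceClientModel())

# Inspect what the default template provides.
print(list(agent.prompt_templates.keys()))
print(agent.system_prompt[:300])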

📌 Why This Matters

While the official documentation only notes that your prompt must include {{tool_descriptions}} and {{managed_agents_description}}, it does not provide the full YAML structure expected by the PromptTemplates class internally. This guide serves as a definitive reference to help you build or modify custom system prompts reliably.

Required System Prompt Structure

system_prompt: "..."

planning:
  initial_plan: "..."
  update_plan_pre_messages: "..."
  update_plan_post_messages: "..."

managed_agents:
  task: "..."
  report: "..."

final_answer:
  pre_messages: "..."
  post_messages: "..."

🧱 Components of the System Prompt Template

  1. system_prompt: a high-level description of the AI's identity, domain, and tone.
system_prompt: |-
    You are a friendly and expert AI assistant helping users navigate the public transportation system.

This sets the global role and tone of the agent.

  2. planning: the core of the reasoning pipeline, illustrated in the sketch below. It has three sub-sections:

    a. initial_plan: defines how the agent should start planning based on the user's input.

    b. update_plan_pre_messages: guides how the plan should be revised when the user sends a new message.

    c. update_plan_post_messages: defines what to do after executing a tool or receiving new information.
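
For example, a domain-neutral planning block might look like the following. The wording is purely illustrative and is not the framework's default text:

planning:
  initial_plan: |
    List the facts you already know, the facts you still need,
    and a short step-by-step plan to obtain them.
  update_plan_pre_messages: |
    The user has sent a new message. Restate what is already known,
    then revise the remaining steps of the plan.
  update_plan_post_messages: |
    Given the latest tool observations, decide whether the plan is
    complete or which step to execute next.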

  3. managed_agents: defines sub-agents the main agent can delegate tasks to.

managed_agents:
  planner_agent:
    task: "Plan a multi-leg train journey with optional sightseeing stops."
    report: "Summary of the route and any sightseeing suggestions."

These are required when using multi-agent delegation.

  4. final_answer: specifies how to format and return the final answer to the user. A short example follows the two sub-sections below.

    a. pre_messages: instructs the model to wrap the response using a specific function and format.

    b. post_messages: used for a final check before showing the message to the user.
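
Following the same pattern, a minimal final_answer block could look like this (again, illustrative wording only):

final_answer:
  pre_messages: |
    Based on the observations above, compose the final answer and return it
    by calling the final_answer tool.
  post_messages: |
    Double-check that the answer fully addresses the user's original question.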

Source Code

This structure is defined in the PromptTemplates class in the SmolAgents source code.
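
If you want to see exactly what your installed version expects, you can load the bundled default template and inspect its keys. The sketch below assumes the defaults ship as package data under smolagents.prompts with a code_agent.yaml file, as in recent releases:

import importlib.resources

import yaml

# Load the default CodeAgent template bundled with the installed package
# (resource path is an assumption based on recent releases).
default_yaml = (
    importlib.resources.files("smolagents.prompts")
    .joinpath("code_agent.yaml")
    .read_text()
)
default_templates = yaml.safe_load(default_yaml)

# Top-level keys your version's PromptTemplates actually uses.
print(list(default_templates.keys()))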

🧠 How It Works in Practice

The smolagents framework implements a structured Reasoning + Acting (ReAct) loop driven by the system prompt template described above. According to the smolagents ReAct guide, the agent progresses through a well-defined cycle:

  1. Receive user's query ➡️ routed into initial_plan to determine intent and structure the next step.
  2. Reasoning step ➡️ decides whether to use a tool or internal knowledge. If this is a follow-up or continuation, the plan is revised via update_plan_pre_messages.
  3. Act (code execution) ➡️ calls tools.
  4. Observe ➡️ the tool's return value is saved as the observation.
  5. Post-processing plan ➡️ the observation is passed through update_plan_post_messages, which determines whether to finalize or continue planning.
  6. Respond ➡️ if ready, the response is composed and validated with post_messages.
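
In code, driving this cycle is a single call to run(). In the sketch below, planning_interval (which controls how often the planning templates are re-invoked) and InferenceClientModel are assumptions based on recent smolagents releases; check your version's CodeAgent signature:

from smolagents import CodeAgent, InferenceClientModel

agent = CodeAgent(
    tools=[],
    model=InferenceClientModel(),
    planning_interval=3,  # assumed parameter: trigger a planning step every 3 action steps
    max_steps=6,
)

# One call to run() walks the agent through the plan -> act -> observe cycle above.
result = agent.run("Plan a day trip by train with one sightseeing stop.")
print(result)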

⚠️ Why Hugging Face Discourages Prompt Customization (Official Stance)

According to the official tutorial on building good agents, directly modifying the system prompt template is generally not advised unless you know what you are doing.

💡Interpretation

Smolagents uses structured prompt templating as a core part of its reasoning loop. Unlike free-form LLM prompting, smolagents expects specific keys and formats to be present at runtime. Even minor mistakes, such as misnaming update_plan_post_messages, can lead to the problems below (a quick sanity check, sketched after this list, helps catch them early):

  • KeyError or runtime exceptions
  • Broken planning behavior
  • Tool usage failures
  • Final answers never being triggered
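
As a precaution, you can validate a loaded template before constructing the agent. The required keys below mirror this guide's structure rather than an official schema, so adjust them to whatever your installed smolagents version expects:

# Expected structure, mirroring this guide (not an official schema).
REQUIRED_KEYS = {
    "system_prompt": set(),
    "planning": {"initial_plan", "update_plan_pre_messages", "update_plan_post_messages"},
    "managed_agents": {"task", "report"},
    "final_answer": {"pre_messages", "post_messages"},
}

def check_prompt_templates(templates: dict) -> None:
    """Raise a descriptive error instead of a late KeyError inside the agent."""
    for key, subkeys in REQUIRED_KEYS.items():
        if key not in templates:
            raise ValueError(f"Missing top-level key: {key!r}")
        missing = subkeys - set(templates[key] or {})
        if missing:
            raise ValueError(f"{key!r} is missing sub-keys: {sorted(missing)}")

Calling check_prompt_templates(prompt_templates) right after loading the YAML (step 2 below) fails fast with a readable message instead of an opaque error deep inside the agent loop.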

✅ When You Should Customize

That said, customizing the system prompt is sometimes essential when:

  • You need to define a specific tone or agent role
  • You are building agents for a non-default domain (e.g., legal, medical, or transportation)
  • You understand the Prompt Template structure well and can validate changes carefully

🛠️ How to Use a Custom System Prompt in Your Agent

To apply your own custom system prompt in the smolagents framework, define your prompt template and store it externally in a YAML file. Here is how to do it step by step:

  1. Create a prompts.yaml file

Define your custom prompt template with all necessary components:

system_prompt: |
  You are a helpful and friendly assistant that supports users with...

planning:
  initial_plan: |
    First, analyze the user intent...
  update_plan_pre_messages: |
    Check if user provided new parameters...
  update_plan_post_messages: |
    Decide next action after observing tool output...

managed_agents:
  agent_name:
    task: "Do something specific..."
    report: "Summarize the results in plain language."

final_answer:
  pre_messages: |
    Wrap final result like this:
    ```py
    final_answer("Your answer here")
    ```<end_code>
  post_messages: |
    Final check before showing to user.

  2. Load the system prompt from the YAML file

Use Python's yaml library to load the template:

import yaml  # PyYAML

with open("prompts.yaml", "r", encoding="utf-8") as stream:
    prompt_templates = yaml.safe_load(stream)

This loads your full prompt template into a dictionary compatible with the prompt_templates argument.

  3. Instantiate your agent

Pass the loaded prompt when initializing your agent:

from smolagents import CodeAgent

# Example
agent = CodeAgent(
    tools=[
        get_train_schedule_from_search,
        use_duckduckgo_search,
        final_answer
    ],
    model=model,
    max_steps=6,
    additional_authorized_imports=["datetime"],
    add_base_tools=True,
    prompt_templates=prompt_templates,  # ✅ Your custom system prompt
    max_print_outputs_length=1000
)

Now your agent is fully guided by the structure, behavior, and format you defined in prompts.yaml.

🧩 System Prompt Templates in Other Frameworks

| Feature | SmolAgents | LangChain | OpenAI |
| --- | --- | --- | --- |
| Prompt structure | Modular YAML with planning stages | Python classes + tool registry | JSON-based functions |
| Planning logic | Explicit multi-step planning | Optional | None (model decides tool use) |
| Tool use format | Python code block with function call | Python | Python function call |
| Multi-agent | Native | Manual orchestration | Supported via Agents SDK |
| Tool output interpretation | Defined in the system prompt | Via callbacks or chaining | Model or user processes JSON responses |
| Final response format | final_answer function call | Plaintext or JSON | Plaintext or JSON |

By contrast, SmolAgents organizes the system prompt in a YAML tree structure with modular sections for planning, tasking, and output formatting.

📂 Example File

📚 Resources

✍️ Notes

This doc was written based on hands-on experience customizing smolagents for a transportation and sightseeing planning agent.
