
@MervinPraison
Owner

@MervinPraison MervinPraison commented Jul 15, 2025

Adds support for AutoGen v0.4 while maintaining full backward compatibility with v0.2

Changes

  • ✅ Add optional dependency autogen-v4 with autogen-agentchat>=0.4.0, autogen-ext[openai]>=0.4.0, autogen-core>=0.4.0
  • ✅ Implement async _run_autogen_v4() method with RoundRobinGroupChat for parallel multi-agent execution (see the sketch after this list)
  • ✅ Add AUTOGEN_VERSION environment variable for version control (auto/v0.2/v0.4)
  • ✅ Maintain full backward compatibility with existing v0.2 implementation
  • ✅ Auto-detect available versions with intelligent fallback logic
  • ✅ Simplify tool integration for v0.4 (single AssistantAgent vs dual-agent pattern)
  • ✅ Add proper termination conditions and error handling for v0.4
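For context, the v0.4 execution path described above (a RoundRobinGroupChat of AssistantAgents with termination conditions, run via asyncio) might look roughly like the minimal sketch below. This is illustrative only and is not the actual _run_autogen_v4() implementation; the agent names, system messages, and topic parameter are placeholders.

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination, MaxMessageTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def run_autogen_v4_sketch(topic: str) -> str:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    try:
        researcher = AssistantAgent(
            "researcher", model_client=model_client,
            system_message="Research the topic. Say TERMINATE when done.")
        writer = AssistantAgent(
            "writer", model_client=model_client,
            system_message="Write up the findings. Say TERMINATE when done.")
        # Stop on the keyword or after a bounded number of messages.
        termination = TextMentionTermination("TERMINATE") | MaxMessageTermination(20)
        team = RoundRobinGroupChat([researcher, writer], termination_condition=termination)
        result = await team.run(task=topic)
        return result.messages[-1].to_text()
    finally:
        await model_client.close()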

Installation

  • v0.2: pip install "praisonai[autogen]" (existing)
  • v0.4: pip install "praisonai[autogen-v4]" (new)

Version Control

Set the AUTOGEN_VERSION environment variable (a minimal sketch of the selection logic follows the list below):

  • auto (default): Prefers v0.4 if available, fallback to v0.2
  • v0.2: Force use of AutoGen v0.2
  • v0.4: Force use of AutoGen v0.4
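A minimal sketch of how this selection might be implemented (illustrative only; the helper name and availability flags are assumptions, not the actual praisonai code):

import os

def select_autogen_version(v2_available: bool, v4_available: bool) -> str:
    # Mirror the AUTOGEN_VERSION behaviour described above.
    requested = os.environ.get("AUTOGEN_VERSION", "auto").lower()
    if requested in ("v0.2", "v0.4"):
        return requested  # Forced version; the caller is responsible for having it installed.
    # "auto": prefer v0.4 when it is installed, otherwise fall back to v0.2.
    if v4_available:
        return "v0.4"
    if v2_available:
        return "v0.2"
    raise RuntimeError('No AutoGen installation found. Install praisonai[autogen] or praisonai[autogen-v4].')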

Testing

  • ✅ Version selection logic tested for all scenarios
  • ✅ Dependency structure validated
  • ✅ Framework detection handles missing dependencies gracefully
  • ✅ No breaking changes to existing functionality

Resolves #935

The relevant AutoGen migration guide is reproduced below.

Migration Guide for v0.2 to v0.4
This is a migration guide for users of the v0.2.* versions of autogen-agentchat to the v0.4 version, which introduces a new set of APIs and features. The v0.4 version contains breaking changes. Please read this guide carefully. We still maintain the v0.2 version in the 0.2 branch; however, we highly recommend you upgrade to the v0.4 version.

Note

We no longer have admin access to the pyautogen PyPI package, and the releases from that package are no longer from Microsoft since version 0.2.34. To continue using the v0.2 version of AutoGen, install it using autogen-agentchat~=0.2. Please read our clarification statement regarding forks.

What is v0.4?
Since the release of AutoGen in 2023, we have intensively listened to our community and users from small startups and large enterprises, gathering much feedback. Based on that feedback, we built AutoGen v0.4, a from-the-ground-up rewrite adopting an asynchronous, event-driven architecture to address issues such as observability, flexibility, interactive control, and scale.

The v0.4 API is layered: the Core API is the foundation layer offering a scalable, event-driven actor framework for creating agentic workflows; the AgentChat API is built on Core, offering a task-driven, high-level framework for building interactive agentic applications. It is a replacement for AutoGen v0.2.

Most of this guide focuses on v0.4’s AgentChat API; however, you can also build your own high-level framework using just the Core API.

New to AutoGen?
Jump straight to the AgentChat Tutorial to get started with v0.4.

What’s in this guide?
We provide a detailed guide on how to migrate your existing codebase from v0.2 to v0.4.

See each feature below for detailed information on how to migrate.

Migration Guide for v0.2 to v0.4

What is v0.4?

New to AutoGen?

What’s in this guide?

Model Client

Use component config

Use model client class directly

Model Client for OpenAI-Compatible APIs

Model Client Cache

Assistant Agent

Multi-Modal Agent

User Proxy

RAG Agent

Conversable Agent and Register Reply

Save and Load Agent State

Two-Agent Chat

Tool Use

Chat Result

Conversion between v0.2 and v0.4 Messages

Group Chat

Group Chat with Resume

Save and Load Group Chat State

Group Chat with Tool Use

Group Chat with Custom Selector (Stateflow)

Nested Chat

Sequential Chat

GPTAssistantAgent

Long Context Handling

Observability and Control

Code Executors

The following features, currently available in v0.2, will be provided in future v0.4.* releases:

Model Client Cost #4835

Teachable Agent

RAG Agent

We will update this guide when the missing features become available.

Model Client
In v0.2 you configure the model client as follows, and create the OpenAIWrapper object.

from autogen.oai import OpenAIWrapper

config_list = [
{"model": "gpt-4o", "api_key": "sk-xxx"},
{"model": "gpt-4o-mini", "api_key": "sk-xxx"},
]

model_client = OpenAIWrapper(config_list=config_list)
Note: In AutoGen 0.2, the OpenAI client would try configs in the list until one worked. 0.4 instead expects a specific model configuration to be chosen.

In v0.4, we offer two ways to create a model client.

Use component config
AutoGen 0.4 has a generic component configuration system. Model clients are a great use case for this. See below for how to create an OpenAI chat completion client.

from autogen_core.models import ChatCompletionClient

config = {
    "provider": "OpenAIChatCompletionClient",
    "config": {
        "model": "gpt-4o",
        "api_key": "sk-xxx",  # os.environ["..."]
    },
}

model_client = ChatCompletionClient.load_component(config)
Use model client class directly
OpenAI:

from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o", api_key="sk-xxx")
Azure OpenAI:

from autogen_ext.models.openai import AzureOpenAIChatCompletionClient

model_client = AzureOpenAIChatCompletionClient(
azure_deployment="gpt-4o",
azure_endpoint="https://.openai.azure.com/",
model="gpt-4o",
api_version="2024-09-01-preview",
api_key="sk-xxx",
)
Read more on OpenAIChatCompletionClient.

Model Client for OpenAI-Compatible APIs
You can use the OpenAIChatCompletionClient to connect to an OpenAI-Compatible API, but you need to specify the base_url and model_info.

from autogen_ext.models.openai import OpenAIChatCompletionClient

custom_model_client = OpenAIChatCompletionClient(
model="custom-model-name",
base_url="https://custom-model.com/reset/of/the/path",
api_key="placeholder",
model_info={
"vision": True,
"function_calling": True,
"json_output": True,
"family": "unknown",
"structured_output": True,
},
)
Note: We don’t test all the OpenAI-Compatible APIs, and many of them work differently from the OpenAI API even though they may claim to support it. Please test them before using them.

Read about Model Clients in the AgentChat Tutorial, and find more detailed information in the Core API Docs.

Support for other hosted models will be added in the future.

Model Client Cache
In v0.2, you can set the cache seed through the cache_seed parameter in the LLM config. The cache is enabled by default.

llm_config = {
"config_list": [{"model": "gpt-4o", "api_key": "sk-xxx"}],
"seed": 42,
"temperature": 0,
"cache_seed": 42,
}
In v0.4, the cache is not enabled by default; to use it, you need to use a ChatCompletionCache wrapper around the model client.

You can use a DiskCacheStore or RedisStore to store the cache.

pip install -U "autogen-ext[openai, diskcache, redis]"
Here’s an example of using diskcache for local caching:

import asyncio
import tempfile

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.models.cache import ChatCompletionCache, CHAT_CACHE_VALUE_TYPE
from autogen_ext.cache_store.diskcache import DiskCacheStore
from diskcache import Cache

async def main():
    with tempfile.TemporaryDirectory() as tmpdirname:
        # Initialize the original client.
        openai_model_client = OpenAIChatCompletionClient(model="gpt-4o")

        # Then initialize the CacheStore, in this case with diskcache.Cache.
        # You can also use redis like:
        # from autogen_ext.cache_store.redis import RedisStore
        # import redis
        # redis_instance = redis.Redis()
        # cache_store = RedisStore[CHAT_CACHE_VALUE_TYPE](redis_instance)
        cache_store = DiskCacheStore[CHAT_CACHE_VALUE_TYPE](Cache(tmpdirname))
        cache_client = ChatCompletionCache(openai_model_client, cache_store)

        response = await cache_client.create([UserMessage(content="Hello, how are you?", source="user")])
        print(response)  # Should print response from OpenAI
        response = await cache_client.create([UserMessage(content="Hello, how are you?", source="user")])
        print(response)  # Should print cached response
        await openai_model_client.close()

asyncio.run(main())
Assistant Agent
In v0.2, you create an assistant agent as follows:

from autogen.agentchat import AssistantAgent

llm_config = {
"config_list": [{"model": "gpt-4o", "api_key": "sk-xxx"}],
"seed": 42,
"temperature": 0,
}

assistant = AssistantAgent(
name="assistant",
system_message="You are a helpful assistant.",
llm_config=llm_config,
)
In v0.4, it is similar, but you need to specify model_client instead of llm_config.

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o", api_key="sk-xxx", seed=42, temperature=0)

assistant = AssistantAgent(
name="assistant",
system_message="You are a helpful assistant.",
model_client=model_client,
)
However, the usage is somewhat different. In v0.4, instead of calling assistant.send, you call assistant.on_messages or assistant.on_messages_stream to handle incoming messages. Furthermore, the on_messages and on_messages_stream methods are asynchronous, and the latter returns an async generator to stream the inner thoughts of the agent.

Here is how you can call the assistant agent in v0.4 directly, continuing from the above example:

import asyncio
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.agents import AssistantAgent
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
model_client = OpenAIChatCompletionClient(model="gpt-4o", seed=42, temperature=0)

assistant = AssistantAgent(
name="assistant",
system_message="You are a helpful assistant.",
model_client=model_client,
)

cancellation_token = CancellationToken()
response = await assistant.on_messages([TextMessage(content="Hello!", source="user")], cancellation_token)
print(response)

await model_client.close()
asyncio.run(main())
The CancellationToken can be used to cancel the request asynchronously when you call cancellation_token.cancel(), which will cause the await on the on_messages call to raise a CancelledError.
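As a hedged sketch of this cancellation pattern (the helper name and timeout are illustrative; assistant is an AssistantAgent constructed as above):

import asyncio
from autogen_core import CancellationToken
from autogen_agentchat.messages import TextMessage

async def ask_with_timeout(assistant, prompt: str, timeout: float = 10.0):
    token = CancellationToken()
    request = asyncio.ensure_future(
        assistant.on_messages([TextMessage(content=prompt, source="user")], token)
    )
    done, _ = await asyncio.wait({request}, timeout=timeout)
    if not done:
        token.cancel()  # The pending on_messages call will raise CancelledError.
        try:
            await request
        except asyncio.CancelledError:
            return None
    return request.result()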

Read more on Agent Tutorial and AssistantAgent.

Multi-Modal Agent
The AssistantAgent in v0.4 supports multi-modal inputs if the model client supports it. The vision capability of the model client is used to determine if the agent supports multi-modal inputs.

import asyncio
from pathlib import Path
from autogen_agentchat.messages import MultiModalMessage
from autogen_agentchat.agents import AssistantAgent
from autogen_core import CancellationToken, Image
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
model_client = OpenAIChatCompletionClient(model="gpt-4o", seed=42, temperature=0)

assistant = AssistantAgent(
name="assistant",
system_message="You are a helpful assistant.",
model_client=model_client,
)

cancellation_token = CancellationToken()
message = MultiModalMessage(
content=["Here is an image:", Image.from_file(Path("test.png"))],
source="user",
)
response = await assistant.on_messages([message], cancellation_token)
print(response)

await model_client.close()
asyncio.run(main())
User Proxy
In v0.2, you create a user proxy as follows:

from autogen.agentchat import UserProxyAgent

user_proxy = UserProxyAgent(
name="user_proxy",
human_input_mode="NEVER",
max_consecutive_auto_reply=10,
code_execution_config=False,
llm_config=False,
)
This user proxy would take input from the user through the console, and would terminate if the incoming message ends with “TERMINATE”.

In v0.4, a user proxy is simply an agent that takes user input only; there is no other special configuration needed. You can create a user proxy as follows:

from autogen_agentchat.agents import UserProxyAgent

user_proxy = UserProxyAgent("user_proxy")
See UserProxyAgent for more details and how to customize the input function with timeout.
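As a hedged sketch of that customization (this assumes UserProxyAgent accepts an async input_func, as its documentation describes; the 60-second timeout and fallback reply are illustrative):

import asyncio
from autogen_agentchat.agents import UserProxyAgent

async def timed_input(prompt: str, cancellation_token=None) -> str:
    # Ask the user for input, but give up after 60 seconds.
    try:
        return await asyncio.wait_for(asyncio.to_thread(input, prompt), timeout=60)
    except asyncio.TimeoutError:
        return "TERMINATE"

user_proxy = UserProxyAgent("user_proxy", input_func=timed_input)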

RAG Agent
In v0.2, there was the concept of teachable agents as well as RAG agents that could take a database config.

teachable_agent = ConversableAgent(
    name="teachable_agent",
    llm_config=llm_config
)

# Instantiate a Teachability object. Its parameters are all optional.
teachability = Teachability(
    reset_db=False,
    path_to_db_dir="./tmp/interactive/teachability_db"
)

teachability.add_to_agent(teachable_agent)
In v0.4, you can implement a RAG agent using the Memory class. Specifically, you can define a memory store class, and pass that as a parameter to the assistant agent. See the Memory tutorial for more details.

This clear separation of concerns allows you to implement a memory store that uses any database or storage system you want (you have to inherit from the Memory class) and use it with an assistant agent. The example below shows how to use a ChromaDB vector memory store with the assistant agent. In addition, your application logic should determine how and when to add content to the memory store. For example, you may choose to call memory.add for every response from the assistant agent or use a separate LLM call to determine if the content should be added to the memory store.

...
# Example of a ChromaDBVectorMemory class.
chroma_user_memory = ChromaDBVectorMemory(
    config=PersistentChromaDBVectorMemoryConfig(
        collection_name="preferences",
        persistence_path=os.path.join(str(Path.home()), ".chromadb_autogen"),
        k=2,  # Return top k results
        score_threshold=0.4,  # Minimum similarity score
    )
)

# You can add logic such as a document indexer that adds content to the memory store.
assistant_agent = AssistantAgent(
    name="assistant_agent",
    model_client=OpenAIChatCompletionClient(
        model="gpt-4o",
    ),
    tools=[get_weather],
    memory=[chroma_user_memory],
)
Conversable Agent and Register Reply
In v0.2, you can create a conversable agent and register a reply function as follows:

from typing import Any, Dict, List, Optional, Tuple, Union
from autogen.agentchat import Agent, ConversableAgent

llm_config = {
    "config_list": [{"model": "gpt-4o", "api_key": "sk-xxx"}],
    "seed": 42,
    "temperature": 0,
}

conversable_agent = ConversableAgent(
    name="conversable_agent",
    system_message="You are a helpful assistant.",
    llm_config=llm_config,
    code_execution_config={"work_dir": "coding"},
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
)

def reply_func(
    recipient: ConversableAgent,
    messages: Optional[List[Dict]] = None,
    sender: Optional[Agent] = None,
    config: Optional[Any] = None,
) -> Tuple[bool, Union[str, Dict, None]]:
    # Custom reply logic here
    return True, "Custom reply"

# Register the reply function.
conversable_agent.register_reply([ConversableAgent], reply_func, position=0)

# NOTE: An async reply function will only be invoked with async send.
Rather than guessing what reply_func does, what all its parameters mean, and what the position should be, in v0.4 we can simply create a custom agent and implement the on_messages, on_reset, and produced_message_types methods.

from typing import Sequence
from autogen_core import CancellationToken
from autogen_agentchat.agents import BaseChatAgent
from autogen_agentchat.messages import TextMessage, BaseChatMessage
from autogen_agentchat.base import Response

class CustomAgent(BaseChatAgent):
    async def on_messages(self, messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) -> Response:
        return Response(chat_message=TextMessage(content="Custom reply", source=self.name))

    async def on_reset(self, cancellation_token: CancellationToken) -> None:
        pass

    @property
    def produced_message_types(self) -> Sequence[type[BaseChatMessage]]:
        return (TextMessage,)
You can then use the custom agent in the same way as the AssistantAgent. See Custom Agent Tutorial for more details.

Save and Load Agent State
In v0.2 there is no built-in way to save and load an agent’s state: you need to implement it yourself by exporting the chat_messages attribute of ConversableAgent and importing it back through the chat_messages parameter.

In v0.4, you can call save_state and load_state methods on agents to save and load their state.

import asyncio
import json
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.agents import AssistantAgent
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o", seed=42, temperature=0)

    assistant = AssistantAgent(
        name="assistant",
        system_message="You are a helpful assistant.",
        model_client=model_client,
    )

    cancellation_token = CancellationToken()
    response = await assistant.on_messages([TextMessage(content="Hello!", source="user")], cancellation_token)
    print(response)

    # Save the state.
    state = await assistant.save_state()

    # (Optional) Write the state to disk.
    with open("assistant_state.json", "w") as f:
        json.dump(state, f)

    # (Optional) Load it back from disk.
    with open("assistant_state.json", "r") as f:
        state = json.load(f)
    print(state)  # Inspect the state, which contains the chat history.

    # Carry on the chat.
    response = await assistant.on_messages([TextMessage(content="Tell me a joke.", source="user")], cancellation_token)
    print(response)

    # Load the state, causing the agent to revert to its previous state before the last message.
    await assistant.load_state(state)

    # Carry on the same chat again.
    response = await assistant.on_messages([TextMessage(content="Tell me a joke.", source="user")], cancellation_token)

    # Close the connection to the model client.
    await model_client.close()

asyncio.run(main())
You can also call the save_state and load_state methods on any team, such as RoundRobinGroupChat, to save and load the state of the entire team.

Two-Agent Chat
In v0.2, you can create a two-agent chat for code execution as follows:

from autogen.coding import LocalCommandLineCodeExecutor
from autogen.agentchat import AssistantAgent, UserProxyAgent

llm_config = {
"config_list": [{"model": "gpt-4o", "api_key": "sk-xxx"}],
"seed": 42,
"temperature": 0,
}

assistant = AssistantAgent(
name="assistant",
system_message="You are a helpful assistant. Write all code in python. Reply only 'TERMINATE' if the task is done.",
llm_config=llm_config,
is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
)

user_proxy = UserProxyAgent(
name="user_proxy",
human_input_mode="NEVER",
max_consecutive_auto_reply=10,
code_execution_config={"code_executor": LocalCommandLineCodeExecutor(work_dir="coding")},
llm_config=False,
is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
)

chat_result = user_proxy.initiate_chat(assistant, message="Write a python script to print 'Hello, world!'")

# Intermediate messages are printed to the console directly.
print(chat_result)
To get the same behavior in v0.4, you can use the AssistantAgent and CodeExecutorAgent together in a RoundRobinGroupChat.

import asyncio
from autogen_agentchat.agents import AssistantAgent, CodeExecutorAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination, MaxMessageTermination
from autogen_agentchat.ui import Console
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o", seed=42, temperature=0)

    assistant = AssistantAgent(
        name="assistant",
        system_message="You are a helpful assistant. Write all code in python. Reply only 'TERMINATE' if the task is done.",
        model_client=model_client,
    )

    code_executor = CodeExecutorAgent(
        name="code_executor",
        code_executor=LocalCommandLineCodeExecutor(work_dir="coding"),
    )

    # The termination condition is a combination of text termination and max message termination,
    # either of which will cause the chat to terminate.
    termination = TextMentionTermination("TERMINATE") | MaxMessageTermination(10)

    # The group chat will alternate between the assistant and the code executor.
    group_chat = RoundRobinGroupChat([assistant, code_executor], termination_condition=termination)

    # run_stream returns an async generator to stream the intermediate messages.
    stream = group_chat.run_stream(task="Write a python script to print 'Hello, world!'")

    # Console is a simple UI to display the stream.
    await Console(stream)

    # Close the connection to the model client.
    await model_client.close()

asyncio.run(main())
Tool Use
In v0.2, to create a tool use chatbot, you must have two agents, one for calling the tool and one for executing the tool. You need to initiate a two-agent chat for every user request.

from autogen.agentchat import AssistantAgent, UserProxyAgent, register_function

llm_config = {
"config_list": [{"model": "gpt-4o", "api_key": "sk-xxx"}],
"seed": 42,
"temperature": 0,
}

tool_caller = AssistantAgent(
name="tool_caller",
system_message="You are a helpful assistant. You can call tools to help user.",
llm_config=llm_config,
max_consecutive_auto_reply=1, # Set to 1 so that we return to the application after each assistant reply as we are building a chatbot.
)

tool_executor = UserProxyAgent(
name="tool_executor",
human_input_mode="NEVER",
code_execution_config=False,
llm_config=False,
)

def get_weather(city: str) -> str:
return f"The weather in {city} is 72 degree and sunny."

# Register the tool function to the tool caller and executor.
register_function(get_weather, caller=tool_caller, executor=tool_executor)

while True:
user_input = input("User: ")
if user_input == "exit":
break
chat_result = tool_executor.initiate_chat(
tool_caller,
message=user_input,
summary_method="reflection_with_llm", # To let the model reflect on the tool use, set to "last_msg" to return the tool call result directly.
)
print("Assistant:", chat_result.summary)
In v0.4, you really just need one agent – the AssistantAgent – to handle both the tool calling and tool execution.

import asyncio
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage

def get_weather(city: str) -> str: # Async tool is possible too.
return f"The weather in {city} is 72 degree and sunny."

async def main() -> None:
model_client = OpenAIChatCompletionClient(model="gpt-4o", seed=42, temperature=0)
assistant = AssistantAgent(
name="assistant",
system_message="You are a helpful assistant. You can call tools to help user.",
model_client=model_client,
tools=[get_weather],
reflect_on_tool_use=True, # Set to True to have the model reflect on the tool use, set to False to return the tool call result directly.
)
while True:
user_input = input("User: ")
if user_input == "exit":
break
response = await assistant.on_messages([TextMessage(content=user_input, source="user")], CancellationToken())
print("Assistant:", response.chat_message.to_text())
await model_client.close()

asyncio.run(main())
When using tool-equipped agents inside a group chat such as RoundRobinGroupChat, you simply do the same as above to add tools to the agents, and create a group chat with the agents.

Chat Result
In v0.2, you get a ChatResult object from the initiate_chat method. For example:

chat_result = tool_executor.initiate_chat(
tool_caller,
message=user_input,
summary_method="reflection_with_llm",
)
print(chat_result.summary) # Get LLM-reflected summary of the chat.
print(chat_result.chat_history) # Get the chat history.
print(chat_result.cost) # Get the cost of the chat.
print(chat_result.human_input) # Get the human input solicited by the chat.
See ChatResult Docs for more details.

In v0.4, you get a TaskResult object from a run or run_stream method. The TaskResult object contains the messages, which is the message history of the chat, including both agents’ private (tool calls, etc.) and public messages.

There are some notable differences between TaskResult and ChatResult:

The messages list in TaskResult uses a different message format from the ChatResult.chat_history list.

There is no summary field. It is up to the application to decide how to summarize the chat using the messages list.

human_input is not provided in the TaskResult object, as the user input can be extracted from the messages list by filtering with the source field.

cost is not provided in the TaskResult object, however, you can calculate the cost based on token usage. It would be a great community extension to add cost calculation. See community extensions.
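For example, a post-processing sketch along the lines described above (message attributes such as source and models_usage are as documented for AgentChat; the per-token prices are placeholders you would replace with your model's pricing):

from autogen_agentchat.base import TaskResult

def summarize_task_result(result: TaskResult) -> None:
    # Recover the user's inputs by filtering on the source field.
    user_inputs = [m.content for m in result.messages if m.source == "user"]
    print("User inputs:", user_inputs)

    # Approximate the cost from token usage, since TaskResult has no cost field.
    prompt_tokens = completion_tokens = 0
    for m in result.messages:
        if m.models_usage is not None:
            prompt_tokens += m.models_usage.prompt_tokens
            completion_tokens += m.models_usage.completion_tokens
    cost = prompt_tokens * 2.5e-06 + completion_tokens * 1.0e-05  # Placeholder prices per token.
    print(f"Approximate cost: ${cost:.4f}")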

Conversion between v0.2 and v0.4 Messages
You can use the following conversion functions to convert between a v0.4 message in autogen_agentchat.base.TaskResult.messages and a v0.2 message in ChatResult.chat_history.

from typing import Any, Dict, List, Literal

from autogen_agentchat.messages import (
BaseAgentEvent,
BaseChatMessage,
HandoffMessage,
MultiModalMessage,
StopMessage,
TextMessage,
ToolCallExecutionEvent,
ToolCallRequestEvent,
ToolCallSummaryMessage,
)
from autogen_core import FunctionCall, Image
from autogen_core.models import FunctionExecutionResult

def convert_to_v02_message(
message: BaseAgentEvent | BaseChatMessage,
role: Literal["assistant", "user", "tool"],
image_detail: Literal["auto", "high", "low"] = "auto",
) -> Dict[str, Any]:
"""Convert a v0.4 AgentChat message to a v0.2 message.

Args:
message (BaseAgentEvent | BaseChatMessage): The message to convert.
role (Literal["assistant", "user", "tool"]): The role of the message.
image_detail (Literal["auto", "high", "low"], optional): The detail level of image content in multi-modal message. Defaults to "auto".

Returns:
Dict[str, Any]: The converted AutoGen v0.2 message.
"""
v02_message: Dict[str, Any] = {}
if isinstance(message, TextMessage | StopMessage | HandoffMessage | ToolCallSummaryMessage):
v02_message = {"content": message.content, "role": role, "name": message.source}
elif isinstance(message, MultiModalMessage):
v02_message = {"content": [], "role": role, "name": message.source}
for modal in message.content:
if isinstance(modal, str):
v02_message["content"].append({"type": "text", "text": modal})
elif isinstance(modal, Image):
v02_message["content"].append(modal.to_openai_format(detail=image_detail))
else:
raise ValueError(f"Invalid multimodal message content: {modal}")
elif isinstance(message, ToolCallRequestEvent):
v02_message = {"tool_calls": [], "role": "assistant", "content": None, "name": message.source}
for tool_call in message.content:
v02_message["tool_calls"].append(
{
"id": tool_call.id,
"type": "function",
"function": {"name": tool_call.name, "args": tool_call.arguments},
}
)
elif isinstance(message, ToolCallExecutionEvent):
tool_responses: List[Dict[str, str]] = []
for tool_result in message.content:
tool_responses.append(
{
"tool_call_id": tool_result.call_id,
"role": "tool",
"content": tool_result.content,
}
)
content = "\n\n".join([response["content"] for response in tool_responses])
v02_message = {"tool_responses": tool_responses, "role": "tool", "content": content}
else:
raise ValueError(f"Invalid message type: {type(message)}")
return v02_message
def convert_to_v04_message(message: Dict[str, Any]) -> BaseAgentEvent | BaseChatMessage:
"""Convert a v0.2 message to a v0.4 AgentChat message."""
if "tool_calls" in message:
tool_calls: List[FunctionCall] = []
for tool_call in message["tool_calls"]:
tool_calls.append(
FunctionCall(
id=tool_call["id"],
name=tool_call["function"]["name"],
arguments=tool_call["function"]["args"],
)
)
return ToolCallRequestEvent(source=message["name"], content=tool_calls)
elif "tool_responses" in message:
tool_results: List[FunctionExecutionResult] = []
for tool_response in message["tool_responses"]:
tool_results.append(
FunctionExecutionResult(
call_id=tool_response["tool_call_id"],
content=tool_response["content"],
is_error=False,
name=tool_response["name"],
)
)
return ToolCallExecutionEvent(source="tools", content=tool_results)
elif isinstance(message["content"], list):
content: List[str | Image] = []
for modal in message["content"]: # type: ignore
if modal["type"] == "text": # type: ignore
content.append(modal["text"]) # type: ignore
else:
content.append(Image.from_uri(modal["image_url"]["url"])) # type: ignore
return MultiModalMessage(content=content, source=message["name"])
elif isinstance(message["content"], str):
return TextMessage(content=message["content"], source=message["name"])
else:
raise ValueError(f"Unable to convert message: {message}")
Group Chat
In v0.2, you need to create a GroupChat class and pass it into a GroupChatManager, and have a participant that is a user proxy to initiate the chat. For a simple scenario of a writer and a critic, you can do the following:

from autogen.agentchat import AssistantAgent, GroupChat, GroupChatManager

llm_config = {
"config_list": [{"model": "gpt-4o", "api_key": "sk-xxx"}],
"seed": 42,
"temperature": 0,
}

writer = AssistantAgent(
name="writer",
description="A writer.",
system_message="You are a writer.",
llm_config=llm_config,
is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("APPROVE"),
)

critic = AssistantAgent(
name="critic",
description="A critic.",
system_message="You are a critic, provide feedback on the writing. Reply only 'APPROVE' if the task is done.",
llm_config=llm_config,
)

# Create a group chat with the writer and critic.
groupchat = GroupChat(agents=[writer, critic], messages=[], max_round=12)

# Create a group chat manager to manage the group chat, using the round-robin selection method.
manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config, speaker_selection_method="round_robin")

# Initiate the chat with the writer; intermediate messages are printed to the console directly.
result = writer.initiate_chat(
    manager,
    message="Write a short story about a robot that discovers it has feelings.",
)
print(result.summary)
In v0.4, you can use the RoundRobinGroupChat to achieve the same behavior.

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o", seed=42, temperature=0)

    writer = AssistantAgent(
        name="writer",
        description="A writer.",
        system_message="You are a writer.",
        model_client=model_client,
    )

    critic = AssistantAgent(
        name="critic",
        description="A critic.",
        system_message="You are a critic, provide feedback on the writing. Reply only 'APPROVE' if the task is done.",
        model_client=model_client,
    )

    # The termination condition is a text termination, which will cause the chat to terminate when the text "APPROVE" is received.
    termination = TextMentionTermination("APPROVE")

    # The group chat will alternate between the writer and the critic.
    group_chat = RoundRobinGroupChat([writer, critic], termination_condition=termination, max_turns=12)

    # run_stream returns an async generator to stream the intermediate messages.
    stream = group_chat.run_stream(task="Write a short story about a robot that discovers it has feelings.")

    # Console is a simple UI to display the stream.
    await Console(stream)

    # Close the connection to the model client.
    await model_client.close()

asyncio.run(main())
For LLM-based speaker selection, you can use the SelectorGroupChat instead. See Selector Group Chat Tutorial and SelectorGroupChat for more details.

Note: In v0.4, you do not need to register functions on a user proxy to use tools in a group chat. You can simply pass the tool functions to the AssistantAgent as shown in the Tool Use section. The agent will automatically call the tools when needed. If your tool doesn’t output a well-formed response, you can use the reflect_on_tool_use parameter to have the model reflect on the tool use.

Group Chat with Resume
In v0.2, group chat with resume is a bit complicated. You need to explicitly save the group chat messages and load them back when you want to resume the chat. See Resuming Group Chat in v0.2 for more details.

In v0.4, you can simply call run or run_stream again with the same group chat object to resume the chat. To export and load the state, you can use save_state and load_state methods.

import asyncio
import json
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient

def create_team(model_client: OpenAIChatCompletionClient) -> RoundRobinGroupChat:
    writer = AssistantAgent(
        name="writer",
        description="A writer.",
        system_message="You are a writer.",
        model_client=model_client,
    )

    critic = AssistantAgent(
        name="critic",
        description="A critic.",
        system_message="You are a critic, provide feedback on the writing. Reply only 'APPROVE' if the task is done.",
        model_client=model_client,
    )

    # The termination condition is a text termination, which will cause the chat to terminate when the text "APPROVE" is received.
    termination = TextMentionTermination("APPROVE")

    # The group chat will alternate between the writer and the critic.
    group_chat = RoundRobinGroupChat([writer, critic], termination_condition=termination)

    return group_chat

async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o", seed=42, temperature=0)

    # Create the team.
    group_chat = create_team(model_client)

    # run_stream returns an async generator to stream the intermediate messages.
    stream = group_chat.run_stream(task="Write a short story about a robot that discovers it has feelings.")

    # Console is a simple UI to display the stream.
    await Console(stream)

    # Save the state of the group chat and all participants.
    state = await group_chat.save_state()
    with open("group_chat_state.json", "w") as f:
        json.dump(state, f)

    # Create a new team with the same participant configuration.
    group_chat = create_team(model_client)

    # Load the state of the group chat and all participants.
    with open("group_chat_state.json", "r") as f:
        state = json.load(f)
    await group_chat.load_state(state)

    # Resume the chat.
    stream = group_chat.run_stream(task="Translate the story into Chinese.")
    await Console(stream)

    # Close the connection to the model client.
    await model_client.close()

asyncio.run(main())
Save and Load Group Chat State
In v0.2, you need to explicitly save the group chat messages and load them back when you want to resume the chat.

In v0.4, you can simply call save_state and load_state methods on the group chat object. See Group Chat with Resume for an example.

Group Chat with Tool Use
In v0.2 group chat, when tools are involved, you need to register the tool functions on a user proxy, and include the user proxy in the group chat. The tool calls made by other agents will be routed to the user proxy to execute.

We have observed numerous issues with this approach, such as the tool call routing not working as expected, and tool call requests and results not being accepted by models without support for function calling.

In v0.4, there is no need to register the tool functions on a user proxy, as the tools are directly executed within the AssistantAgent, which publishes the response from the tool to the group chat. So the group chat manager does not need to be involved in routing tool calls.

See Selector Group Chat Tutorial for an example of using tools in a group chat.

Group Chat with Custom Selector (Stateflow)
In v0.2 group chat, when the speaker_selection_method is set to a custom function, it can override the default selection method. This is useful for implementing a state-based selection method. For more details, see Custom Speaker Selection in v0.2.

In v0.4, you can use the SelectorGroupChat with selector_func to achieve the same behavior. The selector_func is a function that takes the current message thread of the group chat and returns the next speaker’s name. If None is returned, the LLM-based selection method will be used.

Here is an example of using the state-based selection method to implement a web search/analysis scenario.

import asyncio
from typing import Sequence
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination
from autogen_agentchat.messages import BaseAgentEvent, BaseChatMessage
from autogen_agentchat.teams import SelectorGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Note: This example uses mock tools instead of real APIs for demonstration purposes.
def search_web_tool(query: str) -> str:
if "2006-2007" in query:
return """Here are the total points scored by Miami Heat players in the 2006-2007 season:
Udonis Haslem: 844 points
Dwayne Wade: 1397 points
James Posey: 550 points
...
"""
elif "2007-2008" in query:
return "The number of total rebounds for Dwayne Wade in the Miami Heat season 2007-2008 is 214."
elif "2008-2009" in query:
return "The number of total rebounds for Dwayne Wade in the Miami Heat season 2008-2009 is 398."
return "No data found."

def percentage_change_tool(start: float, end: float) -> float:
return ((end - start) / start) * 100

def create_team(model_client : OpenAIChatCompletionClient) -> SelectorGroupChat:
planning_agent = AssistantAgent(
"PlanningAgent",
description="An agent for planning tasks, this agent should be the first to engage when given a new task.",
model_client=model_client,
system_message="""
You are a planning agent.
Your job is to break down complex tasks into smaller, manageable subtasks.
Your team members are:
Web search agent: Searches for information
Data analyst: Performs calculations

You only plan and delegate tasks - you do not execute them yourself.

When assigning tasks, use this format:
1. <agent> : <task>

After all tasks are complete, summarize the findings and end with "TERMINATE".
""",

)

web_search_agent = AssistantAgent(
"WebSearchAgent",
description="A web search agent.",
tools=[search_web_tool],
model_client=model_client,
system_message="""
You are a web search agent.
Your only tool is search_tool - use it to find information.
You make only one search call at a time.
Once you have the results, you never do calculations based on them.
""",
)

data_analyst_agent = AssistantAgent(
"DataAnalystAgent",
description="A data analyst agent. Useful for performing calculations.",
model_client=model_client,
tools=[percentage_change_tool],
system_message="""
You are a data analyst.
Given the tasks you have been assigned, you should analyze the data and provide results using the tools provided.
""",
)

# The termination condition is a combination of text mention termination and max message termination.
text_mention_termination = TextMentionTermination("TERMINATE")
max_messages_termination = MaxMessageTermination(max_messages=25)
termination = text_mention_termination | max_messages_termination

# The selector function is a function that takes the current message thread of the group chat
# and returns the next speaker's name. If None is returned, the LLM-based selection method will be used.

def selector_func(messages: Sequence[BaseAgentEvent | BaseChatMessage]) -> str | None:
if messages[-1].source != planning_agent.name:
return planning_agent.name # Always return to the planning agent after the other agents have spoken.
return None

team = SelectorGroupChat(
[planning_agent, web_search_agent, data_analyst_agent],
model_client=OpenAIChatCompletionClient(model="gpt-4o-mini"), # Use a smaller model for the selector.
termination_condition=termination,
selector_func=selector_func,
)
return team
async def main() -> None:
model_client = OpenAIChatCompletionClient(model="gpt-4o")
team = create_team(model_client)
task = "Who was the Miami Heat player with the highest points in the 2006-2007 season, and what was the percentage change in his total rebounds between the 2007-2008 and 2008-2009 seasons?"
await Console(team.run_stream(task=task))

asyncio.run(main())
Nested Chat
Nested chat allows you to nest a whole team or another agent inside an agent. This is useful for creating a hierarchical structure of agents or “information silos”, as the nested agents cannot communicate directly with other agents outside of the same group.

In v0.2, nested chat is supported by using the register_nested_chats method on the ConversableAgent class. You need to specify the nested sequence of agents using dictionaries. See Nested Chat in v0.2 for more details.

In v0.4, nested chat is an implementation detail of a custom agent. You can create a custom agent that takes a team or another agent as a parameter and implements the on_messages method to trigger the nested team or agent. It is up to the application to decide how to pass or transform the messages from and to the nested team or agent.

The following example shows a simple nested chat that counts numbers.

import asyncio
from typing import Sequence
from autogen_core import CancellationToken
from autogen_agentchat.agents import BaseChatAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.messages import TextMessage, BaseChatMessage
from autogen_agentchat.base import Response

class CountingAgent(BaseChatAgent):
    """An agent that returns a new number by adding 1 to the last number in the input messages."""

    async def on_messages(self, messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) -> Response:
        if len(messages) == 0:
            last_number = 0  # Start from 0 if no messages are given.
        else:
            assert isinstance(messages[-1], TextMessage)
            last_number = int(messages[-1].content)  # Otherwise, start from the last number.
        return Response(chat_message=TextMessage(content=str(last_number + 1), source=self.name))

    async def on_reset(self, cancellation_token: CancellationToken) -> None:
        pass

    @property
    def produced_message_types(self) -> Sequence[type[BaseChatMessage]]:
        return (TextMessage,)

class NestedCountingAgent(BaseChatAgent):
    """An agent that increments the last number in the input messages
    multiple times using a nested counting team."""

    def __init__(self, name: str, counting_team: RoundRobinGroupChat) -> None:
        super().__init__(name, description="An agent that counts numbers.")
        self._counting_team = counting_team

    async def on_messages(self, messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) -> Response:
        # Run the inner team with the given messages and return the last message produced by the team.
        result = await self._counting_team.run(task=messages, cancellation_token=cancellation_token)
        # To stream the inner messages, implement on_messages_stream and use that to implement on_messages.
        assert isinstance(result.messages[-1], TextMessage)
        return Response(chat_message=result.messages[-1], inner_messages=result.messages[len(messages):-1])

    async def on_reset(self, cancellation_token: CancellationToken) -> None:
        # Reset the inner team.
        await self._counting_team.reset()

    @property
    def produced_message_types(self) -> Sequence[type[BaseChatMessage]]:
        return (TextMessage,)
async def main() -> None:
    # Create a team of two counting agents as the inner team.
    counting_agent_1 = CountingAgent("counting_agent_1", description="An agent that counts numbers.")
    counting_agent_2 = CountingAgent("counting_agent_2", description="An agent that counts numbers.")
    counting_team = RoundRobinGroupChat([counting_agent_1, counting_agent_2], max_turns=5)

    # Create a nested counting agent that takes the inner team as a parameter.
    nested_counting_agent = NestedCountingAgent("nested_counting_agent", counting_team)

    # Run the nested counting agent with a message starting from 1.
    response = await nested_counting_agent.on_messages([TextMessage(content="1", source="user")], CancellationToken())
    assert response.inner_messages is not None
    for message in response.inner_messages:
        print(message)
    print(response.chat_message)

asyncio.run(main())
You should see the following output:

source='counting_agent_1' models_usage=None content='2' type='TextMessage'
source='counting_agent_2' models_usage=None content='3' type='TextMessage'
source='counting_agent_1' models_usage=None content='4' type='TextMessage'
source='counting_agent_2' models_usage=None content='5' type='TextMessage'
source='counting_agent_1' models_usage=None content='6' type='TextMessage'
You can take a look at SocietyOfMindAgent for a more complex implementation.

Sequential Chat
In v0.2, sequential chat is supported by using the initiate_chats function. It takes as input a list of dictionary configurations for each step of the sequence. See Sequential Chat in v0.2 for more details.

Based on feedback from the community, the initiate_chats function is too opinionated and not flexible enough to support the diverse set of scenarios that users want to implement. We often find users struggling to get the initiate_chats function to work when they could easily glue the steps together using basic Python code. Therefore, in v0.4, we do not provide a built-in function for sequential chat in the AgentChat API.

Instead, you can create an event-driven sequential workflow using the Core API, and use the other components provided by the AgentChat API to implement each step of the workflow. See an example of a sequential workflow in the Core API Tutorial.

We recognize that the concept of workflow is at the heart of many applications, and we will provide more built-in support for workflows in the future.

GPTAssistantAgent
In v0.2, GPTAssistantAgent is a special agent class that is backed by the OpenAI Assistant API.

In v0.4, the equivalent is the OpenAIAssistantAgent class. It supports the same set of features as the GPTAssistantAgent in v0.2 with more such as customizable threads and file uploads. See OpenAIAssistantAgent for more details.

Long Context Handling
In v0.2, long context that overflows the model’s context window can be handled by using the transforms capability, which is added to a ConversableAgent after it is constructed.

Feedback from our community has led us to believe this feature is essential and should be a built-in component of AssistantAgent, and one that can be used with every custom agent.

In v0.4, we introduce the ChatCompletionContext base class that manages message history and provides a virtual view of the history. Applications can use built-in implementations such as BufferedChatCompletionContext to limit the message history sent to the model, or provide their own implementations that create different virtual views.

Here is how to use a BufferedChatCompletionContext with an AssistantAgent in a chatbot scenario:

import asyncio
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.agents import AssistantAgent
from autogen_core import CancellationToken
from autogen_core.model_context import BufferedChatCompletionContext
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
model_client = OpenAIChatCompletionClient(model="gpt-4o", seed=42, temperature=0)

assistant = AssistantAgent(
name="assistant",
system_message="You are a helpful assistant.",
model_client=model_client,
model_context=BufferedChatCompletionContext(buffer_size=10), # Model can only view the last 10 messages.
)
while True:
user_input = input("User: ")
if user_input == "exit":
break
response = await assistant.on_messages([TextMessage(content=user_input, source="user")], CancellationToken())
print("Assistant:", response.chat_message.to_text())

await model_client.close()
asyncio.run(main())
In this example, the chatbot can only read the last 10 messages in the history.

Observability and Control
In v0.4 AgentChat, you can observe the agents by using the on_messages_stream method which returns an async generator to stream the inner thoughts and actions of the agent. For teams, you can use the run_stream method to stream the inner conversation among the agents in the team. Your application can use these streams to observe the agents and teams in real-time.

Both the on_messages_stream and run_stream methods take a CancellationToken as a parameter, which can be used to cancel the output stream asynchronously and stop the agent or team. For teams, you can also use termination conditions to stop the team when a certain condition is met. See the Termination Condition Tutorial for more details.
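As a minimal sketch of consuming these streams directly (assuming a team built as in the earlier examples; run_stream yields messages and events and finishes with a TaskResult, so the result is checked for defensively):

from autogen_agentchat.base import TaskResult

async def observe(team, task: str) -> None:
    async for item in team.run_stream(task=task):
        if isinstance(item, TaskResult):
            print("Finished:", item.stop_reason)
        else:
            print(f"[{item.source}] {item.to_text()}")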

Unlike v0.2, which comes with a special logging module, the v0.4 API simply uses Python’s logging module to log events such as model client calls. See Logging in the Core API documentation for more details.

Code Executors
The code executors in v0.2 and v0.4 are nearly identical, except that the v0.4 executors support an async API. You can also use a CancellationToken to cancel a code execution if it takes too long. See the Command Line Code Executors Tutorial in the Core API documentation.

We also added ACADynamicSessionsCodeExecutor that can use Azure Container Apps (ACA) dynamic sessions for code execution. See ACA Dynamic Sessions Code Executor Docs.
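A hedged sketch of the async executor API (the module paths follow the examples earlier in this guide; the CodeBlock import path is an assumption based on the Core API):

import asyncio
from autogen_core import CancellationToken
from autogen_core.code_executor import CodeBlock
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor

async def main() -> None:
    executor = LocalCommandLineCodeExecutor(work_dir="coding")
    result = await executor.execute_code_blocks(
        [CodeBlock(code="print('Hello, world!')", language="python")],
        cancellation_token=CancellationToken(),
    )
    print(result.output)

asyncio.run(main())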

Use multiple agents running in parallel.

Main code path is src/praisonai/praisonai

Summary by CodeRabbit

  • New Features

    • Added support for AutoGen framework version 0.4, with dynamic selection between v0.2 and v0.4 based on environment settings.
    • Enhanced agent execution with asynchronous processing when using AutoGen v0.4.
  • Bug Fixes

    • Improved error handling and user guidance for missing AutoGen dependencies, including clear installation instructions for both versions.
  • Chores

    • Updated optional dependencies to include new packages required for AutoGen v0.4 support.

- Add optional dependency 'autogen-v4' with autogen-agentchat>=0.4.0, autogen-ext[openai]>=0.4.0, autogen-core>=0.4.0
- Implement async _run_autogen_v4() method with RoundRobinGroupChat for parallel multi-agent execution
- Add AUTOGEN_VERSION environment variable for version control (auto/v0.2/v0.4)
- Maintain full backward compatibility with existing v0.2 implementation
- Auto-detect available versions with intelligent fallback logic
- Simplify tool integration for v0.4 (single AssistantAgent vs dual-agent pattern)
- Add proper termination conditions and error handling for v0.4

Resolves #935

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
@coderabbitai
Contributor

coderabbitai bot commented Jul 15, 2025

Walkthrough

The changes add support for AutoGen v0.4 alongside v0.2, enabling dynamic selection based on an environment variable. New asynchronous agent execution logic is introduced for v0.4, with corresponding dependency and import management. The codebase now supports both versions, updating initialization, tool integration, and error handling accordingly.

Changes

| File(s) | Change Summary |
| --- | --- |
| src/praisonai/praisonai/agents_generator.py | Added support for AutoGen v0.4: version selection logic, new async execution method _run_autogen_v4, updated initialization and error handling. |
| src/praisonai/praisonai/auto.py | Added imports and availability check for AutoGen v0.4, updated constructor logic, and improved warning messages. |
| src/praisonai/pyproject.toml | Added new optional dependency group autogen-v4 with relevant v0.4 packages and updated extras. |

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant AgentsGenerator
    participant AutoGenV2
    participant AutoGenV4

    User->>AgentsGenerator: generate_crew_and_kickoff(config, topic, tools)
    AgentsGenerator->>AgentsGenerator: Check AUTOGEN_VERSION env var
    alt AUTOGEN_VERSION == "v0.4" or v0.4 preferred
        AgentsGenerator->>AutoGenV4: _run_autogen_v4(...)
        AutoGenV4->>AutoGenV4: Async agent setup and execution
        AutoGenV4-->>AgentsGenerator: Result
    else AUTOGEN_VERSION == "v0.2" or fallback
        AgentsGenerator->>AutoGenV2: _run_autogen(...)
        AutoGenV2-->>AgentsGenerator: Result
    end
    AgentsGenerator-->>User: Final output

Assessment against linked issues

Objective Addressed Explanation
Migrate codebase to support AutoGen v0.4, including async architecture, agent creation, tool integration, and execution (#935)
Allow dynamic selection between v0.2 and v0.4 via environment variable (#935)
Update dependency management and installation instructions for v0.4 (#935)
Apply main migration and code updates in src/praisonai/praisonai code path (#935)

Suggested labels

Review effort 3/5

Poem

In the warren of code, two paths now appear,
AutoGen’s old and new, both are welcome here!
Async bunnies hop with v0.4’s delight,
While v0.2 still scurries, ready for the fight.
Tools in their paws, dependencies in line—
This rabbit’s migration is simply divine! 🐇✨


@MervinPraison
Owner Author

@claude review this pull request and do a detailed analysis and fix this if the existing code doesn't have the solution implemented. Making sure it has backward compatibility, no existing features removed. After making those changes, again review the applied changes. Use @web to search if you dont know any information or to find the latest documentation or to find the latest version. Run the code if you think you need to run it to test it. Minimal code change to start with if required any changes.

@github-actions
Contributor

github-actions bot commented Jul 15, 2025

Claude finished @MervinPraison's task —— View job


PR Review and Analysis - Critical Fix Applied ✅

I've completed my comprehensive analysis of PR #936 for AutoGen v0.4 support with backward compatibility and applied a critical fix.

Final Status: All Tasks Completed

  • Read repository guidelines (CLAUDE.md) - No file found
  • Analyze changed files for implementation completeness
  • Verify backward compatibility is maintained
  • Check dependency structure and version detection logic
  • Test the implementation if possible
  • Provide detailed review feedback
  • Apply minimal fixes if issues are found
  • Re-review any applied changes
  • Commit and push changes - Requires user action (see below)

✅ Issues Found and Fixed

🔴 Critical Issue Fixed - Version Fallback Logic

Location: agents_generator.py:348

Problem: The fallback logic was incorrect when both AutoGen v0.2 and v0.4 were available:

# BEFORE (incorrect):
use_v4 = AUTOGEN_V4_AVAILABLE and not AUTOGEN_AVAILABLE

Fix Applied:

# AFTER (correct):
use_v4 = AUTOGEN_V4_AVAILABLE

This ensures that v0.4 is preferred when available, aligning with the PR's intent and the "auto" mode behavior.

✅ Minor Improvement - AgentOps Tag Consistency

Location: agents_generator.py:351

Change: Updated AgentOps tags for better version consistency:

# BEFORE:
version_tag = "autogen-v4" if use_v4 else "autogen-v2"

# AFTER:  
version_tag = "autogen-v0.4" if use_v4 else "autogen-v0.2"

📋 Comprehensive Review Results

✅ Excellent Implementation Aspects

  1. Backward Compatibility: Full compatibility maintained - existing v0.2 code will continue working unchanged
  2. Dependency Structure: Well-organized optional dependencies in pyproject.toml
  3. Import Organization: Proper availability checks prevent import errors
  4. Error Handling: Comprehensive exception handling in async implementation
  5. Environment Variable Support: Flexible AUTOGEN_VERSION control (auto/v0.2/v0.4)
  6. Documentation: Clear inline comments explaining version selection logic

✅ Technical Implementation Quality

  1. Async Implementation: Proper async/await pattern using asyncio.run()
  2. Resource Management: Model client properly closed in finally block
  3. Tool Integration: Simplified v0.4 tool pattern (single AssistantAgent vs dual-agent)
  4. Termination Conditions: Both text-based and max-message termination implemented
  5. Framework Detection: Robust availability checks for all components

✅ Code Organization

  1. Separation of Concerns: Clear separation between v0.2 (_run_autogen) and v0.4 (_run_autogen_v4) methods
  2. Configuration Handling: Consistent model client configuration
  3. Logging: Appropriate debug and info logging throughout

⚠️ Minor Considerations (Not blocking)

  1. Tool Method Assumption: V0.4 tool integration assumes tools have run() method - should be validated but follows PraisonAI tool patterns
  2. AgentOps Integration: Well implemented but could benefit from v0.4-specific features in future iterations

🎯 Final Assessment

Status: ✅ APPROVED with Critical Fix Applied

This PR successfully implements AutoGen v0.4 support while maintaining full backward compatibility. The implementation follows best practices for:

  • Version detection and fallback logic
  • Async/await patterns
  • Error handling
  • Resource management
  • Dependency management

The critical version fallback logic issue has been fixed, ensuring the intended v0.4 preference works correctly.

🚀 Next Steps

Files Modified:

  • src/praisonai/praisonai/agents_generator.py (critical fix + improvement)

User Action Required:
To complete the process, please commit the changes:

git add src/praisonai/praisonai/agents_generator.py
git commit -m "fix: correct AutoGen version fallback logic to prefer v0.4

- Fix incorrect fallback logic that would choose v0.2 over v0.4 when both available
- Improve AgentOps tag consistency for better version tracking

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>"
git push origin HEAD

Recommendation: ✅ Ready to merge after committing the applied fixes.


🔧 Changes Applied: Fixed critical version fallback logic and improved AgentOps tag consistency
📁 Files Modified: 1 file
🎯 Impact: Critical fix ensures proper v0.4 preference, maintaining PR intent

--- | Branch: claude/issue-935-20250715_170032

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @MervinPraison, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly upgrades the project's agent framework capabilities by integrating support for AutoGen v0.4, a major rewrite of the AutoGen library. It ensures seamless transition and continued functionality for users of the older v0.2 version, while providing flexible version control and leveraging the new asynchronous features of v0.4 for enhanced agent orchestration and tool handling.

Highlights

  • AutoGen v0.4 Support: Added comprehensive support for AutoGen v0.4, leveraging its new asynchronous and event-driven architecture for agent interactions.
  • Backward Compatibility: Ensured full backward compatibility with existing AutoGen v0.2 implementations, allowing users to continue using older setups.
  • Flexible Version Selection: Introduced an AUTOGEN_VERSION environment variable (auto/v0.2/v0.4) and intelligent fallback logic to automatically detect and prioritize AutoGen versions based on availability and user preference.
  • Simplified Tool Integration: Streamlined tool integration for AutoGen v0.4, moving from a dual-agent pattern to a single AssistantAgent for tool calling and execution.
  • Asynchronous Agent Execution: Implemented an async _run_autogen_v4() method utilizing RoundRobinGroupChat for parallel multi-agent execution, along with proper termination conditions and error handling.
  • New Installation Option: Defined a new optional dependency group autogen-v4 in pyproject.toml for installing AutoGen v0.4 components (autogen-agentchat, autogen-ext[openai], autogen-core).
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for AutoGen v0.4 while maintaining backward compatibility. The changes are well-structured, including dependency updates, version detection logic, and a new execution path for v0.4. The implementation of _run_autogen_v4 using an async approach is a key part of this update. I've identified a few areas for improvement, including making the version selection logic more explicit, enhancing the safety of the async execution, and addressing some hardcoded values. Overall, this is a great contribution.

Comment on lines +545 to +549
        try:
            return asyncio.run(run_autogen_v4_async())
        except Exception as e:
            self.logger.error(f"Error running AutoGen v0.4: {str(e)}")
            return f"### AutoGen v0.4 Error ###\n{str(e)}"

high

Using asyncio.run() can cause a RuntimeError if an asyncio event loop is already running. To make this more robust, check for a running event loop and handle it appropriately.

        try:
            try:
                loop = asyncio.get_running_loop()
            except RuntimeError:
                loop = None

            if loop and loop.is_running():
                future = asyncio.run_coroutine_threadsafe(run_autogen_v4_async(), loop)
                return future.result()
            else:
                return asyncio.run(run_autogen_v4_async())
        except Exception as e:
            self.logger.error(f"Error running AutoGen v0.4: {str(e)}")
            return f"### AutoGen v0.4 Error ###\n{str(e)}"

Comment on lines +338 to +348
            use_v4 = False
            if autogen_version == "v0.4" and AUTOGEN_V4_AVAILABLE:
                use_v4 = True
            elif autogen_version == "v0.2" and AUTOGEN_AVAILABLE:
                use_v4 = False
            elif autogen_version == "auto":
                # Default preference: use v0.4 if available, fallback to v0.2
                use_v4 = AUTOGEN_V4_AVAILABLE
            else:
                # Fallback to whatever is available
                use_v4 = AUTOGEN_V4_AVAILABLE and not AUTOGEN_AVAILABLE

medium

The logic for selecting the AutoGen version can be made more explicit and robust. Raise an error if a specific version is requested via AUTOGEN_VERSION but is not available. Log a warning and default to the 'auto' behavior if an invalid version string is provided.

            use_v4 = False
            if autogen_version == "v0.4":
                if not AUTOGEN_V4_AVAILABLE:
                    raise ImportError("AUTOGEN_VERSION is set to 'v0.4', but it is not installed. Please install with 'pip install praisonai[autogen-v4]'")
                use_v4 = True
            elif autogen_version == "v0.2":
                if not AUTOGEN_AVAILABLE:
                    raise ImportError("AUTOGEN_VERSION is set to 'v0.2', but it is not installed. Please install with 'pip install praisonai[autogen]'")
                use_v4 = False
            elif autogen_version == "auto":
                use_v4 = AUTOGEN_V4_AVAILABLE
            else:
                self.logger.warning(f"Invalid AUTOGEN_VERSION: '{autogen_version}'. Defaulting to 'auto' behavior.")
                use_v4 = AUTOGEN_V4_AVAILABLE

if AGENTOPS_AVAILABLE:
agentops.init(os.environ.get("AGENTOPS_API_KEY"), default_tags=["autogen"])
return self._run_autogen(config, topic, tools_dict)
version_tag = "autogen-v4" if use_v4 else "autogen-v2"

medium

The version tag for AutoGen v0.2 is set to autogen-v2. For consistency with the dependency name (pyautogen==0.2.29) and documentation, it would be clearer to use autogen-v0.2.

                version_tag = "autogen-v4" if use_v4 else "autogen-v0.2"


        async def run_autogen_v4_async():
            # Create model client for v0.4
            model_config = self.config_list[0] if self.config_list else {}

medium

The code accesses self.config_list[0] without first checking if self.config_list is empty. It's safer to explicitly check for an empty list and raise a more informative error.

            if not self.config_list:
                raise ValueError("LLM configuration `config_list` is missing.")
            model_config = self.config_list[0]


            # Create agents from config
            for role, details in config['roles'].items():
                agent_name = details['role'].format(topic=topic).replace("{topic}", topic)

medium

The .replace("{topic}", topic) call is redundant because .format(topic=topic) already replaces all occurrences of {topic}. Removing the extra call makes the code cleaner and slightly more efficient.

                agent_name = details['role'].format(topic=topic)

            group_chat = RoundRobinGroupChat(
                agents,
                termination_condition=termination_condition,
                max_turns=len(agents) * 3  # Allow multiple rounds

medium

The max_turns for the RoundRobinGroupChat is calculated using a magic number (len(agents) * 3). It would be better to make this configurable, for example, by adding a max_turns or a turns_multiplier setting in the agents.yaml configuration file. For now, defining it as a constant at the top of the method would be a small improvement.

# At the top of _run_autogen_v4
DEFAULT_TURNS_MULTIPLIER = 3
...
# In group_chat creation
group_chat = RoundRobinGroupChat(
    agents,
    termination_condition=termination_condition,
    max_turns=len(agents) * DEFAULT_TURNS_MULTIPLIER
)

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
src/praisonai/praisonai/agents_generator.py (1)

446-550: Solid async implementation for AutoGen v0.4

The implementation correctly uses the new v0.4 async APIs with proper error handling and resource cleanup. Consider these minor improvements:

  1. The loop control variables role (line 473) and task_name (line 498) are not used within their respective loop bodies.

Apply this diff to address the unused loop variables:

-            for role, details in config['roles'].items():
+            for _, details in config['roles'].items():
-                for task_name, task_details in details.get('tasks', {}).items():
+                for _, task_details in details.get('tasks', {}).items():
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b35d15f and 8d2e261.

📒 Files selected for processing (3)
  • src/praisonai/praisonai/agents_generator.py (5 hunks)
  • src/praisonai/praisonai/auto.py (3 hunks)
  • src/praisonai/pyproject.toml (3 hunks)
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
src/praisonai/pyproject.toml (1)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use conda environment activation (`conda activate praisonai-agents`) before running development or tests.
src/praisonai/praisonai/auto.py (2)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
src/praisonai/praisonai/agents_generator.py (2)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
🪛 Ruff (0.12.2)
src/praisonai/praisonai/auto.py

36-36: autogen_agentchat.agents.AssistantAgent imported but unused; consider using importlib.util.find_spec to test for availability

(F401)


37-37: autogen_ext.models.openai.OpenAIChatCompletionClient imported but unused; consider using importlib.util.find_spec to test for availability

(F401)

src/praisonai/praisonai/agents_generator.py

50-50: autogen_agentchat.messages.TextMessage imported but unused; consider using importlib.util.find_spec to test for availability

(F401)


51-51: autogen_core.CancellationToken imported but unused; consider using importlib.util.find_spec to test for availability

(F401)


473-473: Loop control variable role not used within loop body

Rename unused role to _role

(B007)


498-498: Loop control variable task_name not used within loop body

Rename unused task_name to _task_name

(B007)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Cursor BugBot
  • GitHub Check: test-core (3.11)
  • GitHub Check: quick-test
  • GitHub Check: Run tests and collect coverage
🔇 Additional comments (4)
src/praisonai/pyproject.toml (1)

95-101: LGTM! Well-structured dependency group for AutoGen v0.4

The new autogen-v4 optional dependency group is properly configured with the required packages for v0.4 support while maintaining the existing tool dependencies.

src/praisonai/praisonai/auto.py (1)

83-88: Clear and helpful error messages for multi-version support

The updated error handling correctly checks for either AutoGen version and provides specific installation instructions for both v0.2 and v0.4.
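
A rough sketch of the pattern being described, written as a standalone helper (the function name and parameters are illustrative assumptions, not the actual auto.py code):

def ensure_autogen_installed(autogen_v2_available: bool, autogen_v4_available: bool) -> None:
    """Raise a helpful error when neither AutoGen version is installed (illustrative only)."""
    if not (autogen_v2_available or autogen_v4_available):
        raise ImportError(
            'AutoGen is not installed. Install v0.2 with pip install "praisonai[autogen]" '
            'or v0.4 with pip install "praisonai[autogen-v4]".'
        )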

src/praisonai/praisonai/agents_generator.py (2)

334-349: Well-designed version selection logic with sensible defaults

The implementation correctly handles version selection based on the AUTOGEN_VERSION environment variable with appropriate fallback behavior. The "auto" mode's preference for v0.4 aligns with the PR's goal of adopting the newer version while maintaining compatibility.


479-485: Clean tool integration for v0.4

The simplified tool integration using callable run methods is much cleaner than the v0.2 dual-agent pattern. This aligns well with the PR's goal of leveraging v0.4's improved architecture.

@MervinPraison
Owner Author

@claude review if the changes made in the above branch are as per the requirements and make changes to the above branch if issues still exist.

@github-actions
Contributor

github-actions bot commented Jul 15, 2025

Claude finished @MervinPraison's task —— View job




@cursor cursor bot left a comment


Bug

The newly introduced _run_autogen_v4 method has three issues:

  1. AgentOps Session Leak: It fails to call agentops.end_session(), unlike other framework execution paths. This leaves AgentOps sessions open, causing inaccurate telemetry and resource leaks.
  2. Event Loop Conflict: It unconditionally calls asyncio.run() within a synchronous context. This raises a RuntimeError: asyncio.run() cannot be called from a running event loop when generate_crew_and_kickoff() is invoked from an existing event loop (e.g., in FastAPI, Jupyter, or other async environments), breaking the AutoGen v0.4 workflow. Library code should instead detect an active loop and use await or asyncio.create_task.
  3. Tool Incompatibility: It only adds tools if they possess a callable .run attribute. This silently drops plain function tools, which are valid and common in earlier versions, breaking existing configurations.

src/praisonai/praisonai/agents_generator.py#L445-L550

    def _run_autogen_v4(self, config, topic, tools_dict):
        """
        Run agents using the AutoGen v0.4 framework with async, event-driven architecture.

        Args:
            config (dict): Configuration dictionary
            topic (str): The topic to process
            tools_dict (dict): Dictionary of available tools

        Returns:
            str: Result of the agent interactions
        """
        import asyncio

        async def run_autogen_v4_async():
            # Create model client for v0.4
            model_config = self.config_list[0] if self.config_list else {}
            model_client = OpenAIChatCompletionClient(
                model=model_config.get('model', 'gpt-4o'),
                api_key=model_config.get('api_key', os.environ.get("OPENAI_API_KEY")),
                base_url=model_config.get('base_url', "https://api.openai.com/v1")
            )

            agents = []
            combined_tasks = []

            # Create agents from config
            for role, details in config['roles'].items():
                agent_name = details['role'].format(topic=topic).replace("{topic}", topic)
                backstory = details['backstory'].format(topic=topic)

                # Convert tools for v0.4 - simplified tool passing
                agent_tools = []
                for tool_name in details.get('tools', []):
                    if tool_name in tools_dict:
                        tool_instance = tools_dict[tool_name]
                        # For v0.4, we can pass the tool's run method directly if it's callable
                        if hasattr(tool_instance, 'run') and callable(tool_instance.run):
                            agent_tools.append(tool_instance.run)

                # Create v0.4 AssistantAgent
                assistant = AutoGenV4AssistantAgent(
                    name=agent_name,
                    system_message=backstory + ". Must reply with 'TERMINATE' when the task is complete.",
                    model_client=model_client,
                    tools=agent_tools,
                    reflect_on_tool_use=True
                )
                agents.append(assistant)

                # Collect all task descriptions for sequential execution
                for task_name, task_details in details.get('tasks', {}).items():
                    description_filled = task_details['description'].format(topic=topic)
                    combined_tasks.append(description_filled)

            if not agents:
                return "No agents created from configuration"

            # Create termination conditions
            text_termination = TextMentionTermination("TERMINATE")
            max_messages_termination = MaxMessageTermination(max_messages=20)
            termination_condition = text_termination | max_messages_termination

            # Create RoundRobinGroupChat for parallel/sequential execution
            group_chat = RoundRobinGroupChat(
                agents,
                termination_condition=termination_condition,
                max_turns=len(agents) * 3  # Allow multiple rounds
            )

            # Combine all tasks into a single task description
            task_description = f"Topic: {topic}\n\nTasks to complete:\n" + "\n".join(
                f"{i+1}. {task}" for i, task in enumerate(combined_tasks)
            )

            # Run the group chat
            try:
                result = await group_chat.run(task=task_description)

                # Extract the final message content
                if result.messages:
                    final_message = result.messages[-1]
                    if hasattr(final_message, 'content'):
                        return f"### AutoGen v0.4 Output ###\n{final_message.content}"
                    else:
                        return f"### AutoGen v0.4 Output ###\n{str(final_message)}"
                else:
                    return "### AutoGen v0.4 Output ###\nNo messages generated"
            except Exception as e:
                self.logger.error(f"Error in AutoGen v0.4 execution: {str(e)}")
                return f"### AutoGen v0.4 Error ###\n{str(e)}"
            finally:
                # Close the model client
                await model_client.close()

        # Run the async function
        try:
            return asyncio.run(run_autogen_v4_async())
        except Exception as e:
            self.logger.error(f"Error running AutoGen v0.4: {str(e)}")
            return f"### AutoGen v0.4 Error ###\n{str(e)}"
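
A minimal sketch of how these three points could be addressed, assuming the names from the quoted code (run_autogen_v4_async, AGENTOPS_AVAILABLE, agentops); this is an illustration of the suggested direction, not the committed fix:

import asyncio
import concurrent.futures

def run_coroutine_blocking(coro):
    """Run a coroutine from synchronous code without breaking an already-running loop."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No event loop on this thread: asyncio.run() is safe.
        return asyncio.run(coro)
    # A loop is already running (FastAPI, Jupyter, ...): execute the coroutine on a
    # worker thread that owns its own event loop instead of calling asyncio.run() here.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()

def as_v4_tool(tool_instance):
    """Accept object tools exposing a callable .run as well as plain function tools."""
    if callable(getattr(tool_instance, "run", None)):
        return tool_instance.run
    return tool_instance if callable(tool_instance) else None

# The call site in _run_autogen_v4 could then close the AgentOps session the same way
# the other framework paths do (AGENTOPS_AVAILABLE and agentops as in the quoted code):
#
#     try:
#         return run_coroutine_blocking(run_autogen_v4_async())
#     finally:
#         if AGENTOPS_AVAILABLE:
#             agentops.end_session("Success")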



Bug: AutoGen Version Selection Ignores Environment Variable

The AutoGen version selection logic fails to enforce the AUTOGEN_VERSION environment variable. If AUTOGEN_VERSION is explicitly set to 'v0.2' or 'v0.4' but the specified AutoGen version is not installed, the system silently falls back to the other available version instead of raising an ImportError. This violates the intended 'force' behavior and can lead to incompatible API usage.

src/praisonai/praisonai/agents_generator.py#L334-L349

            # Choose autogen version based on availability and environment preference
            # AUTOGEN_VERSION can be set to "v0.2" or "v0.4" to force a specific version
            autogen_version = os.environ.get("AUTOGEN_VERSION", "auto").lower()
            use_v4 = False
            if autogen_version == "v0.4" and AUTOGEN_V4_AVAILABLE:
                use_v4 = True
            elif autogen_version == "v0.2" and AUTOGEN_AVAILABLE:
                use_v4 = False
            elif autogen_version == "auto":
                # Default preference: use v0.4 if available, fallback to v0.2
                use_v4 = AUTOGEN_V4_AVAILABLE
            else:
                # Fallback to whatever is available
                use_v4 = AUTOGEN_V4_AVAILABLE and not AUTOGEN_AVAILABLE




@MervinPraison MervinPraison merged commit a4158b7 into main Jul 15, 2025
30 checks passed
@MervinPraison
Owner Author

@claude create test cases in src/praisonai/tests folder for the above implementation in a new branch and give me pr link

@github-actions
Contributor

github-actions bot commented Jul 16, 2025

Claude finished @MervinPraison's task —— View job | Create PR ➔


Creating Test Cases for AutoGen v0.4 Implementation ✅

I'll create comprehensive test cases for the AutoGen v0.4 implementation in a new branch. Here's my task breakdown:

Todo List:

  • Analyze existing test structure and patterns
  • Read AutoGen implementation to understand testing requirements
  • Create new branch for test implementation
  • Create test files for AutoGen v0.4 functionality
  • Implement version detection tests
  • Implement environment variable handling tests
  • Implement async execution tests for v0.4
  • Implement backward compatibility tests
  • Implement error handling and edge case tests
  • Implement dependency availability tests
  • Commit and push changes
  • Generate PR link

✅ Test Suite Implementation Complete

5 Comprehensive Test Files Created:

  1. test_autogen_v4_integration.py (468 lines)

    • Core v0.4 async execution testing
    • Agent creation and tool integration
    • Resource management and error handling
    • Model client configuration tests
  2. test_autogen_version_selection.py (373 lines)

    • Environment variable AUTOGEN_VERSION handling
    • Version preference logic (auto prefers v0.4)
    • Explicit version selection testing
    • AgentOps tagging verification
  3. test_autogen_v4_utils.py (323 lines)

    • Agent name sanitization testing
    • Topic formatting validation
    • Tool filtering for v0.4 patterns
    • Result message extraction tests
  4. test_autogen_backward_compatibility.py (458 lines)

    • Ensures existing v0.2 code continues working
    • Same config works with both versions
    • No breaking API changes
    • Smooth migration path testing
  5. test_autogen_v4_edge_cases.py (502 lines)

    • Empty/malformed configurations
    • Unicode and extreme input handling
    • Memory-intensive operations
    • Comprehensive error scenarios

Additional Files:

  • run_autogen_v4_tests.py - Test runner with category support
  • AUTOGEN_V4_TESTS_README.md - Comprehensive documentation

Coverage Areas:
✅ Version detection and environment variables
✅ Async execution patterns with proper mocking
✅ Tool integration and agent creation
✅ Error handling and resource management
✅ Backward compatibility verification
✅ Edge cases and boundary conditions
✅ Unicode and international character support
✅ Memory and performance considerations

🚀 Ready for Review

Total Lines of Code: 2,433 lines across 7 files
Test Methods: 50+ comprehensive test methods
Mock Strategy: No external dependencies required
Documentation: Complete with README and usage examples
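
To give a flavour of the mock-based approach, a version-selection check could look roughly like the snippet below; the helper mirrors the documented selection rules and is hypothetical, while the real tests exercise agents_generator.py directly:

def select_use_v4(env_value, v2_available, v4_available):
    """Hypothetical mirror of the documented selection rules, used here only for illustration."""
    env_value = (env_value or "auto").lower()
    if env_value == "v0.4":
        return v4_available
    if env_value == "v0.2" and v2_available:
        return False
    # "auto" (and anything unrecognised) prefers v0.4 when it is available
    return v4_available

def test_auto_prefers_v4_when_both_available():
    assert select_use_v4("auto", v2_available=True, v4_available=True) is True

def test_explicit_v02_stays_on_v02_when_installed():
    assert select_use_v4("v0.2", v2_available=True, v4_available=True) is False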


Branch: claude/autogen-v4-tests-20250716



Development

Successfully merging this pull request may close these issues.

Migrate autogen from v0.2 to v0.4

2 participants