
Catch up with dev #677

Merged

Conversation


@UnCLAS-Prommer UnCLAS-Prommer commented Apr 5, 2025

  • ✅ Accepted: bug fixes directly related to main, submitted to the dev branch
  • PRs that add new features must be discussed in an issue beforehand, otherwise they will not be merged

Please fill in the following

(Delete the space inside the brackets and replace it with a lowercase x)

    • The main branch must not be modified; please confirm that this PR does not target the main branch
    • I confirm that I have read the contribution guide
    • Type of this update: bug fix
    • Type of this update: new feature
    • This update has been tested
  1. Breaking changes in this update (if any):
  2. Content and purpose of this update, in brief:

Other information

  • Related issue: Close #
  • Screenshots/GIFs
  • Additional information:

Summary by Sourcery

Update the project to improve code quality, fix bugs, and enhance functionality across multiple components of the MaiCore system

Bug Fixes:

  • Fixed issues with PFC (Personalized Flow Conversation) reply generation
  • Improved error handling in various system components
  • Fixed emoji processing and typing time calculation

Enhancements:

  • Refactored JSON parsing utility functions
  • Improved logging and error reporting
  • Enhanced time zone handling
  • Updated message sending and processing logic

Documentation:

  • Added CONTRIBUTE.md with contribution guidelines
  • Updated README.md with branch information

Chores:

  • Updated version handling
  • Reorganized configuration and deployment files
  • Updated GitHub workflows and pull request templates

lmst2 and others added 30 commits March 31, 2025 21:10
Add timezone settings: a time zone can now be set in bot_config to change the bot's daily schedule, plus some small tweaks to the llm logger
- Add persistent mount for the adapters config files
- Change the MaiMBot data mount path so NapCat and NoneBot can share it
- Update the data volume mount path for the NapCat container
vol(docker-compose): adjust data volume mount paths and add config file persistence
This reverts commit 75cffda.
- Remove the maim_message clone step that only ran on the refactor branch
- Update branch tag configuration:  - the main branch builds the main and main-timestamp tags  - add build configs for the classical, dev, and knowledge branches
- Drop the special handling for the main-fix and refactor branches
- Remove the maim_message clone step that only ran on the refactor branch
- Update branch tag configuration:  - the main branch builds the main and main-timestamp tags  - add build configs for the classical, dev, and knowledge branches
- Drop the special handling for the main-fix and refactor branches
ci(docker): update Docker image build and push configuration
Cookie987 and others added 27 commits April 4, 2025 22:01
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
fix: adapt the Linux install script to the latest branch structure
scipy relies on libstdc++.so and cannot be used directly with venv in a Nix environment; adding it to buildInputs solves this issue.
Add a Docker Hub description update step to the GitHub Actions workflow, using the peter-evans/dockerhub-description action to publish the README.md content as the image description on Docker Hub. This keeps the Docker image page in sync with the project README and makes the documentation easier to maintain.
- Change the core container image tag from :latest to :main; the commented-out image tag is also changed from :latest to :main
- Change the core container image tag from :latest to :main; the commented-out image tag is also changed from :latest to :main
- Change the core container image tag from :latest to :main; the commented-out image tag is also changed from :latest to :main
hotfix: switch the Docker image to the main tag; latest was never pushed

sourcery-ai bot commented Apr 5, 2025


Reviewer's Guide by Sourcery

This pull request includes several key updates and refactors. It improves JSON parsing, adds support for GIF stickers, enhances message sending with REST and WebSocket protocols, and introduces asynchronous locking for conversation instances. Additionally, it includes a new check method in ChatObserver for efficient message detection and a TIME_ZONE config option for timezone customization.

Sequence diagram for analyzing goal

```mermaid
sequenceDiagram
    participant LLM
    participant GoalAnalyzer

    GoalAnalyzer->>LLM: Generate goal and reasoning (prompt)
    activate LLM
    LLM-->>GoalAnalyzer: JSON content (goal, reasoning)
    deactivate LLM
    GoalAnalyzer->>GoalAnalyzer: get_items_from_json(content, "goal", "reasoning")
    alt JSON parsing failed
        GoalAnalyzer->>GoalAnalyzer: Retry
    else JSON parsing successful
        GoalAnalyzer->>GoalAnalyzer: Validate goal and reasoning types
        alt Types invalid
            GoalAnalyzer->>GoalAnalyzer: Retry
        else Types valid
            GoalAnalyzer->>GoalAnalyzer: Return goal, method, reasoning
        end
    end
```

Sequence diagram for generating reply

```mermaid
sequenceDiagram
    participant ChatObserver
    participant LLM
    participant ReplyGenerator

    ReplyGenerator->>LLM: Generate reply (prompt)
    activate LLM
    LLM-->>ReplyGenerator: Content
    deactivate LLM
    ReplyGenerator->>ChatObserver: check()
    ChatObserver-->>ReplyGenerator: is_new
    alt is_new
        ReplyGenerator->>ReplyGenerator: Regenerate reply
    else not is_new
        ReplyGenerator->>ReplyGenerator: Return content
    end
```

Updated class diagram for Conversation

```mermaid
classDiagram
    class Conversation {
        - _instances: Dict[str, Conversation]
        - _instance_lock: asyncio.Lock
        - _init_events: Dict[str, asyncio.Event]
        - _initializing: Dict[str, bool]
        - stream_id: str
        - should_continue: bool
        - state: ConversationState
        - chat_observer: ChatObserver
        - reply_generator: ReplyGenerator
        + get_instance(stream_id: str) : Optional[Conversation]
        + remove_instance(stream_id: str)
        + __init__(stream_id: str)
        + start()
        + _conversation_loop()
        + _handle_action(action: str, reason: str)
        + _stop_conversation()
        + _send_timeout_message()
        + _convert_to_message(msg: Dict[str, Any]) : MessageRecv
        + _clear_knowledge_cache()
    }
    note for Conversation "Added asynchronous locking and initialization events for thread safety"
```

Updated class diagram for ChatObserver

```mermaid
classDiagram
    class ChatObserver {
        - stream_id: str
        - message_history: deque
        - last_message_time: Optional[float]
        - _update_event: asyncio.Event
        - _update_complete: asyncio.Event
        + check() : bool
        + get_new_message() : Tuple[List[Dict[str, Any]], List[Dict[str, Any]]]
        + new_message_after(time_point: float) : bool
        + _add_message_to_history(message: Dict[str, Any])
    }
    note for ChatObserver "Added check() and get_new_message() methods for efficient message detection"
```

Updated class diagram for BotConfig

```mermaid
classDiagram
    class BotConfig {
        + TIME_ZONE: str
    }
    note for BotConfig "Added TIME_ZONE config option for timezone customization"
```

File-Level Changes

Change | Details | Files
Refactors JSON parsing logic in plan and analyze_goal functions to use a simplified get_items_from_json function for extracting action and reason from LLM responses.
  • Replaces complex JSON parsing and extraction logic with get_items_from_json in plan function.
  • Replaces complex JSON parsing and extraction logic with get_items_from_json in analyze_goal function.
  • Adds default values and required types validation in get_items_from_json calls.
  • Removes redundant error handling and retries related to JSON parsing.
src/plugins/PFC/pfc.py
src/plugins/PFC/pfc_utils.py
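The helper itself is not shown in this PR page; the following is a minimal sketch of what a `get_items_from_json` utility matching the description might look like (the signature, return shape, and fallback behavior are assumptions, not the project's actual code):

```python
import json
import re
from typing import Any, Dict, Optional, Tuple

def get_items_from_json(content: str, *keys: str,
                        default: Any = None,
                        required_types: Optional[Dict[str, type]] = None) -> Tuple[bool, Dict[str, Any]]:
    """Extract the given keys from an LLM response that should contain JSON.

    Returns (success, values); missing keys fall back to `default`, and the
    call fails if a value does not match the type listed in `required_types`.
    """
    # LLM output often wraps JSON in markdown fences or extra prose,
    # so grab the outermost {...} block instead of parsing the raw string.
    match = re.search(r"\{.*\}", content, re.DOTALL)
    if not match:
        return False, {}
    try:
        data = json.loads(match.group(0))
    except json.JSONDecodeError:
        return False, {}
    values: Dict[str, Any] = {}
    for key in keys:
        value = data.get(key, default)
        if required_types and key in required_types and not isinstance(value, required_types[key]):
            return False, {}
        values[key] = value
    return True, values
```

Centralizing this removes the duplicated parse/validate/retry code from `plan` and `analyze_goal`: the callers only check the success flag and retry.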
Implements a new check method in ChatObserver to efficiently determine if new messages have arrived since the last observation.
  • Adds a check method to ChatObserver that queries the database for new messages.
  • Updates last_check_time when new messages are detected.
  • Removes the need to retrieve full message history when only checking for new messages.
src/plugins/PFC/chat_observer.py
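The implementation is not included in this page; a plausible sketch of such a `check` method, assuming a MongoDB-style `messages` collection (the collection and field names are assumptions):

```python
import time

class ChatObserver:
    def __init__(self, stream_id: str, db):
        self.stream_id = stream_id
        self.db = db  # assumed to expose a MongoDB-like `messages` collection
        self.last_check_time = time.time()

    def check(self) -> bool:
        """Return True if any message arrived after last_check_time,
        without retrieving the full message history."""
        new_count = self.db.messages.count_documents({
            "chat_stream_id": self.stream_id,
            "time": {"$gt": self.last_check_time},
        })
        if new_count > 0:
            self.last_check_time = time.time()
            return True
        return False
```

A count query like this is cheap compared with fetching and deserializing the whole history, which is the efficiency win the change description is pointing at.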
Adds logic to re-generate replies when new messages arrive during the reply generation process, ensuring the bot considers the latest context.
  • Adds a check for new messages using chat_observer.check() in the generate function.
  • Re-generates the reply if new messages are detected, using the latest chat history.
  • Ensures the bot considers the most up-to-date context when generating replies.
src/plugins/PFC/pfc.py
Introduces asynchronous locking and initialization events to manage concurrent access and initialization of Conversation instances.
  • Adds _instance_lock, _init_events, and _initializing to the Conversation class for thread-safe instance management.
  • Modifies get_instance to use a global lock and asynchronous events to handle concurrent initialization requests.
  • Implements remove_instance as an asynchronous method with locking to safely remove conversation instances.
  • Sets the initialization event in start to signal completion and release waiting tasks.
src/plugins/PFC/pfc.py
Adds support for GIF stickers in the image description functionality.
  • Adds a transform_gif method to convert GIF images to a horizontal strip of frames.
  • Modifies get_emoji_description to use transform_gif when processing GIF images.
  • Updates the prompt for GIF images to describe the content and emotions conveyed by the animation.
src/plugins/chat/utils_image.py
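As a rough illustration of the frame-strip idea, here is a sketch using Pillow; the method name matches the description above, but the parameters and frame-sampling strategy are assumptions:

```python
import io
from PIL import Image, ImageSequence

def transform_gif(gif_bytes: bytes, max_frames: int = 8) -> bytes:
    """Lay a sample of the GIF's frames side by side in one horizontal strip,
    so a single-image vision model can 'see' the animation."""
    gif = Image.open(io.BytesIO(gif_bytes))
    frames = [f.convert("RGB") for f in ImageSequence.Iterator(gif)]
    # Sample evenly spaced frames to keep the strip a manageable width.
    step = max(1, len(frames) // max_frames)
    sampled = frames[::step][:max_frames]
    w, h = sampled[0].size
    strip = Image.new("RGB", (w * len(sampled), h))
    for i, frame in enumerate(sampled):
        strip.paste(frame, (i * w, 0))
    out = io.BytesIO()
    strip.save(out, format="JPEG")
    return out.getvalue()
```

The strip can then be passed to the existing image-description prompt, with the prompt adjusted to describe the motion and emotion conveyed across the frames.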
Refactors the emoji management system to improve the registration, retrieval, and sending of emoji.
  • Combines the periodic check and register tasks into a single start_periodic_check_register method.
  • Adds a check to ensure there are new messages before generating a reply.
  • Fixes the emoji typing time to prevent emoji from getting stuck.
src/plugins/chat/emoji_manager.py
src/plugins/chat/message_sender.py
Updates the message sending mechanism to use REST and WebSocket protocols.
  • Adds a send_via_ws method to send messages via WebSocket.
  • Modifies send_message to attempt sending via REST and fall back to WebSocket on failure.
  • Adds error handling for REST and WebSocket message sending.
src/plugins/chat/message_sender.py
src/plugins/message/api.py
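The REST-first, WebSocket-fallback pattern described above can be sketched as follows, with the actual transports abstracted into injected coroutines since the real endpoints are not shown in this page:

```python
import asyncio
import logging
from typing import Any, Awaitable, Callable, Dict

logger = logging.getLogger(__name__)

async def send_message(
    payload: Dict[str, Any],
    send_via_rest: Callable[[Dict[str, Any]], Awaitable[None]],
    send_via_ws: Callable[[Dict[str, Any]], Awaitable[None]],
) -> bool:
    """Try the REST endpoint first; on any failure fall back to WebSocket."""
    try:
        await send_via_rest(payload)
        return True
    except Exception as e:  # any transport error triggers the fallback
        logger.warning("REST send failed (%s), falling back to WebSocket", e)
    try:
        await send_via_ws(payload)
        return True
    except Exception as e:
        logger.error("WebSocket send also failed: %s", e)
        return False
```

Keeping the transports injectable makes the fallback logic trivially testable without a running server.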
Adds a new MessageServer class that extends BaseMessageHandler to handle WebSocket and REST API message processing.
  • Adds a MessageServer class with WebSocket and REST API endpoints.
  • Implements token verification and management for WebSocket connections.
  • Adds methods for broadcasting messages to all clients or specific platforms.
  • Registers class-level and instance-level message handlers.
src/plugins/message/api.py
Adds a new get_new_message method to ChatObserver to retrieve new messages since the last observation.
  • Adds a get_new_message method to ChatObserver that retrieves new messages since the last observation.
  • Inserts the new messages into the history and returns both the new messages and the history.
src/plugins/PFC/chat_observer.py
Adds a new TIME_ZONE config option to allow users to set the timezone for the bot.
  • Adds a TIME_ZONE config option to BotConfig.
  • Updates the schedule generator to use the configured timezone.
  • Validates the timezone in the config file.
src/plugins/config/config.py
src/plugins/schedule/schedule_generator.py
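Validating a configured timezone can be done with the standard `zoneinfo` module; a sketch of the idea (the function names and the fallback zone are assumptions, not the project's actual code):

```python
from datetime import datetime
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def validate_time_zone(tz_name: str, default: str = "Asia/Shanghai") -> str:
    """Return tz_name if it is a valid IANA zone name, else fall back to default."""
    try:
        ZoneInfo(tz_name)
        return tz_name
    except (ZoneInfoNotFoundError, ValueError):
        return default

def now_in_bot_tz(tz_name: str) -> datetime:
    """Current time in the bot's configured timezone, for schedule generation."""
    return datetime.now(ZoneInfo(validate_time_zone(tz_name)))
```

With this, the schedule generator derives its notion of "morning" or "night" from the configured zone rather than the host machine's clock.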

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!
  • Generate a plan of action for an issue: Comment @sourcery-ai plan on
    an issue to generate a plan of action for it.

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@UnCLAS-Prommer UnCLAS-Prommer merged commit 3db20e7 into MaiM-with-u:new_knowledge Apr 5, 2025
2 of 3 checks passed

@sourcery-ai sourcery-ai bot left a comment


Hey @UnCLAS-Prommer - I've reviewed your changes - here's some feedback:

Overall Comments:

  • The introduction of get_items_from_json looks like a good way to reduce duplicated code, but it might be worth adding some unit tests for it.
  • The changes to the Dockerfile and run.sh look good, but make sure to test them thoroughly to ensure they work as expected.
Here's what I looked at during the review
  • 🟡 General issues: 2 issues found
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟡 Complexity: 1 issue found
  • 🟢 Documentation: all looks good

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

```python
# Use a global lock to ensure thread safety
async with cls._instance_lock:
    # If initialization is already in progress, wait for it to finish
    if stream_id in cls._initializing and cls._initializing[stream_id]:
```
suggestion: Avoid releasing and re-acquiring the lock manually within an async with block.

Instead of calling 'cls._instance_lock.release()' and then later 'await cls._instance_lock.acquire()', consider using an asyncio.Condition or another synchronization primitive to safely wait for the initialization to complete.

Suggested implementation:

```python
                # If already initializing, wait for initialization to complete
                if stream_id in cls._initializing and cls._initializing[stream_id]:
                    try:
                        await asyncio.wait_for(
                            cls._init_condition.wait_for(lambda: not cls._initializing.get(stream_id, False)),
                            timeout=5.0
                        )
                    except asyncio.TimeoutError:
                        logger.error(f"等待实例 {stream_id} 初始化超时")
                        return None
```

Ensure that a condition variable is available. For example, where the class attributes are set up, add (if not already present): `cls._init_condition = asyncio.Condition(cls._instance_lock)`

```python
    self.generated_reply  # pass the unsuitable reply in as previous_reply
)

while self.chat_observer.check():
```
suggestion (performance): Potential busy-wait loop detected.

Without any delay inside this loop, it may cause high CPU usage if new messages persistently trigger check() to return true. Adding a small sleep interval inside the loop could mitigate that risk.

Suggested implementation:

```python
import asyncio
# (existing imports)

            while self.chat_observer.check():
                if not is_suitable:
                    logger.warning(f"生成的回复不合适,原因: {reason}")
                await asyncio.sleep(0.1)
```

```python
        cls._instances[stream_id] = cls(stream_id)
        logger.info(f"创建新的对话实例: {stream_id}")
        return cls._instances[stream_id]

    async def get_instance(cls, stream_id: str) -> Optional['Conversation']:
```
issue (complexity): Consider refactoring the async instance management and reply-generation loops to use more idiomatic async patterns, such as splitting the work into separate lock blocks and using a retry loop with a maximum attempt count for reply generation, to simplify control flow and reduce manual lock manipulation.

Consider refactoring your async instance management and reply-generation loops to use more idiomatic async patterns. For example, in your updated `get_instance` method you manually release and reacquire the lock. Instead, split the work into separate lock blocks so you never call release inside an “async with” block. For instance:

```python
@classmethod
async def get_instance(cls, stream_id: str) -> Optional["Conversation"]:
    async with cls._instance_lock:
        if stream_id in cls._instances:
            return cls._instances[stream_id]
        if cls._initializing.get(stream_id):
            event = cls._init_events[stream_id]
        else:
            # Start initialization if not already in progress
            cls._initializing[stream_id] = True
            cls._instances[stream_id] = cls(stream_id)
            cls._init_events[stream_id] = asyncio.Event()
            logger.info(f"创建新的对话实例: {stream_id}")
            return cls._instances[stream_id]
    try:
        await asyncio.wait_for(event.wait(), timeout=5.0)
    except asyncio.TimeoutError:
        logger.error(f"等待实例 {stream_id} 初始化超时")
        return None

    async with cls._instance_lock:
        return cls._instances.get(stream_id)
```

Similarly, in your reply generation (_handle_action) logic, you could simplify the loop that repeatedly checks reply suitability. Instead of a nested while-loop that repeatedly regenerates the reply, consider a retry loop with a maximum attempt count that calls a helper function for reply generation. For example:

```python
async def generate_suitable_reply(self, goal, chat_history, knowledge_cache, previous_reply=None, max_retries=3):
    for attempt in range(max_retries):
        reply = await self.reply_generator.generate(
            goal, self.current_method,
            [self._convert_to_message(msg) for msg in chat_history],
            knowledge_cache,
            previous_reply
        )
        is_suitable, reason, need_replan = await self.reply_generator.check_reply(reply, goal)
        if is_suitable:
            return reply, False
        if need_replan:
            self.state = ConversationState.RETHINKING
            self.current_goal, self.current_method, self.goal_reasoning = await self.goal_analyzer.analyze_goal()
            return None, True
        previous_reply = reply  # pass the current reply as previous_reply on the next attempt
    return reply, False  # fall back to the last reply after exhausting retries
```

These changes reduce manual lock manipulation and simplify control flow in the retry logic while preserving functionality.
