Catch up with dev #677
Conversation
Add a timezone setting: the timezone can be configured in bot_config to change the bot's activity schedule, plus some small tweaks to the LLM logger.
- Add a persistent mount for the adapters config file
- Change the MaiMBot data mount path so NapCat and NoneBot can share it
- Update the NapCat container's data volume mount path

vol(docker-compose): adjust data volume mount paths and add config file persistence
This reverts commit 75cffda.
Refactor to main 0.6.0!
- Remove the maim_message clone step that only ran on the refactor branch
- Update the branch tag configuration:
  - the main branch builds the main and main-timestamp tags
  - add build configurations for the classical, dev, and knowledge branches
  - remove the special handling for the main-fix and refactor branches
ci(docker): update the Docker image build and push configuration
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
fix: adapt the Linux install script to the latest branch structure
scipy relies on libstdc++.so and cannot be used directly with a venv in a Nix environment; adding it to buildInputs solves this issue.
Added a Docker Hub description update step to the GitHub Actions workflow, using the peter-evans/dockerhub-description action to publish the contents of README.md as the Docker Hub description. This keeps the Docker image page's description in sync with the project README and improves documentation maintainability.
- Change the core container's image version from :latest to :main
- Also change the commented-out image version from :latest to :main
flake: add scipy to dependencies
hotfix: switch the Docker image to the main tag; latest was never pushed
Fix exception handling and add a fallback
Reviewer's Guide by Sourcery

This pull request includes several key updates and refactors. It improves JSON parsing, adds support for GIF stickers, enhances message sending with REST and WebSocket protocols, and introduces asynchronous locking for conversation instances. Additionally, it includes new sequence and class diagrams.

Sequence diagram for analyzing goal

```mermaid
sequenceDiagram
    participant LLM
    participant GoalAnalyzer
    GoalAnalyzer->>LLM: Generate goal and reasoning (prompt)
    activate LLM
    LLM-->>GoalAnalyzer: JSON content (goal, reasoning)
    deactivate LLM
    GoalAnalyzer->>GoalAnalyzer: get_items_from_json(content, "goal", "reasoning")
    alt JSON parsing failed
        GoalAnalyzer->>GoalAnalyzer: Retry
    else JSON parsing successful
        GoalAnalyzer->>GoalAnalyzer: Validate goal and reasoning types
        alt Types invalid
            GoalAnalyzer->>GoalAnalyzer: Retry
        else Types valid
            GoalAnalyzer->>GoalAnalyzer: Return goal, method, reasoning
        end
    end
```
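The "JSON parsing failed → Retry" branch above hinges on a tolerant extraction helper. As a rough sketch only (the actual `get_items_from_json` in this PR may have a different signature and behavior), such a utility could look like:

```python
import json
from typing import Any, Dict, Optional

def get_items_from_json(content: str, *keys: str) -> Optional[Dict[str, Any]]:
    """Extract the named keys from a JSON string, tolerating a
    surrounding markdown code fence that an LLM may have added."""
    text = content.strip()
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):
            text = text[4:]
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return None  # caller retries, as in the diagram above
    if not all(k in data for k in keys):
        return None  # missing keys also count as a failed parse
    return {k: data[k] for k in keys}
```

Returning `None` on any failure keeps the retry loop in the caller simple: retry until the helper yields a dict or the attempt budget runs out.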
Sequence diagram for generating reply

```mermaid
sequenceDiagram
    participant ChatObserver
    participant LLM
    participant ReplyGenerator
    ReplyGenerator->>LLM: Generate reply (prompt)
    activate LLM
    LLM-->>ReplyGenerator: Content
    deactivate LLM
    ReplyGenerator->>ChatObserver: check()
    ChatObserver-->>ReplyGenerator: is_new
    alt is_new
        ReplyGenerator->>ReplyGenerator: Regenerate reply
    else not is_new
        ReplyGenerator->>ReplyGenerator: Return content
    end
```
Updated class diagram for Conversation

```mermaid
classDiagram
    class Conversation {
        - _instances: Dict[str, Conversation]
        - _instance_lock: asyncio.Lock
        - _init_events: Dict[str, asyncio.Event]
        - _initializing: Dict[str, bool]
        - stream_id: str
        - should_continue: bool
        - state: ConversationState
        - chat_observer: ChatObserver
        - reply_generator: ReplyGenerator
        + get_instance(stream_id: str) : Optional[Conversation]
        + remove_instance(stream_id: str)
        + __init__(stream_id: str)
        + start()
        + _conversation_loop()
        + _handle_action(action: str, reason: str)
        + _stop_conversation()
        + _send_timeout_message()
        + _convert_to_message(msg: Dict[str, Any]) : MessageRecv
        + _clear_knowledge_cache()
    }
    note for Conversation "Added asynchronous locking and initialization events for thread safety"
```
Updated class diagram for ChatObserver

```mermaid
classDiagram
    class ChatObserver {
        - stream_id: str
        - message_history: deque
        - last_message_time: Optional[float]
        - _update_event: asyncio.Event
        - _update_complete: asyncio.Event
        + check() : bool
        + get_new_message() : Tuple[List[Dict[str, Any]], List[Dict[str, Any]]]
        + new_message_after(time_point: float) : bool
        + _add_message_to_history(message: Dict[str, Any])
    }
    note for ChatObserver "Added check() and get_new_message() methods for efficient message detection"
```
Updated class diagram for BotConfig

```mermaid
classDiagram
    class BotConfig {
        + TIME_ZONE: str
    }
    note for BotConfig "Added TIME_ZONE config option for timezone customization"
```
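As a hedged illustration of what a `TIME_ZONE` option enables (only the config field name comes from the diagram; the function name, the sleep-hour defaults, and reading the value as an IANA zone name are invented for this sketch):

```python
from datetime import datetime
from typing import Optional
from zoneinfo import ZoneInfo

TIME_ZONE = "Asia/Shanghai"  # hypothetical value; read from bot_config in practice

def is_sleep_time(now: Optional[datetime] = None,
                  start_hour: int = 1, end_hour: int = 7) -> bool:
    """Return True when the bot should be 'asleep' in its configured zone.

    The start/end hours are invented defaults for this sketch.
    """
    current = now if now is not None else datetime.now(tz=ZoneInfo("UTC"))
    local = current.astimezone(ZoneInfo(TIME_ZONE))  # convert to the bot's zone
    return start_hour <= local.hour < end_hour
```

Converting to the configured zone before comparing hours is what makes the bot's schedule follow `TIME_ZONE` rather than the host machine's clock.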
File-Level Changes
Tips and commands

Interacting with Sourcery
Customizing Your Experience

Access your dashboard to:
Getting Help
Merged commit 3db20e7 into MaiM-with-u:new_knowledge
Hey @UnCLAS-Prommer - I've reviewed your changes - here's some feedback:
Overall Comments:
- The introduction of get_items_from_json looks like a good way to reduce duplicated code, but it might be worth adding some unit tests for it.
- The changes to the Dockerfile and run.sh look good, but make sure to test them thoroughly to ensure they work as expected.
Here's what I looked at during the review
- 🟡 General issues: 2 issues found
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟡 Complexity: 1 issue found
- 🟢 Documentation: all looks good
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
```python
# Use a global lock to ensure thread safety
async with cls._instance_lock:
    # If initialization is already in progress, wait for it to complete
    if stream_id in cls._initializing and cls._initializing[stream_id]:
```
suggestion: Avoid releasing and re-acquiring the lock manually within an async with block.
Instead of calling 'cls._instance_lock.release()' and then later 'await cls._instance_lock.acquire()', consider using an asyncio.Condition or another synchronization primitive to safely wait for the initialization to complete.
Suggested implementation:
```python
# If initialization is already in progress, wait for it to complete
if stream_id in cls._initializing and cls._initializing[stream_id]:
    try:
        await asyncio.wait_for(
            cls._init_condition.wait_for(lambda: not cls._initializing.get(stream_id, False)),
            timeout=5.0
        )
    except asyncio.TimeoutError:
        logger.error(f"Timed out waiting for instance {stream_id} to initialize")
        return None
```
Ensure that a condition variable is available. For example, in your class initializer you can add:

```python
cls._init_condition = asyncio.Condition(cls._instance_lock)
```

If not already present, please add this declaration where class attributes are set up.
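To make the reviewer's suggestion concrete, here is a self-contained toy example of the Condition-based wait; the `Registry` class and its method names are illustrative stand-ins, not the project's code:

```python
import asyncio
from typing import Dict

class Registry:
    """Toy example of waiting for initialization via asyncio.Condition."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()
        self._cond = asyncio.Condition(self._lock)  # condition shares the lock
        self._initializing: Dict[str, bool] = {}

    async def wait_ready(self, key: str, timeout: float = 5.0) -> bool:
        """Block until `key` is no longer marked as initializing."""
        async with self._cond:  # acquires the shared lock
            try:
                # wait_for releases the lock while waiting and
                # re-acquires it before checking the predicate
                await asyncio.wait_for(
                    self._cond.wait_for(
                        lambda: not self._initializing.get(key, False)
                    ),
                    timeout=timeout,
                )
            except asyncio.TimeoutError:
                return False
        return True

    async def finish_init(self, key: str) -> None:
        """Mark `key` as initialized and wake all waiters."""
        async with self._cond:
            self._initializing[key] = False
            self._cond.notify_all()
```

Because the condition shares the instance lock, waiters never have to release and re-acquire it manually, which is exactly the hazard the review comment flags.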
```python
    self.generated_reply  # pass the unsuitable reply as previous_reply
)

while self.chat_observer.check():
```
suggestion (performance): Potential busy-wait loop detected.
Without any delay inside this loop, it may cause high CPU usage if new messages persistently trigger check() to return true. Adding a small sleep interval inside the loop could mitigate that risk.
Suggested implementation:
```python
import asyncio
# (Existing imports)

while self.chat_observer.check():
    if not is_suitable:
        logger.warning(f"The generated reply is unsuitable; reason: {reason}")
    await asyncio.sleep(0.1)
```
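The same idea can be isolated into a small, runnable helper. `poll_until` and its parameters are hypothetical names for illustration (the `check` callable stands in for `chat_observer.check()`), not project API:

```python
import asyncio
from typing import Callable

async def poll_until(check: Callable[[], bool],
                     interval: float = 0.1,
                     timeout: float = 5.0) -> bool:
    """Poll `check()` until it returns False, sleeping between checks
    so the event loop can run other tasks (no busy-wait)."""
    deadline = asyncio.get_running_loop().time() + timeout
    while check():
        if asyncio.get_running_loop().time() >= deadline:
            return False  # gave up while the condition still held
        await asyncio.sleep(interval)  # yields control each iteration
    return True
```

The `await asyncio.sleep(interval)` is the key line: without it, a loop whose condition stays true monopolizes the event loop and starves every other coroutine.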
```python
        cls._instances[stream_id] = cls(stream_id)
        logger.info(f"Created new conversation instance: {stream_id}")
        return cls._instances[stream_id]

async def get_instance(cls, stream_id: str) -> Optional['Conversation']:
```
issue (complexity): Consider refactoring the async instance management and reply-generation loops to use more idiomatic async patterns, such as splitting the work into separate lock blocks and using a retry loop with a maximum attempt count for reply generation, to simplify control flow and reduce manual lock manipulation.
Consider refactoring your async instance management and reply-generation loops to use more idiomatic async patterns. For example, in your updated `get_instance` method you manually release and reacquire the lock. Instead, split the work into separate lock blocks so you never call release inside an “async with” block. For instance:
```python
@classmethod
async def get_instance(cls, stream_id: str) -> Optional["Conversation"]:
    async with cls._instance_lock:
        if stream_id in cls._instances:
            return cls._instances[stream_id]
        if cls._initializing.get(stream_id):
            event = cls._init_events[stream_id]
        else:
            # Start initialization if not already in progress
            cls._initializing[stream_id] = True
            cls._instances[stream_id] = cls(stream_id)
            cls._init_events[stream_id] = asyncio.Event()
            logger.info(f"Created new conversation instance: {stream_id}")
            return cls._instances[stream_id]
    try:
        await asyncio.wait_for(event.wait(), timeout=5.0)
    except asyncio.TimeoutError:
        logger.error(f"Timed out waiting for instance {stream_id} to initialize")
        return None
    async with cls._instance_lock:
        return cls._instances.get(stream_id)
```

Similarly, in your reply generation (_handle_action) logic, you could simplify the loop that repeatedly checks reply suitability. Instead of a nested while-loop that repeatedly regenerates the reply, consider a retry loop with a maximum attempt count that calls a helper function for reply generation. For example:

```python
async def generate_suitable_reply(self, goal, chat_history, knowledge_cache,
                                  previous_reply=None, max_retries=3):
    for attempt in range(max_retries):
        reply = await self.reply_generator.generate(
            self.current_goal, self.current_method,
            [self._convert_to_message(msg) for msg in chat_history],
            knowledge_cache,
            previous_reply
        )
        is_suitable, reason, need_replan = await self.reply_generator.check_reply(
            reply, self.current_goal
        )
        if is_suitable:
            return reply, False
        if need_replan:
            self.state = ConversationState.RETHINKING
            self.current_goal, self.current_method, self.goal_reasoning = (
                await self.goal_analyzer.analyze_goal()
            )
            return None, True
        previous_reply = reply  # pass current reply as previous_reply for next attempt
    return reply, False  # fallback reply after exhausting retries
```
These changes reduce manual lock manipulation and simplify control flow in the retry logic while preserving functionality.
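As a sanity check of the lock-and-event handshake described above, here is a stripped-down, runnable stand-in. `Conv` is a minimal substitute for Conversation, and the simulated init delay is invented; only the overall pattern (create under the lock, wait on an event outside it) mirrors the suggestion:

```python
import asyncio
from typing import Dict, Optional

class Conv:
    """Minimal stand-in for Conversation, just to exercise the pattern."""
    _instances: Dict[str, "Conv"] = {}
    _init_events: Dict[str, asyncio.Event] = {}
    _lock: Optional[asyncio.Lock] = None

    def __init__(self, stream_id: str):
        self.stream_id = stream_id

    @classmethod
    async def get_instance(cls, stream_id: str) -> Optional["Conv"]:
        if cls._lock is None:  # lazy init so the lock binds to the running loop
            cls._lock = asyncio.Lock()
        async with cls._lock:
            if stream_id in cls._instances:
                inst = cls._instances[stream_id]
                event = cls._init_events[stream_id]
            else:
                # First caller creates the instance and its "ready" event
                inst = cls._instances[stream_id] = cls(stream_id)
                event = cls._init_events[stream_id] = asyncio.Event()
                # Simulate asynchronous initialization finishing shortly after
                asyncio.get_running_loop().call_later(0.01, event.set)
        # Wait for readiness outside the lock, never releasing it manually
        try:
            await asyncio.wait_for(event.wait(), timeout=1.0)
        except asyncio.TimeoutError:
            return None
        return inst

async def demo() -> bool:
    # Two concurrent callers must receive the same, fully initialized instance
    a, b = await asyncio.gather(Conv.get_instance("s"), Conv.get_instance("s"))
    return a is not None and a is b
```

Both callers end up waiting on the same event, so neither can observe a half-initialized instance, and the lock is only ever held inside `async with` blocks.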
Please fill in the following
(remove the spaces inside the square brackets and replace them with a lowercase x)
Direct changes to the main branch are forbidden; please confirm that this submission does not target the main branch.
Other information
Summary by Sourcery
Update the project to improve code quality, fix bugs, and enhance functionality across multiple components of the MaiCore system
Bug Fixes:
Enhancements:
Documentation:
Chores: