Release Next-2.0.6: Stability & Reliability Fixes #66
Batch learning processed messages but never wrote them to the filtered_messages table. The WebUI filtered count (COUNT(*) on that table) therefore always stayed at 0. Now persist each filtered message after the filtering step.
When no LLM call stats exist, the fallback avgResponseTime was 2000 ms, yielding a response-speed score of 0. Use a neutral 50 instead so the radar chart does not mislead users into thinking performance is bad.
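The fix can be sketched as follows. The scoring curve and function name are illustrative; only the neutral-50 fallback mirrors the change described above:

```python
def response_speed_score(avg_response_time_ms, has_llm_stats):
    """Map average LLM response time to a 0-100 radar-chart score.

    Hypothetical curve -- the real formula lives in the metrics code.
    The key change: with no call stats, return a neutral 50 instead of
    scoring the old 2000 ms fallback, which always yielded 0.
    """
    if not has_llm_stats:
        return 50  # neutral: "no data yet", not "slow"
    # Illustrative linear curve: <= 500 ms scores 100, >= 2500 ms scores 0
    clamped = min(max(avg_response_time_ms, 500), 2500)
    return round(100 * (2500 - clamped) / 2000)
```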
When multiple provider configs are active simultaneously, get_default_persona_for_web() used next(iter(dict.values())) to pick an arbitrary UMO from the group_id_to_unified_origin mapping, causing the displayed persona to change on every WebUI refresh. Pass None to get_default_persona_v3() to always return the stable global default persona. Per-group persona lookup remains available via get_persona_for_group(group_id).
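A minimal before/after sketch, with a hypothetical stand-in for the real lookup function:

```python
def get_default_persona_for_web(group_id_to_unified_origin, get_default_persona_v3):
    """Return the persona shown on the WebUI dashboard.

    Sketch only; the real functions live in the persona service. Picking a
    concrete UMO via next(iter(...)) selects an arbitrary entry of the
    mapping, which can change as the mapping is rebuilt between refreshes.
    """
    # Before (unstable with multiple active provider configs):
    # umo = next(iter(group_id_to_unified_origin.values()))
    # return get_default_persona_v3(umo)

    # After: None selects the stable global default persona.
    return get_default_persona_v3(None)
```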
…persistence
- Fix PsychologicalStateComponent attribute error by mapping to the correct ORM columns (category/state_type instead of the non-existent component_name)
- Fix PsychologicalComponentRepository to use the composite_state_id FK instead of the mismatched state_id string column
- Fix PsychologicalHistoryRepository.add_history to use the correct ORM field names (old_state_type, new_state_type, change_reason, etc.)
- Fix BotMoodRepository refresh failure after async commit by wrapping session.refresh in try/except with debug logging
- Fix NoneType len() error in FrameworkLLMAdapter by normalizing the contexts parameter to an empty list before provider calls
- Fix WebUI jargon dropdown showing "undefined" by adding group_id and group_name fields to the get_jargon_groups API response
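The FrameworkLLMAdapter fix above amounts to a one-line normalization; the helper name here is illustrative (the actual fix normalizes the parameter inline before provider calls):

```python
def normalize_contexts(contexts):
    """Ensure downstream len(contexts) and iteration never see None.

    Illustrative helper mirroring the FrameworkLLMAdapter fix: a None
    contexts argument becomes an empty list before provider calls.
    """
    return [] if contexts is None else contexts
```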
Add unit tests for config, constants, exceptions, interfaces, patterns, cache_manager, json_utils, security_utils, guardrails_models, learning_quality_monitor, and tiered_learning_trigger. Update pytest.ini to include core, utils, and services in coverage reporting.
…ycle
- Add ssl=False to aiomysql.connect() and the engine connect_args to prevent struct.unpack failures caused by MySQL 8's default TLS handshake
- Wrap the MySQL connection in asyncio.wait_for (15 s timeout) for clearer error reporting on connection failures
- Remove the redundant session.close() in BaseFacade.get_session() and DomainRouter.get_session() to avoid double-close issues under StaticPool, which caused "Cannot operate on a closed database" errors
- Add an engine liveness guard in BaseFacade before yielding sessions
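The timeout wrapping can be sketched like this. The factory-style signature is an assumption made for testability; the real code wraps aiomysql.connect(..., ssl=False) directly:

```python
import asyncio

async def connect_with_timeout(connect_factory, timeout_s=15):
    """Surface hangs during the MySQL handshake as a clear TimeoutError.

    connect_factory is any zero-argument callable returning the connect
    coroutine, e.g. lambda: aiomysql.connect(host=..., ssl=False).
    ssl=False avoids the struct.unpack failure triggered by MySQL 8's
    default TLS handshake in this setup.
    """
    return await asyncio.wait_for(connect_factory(), timeout=timeout_s)
```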
LLM inference occasionally returns the meaning field as a dict or list instead of a plain string, causing sqlite3.ProgrammingError on UPDATE.
- JargonFacade.update_jargon: convert dict/list meaning to a JSON string
- JargonInferenceEngine.infer_meaning: normalize the meaning type from both inference passes before returning results
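A sketch of the normalization (helper name assumed; the real logic is inlined in the two methods listed above):

```python
import json

def normalize_meaning(meaning):
    """sqlite3 only binds str/int/float/bytes/None. LLM output sometimes
    arrives as a dict or list, which raises sqlite3.ProgrammingError on
    UPDATE, so serialize those shapes to a JSON string first."""
    if isinstance(meaning, (dict, list)):
        return json.dumps(meaning, ensure_ascii=False)
    return meaning
```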
Prevents AttributeError when ORM lazy-load fails or schema diverges from the expected PsychologicalStateComponent fields. Each component conversion is individually guarded with try/except and diagnostic logging.
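The per-component guard looks roughly like this (function names are hypothetical):

```python
import logging

logger = logging.getLogger(__name__)

def convert_components(orm_components, to_domain):
    """Convert ORM rows to domain objects, skipping any row whose
    conversion fails (lazy-load error, schema drift) instead of letting
    one bad row abort the whole state read."""
    converted = []
    for comp in orm_components:
        try:
            converted.append(to_domain(comp))
        except Exception:
            logger.warning("Skipping unconvertible component %r", comp,
                           exc_info=True)
    return converted
```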
- Guard all blueprint routes against a None JSON body to prevent NoneType.get AttributeError on revert/batch operations
- Add persona_learning_ and style_ ID prefixes to reviewed-list entries to match the pending-list format used by the frontend
- Filter out records with None IDs from the reviewed list
- Wrap each data-source fetch in an independent try/except so a single source failure does not suppress the other sources
Prioritize DomainRouter.get_style_learning_statistics() facade method instead of directly instantiating StyleLearningReviewRepository, which avoids session lifecycle mismatch under concurrent dashboard requests.
…t access errors
StaticPool shares a single physical connection across all async sessions, causing transaction-state corruption and "Cannot operate on a closed database" errors when the WebUI dashboard issues concurrent API requests. NullPool creates an independent connection per session; combined with the WAL mode already in place, this allows safe concurrent reads while serializing writes. SQLite connection creation cost is negligible (a file-handle open), so there is no meaningful performance regression.
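A sketch of the pooling change, shown with the sync API so the snippet is self-contained; the plugin's async engine presumably receives the same poolclass argument via create_async_engine:

```python
import os
import tempfile

from sqlalchemy import create_engine, text
from sqlalchemy.pool import NullPool

# Illustrative path; the plugin points at its own database file.
db_path = os.path.join(tempfile.mkdtemp(), "learning.db")

# NullPool: every session gets its own physical SQLite connection,
# so no transaction state is shared between concurrent requests.
engine = create_engine(f"sqlite:///{db_path}", poolclass=NullPool)

with engine.connect() as conn:
    # WAL (already enabled in the plugin) lets concurrent readers
    # proceed while a single writer holds the lock.
    mode = conn.execute(text("PRAGMA journal_mode=WAL")).scalar()
```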
Add null checks for _command_handlers in all 6 command wrappers in main.py to prevent NoneType attribute errors when bootstrap() fails. Add service availability checks in affection_status (affection_manager) and set_mood (temporary_persona_updater, affection_manager) handlers to return user-friendly messages instead of crashing.
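The guard shape, as a minimal stand-in (class name is hypothetical, and the bare string replaces event.plain_result so the sketch is self-contained):

```python
class MainPluginSketch:
    """Minimal stand-in showing the null-guard pattern added to all six
    command wrappers in main.py."""

    def __init__(self, command_handlers):
        # None when bootstrap() failed before handlers were created.
        self._command_handlers = command_handlers

    async def learning_status_command(self, event):
        if self._command_handlers is None:
            # Real code yields event.plain_result("插件服务未就绪,请检查启动日志")
            yield "插件服务未就绪,请检查启动日志"
            return
        async for result in self._command_handlers.learning_status(event):
            yield result
```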
…gin unload
Explicit gc.collect() in both Server.stop() and WebUIManager.stop() forces Python to traverse the entire object graph of the unloading plugin (~200 modules, circular ORM references). Each call takes 80+ seconds at 100% CPU. Python's cyclic GC handles cleanup automatically on its own schedule; forcing it during shutdown is unnecessary.
StyleLearningReview ORM uses different field names (type, few_shots_content, description) than the frontend expects (update_type, proposed_content, confidence_score). The pending review path already transforms these fields, but the reviewed history path passed raw facade data through unchanged, causing empty content, "unknown" type, and 0% confidence display.
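The reviewed-history path now applies the same transform the pending path already used; roughly like this (the confidence source and default values are assumptions for illustration):

```python
def normalize_style_review(row):
    """Map StyleLearningReview ORM field names onto the frontend contract.

    Sketch only: the ORM side uses type / few_shots_content, while the
    frontend expects update_type / proposed_content / confidence_score.
    Defaults here are illustrative.
    """
    return {
        "update_type": row.get("type") or "unknown",
        "proposed_content": row.get("few_shots_content") or "",
        "description": row.get("description") or "",
        "confidence_score": row.get("confidence_score") or 0.0,
    }
```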
Reviewer's Guide
Stability-focused release that hardens DB/session handling (SQLite pooling, MySQL SSL, ORM mappings), fixes multiple WebUI persona/style review and metrics issues, adds defensive guards in commands and services, improves data integrity (jargon, batch learning), introduces a configurable WebUI host, and significantly expands unit test coverage for new core/quality components.

Sequence diagram for get_reviewed_persona_updates WebUI aggregation
sequenceDiagram
actor User
participant Browser as Browser
participant PersonaReviewsAPI as PersonaReviewsAPI
participant PersonaReviewService as PersonaReviewService
participant PersonaUpdater as PersonaUpdater
participant DatabaseManager as DatabaseManager
User->>Browser: Open reviewed persona updates page
Browser->>PersonaReviewsAPI: GET /persona/reviews/reviewed
PersonaReviewsAPI->>PersonaReviewService: get_reviewed_persona_updates(limit, offset, status_filter)
PersonaReviewService->>PersonaReviewService: reviewed_updates = []
alt persona_updater available
PersonaReviewService->>PersonaUpdater: get_reviewed_persona_updates(limit, offset, status_filter)
PersonaUpdater-->>PersonaReviewService: traditional_updates
alt traditional_updates not empty
PersonaReviewService->>PersonaReviewService: reviewed_updates.extend(traditional_updates)
else
PersonaReviewService->>PersonaReviewService: skip extend
end
else persona_updater missing or raises
PersonaReviewService->>PersonaReviewService: log warning and continue
end
alt database_manager available for persona learning
PersonaReviewService->>DatabaseManager: get_reviewed_persona_learning_updates(limit, offset, status_filter)
DatabaseManager-->>PersonaReviewService: persona_learning_updates
alt persona_learning_updates not empty
PersonaReviewService->>PersonaReviewService: prefix id with persona_learning_
PersonaReviewService->>PersonaReviewService: reviewed_updates.extend(persona_learning_updates)
else
PersonaReviewService->>PersonaReviewService: skip extend
end
else error fetching persona learning
PersonaReviewService->>PersonaReviewService: log warning and continue
end
alt database_manager available for style learning
PersonaReviewService->>DatabaseManager: get_reviewed_style_learning_updates(limit, offset, status_filter)
DatabaseManager-->>PersonaReviewService: style_updates
alt style_updates not empty
PersonaReviewService->>PersonaReviewService: normalize fields to unified frontend format
PersonaReviewService->>PersonaReviewService: prefix id with style_
PersonaReviewService->>PersonaReviewService: set review_source = style_learning
PersonaReviewService->>PersonaReviewService: reviewed_updates.extend(style_updates)
else
PersonaReviewService->>PersonaReviewService: skip extend
end
else error fetching style learning
PersonaReviewService->>PersonaReviewService: log warning and continue
end
PersonaReviewService->>PersonaReviewService: filter out entries with missing or None id
PersonaReviewService->>PersonaReviewService: sort reviewed_updates by review_time desc
PersonaReviewService-->>PersonaReviewsAPI: { success: True, data: reviewed_updates }
PersonaReviewsAPI-->>Browser: JSON response
Browser-->>User: Render reviewed persona updates list
Sequence diagram for admin commands with bootstrap null guards
sequenceDiagram
actor Admin
participant BotPlatform as BotPlatform
participant MainPlugin as MainPlugin
participant CommandHandlers as CommandHandlers
Admin->>BotPlatform: Send command (e.g. /learning_status)
BotPlatform->>MainPlugin: learning_status_command(event)
alt _command_handlers is None (bootstrap failed)
MainPlugin-->>BotPlatform: yield plain_result("插件服务未就绪,请检查启动日志")
BotPlatform-->>Admin: Show friendly error
else _command_handlers initialized
MainPlugin->>CommandHandlers: learning_status(event)
CommandHandlers-->>MainPlugin: async results
MainPlugin-->>BotPlatform: yield results
BotPlatform-->>Admin: Show command output
end
Entity relationship diagram for psychological state, history, filtered messages, and jargon
erDiagram
COMPOSITE_PSYCHOLOGICAL_STATE {
int id
string group_id
string user_id
string overall_state
float overall_intensity
int created_at
}
PSYCHOLOGICAL_STATE_COMPONENT {
int id
int composite_state_id
string group_id
string state_id
string category
string state_type
float value
float threshold
float start_time
string description
}
PSYCHOLOGICAL_STATE_HISTORY {
int id
string group_id
string state_id
string category
string old_state_type
string new_state_type
float old_value
float new_value
string change_reason
int timestamp
}
FILTERED_MESSAGE {
int id
int raw_message_id
string message
string sender_id
string group_id
int timestamp
float confidence
string filter_reason
}
JARGON {
int id
string content
string meaning_json
bool is_jargon
int count
}
COMPOSITE_PSYCHOLOGICAL_STATE ||--o{ PSYCHOLOGICAL_STATE_COMPONENT : has_components
COMPOSITE_PSYCHOLOGICAL_STATE ||--o{ PSYCHOLOGICAL_STATE_HISTORY : has_history
Class diagram for enhanced psychological state management and progressive learning
classDiagram
class PsychologicalRepository {
+async get_components(state_id int) List~PsychologicalStateComponent~
+async update_component(state_id int, component_name str, value float, threshold float, group_id str, state_id_str str) PsychologicalStateComponent
+async add_history(state_id int, from_state str, to_state str, trigger_event str, intensity_change float, group_id str, category str) PsychologicalStateHistory
+async find_many(composite_state_id int) List~PsychologicalStateComponent~
+async find_one(composite_state_id int, category str) PsychologicalStateComponent
+async create(...)
+async update(component PsychologicalStateComponent) PsychologicalStateComponent
}
class PsychologicalStateComponent {
+str category
+str state_type
+float value
+float threshold
+str description
+float start_time
}
class PsychologicalStateHistory {
+str group_id
+str state_id
+str category
+str old_state_type
+str new_state_type
+float old_value
+float new_value
+str change_reason
+int timestamp
}
class CompositePsychologicalState {
+str group_id
+str user_id
+str overall_state
+float overall_intensity
+List~PsychologicalStateComponent~ components
}
class EnhancedPsychologicalStateManager {
-logger
+async get_current_state(group_id str, user_id str) CompositePsychologicalState
+async update_state(group_id str, user_id str, dimension str, new_value float, trigger_event str)
}
class ProgressiveLearning {
-message_collector MessageCollector
+async _execute_learning_batch(group_id str, relearn_mode bool)
}
class MessageCollector {
+async add_filtered_message(data dict)
}
EnhancedPsychologicalStateManager --> PsychologicalRepository : uses
EnhancedPsychologicalStateManager o--> CompositePsychologicalState : builds
CompositePsychologicalState o--> PsychologicalStateComponent : aggregates
PsychologicalRepository --> PsychologicalStateComponent : persists
PsychologicalRepository --> PsychologicalStateHistory : persists
ProgressiveLearning --> MessageCollector : saves_filtered_messages
MessageCollector --> FILTERED_MESSAGE : writes
Hey - I've found 6 issues, and left some high level feedback:
- In `_ensure_mysql_database_exists`, the `aiomysql` import was moved outside the try/except, so an ImportError will now bypass your logging and error message; consider keeping the import inside the try block (or adding a dedicated handler) so missing-driver issues are surfaced consistently.
- In `progressive_learning._execute_learning_batch`, the `add_filtered_message` call is wrapped in `except Exception: pass`, which will silently hide persistent DB/logic errors; consider at least logging these exceptions at debug/warning level to aid diagnosis while still keeping the best-effort behavior.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `_ensure_mysql_database_exists`, the `aiomysql` import was moved outside the try/except so an ImportError will now bypass your logging and error message; consider keeping the import inside the try block (or adding a dedicated handler) so missing-driver issues are surfaced consistently.
- In `progressive_learning._execute_learning_batch`, the `add_filtered_message` call is wrapped in `except Exception: pass`, which will silently hide persistent DB/logic errors; consider at least logging these exceptions at debug/warning level to aid diagnosis while still keeping the best-effort behavior.
## Individual Comments
### Comment 1
<location path="services/database/facades/_base.py" line_range="35-41" />
<code_context>
+ 使用 ``async with session`` 确保事务完整性,
+ 不再在 finally 中重复调用 close 以避免连接状态异常。
+ """
+ if self.engine is None or self.engine.engine is None:
+ raise RuntimeError("数据库引擎未初始化或已关闭")
session = self.engine.get_session()
try:
async with session:
yield session
- finally:
- await session.close()
+ except Exception:
+ raise
</code_context>
<issue_to_address>
**suggestion:** The try/except that only re-raises adds complexity without benefit and can be removed.
Since `async with session:` already manages commit/rollback/close via the context manager, the surrounding `try/except Exception: raise` is redundant and hurts readability. You can simplify to:
```python
if self.engine is None or self.engine.engine is None:
raise RuntimeError("数据库引擎未初始化或已关闭")
session = self.engine.get_session()
async with session:
yield session
```
This preserves the engine check and lets exceptions propagate naturally.
Suggested implementation:
```python
@asynccontextmanager
async def get_session(self):
"""获取异步数据库会话(上下文管理器)
自动处理会话的创建、提交和回滚。
使用 ``async with session`` 确保事务完整性,
不再在 finally 中重复调用 close 以避免连接状态异常。
"""
if self.engine is None or self.engine.engine is None:
raise RuntimeError("数据库引擎未初始化或已关闭")
session = self.engine.get_session()
async with session:
yield session
```
If there are other places in this file or project that relied on the old behavior of catching and re-raising exceptions (for logging or custom handling), verify that they don't expect that behavior here. Since the original `except Exception: raise` was a no-op, this should be a safe, behavior-preserving simplification.
</issue_to_address>
### Comment 2
<location path="services/database/sqlalchemy_database_manager.py" line_range="225" />
<code_context>
@asynccontextmanager
async def get_session(self):
- """获取异步数据库会话(上下文管理器)"""
+ """获取异步数据库会话(上下文管理器)
+
+ 自动处理会话的创建、提交和回滚。
+ 使用 ``async with session`` 确保事务完整性,
+ 不再在 finally 中重复调用 close 以避免连接状态异常。
+ """
+ if self.engine is None or self.engine.engine is None:
+ raise RuntimeError("数据库引擎未初始化或已关闭")
session = self.engine.get_session()
try:
async with session:
yield session
- finally:
- await session.close()
+ except Exception:
+ raise
</code_context>
<issue_to_address>
**suggestion:** Similar to the facade, the try/except that only re-raises in get_session can be removed to simplify control flow.
The session context manager already manages the transaction lifecycle and will propagate exceptions to callers. You can rely on `async with session: yield session` without the redundant `try/except` wrapper.
</issue_to_address>
### Comment 3
<location path="services/core_learning/progressive_learning.py" line_range="255-264" />
<code_context>
+ saved_count = 0
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Swallowing all exceptions when saving filtered messages hides potential systemic issues; consider at least logging them.
Given the best-effort semantics, it’s fine not to fail the whole flow, but a bare `except Exception: pass` makes persistent issues (schema changes, connectivity problems) undetectable. Consider catching `Exception` but logging at debug/warning level so repeated failures in the filtered-message path are visible while keeping the operation non-blocking.
Suggested implementation:
```python
saved_count = 0
for msg in filtered_messages:
try:
await self.message_collector.add_filtered_message({
"raw_message_id": msg.get("id"),
"message": msg.get("message", ""),
"sender_id": msg.get("sender_id", ""),
"group_id": msg.get("group_id", group_id),
"timestamp": msg.get("timestamp", int(time.time())),
"confidence": msg.get("relevance_score", 1.0),
"filter_reason": msg.get("filter_reason", "batch_learning"),
```
```python
except Exception:
logger.warning(
"保存筛选消息失败 raw_message_id=%s,已跳过,不影响整体流程",
msg.get("id"),
exc_info=True,
)
```
</issue_to_address>
### Comment 4
<location path="tests/unit/test_config.py" line_range="34" />
<code_context>
+ assert config.enable_realtime_learning is False
+ assert config.enable_web_interface is True
+ assert config.web_interface_port == 7833
+ assert config.web_interface_host == "0.0.0.0"
+
+ def test_create_default_classmethod(self):
</code_context>
<issue_to_address>
**suggestion (testing):** Add a config-driven test for `web_interface_host` to cover the new option end-to-end
This suite only checks the default `web_interface_host`. Since it’s now configurable, please add a `create_from_config` test that sets `Self_Learning_Basic['web_interface_host']` (e.g. `'127.0.0.1'`) and asserts the value is honored. This will cover the config parsing path and help prevent regressions as the config schema changes.
Suggested implementation:
```python
def test_create_default_classmethod(self):
"""Test the create_default classmethod."""
config = PluginConfig.create_default()
assert isinstance(config, PluginConfig)
assert config.learning_interval_hours == 6
assert config.min_messages_for_learning == 50
assert config.max_messages_per_batch == 200
def test_create_from_config_web_interface_host(self):
"""Test that web_interface_host is loaded from config."""
# Arrange: build a config section with an overridden web_interface_host
config_parser = configparser.ConfigParser()
config_parser["Self_Learning_Basic"] = {
"web_interface_host": "127.0.0.1",
}
# Act: create PluginConfig from the config section
config = PluginConfig.create_from_config(
config_parser["Self_Learning_Basic"]
)
# Assert: the configured host is honored
assert config.web_interface_host == "127.0.0.1"
def test_default_learning_parameters(self):
"""Test default learning parameter values."""
```
1. Ensure `configparser` is imported at the top of `tests/unit/test_config.py`, e.g. `import configparser`, or reuse any existing config/fixture helper already present in this test module to construct the `"Self_Learning_Basic"` section instead of instantiating `ConfigParser` directly.
2. If `PluginConfig.create_from_config` expects a full `ConfigParser` rather than a section mapping, adjust the call accordingly, e.g. pass the entire `config_parser` or use the existing calling convention already used elsewhere in the test file.
</issue_to_address>
### Comment 5
<location path="tests/unit/test_interfaces.py" line_range="244" />
<code_context>
+
+ def test_enum_count(self):
+ """Test the total number of analysis types."""
+ assert len(AnalysisType) == 5
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding unit tests for the new admin command null-guards to prove the user-facing behavior
These changes add null-guards to all 6 admin commands (e.g. `learning_status_command`, `start_learning_command`, `set_mood_command`) so they return a friendly "插件服务未就绪"-style message when `_command_handlers` is `None`, but this behavior isn’t tested. Please add a focused test module (e.g. `tests/unit/test_admin_commands_null_guard.py`) that:
- Constructs the relevant entry/handler with `_command_handlers = None`.
- Calls each admin command and asserts the first yielded result is the expected fallback message, rather than raising or touching the handler.
This will ensure the intended UX is preserved when services fail to bootstrap.
Suggested implementation:
```python
import inspect
from typing import Any, Callable
import pytest
EXPECTED_NULL_GUARD_MESSAGE = "插件服务未就绪"
try:
# NOTE: Adjust this import to match your real admin commands module/classes.
# The tests will be skipped entirely if this import fails, so it will not
# break your suite until you wire it up correctly.
from app.admin_commands import (
LearningStatusAdminEntry,
StartLearningAdminEntry,
SetMoodAdminEntry,
StopLearningAdminEntry,
ResetConversationAdminEntry,
DebugInfoAdminEntry,
)
except ImportError: # pragma: no cover
pytest.skip("Admin command implementations not available for null-guard tests.", allow_module_level=True)
async def _get_first_yielded(command_callable: Callable[..., Any]) -> Any:
"""Invoke a command and return the first yielded result, supporting sync/async generators."""
result = command_callable()
# Async generator: e.g. async def foo(...): yield ...
if inspect.isasyncgen(result):
return await result.__anext__()
# Sync generator: e.g. def foo(...): yield ...
if inspect.isgenerator(result):
return next(result)
# Fall back: function returns a plain value instead of a generator
return result
@pytest.mark.asyncio
async def test_learning_status_command_null_guard() -> None:
entry = LearningStatusAdminEntry()
# Force the null-guard path
entry._command_handlers = None # type: ignore[attr-defined]
first = await _get_first_yielded(entry.learning_status_command)
assert first == EXPECTED_NULL_GUARD_MESSAGE
@pytest.mark.asyncio
async def test_start_learning_command_null_guard() -> None:
entry = StartLearningAdminEntry()
entry._command_handlers = None # type: ignore[attr-defined]
first = await _get_first_yielded(entry.start_learning_command)
assert first == EXPECTED_NULL_GUARD_MESSAGE
@pytest.mark.asyncio
async def test_set_mood_command_null_guard() -> None:
entry = SetMoodAdminEntry()
entry._command_handlers = None # type: ignore[attr-defined]
first = await _get_first_yielded(entry.set_mood_command)
assert first == EXPECTED_NULL_GUARD_MESSAGE
@pytest.mark.asyncio
async def test_stop_learning_command_null_guard() -> None:
entry = StopLearningAdminEntry()
entry._command_handlers = None # type: ignore[attr-defined]
first = await _get_first_yielded(entry.stop_learning_command)
assert first == EXPECTED_NULL_GUARD_MESSAGE
@pytest.mark.asyncio
async def test_reset_conversation_command_null_guard() -> None:
entry = ResetConversationAdminEntry()
entry._command_handlers = None # type: ignore[attr-defined]
first = await _get_first_yielded(entry.reset_conversation_command)
assert first == EXPECTED_NULL_GUARD_MESSAGE
@pytest.mark.asyncio
async def test_debug_info_command_null_guard() -> None:
entry = DebugInfoAdminEntry()
entry._command_handlers = None # type: ignore[attr-defined]
first = await _get_first_yielded(entry.debug_info_command)
assert first == EXPECTED_NULL_GUARD_MESSAGE
```
To wire these tests into your actual codebase, you’ll need to:
1. Update the `from app.admin_commands import ...` statement to point at the real module and classes that expose the 6 admin commands with the `_command_handlers` attribute.
2. Adjust the class names (`LearningStatusAdminEntry`, `StartLearningAdminEntry`, etc.) to match your actual entry/handler types.
3. Ensure the method names (`learning_status_command`, `start_learning_command`, `set_mood_command`, `stop_learning_command`, `reset_conversation_command`, `debug_info_command`) match your implementations.
4. Confirm the expected null-guard message; if your implementation uses a slightly different string than `"插件服务未就绪"`, update `EXPECTED_NULL_GUARD_MESSAGE` accordingly.
5. If your commands are strictly synchronous (not async generators), you can drop `pytest.mark.asyncio` and simplify `_get_first_yielded` to only handle sync generators.
</issue_to_address>
### Comment 6
<location path="tests/unit/test_json_utils.py" line_range="243" />
<code_context>
+
+@pytest.mark.unit
+@pytest.mark.utils
+class TestSafeParseLlmJson:
+ """Test end-to-end safe JSON parsing."""
+
</code_context>
<issue_to_address>
**suggestion (testing):** Add WebUI API tests for persona review endpoints to cover the new request-body validation paths
The new `request.get_json()` handling in `webui/blueprints/persona_reviews.py` for `review_persona_update`, the `batch_*` routes, and `revert_persona_update` isn’t currently covered by tests. Please add async WebUI API tests (e.g. `tests/unit/test_persona_reviews_api.py`) using the test client to verify:
- `POST /review_persona_update/<id>` returns 400 with `{ "error": "Request body is required" }` when the JSON body is missing.
- `POST /batch_delete_persona_updates` and `POST /batch_review_persona_updates` show the same behavior for missing or malformed bodies.
- `POST /revert_persona_update/<id>` accepts an empty body and still uses the default reason.
This will validate the crash fix and request-body validation paths end-to-end.
Suggested implementation:
```python
import pytest
@pytest.mark.unit
@pytest.mark.asyncio
class TestPersonaReviewRequestBodyValidation:
async def test_review_persona_update_missing_body_returns_400(
self, webui_api_client
):
"""POST /review_persona_update/<id> returns 400 when JSON body is missing."""
response = await webui_api_client.post("/review_persona_update/123")
assert response.status_code == 400
data = await response.get_json()
assert data == {"error": "Request body is required"}
async def test_batch_delete_persona_updates_missing_body_returns_400(
self, webui_api_client
):
"""POST /batch_delete_persona_updates returns 400 when JSON body is missing."""
response = await webui_api_client.post("/batch_delete_persona_updates")
assert response.status_code == 400
data = await response.get_json()
assert data == {"error": "Request body is required"}
async def test_batch_review_persona_updates_missing_body_returns_400(
self, webui_api_client
):
"""POST /batch_review_persona_updates returns 400 when JSON body is missing."""
response = await webui_api_client.post("/batch_review_persona_updates")
assert response.status_code == 400
data = await response.get_json()
assert data == {"error": "Request body is required"}
async def test_batch_delete_persona_updates_malformed_body_returns_400(
self, webui_api_client
):
"""POST /batch_delete_persona_updates returns 400 for malformed JSON body."""
# Send a body that is not valid JSON or not a valid object
response = await webui_api_client.post(
"/batch_delete_persona_updates",
data="this is not json",
headers={"Content-Type": "application/json"},
)
assert response.status_code == 400
data = await response.get_json()
assert data == {"error": "Request body is required"}
async def test_batch_review_persona_updates_malformed_body_returns_400(
self, webui_api_client
):
"""POST /batch_review_persona_updates returns 400 for malformed JSON body."""
response = await webui_api_client.post(
"/batch_review_persona_updates",
data="this is not json",
headers={"Content-Type": "application/json"},
)
assert response.status_code == 400
data = await response.get_json()
assert data == {"error": "Request body is required"}
async def test_revert_persona_update_allows_empty_body_and_uses_default_reason(
self, webui_api_client
):
"""
POST /revert_persona_update/<id> accepts an empty body and still uses the
default reason on the server side.
"""
# No JSON body at all
response = await webui_api_client.post("/revert_persona_update/123")
# The exact status code and response payload depend on the implementation;
# we only assert that it does not fail request-body validation.
assert response.status_code != 400
data = await response.get_json()
# The handler should have used the default reason internally; if the API
# echoes it back, we can assert on that key. Adjust as needed:
# assert data.get("reason") == DEFAULT_REVERT_REASON
```
1. Replace the `webui_api_client` fixture name with the actual async WebUI test client fixture used in your test suite (for example, `async_client`, `client`, or similar). Ensure that this fixture returns an object with `post(...)`, `status_code`, and `get_json()` (or equivalent) async methods.
2. If your project does not use `pytest-asyncio` and instead wraps async tests differently, adjust the `@pytest.mark.asyncio` marker and method definitions accordingly (e.g., make tests sync and use a sync Flask test client).
3. Align endpoint paths (`/review_persona_update/<id>`, `/batch_delete_persona_updates`, `/batch_review_persona_updates`, `/revert_persona_update/<id>`) and the exact error payload (`{"error": "Request body is required"}`) with the actual routes and error messages in `webui/blueprints/persona_reviews.py`.
4. For `test_revert_persona_update_allows_empty_body_and_uses_default_reason`, if the API explicitly returns the default reason in the response, replace the commented assertion with a concrete check against the actual response schema (e.g., a `reason` or `message` field).
</issue_to_address>
| if self.engine is None or self.engine.engine is None: | ||
| raise RuntimeError("数据库引擎未初始化或已关闭") | ||
| session = self.engine.get_session() | ||
| try: | ||
| async with session: | ||
| yield session | ||
| finally: | ||
| await session.close() | ||
| except Exception: |
There was a problem hiding this comment.
suggestion: The try/except that only re-raises adds complexity without benefit and can be removed.
Since async with session: already manages commit/rollback/close via the context manager, the surrounding try/except Exception: raise is redundant and hurts readability. You can simplify to:
if self.engine is None or self.engine.engine is None:
raise RuntimeError("数据库引擎未初始化或已关闭")
session = self.engine.get_session()
async with session:
yield sessionThis preserves the engine check and lets exceptions propagate naturally.
Suggested implementation:
@asynccontextmanager
async def get_session(self):
"""获取异步数据库会话(上下文管理器)
自动处理会话的创建、提交和回滚。
使用 ``async with session`` 确保事务完整性,
不再在 finally 中重复调用 close 以避免连接状态异常。
"""
if self.engine is None or self.engine.engine is None:
raise RuntimeError("数据库引擎未初始化或已关闭")
session = self.engine.get_session()
async with session:
yield sessionIf there are other places in this file or project that relied on the old behavior of catching and re-raising exceptions (for logging or custom handling), verify that they don't expect that behavior here. Since the original except Exception: raise was a no-op, this should be a safe, behavior-preserving simplification.
| yield session | ||
| finally: | ||
| await session.close() | ||
| except Exception: |
There was a problem hiding this comment.
suggestion: Similar to the facade, the try/except that only re-raises in get_session can be removed to simplify control flow.
The session context manager already manages the transaction lifecycle and will propagate exceptions to callers. You can rely on async with session: yield session without the redundant try/except wrapper.
```
saved_count = 0
for msg in filtered_messages:
    try:
        await self.message_collector.add_filtered_message({
            "raw_message_id": msg.get("id"),
            "message": msg.get("message", ""),
            "sender_id": msg.get("sender_id", ""),
            "group_id": msg.get("group_id", group_id),
            "timestamp": msg.get("timestamp", int(time.time())),
            "confidence": msg.get("relevance_score", 1.0),
```
suggestion (bug_risk): Swallowing all exceptions when saving filtered messages hides potential systemic issues; consider at least logging them.
Given the best-effort semantics, it's fine not to fail the whole flow, but a bare `except Exception: pass` makes persistent issues (schema changes, connectivity problems) undetectable. Consider catching `Exception` but logging at debug/warning level so repeated failures in the filtered-message path are visible while keeping the operation non-blocking.
Suggested implementation:
```python
saved_count = 0
for msg in filtered_messages:
    try:
        await self.message_collector.add_filtered_message({
            "raw_message_id": msg.get("id"),
            "message": msg.get("message", ""),
            "sender_id": msg.get("sender_id", ""),
            "group_id": msg.get("group_id", group_id),
            "timestamp": msg.get("timestamp", int(time.time())),
            "confidence": msg.get("relevance_score", 1.0),
            "filter_reason": msg.get("filter_reason", "batch_learning"),
        })
        saved_count += 1
    except Exception:
        logger.warning(
            "保存筛选消息失败 raw_message_id=%s,已跳过,不影响整体流程",
            msg.get("id"),
            exc_info=True,
        )
```

```
assert config.enable_realtime_learning is False
assert config.enable_web_interface is True
assert config.web_interface_port == 7833
assert config.web_interface_host == "0.0.0.0"
```
suggestion (testing): Add a config-driven test for web_interface_host to cover the new option end-to-end
This suite only checks the default `web_interface_host`. Since it's now configurable, please add a `create_from_config` test that sets `Self_Learning_Basic['web_interface_host']` (e.g. `'127.0.0.1'`) and asserts the value is honored. This will cover the config parsing path and help prevent regressions as the config schema changes.
Suggested implementation:
```python
def test_create_default_classmethod(self):
    """Test the create_default classmethod."""
    config = PluginConfig.create_default()
    assert isinstance(config, PluginConfig)
    assert config.learning_interval_hours == 6
    assert config.min_messages_for_learning == 50
    assert config.max_messages_per_batch == 200

def test_create_from_config_web_interface_host(self):
    """Test that web_interface_host is loaded from config."""
    # Arrange: build a config section with an overridden web_interface_host
    config_parser = configparser.ConfigParser()
    config_parser["Self_Learning_Basic"] = {
        "web_interface_host": "127.0.0.1",
    }

    # Act: create PluginConfig from the config section
    config = PluginConfig.create_from_config(
        config_parser["Self_Learning_Basic"]
    )

    # Assert: the configured host is honored
    assert config.web_interface_host == "127.0.0.1"

def test_default_learning_parameters(self):
    """Test default learning parameter values."""
```

- Ensure `configparser` is imported at the top of `tests/unit/test_config.py`, e.g. `import configparser`, or reuse any existing config/fixture helper already present in this test module to construct the `"Self_Learning_Basic"` section instead of instantiating `ConfigParser` directly.
- If `PluginConfig.create_from_config` expects a full `ConfigParser` rather than a section mapping, adjust the call accordingly, e.g. pass the entire `config_parser` or use the existing calling convention already used elsewhere in the test file.
```python
def test_enum_count(self):
    """Test the total number of analysis types."""
    assert len(AnalysisType) == 5
```
suggestion (testing): Consider adding unit tests for the new admin command null-guards to prove the user-facing behavior
These changes add null-guards to all 6 admin commands (e.g. `learning_status_command`, `start_learning_command`, `set_mood_command`) so they return a friendly "插件服务未就绪"-style message when `_command_handlers` is None, but this behavior isn't tested. Please add a focused test module (e.g. `tests/unit/test_admin_commands_null_guard.py`) that:

- Constructs the relevant entry/handler with `_command_handlers = None`.
- Calls each admin command and asserts the first yielded result is the expected fallback message, rather than raising or touching the handler.
This will ensure the intended UX is preserved when services fail to bootstrap.
Suggested implementation:
```python
import inspect
from typing import Any, Callable

import pytest

EXPECTED_NULL_GUARD_MESSAGE = "插件服务未就绪"

try:
    # NOTE: Adjust this import to match your real admin commands module/classes.
    # The tests will be skipped entirely if this import fails, so it will not
    # break your suite until you wire it up correctly.
    from app.admin_commands import (
        LearningStatusAdminEntry,
        StartLearningAdminEntry,
        SetMoodAdminEntry,
        StopLearningAdminEntry,
        ResetConversationAdminEntry,
        DebugInfoAdminEntry,
    )
except ImportError:  # pragma: no cover
    pytest.skip(
        "Admin command implementations not available for null-guard tests.",
        allow_module_level=True,
    )


async def _get_first_yielded(command_callable: Callable[..., Any]) -> Any:
    """Invoke a command and return the first yielded result, supporting sync/async generators."""
    result = command_callable()
    # Async generator: e.g. async def foo(...): yield ...
    if inspect.isasyncgen(result):
        return await result.__anext__()
    # Sync generator: e.g. def foo(...): yield ...
    if inspect.isgenerator(result):
        return next(result)
    # Fall back: function returns a plain value instead of a generator
    return result


@pytest.mark.asyncio
async def test_learning_status_command_null_guard() -> None:
    entry = LearningStatusAdminEntry()
    # Force the null-guard path
    entry._command_handlers = None  # type: ignore[attr-defined]
    first = await _get_first_yielded(entry.learning_status_command)
    assert first == EXPECTED_NULL_GUARD_MESSAGE


@pytest.mark.asyncio
async def test_start_learning_command_null_guard() -> None:
    entry = StartLearningAdminEntry()
    entry._command_handlers = None  # type: ignore[attr-defined]
    first = await _get_first_yielded(entry.start_learning_command)
    assert first == EXPECTED_NULL_GUARD_MESSAGE


@pytest.mark.asyncio
async def test_set_mood_command_null_guard() -> None:
    entry = SetMoodAdminEntry()
    entry._command_handlers = None  # type: ignore[attr-defined]
    first = await _get_first_yielded(entry.set_mood_command)
    assert first == EXPECTED_NULL_GUARD_MESSAGE


@pytest.mark.asyncio
async def test_stop_learning_command_null_guard() -> None:
    entry = StopLearningAdminEntry()
    entry._command_handlers = None  # type: ignore[attr-defined]
    first = await _get_first_yielded(entry.stop_learning_command)
    assert first == EXPECTED_NULL_GUARD_MESSAGE


@pytest.mark.asyncio
async def test_reset_conversation_command_null_guard() -> None:
    entry = ResetConversationAdminEntry()
    entry._command_handlers = None  # type: ignore[attr-defined]
    first = await _get_first_yielded(entry.reset_conversation_command)
    assert first == EXPECTED_NULL_GUARD_MESSAGE


@pytest.mark.asyncio
async def test_debug_info_command_null_guard() -> None:
    entry = DebugInfoAdminEntry()
    entry._command_handlers = None  # type: ignore[attr-defined]
    first = await _get_first_yielded(entry.debug_info_command)
    assert first == EXPECTED_NULL_GUARD_MESSAGE
```

To wire these tests into your actual codebase, you'll need to:

- Update the `from app.admin_commands import ...` statement to point at the real module and classes that expose the 6 admin commands with the `_command_handlers` attribute.
- Adjust the class names (`LearningStatusAdminEntry`, `StartLearningAdminEntry`, etc.) to match your actual entry/handler types.
- Ensure the method names (`learning_status_command`, `start_learning_command`, `set_mood_command`, `stop_learning_command`, `reset_conversation_command`, `debug_info_command`) match your implementations.
- Confirm the expected null-guard message; if your implementation uses a slightly different string than `"插件服务未就绪"`, update `EXPECTED_NULL_GUARD_MESSAGE` accordingly.
- If your commands are strictly synchronous (not async generators), you can drop `pytest.mark.asyncio` and simplify `_get_first_yielded` to only handle sync generators.
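For reference, the null-guard behavior these tests would pin down can be sketched in a few lines; this is a hedged illustration — the class name and fallback text are stand-ins, not the plugin's real implementation:

```python
# Hedged sketch of the admin-command null guard; the class name and the
# fallback message are illustrative stand-ins, not the plugin's real code.
class AdminEntry:
    def __init__(self, command_handlers=None):
        # Stays None when bootstrap() fails before handlers are wired up
        self._command_handlers = command_handlers

    def learning_status_command(self):
        if self._command_handlers is None:
            # Friendly fallback instead of an AttributeError on None
            yield "插件服务未就绪,请稍后重试"
            return
        yield self._command_handlers.learning_status()

# With no handlers wired, the command degrades gracefully:
first = next(AdminEntry().learning_status_command())
```

The guard turns a bootstrap failure into a user-visible message rather than an unhandled exception in the command path.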
```
@pytest.mark.unit
@pytest.mark.utils
class TestSafeParseLlmJson:
```
suggestion (testing): Add WebUI API tests for persona review endpoints to cover the new request-body validation paths
The new `request.get_json()` handling in `webui/blueprints/persona_reviews.py` for `review_persona_update`, the `batch_*` routes, and `revert_persona_update` isn't currently covered by tests. Please add async WebUI API tests (e.g. `tests/unit/test_persona_reviews_api.py`) using the test client to verify:

- `POST /review_persona_update/<id>` returns 400 with `{ "error": "Request body is required" }` when the JSON body is missing.
- `POST /batch_delete_persona_updates` and `POST /batch_review_persona_updates` show the same behavior for missing or malformed bodies.
- `POST /revert_persona_update/<id>` accepts an empty body and still uses the default reason.
This will validate the crash fix and request-body validation paths end-to-end.
Suggested implementation:
```python
import pytest


@pytest.mark.unit
@pytest.mark.asyncio
class TestPersonaReviewRequestBodyValidation:

    async def test_review_persona_update_missing_body_returns_400(
        self, webui_api_client
    ):
        """POST /review_persona_update/<id> returns 400 when JSON body is missing."""
        response = await webui_api_client.post("/review_persona_update/123")
        assert response.status_code == 400
        data = await response.get_json()
        assert data == {"error": "Request body is required"}

    async def test_batch_delete_persona_updates_missing_body_returns_400(
        self, webui_api_client
    ):
        """POST /batch_delete_persona_updates returns 400 when JSON body is missing."""
        response = await webui_api_client.post("/batch_delete_persona_updates")
        assert response.status_code == 400
        data = await response.get_json()
        assert data == {"error": "Request body is required"}

    async def test_batch_review_persona_updates_missing_body_returns_400(
        self, webui_api_client
    ):
        """POST /batch_review_persona_updates returns 400 when JSON body is missing."""
        response = await webui_api_client.post("/batch_review_persona_updates")
        assert response.status_code == 400
        data = await response.get_json()
        assert data == {"error": "Request body is required"}

    async def test_batch_delete_persona_updates_malformed_body_returns_400(
        self, webui_api_client
    ):
        """POST /batch_delete_persona_updates returns 400 for malformed JSON body."""
        # Send a body that is not valid JSON or not a valid object
        response = await webui_api_client.post(
            "/batch_delete_persona_updates",
            data="this is not json",
            headers={"Content-Type": "application/json"},
        )
        assert response.status_code == 400
        data = await response.get_json()
        assert data == {"error": "Request body is required"}

    async def test_batch_review_persona_updates_malformed_body_returns_400(
        self, webui_api_client
    ):
        """POST /batch_review_persona_updates returns 400 for malformed JSON body."""
        response = await webui_api_client.post(
            "/batch_review_persona_updates",
            data="this is not json",
            headers={"Content-Type": "application/json"},
        )
        assert response.status_code == 400
        data = await response.get_json()
        assert data == {"error": "Request body is required"}

    async def test_revert_persona_update_allows_empty_body_and_uses_default_reason(
        self, webui_api_client
    ):
        """
        POST /revert_persona_update/<id> accepts an empty body and still uses the
        default reason on the server side.
        """
        # No JSON body at all
        response = await webui_api_client.post("/revert_persona_update/123")
        # The exact status code and response payload depend on the implementation;
        # we only assert that it does not fail request-body validation.
        assert response.status_code != 400
        data = await response.get_json()
        # The handler should have used the default reason internally; if the API
        # echoes it back, we can assert on that key. Adjust as needed:
        # assert data.get("reason") == DEFAULT_REVERT_REASON
```

- Replace the `webui_api_client` fixture name with the actual async WebUI test client fixture used in your test suite (for example, `async_client`, `client`, or similar). Ensure that this fixture returns an object with `post(...)`, `status_code`, and `get_json()` (or equivalent) async methods.
- If your project does not use `pytest-asyncio` and instead wraps async tests differently, adjust the `@pytest.mark.asyncio` marker and method definitions accordingly (e.g., make tests sync and use a sync Flask test client).
- Align endpoint paths (`/review_persona_update/<id>`, `/batch_delete_persona_updates`, `/batch_review_persona_updates`, `/revert_persona_update/<id>`) and the exact error payload (`{"error": "Request body is required"}`) with the actual routes and error messages in `webui/blueprints/persona_reviews.py`.
- For `test_revert_persona_update_allows_empty_body_and_uses_default_reason`, if the API explicitly returns the default reason in the response, replace the commented assertion with a concrete check against the actual response schema (e.g., a `reason` or `message` field).
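The request-body guard these tests exercise can be reduced to a plain function for illustration; this is a hedged stand-in that mimics a Flask/Quart-style handler, not the actual blueprint route:

```python
# Hedged stand-in for the persona-review request-body guard; `body` plays
# the role of request.get_json(silent=True) in a real Flask/Quart handler.
def review_persona_update(body):
    if not body:
        # Crash fix: reject a missing/malformed body instead of dereferencing None
        return {"error": "Request body is required"}, 400
    action = body.get("action", "approve")
    return {"status": "ok", "action": action}, 200


def revert_persona_update(body):
    # Empty body is allowed here; fall back to a default reason
    reason = (body or {}).get("reason", "manual revert")
    return {"status": "reverted", "reason": reason}, 200


missing = review_persona_update(None)
reverted = revert_persona_update(None)
```

The key asymmetry: review routes require a body, while the revert route tolerates an empty one and supplies a default reason.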
## Summary / 概述
This release focuses on stability, reliability, and data integrity improvements across the database layer, WebUI, command system, and plugin lifecycle.
本次发布专注于数据库层、WebUI、命令系统和插件生命周期的稳定性、可靠性和数据完整性改进。
## Changes / 变更内容

### Database / 数据库
- Replace the SQLite connection pool `StaticPool` with `NullPool` to eliminate transaction state corruption under concurrent WebUI requests / 将 SQLite 连接池从 `StaticPool` 替换为 `NullPool`,消除 WebUI 并发请求导致的事务状态污染
- Disable the MySQL 8 default SSL handshake to fix `ssl.SSLError` connection failures; harden the session lifecycle / 禁用 MySQL 8 默认 SSL 握手,修复连接失败;强化会话生命周期
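The MySQL connection-timeout hardening can be sketched generically; `connect_db` below is a stand-in coroutine, not aiomysql's real API:

```python
# Hedged sketch: wrapping a DB connect coroutine in asyncio.wait_for for
# clearer error reporting; connect_db is a stand-in, not aiomysql's API.
import asyncio


async def connect_with_timeout(connect_db, timeout=15.0):
    try:
        return await asyncio.wait_for(connect_db(), timeout=timeout)
    except asyncio.TimeoutError:
        # Surface a clear, actionable error instead of a bare timeout
        raise RuntimeError(f"数据库连接在 {timeout} 秒内未完成") from None


async def demo():
    async def fast_connect():
        return "connection"

    async def hung_connect():
        await asyncio.sleep(60)  # simulates a stalled TLS handshake

    ok = await connect_with_timeout(fast_connect)
    try:
        await connect_with_timeout(hung_connect, timeout=0.01)
        timed_out = False
    except RuntimeError:
        timed_out = True
    return ok, timed_out

result = asyncio.run(demo())
```

`asyncio.wait_for` cancels the hung connect attempt, so a stalled handshake surfaces as a clear error within the timeout instead of hanging startup.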
### WebUI

- Remove two `gc.collect()` calls in the shutdown path that each took 80+ seconds traversing ~200 module object graphs / 移除关停路径中两次 `gc.collect()` 调用,每次遍历约 200 个模块对象图耗时 80+ 秒
- Add a `web_interface_host` config option / 新增 WebUI 监听地址配置项

### Commands / 命令
- Add null guards to all 6 admin commands so they reply with a friendly "插件服务未就绪" message when `bootstrap()` fails / 全部 6 个管理命令添加空值守卫

### Data Integrity / 数据完整性

- Persist filtered messages after batch learning so the WebUI filtered count is no longer stuck at 0 / 批量学习后持久化筛选消息,修复 WebUI 筛选计数始终为 0 的问题
- Always return the stable global default persona when multiple provider configs are active / 多供应商配置同时启用时始终返回稳定的全局默认人格
### Lifecycle / 生命周期
- Guard `background_tasks` access during the shutdown sequence / 关停序列中防护后台任务访问

### Tests / 测试
## Test Plan / 测试计划
## Summary by Sourcery
Release version Next-2.0.6 with stability, data integrity, and WebUI reliability improvements across database, learning, persona review, and command handling components.
New Features:
Bug Fixes:
Enhancements:
Build:
Documentation:
Tests: