
Release Next-2.0.6: Stability & Reliability Fixes#66

Merged
NickCharlie merged 16 commits into main from develop
Mar 2, 2026

Conversation

NickCharlie (Owner) commented Mar 2, 2026

Summary / 概述

This release focuses on stability, reliability, and data integrity improvements across the database layer, WebUI, command system, and plugin lifecycle.

本次发布专注于数据库层、WebUI、命令系统和插件生命周期的稳定性、可靠性和数据完整性改进。

Changes / 变更内容

Database / 数据库

  • SQLite concurrency: Replace StaticPool with NullPool to eliminate transaction state corruption under concurrent WebUI requests / 将 SQLite 连接池从 StaticPool 替换为 NullPool,消除 WebUI 并发请求导致的事务状态污染
  • MySQL 8 SSL: Disable default SSL handshake to resolve ssl.SSLError connection failures; harden session lifecycle / 禁用 MySQL 8 默认 SSL 握手,修复连接失败;强化会话生命周期
  • ORM field mappings: Correct psychological state, mood persistence, and component mapping field names / 修正心理状态、情绪持久化和组件映射的 ORM 字段名

WebUI

  • Plugin uninstall CPU spike: Remove two gc.collect() calls in shutdown path that each took 80+ seconds traversing ~200 module object graphs / 移除关停路径中两次 gc.collect() 调用,每次遍历约 200 个模块对象图耗时 80+ 秒
  • Persona review system: Fix revert crash, reviewed list empty fields, and style learning field mapping to unified frontend format / 修复审查撤回崩溃、已审查列表字段缺失、风格学习字段映射
  • Style statistics: Route through Facade instead of direct Repository to avoid session lifecycle mismatch / 风格统计改用 Facade 路由
  • Default persona: Use global default persona instead of random UMO selection / 使用全局默认人格替代随机 UMO
  • Response speed metric: Use neutral fallback when no LLM data available / 无 LLM 数据时使用中性回退值
  • Configurable listen address: Add web_interface_host config option / 新增 WebUI 监听地址配置项

Commands / 命令

  • Null guard: Add null checks for all 6 admin command handlers when bootstrap() fails / 全部 6 个管理命令添加空值守卫
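The null-guard pattern can be sketched as follows. This is a hypothetical illustration of the fix, not the plugin's actual code: each command wrapper checks that the handlers object produced by `bootstrap()` exists before delegating, and yields a friendly message otherwise (class and method names here are assumptions).

```python
import asyncio

# Hypothetical sketch of the null-guard pattern: each admin command wrapper
# checks the handlers object created by bootstrap() before delegating.
class MainPlugin:
    def __init__(self, handlers=None):
        # bootstrap() may have failed, leaving the handlers as None
        self._command_handlers = handlers

    async def learning_status_command(self, event):
        if self._command_handlers is None:
            # Bootstrap failed: yield a user-facing message instead of
            # crashing with an AttributeError on None.
            yield "Plugin services not ready; please check the startup logs"
            return
        async for result in self._command_handlers.learning_status(event):
            yield result

async def demo():
    plugin = MainPlugin(handlers=None)  # simulate a failed bootstrap
    return [msg async for msg in plugin.learning_status_command(event=None)]

print(asyncio.run(demo()))
```

The same guard is repeated in each of the six wrappers, so a failed bootstrap degrades to friendly messages across the whole command surface.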

Data Integrity / 数据完整性

  • Jargon serialization: Serialize dict/list meaning to JSON string before DB write / 黑话 meaning 字段序列化为 JSON
  • Batch learning: Persist filtered messages to DB in batch learning path / 批量学习路径中保存筛选后消息到数据库

Lifecycle / 生命周期

  • Shutdown safety: Guard background_tasks access during shutdown sequence / 关停序列中防护后台任务访问
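A minimal sketch of the shutdown guard, with hypothetical names: the `background_tasks` attribute may be unset or `None` by the time the shutdown sequence runs, so it is read defensively before iteration.

```python
# Minimal sketch (names hypothetical) of guarding background-task access
# during shutdown: the attribute may be missing or None when shutdown runs.
class Service:
    def __init__(self):
        self.background_tasks = None  # may never have been initialized

    def shutdown(self):
        tasks = getattr(self, "background_tasks", None) or []
        cancelled = 0
        for task in tasks:
            task.cancel()
            cancelled += 1
        return cancelled

print(Service().shutdown())  # safe even though background_tasks is None
```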

Tests / 测试

  • Add unit tests for core modules (config, constants, exceptions, interfaces, patterns, cache, utils) / 新增核心模块单元测试

Test Plan / 测试计划

  • Plugin install/uninstall without CPU spike
  • WebUI persona review list displays correct fields
  • All 6 admin commands return friendly message when services unavailable
  • SQLite concurrent dashboard requests handled without errors
  • Unit tests pass

Summary by Sourcery

Release version Next-2.0.6 with stability, data integrity, and WebUI reliability improvements across database, learning, persona review, and command handling components.

New Features:

  • Add configurable WebUI listen address via the web_interface_host option.
  • Introduce learning quality monitoring, tiered learning trigger, caching, JSON utilities, security utilities, and guardrail validation models with dedicated unit tests.

Bug Fixes:

  • Harden MySQL and SQLite database handling, including SSL disablement, stricter engine/session lifecycle checks, and ORM field mapping fixes for psychological state and mood data.
  • Fix WebUI persona review flows by validating request bodies, making traditional/persona/style review sources resilient to failures, normalizing style learning review fields, and filtering invalid records.
  • Resolve crashes and inconsistent behavior in psychological state management by aligning repository field names and defensively converting ORM components to dataclasses.
  • Prevent command crashes when core services or handlers are uninitialized by adding null guards and user-friendly messages for admin commands.
  • Improve LLM interaction stability by guarding against None contexts and making jargon meaning serialization robust to dict/list inputs.
  • Eliminate shutdown-related issues by removing expensive GC calls and making background services and task access safer.

Enhancements:

  • Record filtered batch-learning messages for later statistics, improve jargon group responses for WebUI, and adjust bot mood persistence to tolerate refresh issues.
  • Refine WebUI metrics so response speed uses a neutral fallback when no LLM data exists, and route style learning statistics through a facade when available.
  • Strengthen login and security ergonomics with password hashing, login attempt tracking, validation helpers, and password migration utilities.
  • Expand constants-based normalization and review source classification for persona updates to support legacy and new update types.
  • Tighten database facade session management and error handling semantics for safer async usage.

Build:

  • Broaden pytest coverage configuration to include core, config, constants, exceptions, utils, and services packages.

Documentation:

  • Update READMEs to advertise version Next-2.0.6 and add sponsorship information.
  • Document release notes for Next-2.0.6 in the changelog, covering key fixes, features, and testing notes.

Tests:

  • Add extensive unit tests for learning quality monitoring, tiered learning triggers, core patterns and interfaces, configuration, JSON and cache utilities, security utilities, guardrail models, constants, and custom exceptions, and broaden coverage settings in pytest configuration.

Batch learning processed messages but never wrote them to the
filtered_messages table. The WebUI filtered count (COUNT(*) on
that table) therefore always stayed at 0. Now persist each
filtered message after the filtering step.
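The fix can be sketched with a throwaway in-memory SQLite database. Column names follow the `FILTERED_MESSAGE` entity in this PR's ER diagram; the real plugin persists through its collector/repository layer rather than raw SQL, so treat this as an assumption-laden illustration.

```python
import sqlite3
import time

# Sketch of persisting filtered messages so the WebUI's COUNT(*) reflects
# progress. Schema follows the FILTERED_MESSAGE entity in this PR's ER diagram.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE filtered_messages (
    id INTEGER PRIMARY KEY, raw_message_id INTEGER, message TEXT,
    sender_id TEXT, group_id TEXT, timestamp INTEGER,
    confidence REAL, filter_reason TEXT)""")

def persist_filtered(conn, msgs, group_id):
    # Previously this step was missing, so COUNT(*) always returned 0.
    for m in msgs:
        conn.execute(
            "INSERT INTO filtered_messages (raw_message_id, message, sender_id,"
            " group_id, timestamp, confidence, filter_reason)"
            " VALUES (?, ?, ?, ?, ?, ?, ?)",
            (m.get("id"), m.get("message", ""), m.get("sender_id", ""),
             m.get("group_id", group_id), m.get("timestamp", int(time.time())),
             m.get("confidence", 1.0), m.get("filter_reason", "batch_learning")))
    conn.commit()

persist_filtered(conn, [{"id": 1, "message": "hi"}], group_id="g1")
count = conn.execute("SELECT COUNT(*) FROM filtered_messages").fetchone()[0]
print(count)
```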
When no LLM call stats exist the fallback avgResponseTime was 2000ms,
yielding a response speed score of 0. Use a neutral 50 instead so the
radar chart does not mislead users into thinking performance is bad.
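The neutral-fallback rule could look like the sketch below. The scoring curve is illustrative only; the dashboard's actual formula lives in the JS and is not reproduced here.

```python
# Sketch of the neutral-fallback rule: with no LLM call stats, return a
# neutral 50 instead of scoring a hardcoded 2000 ms response time as 0.
# The scoring curve is illustrative, not the dashboard's exact formula.
def response_speed_score(avg_ms=None):
    if avg_ms is None:            # no LLM data yet -> neutral radar value
        return 50
    # faster responses score higher, clamped to [0, 100]
    return max(0, min(100, int(100 - avg_ms / 20)))

print(response_speed_score())     # no data: neutral 50
print(response_speed_score(400))  # 400 ms scores 80 under this toy curve
```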
When multiple provider configs are active simultaneously,
get_default_persona_for_web() used next(iter(dict.values())) to pick
an arbitrary UMO from the group_id_to_unified_origin mapping, causing
the displayed persona to change on every WebUI refresh.

Pass None to get_default_persona_v3() to always return the stable
global default persona. Per-group persona lookup remains available
via get_persona_for_group(group_id).
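The instability and the fix can be illustrated as follows. The persona table and values here are hypothetical; only the `next(iter(...))` pick and the `None` key convention come from the description above.

```python
# Illustration: next(iter(dict.values())) picks whichever UMO happens to be
# first in the mapping, which varies with provider-config insertion order.
# Passing None (sketched here) selects a stable global default instead.
group_id_to_unified_origin = {"g1": "umo-a", "g2": "umo-b"}

def get_default_persona_v3(umo):
    # hypothetical lookup; None is the key for the global default persona
    personas = {None: "global-default", "umo-a": "persona-a", "umo-b": "persona-b"}
    return personas[umo]

# Old behavior: arbitrary pick, dependent on mapping contents and order
arbitrary = get_default_persona_v3(next(iter(group_id_to_unified_origin.values())))
# New behavior: always the stable global default
stable = get_default_persona_v3(None)
print(stable)
```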
…persistence

- Fix PsychologicalStateComponent attribute error by mapping to correct
  ORM columns (category/state_type instead of non-existent component_name)
- Fix PsychologicalComponentRepository to use composite_state_id FK
  instead of mismatched state_id string column
- Fix PsychologicalHistoryRepository.add_history to use correct ORM
  field names (old_state_type, new_state_type, change_reason, etc.)
- Fix BotMoodRepository refresh failure after async commit by wrapping
  session.refresh in try/except with debug logging
- Fix NoneType len() error in FrameworkLLMAdapter by normalizing
  contexts parameter to empty list before provider calls
- Fix WebUI jargon dropdown showing undefined by adding group_id and
  group_name fields to get_jargon_groups API response
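The contexts normalization in the fourth fix above reduces to a one-line guard before the provider call. The function below is a placeholder for the adapter's real call path.

```python
# Sketch of normalizing contexts=None to an empty list before a provider
# call, preventing len(None) failures downstream. The function stands in
# for the plugin's real adapter/provider call path.
def call_provider(prompt, contexts=None):
    contexts = contexts if contexts is not None else []
    # a provider typically inspects len(contexts) when building the request
    return {"prompt": prompt, "n_contexts": len(contexts)}

print(call_provider("hello"))           # contexts omitted: no crash
print(call_provider("hello", ["c1"]))   # normal path unchanged
```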
Add unit tests for config, constants, exceptions, interfaces, patterns,
cache_manager, json_utils, security_utils, guardrails_models,
learning_quality_monitor, and tiered_learning_trigger. Update pytest.ini
to include core, utils, and services in coverage reporting.
…ycle

- Add ssl=False to aiomysql.connect() and engine connect_args to
  prevent struct.unpack failures caused by MySQL 8 default TLS handshake
- Wrap MySQL connection with asyncio.wait_for (15s timeout) for clearer
  error reporting on connection failures
- Remove redundant session.close() in BaseFacade.get_session() and
  DomainRouter.get_session() to avoid double-close issues under
  StaticPool, which caused "Cannot operate on a closed database" errors
- Add engine liveness guard in BaseFacade before yielding sessions
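The `asyncio.wait_for` pattern from the second bullet can be sketched with a simulated connect coroutine (a short timeout stands in for the real 15 seconds; `fake_connect` replaces the actual `aiomysql.connect()`):

```python
import asyncio

# Sketch of the connect-timeout pattern: wrap the (here simulated) MySQL
# connect coroutine in asyncio.wait_for so a stalled handshake becomes a
# clear TimeoutError instead of an opaque low-level failure.
async def fake_connect():
    await asyncio.sleep(60)  # stands in for a stalled aiomysql.connect()

async def connect_with_timeout(timeout=0.05):  # 15s in the real code
    try:
        await asyncio.wait_for(fake_connect(), timeout=timeout)
        return "connected"
    except asyncio.TimeoutError:
        return "MySQL connection timed out; check host/port/SSL settings"

print(asyncio.run(connect_with_timeout()))
```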
LLM inference occasionally returns the meaning field as a dict or list
instead of a plain string, causing sqlite3.ProgrammingError on UPDATE.

- JargonFacade.update_jargon: convert dict/list meaning to JSON string
- JargonInferenceEngine.infer_meaning: normalize meaning type from both
  inference passes before returning results
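The normalization shared by both call sites boils down to a small helper, sketched here with an assumed name:

```python
import json

# Sketch of normalizing the LLM-returned `meaning` field: dict/list values
# are JSON-encoded to a string before the UPDATE so the SQLite driver never
# receives a non-scalar bind parameter.
def normalize_meaning(meaning):
    if isinstance(meaning, (dict, list)):
        return json.dumps(meaning, ensure_ascii=False)
    return meaning

print(normalize_meaning({"sense": "inside joke"}))  # JSON string
print(normalize_meaning("plain string"))            # passed through
```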
Prevents AttributeError when ORM lazy-load fails or schema diverges
from the expected PsychologicalStateComponent fields. Each component
conversion is individually guarded with try/except and diagnostic logging.
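A guarded conversion of that shape might look like this. Field names follow this PR's component model; the defaults and helper name are assumptions for illustration.

```python
from dataclasses import dataclass

# Sketch of defensively converting an ORM row to a dataclass via getattr
# with defaults, so lazy-load failures or schema drift degrade gracefully
# instead of raising AttributeError.
@dataclass
class Component:
    category: str
    state_type: str
    value: float

def to_component(orm_obj):
    try:
        return Component(
            category=getattr(orm_obj, "category", "unknown"),
            state_type=getattr(orm_obj, "state_type", "neutral"),
            value=float(getattr(orm_obj, "value", 0.0)),
        )
    except Exception:
        # the real code logs a diagnostic here and skips the component
        return None

class LegacyRow:  # simulates a row missing the newer state_type column
    category = "mood"
    value = 0.7

print(to_component(LegacyRow()))
```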
- Guard all blueprint routes against None JSON body to prevent
  NoneType.get AttributeError on revert/batch operations
- Add persona_learning_ and style_ ID prefixes to reviewed list entries
  to match the pending list format used by the frontend
- Filter out records with None IDs from the reviewed list
- Wrap each data source fetch in independent try/except so a single
  source failure does not suppress the other sources
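The body guard in the first bullet, stripped of web-framework details, is a small pattern; `get_json_or_empty` and the two route functions below are hypothetical stand-ins for the blueprint handlers.

```python
# Sketch of the blueprint guard: treat a missing/invalid JSON body as an
# empty dict for optional-body routes, and reject malformed batch requests
# with an explicit 400 instead of calling .get() on None.
def get_json_or_empty(body):
    return body if isinstance(body, dict) else {}

def revert_review(body):
    data = get_json_or_empty(body)          # body may legitimately be absent
    return {"reverted": True, "reason": data.get("reason", "")}

def batch_review(body):
    data = get_json_or_empty(body)
    if not data.get("ids"):
        return {"error": "ids required"}, 400  # explicit 400 for bad batches
    return {"processed": len(data["ids"])}, 200

print(revert_review(None))   # no body: no AttributeError
print(batch_review(None))    # malformed batch rejected cleanly
```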
Prioritize DomainRouter.get_style_learning_statistics() facade method
instead of directly instantiating StyleLearningReviewRepository, which
avoids session lifecycle mismatch under concurrent dashboard requests.
…t access errors

StaticPool shares a single physical connection across all async sessions,
causing transaction state corruption and "Cannot operate on a closed
database" errors when WebUI dashboard issues concurrent API requests.
NullPool creates an independent connection per session; combined with
WAL mode already in place, this allows safe concurrent reads while
serializing writes. SQLite connection creation cost is negligible
(file handle open), so there is no meaningful performance regression.
Add null checks for _command_handlers in all 6 command wrappers in
main.py to prevent NoneType attribute errors when bootstrap() fails.
Add service availability checks in affection_status (affection_manager)
and set_mood (temporary_persona_updater, affection_manager) handlers
to return user-friendly messages instead of crashing.
…gin unload

Explicit gc.collect() in both Server.stop() and WebUIManager.stop()
forces Python to traverse the entire object graph of the unloading
plugin (~200 modules, circular ORM references). Each call takes 80+
seconds of 100% CPU. Python's cyclic GC handles cleanup automatically
on its own schedule; forcing it during shutdown is unnecessary.
StyleLearningReview ORM uses different field names (type, few_shots_content,
description) than the frontend expects (update_type, proposed_content,
confidence_score). The pending review path already transforms these fields,
but the reviewed history path passed raw facade data through unchanged,
causing empty content, "unknown" type, and 0% confidence display.
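Applying the same transform on the reviewed path reduces to a mapping like the sketch below. The ORM-to-frontend field pairs come from the commit message above; the confidence fallback and helper name are assumptions.

```python
# Sketch of normalizing reviewed style-learning records to the unified
# frontend schema, mirroring what the pending-review path already did.
def normalize_style_review(raw):
    return {
        "update_type": raw.get("type", "unknown"),
        "proposed_content": raw.get("few_shots_content", ""),
        "confidence_score": raw.get("confidence", 0.0),
        "description": raw.get("description", ""),
    }

raw = {"type": "style", "few_shots_content": "example dialogue", "confidence": 0.9}
print(normalize_style_review(raw)["proposed_content"])
```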

sourcery-ai bot commented Mar 2, 2026

Reviewer's Guide

Stability-focused release that hardens DB/session handling (SQLite pooling, MySQL SSL, ORM mappings), fixes multiple WebUI persona/style review and metrics issues, adds defensive guards in commands and services, improves data integrity (jargon, batch learning), introduces configurable WebUI host, and significantly expands unit test coverage for new core/quality components.

Sequence diagram for get_reviewed_persona_updates WebUI aggregation

sequenceDiagram
    actor User
    participant Browser as Browser
    participant PersonaReviewsAPI as PersonaReviewsAPI
    participant PersonaReviewService as PersonaReviewService
    participant PersonaUpdater as PersonaUpdater
    participant DatabaseManager as DatabaseManager

    User->>Browser: Open reviewed persona updates page
    Browser->>PersonaReviewsAPI: GET /persona/reviews/reviewed
    PersonaReviewsAPI->>PersonaReviewService: get_reviewed_persona_updates(limit, offset, status_filter)

    PersonaReviewService->>PersonaReviewService: reviewed_updates = []

    alt persona_updater available
        PersonaReviewService->>PersonaUpdater: get_reviewed_persona_updates(limit, offset, status_filter)
        PersonaUpdater-->>PersonaReviewService: traditional_updates
        alt traditional_updates not empty
            PersonaReviewService->>PersonaReviewService: reviewed_updates.extend(traditional_updates)
        else
            PersonaReviewService->>PersonaReviewService: skip extend
        end
    else persona_updater missing or raises
        PersonaReviewService->>PersonaReviewService: log warning and continue
    end

    alt database_manager available for persona learning
        PersonaReviewService->>DatabaseManager: get_reviewed_persona_learning_updates(limit, offset, status_filter)
        DatabaseManager-->>PersonaReviewService: persona_learning_updates
        alt persona_learning_updates not empty
            PersonaReviewService->>PersonaReviewService: prefix id with persona_learning_
            PersonaReviewService->>PersonaReviewService: reviewed_updates.extend(persona_learning_updates)
        else
            PersonaReviewService->>PersonaReviewService: skip extend
        end
    else error fetching persona learning
        PersonaReviewService->>PersonaReviewService: log warning and continue
    end

    alt database_manager available for style learning
        PersonaReviewService->>DatabaseManager: get_reviewed_style_learning_updates(limit, offset, status_filter)
        DatabaseManager-->>PersonaReviewService: style_updates
        alt style_updates not empty
            PersonaReviewService->>PersonaReviewService: normalize fields to unified frontend format
            PersonaReviewService->>PersonaReviewService: prefix id with style_
            PersonaReviewService->>PersonaReviewService: set review_source = style_learning
            PersonaReviewService->>PersonaReviewService: reviewed_updates.extend(style_updates)
        else
            PersonaReviewService->>PersonaReviewService: skip extend
        end
    else error fetching style learning
        PersonaReviewService->>PersonaReviewService: log warning and continue
    end

    PersonaReviewService->>PersonaReviewService: filter out entries with missing or None id
    PersonaReviewService->>PersonaReviewService: sort reviewed_updates by review_time desc
    PersonaReviewService-->>PersonaReviewsAPI: { success: True, data: reviewed_updates }
    PersonaReviewsAPI-->>Browser: JSON response
    Browser-->>User: Render reviewed persona updates list

Sequence diagram for admin commands with bootstrap null guards

sequenceDiagram
    actor Admin
    participant BotPlatform as BotPlatform
    participant MainPlugin as MainPlugin
    participant CommandHandlers as CommandHandlers

    Admin->>BotPlatform: Send command (e.g. /learning_status)
    BotPlatform->>MainPlugin: learning_status_command(event)

    alt _command_handlers is None (bootstrap failed)
        MainPlugin-->>BotPlatform: yield plain_result("插件服务未就绪,请检查启动日志")
        BotPlatform-->>Admin: Show friendly error
    else _command_handlers initialized
        MainPlugin->>CommandHandlers: learning_status(event)
        CommandHandlers-->>MainPlugin: async results
        MainPlugin-->>BotPlatform: yield results
        BotPlatform-->>Admin: Show command output
    end

Entity relationship diagram for psychological state, history, filtered messages, and jargon

erDiagram
    COMPOSITE_PSYCHOLOGICAL_STATE {
        int id
        string group_id
        string user_id
        string overall_state
        float overall_intensity
        int created_at
    }

    PSYCHOLOGICAL_STATE_COMPONENT {
        int id
        int composite_state_id
        string group_id
        string state_id
        string category
        string state_type
        float value
        float threshold
        float start_time
        string description
    }

    PSYCHOLOGICAL_STATE_HISTORY {
        int id
        string group_id
        string state_id
        string category
        string old_state_type
        string new_state_type
        float old_value
        float new_value
        string change_reason
        int timestamp
    }

    FILTERED_MESSAGE {
        int id
        int raw_message_id
        string message
        string sender_id
        string group_id
        int timestamp
        float confidence
        string filter_reason
    }

    JARGON {
        int id
        string content
        string meaning_json
        bool is_jargon
        int count
    }

    COMPOSITE_PSYCHOLOGICAL_STATE ||--o{ PSYCHOLOGICAL_STATE_COMPONENT : has_components
    COMPOSITE_PSYCHOLOGICAL_STATE ||--o{ PSYCHOLOGICAL_STATE_HISTORY : has_history

Class diagram for enhanced psychological state management and progressive learning

classDiagram
    class PsychologicalRepository {
        +async get_components(state_id int) List~PsychologicalStateComponent~
        +async update_component(state_id int, component_name str, value float, threshold float, group_id str, state_id_str str) PsychologicalStateComponent
        +async add_history(state_id int, from_state str, to_state str, trigger_event str, intensity_change float, group_id str, category str) PsychologicalStateHistory
        +async find_many(composite_state_id int) List~PsychologicalStateComponent~
        +async find_one(composite_state_id int, category str) PsychologicalStateComponent
        +async create(...)
        +async update(component PsychologicalStateComponent) PsychologicalStateComponent
    }

    class PsychologicalStateComponent {
        +str category
        +str state_type
        +float value
        +float threshold
        +str description
        +float start_time
    }

    class PsychologicalStateHistory {
        +str group_id
        +str state_id
        +str category
        +str old_state_type
        +str new_state_type
        +float old_value
        +float new_value
        +str change_reason
        +int timestamp
    }

    class CompositePsychologicalState {
        +str group_id
        +str user_id
        +str overall_state
        +float overall_intensity
        +List~PsychologicalStateComponent~ components
    }

    class EnhancedPsychologicalStateManager {
        -logger
        +async get_current_state(group_id str, user_id str) CompositePsychologicalState
        +async update_state(group_id str, user_id str, dimension str, new_value float, trigger_event str)
    }

    class ProgressiveLearning {
        -message_collector MessageCollector
        +async _execute_learning_batch(group_id str, relearn_mode bool)
    }

    class MessageCollector {
        +async add_filtered_message(data dict)
    }

    EnhancedPsychologicalStateManager --> PsychologicalRepository : uses
    EnhancedPsychologicalStateManager o--> CompositePsychologicalState : builds
    CompositePsychologicalState o--> PsychologicalStateComponent : aggregates
    PsychologicalRepository --> PsychologicalStateComponent : persists
    PsychologicalRepository --> PsychologicalStateHistory : persists

    ProgressiveLearning --> MessageCollector : saves_filtered_messages
    MessageCollector --> FILTERED_MESSAGE : writes

File-Level Changes

Change Details Files
Strengthened MySQL and generic DB session lifecycle plus SQLite concurrency behavior to avoid connection/transaction corruption.
  • Refactored MySQL database-existence check to use aiomysql with explicit ssl=False, connection timeouts, and clearer logging/exception handling, including committing created databases and ensuring connections are always closed in finally blocks.
  • Updated async session context managers in the database manager/facade to rely on async-with transactional scopes without manually closing sessions in finally, while validating engine initialization.
  • Switched SQLite async engine from StaticPool to NullPool, with updated connect_args and MySQL engine connect_args now explicitly passing ssl=False.
services/database/sqlalchemy_database_manager.py
services/database/facades/_base.py
core/database/engine.py
Aligned psychological state ORM mappings and conversion logic with the enhanced state model, including history tracking and defensive loading.
  • Changed psychological component queries and updates to use composite_state_id/category fields and added group_id/state_id string handling plus adjusted default thresholds/timestamps.
  • Reworked history creation to populate new schema fields (group_id, state_id string, category, old/new state types and values, change_reason).
  • Updated enhanced psychological state manager to defensively map ORM objects to dataclasses via getattr with sane defaults and added group_id/state_id when updating components and adding history; also logs component conversion failures instead of crashing.
repositories/psychological_repository.py
services/state/enhanced_psychological_state_manager.py
Improved WebUI persona/style review flows and API robustness, including safer JSON handling and unified style-learning record format.
  • Wrapped persona updater and DB persona/style review fetches in try/except with warnings, prefixed persona-learning IDs, normalized style-learning review records to the unified frontend schema, filtered invalid entries, and made review-time sorting null-safe.
  • Hardened persona review blueprints by validating JSON bodies, handling missing request data, and returning explicit 400 errors for malformed batch operations, while allowing optional bodies for revert endpoints.
  • Changed WebUI default persona selection to consistently use the global default persona (get_default_persona_v3(None)) instead of a random UMO-derived group persona.
webui/services/persona_review_service.py
webui/blueprints/persona_reviews.py
persona_web_manager.py
Eliminated plugin uninstall/shutdown performance regressions and added defensive lifecycle guards for background tasks and command handlers.
  • Removed expensive gc.collect() calls from WebUI server/manager stop paths that previously caused long CPU spikes during plugin uninstall or shutdown.
  • Added null guards for main plugin command entrypoints to emit friendly messages when bootstrap or underlying command handlers are unavailable instead of raising attribute errors.
  • Added extra null checks in command handlers for affection and temporary persona updater components before accessing them.
webui/manager.py
webui/server.py
main.py
services/commands/handlers.py
Enhanced data integrity for jargon and batch-learning statistics, making stored meanings and filtered messages consistent and queryable.
  • Normalized jargon meaning handling by JSON-serializing dict/list meanings to strings both at inference-time and during facade updates, ensuring DB columns remain scalar.
  • Extended jargon group facade responses to include additional group identity fields (group_id/group_name/id) while guarding against None values.
  • Persisted filtered messages during batch learning into the FilteredMessage store with relevant metadata (raw_message_id, text, sender/group IDs, timestamp, confidence, filter_reason) without blocking the learning path on failures.
services/jargon/jargon_miner.py
services/database/facades/jargon_facade.py
services/core_learning/progressive_learning.py
Refined WebUI metrics and configuration behavior for more accurate dashboards and flexible deployment.
  • Adjusted response-speed metric calculations in the dashboard JS to treat missing LLM data as a neutral score (50) rather than a misleading 0, while keeping the existing computation when models are present.
  • Added a configurable web_interface_host option (default 0.0.0.0) and documented it in the changelog and config tests, allowing non-default bind addresses.
  • Updated persona learning style statistics path to prefer a facade-level get_style_learning_statistics API and fall back to repository-based computation only when necessary.
web_res/static/js/macos/apps/Dashboard.js
web_res/static/js/script.js
webui/services/learning_service.py
CHANGELOG.md
Improved LLM adapter robustness and mood persistence behavior to better handle edge cases with providers and DB drivers.
  • Ensured filter/refine/reinforce chat-completion calls normalize contexts=None to empty lists to prevent provider-side len(None) errors.
  • Relaxed BotMood repository save refresh semantics to tolerate drivers that cannot refresh after commit while still returning a valid persisted object.
  • Updated response-speed and quality monitoring hooks to treat missing LLM stats neutrally rather than as failures.
core/framework_llm_adapter.py
repositories/bot_mood_repository.py
Expanded core quality/security/config/patterns/cache test coverage and supporting utilities for JSON parsing and security.
  • Added comprehensive unit tests for learning quality monitoring, tiered learning trigger, cache manager, guardrails Pydantic models, interfaces, design patterns (AsyncServiceBase, strategies, service registry), constants normalization, and the command/quality infrastructure.
  • Introduced extensive JSON utility tests for stripping thinking tags, cleaning markdown/control chars, extracting/fixing JSON, LLM-provider detection, and schema validation, plus security utils tests for hashing, login attempt tracking, password strength, sanitization, and migration helpers.
  • Extended pytest and coverage configuration to include core, config, utils, services modules and registered new markers, and bumped version metadata/docs to Next-2.0.6 including README sponsorship section.
tests/unit/test_learning_quality_monitor.py
tests/unit/test_patterns.py
tests/unit/test_tiered_learning_trigger.py
tests/unit/test_json_utils.py
tests/unit/test_config.py
tests/unit/test_security_utils.py
tests/unit/test_guardrails_models.py
tests/unit/test_cache_manager.py
tests/unit/test_interfaces.py
tests/unit/test_constants.py
tests/unit/test_exceptions.py
pytest.ini
README.md
README_EN.md
metadata.yaml


sourcery-ai bot left a comment

Hey - I've found 6 issues and left some high-level feedback:

  • In _ensure_mysql_database_exists, the aiomysql import was moved outside the try/except so an ImportError will now bypass your logging and error message; consider keeping the import inside the try block (or adding a dedicated handler) so missing-driver issues are surfaced consistently.
  • In progressive_learning._execute_learning_batch, the add_filtered_message call is wrapped in except Exception: pass, which will silently hide persistent DB/logic errors; consider at least logging these exceptions at debug/warning level to aid diagnosis while still keeping the best-effort behavior.

## Individual Comments

### Comment 1
<location path="services/database/facades/_base.py" line_range="35-41" />
<code_context>
+        使用 ``async with session`` 确保事务完整性,
+        不再在 finally 中重复调用 close 以避免连接状态异常。
+        """
+        if self.engine is None or self.engine.engine is None:
+            raise RuntimeError("数据库引擎未初始化或已关闭")
         session = self.engine.get_session()
         try:
             async with session:
                 yield session
-        finally:
-            await session.close()
+        except Exception:
+            raise

</code_context>
<issue_to_address>
**suggestion:** The try/except that only re-raises adds complexity without benefit and can be removed.

Since `async with session:` already manages commit/rollback/close via the context manager, the surrounding `try/except Exception: raise` is redundant and hurts readability. You can simplify to:

```python
if self.engine is None or self.engine.engine is None:
    raise RuntimeError("数据库引擎未初始化或已关闭")
session = self.engine.get_session()
async with session:
    yield session
```

This preserves the engine check and lets exceptions propagate naturally.

Suggested implementation:

```python
    @asynccontextmanager
    async def get_session(self):
        """获取异步数据库会话(上下文管理器)

        自动处理会话的创建、提交和回滚。
        使用 ``async with session`` 确保事务完整性,
        不再在 finally 中重复调用 close 以避免连接状态异常。
        """
        if self.engine is None or self.engine.engine is None:
            raise RuntimeError("数据库引擎未初始化或已关闭")
        session = self.engine.get_session()
        async with session:
            yield session

```

If there are other places in this file or project that relied on the old behavior of catching and re-raising exceptions (for logging or custom handling), verify that they don't expect that behavior here. Since the original `except Exception: raise` was a no-op, this should be a safe, behavior-preserving simplification.
</issue_to_address>

### Comment 2
<location path="services/database/sqlalchemy_database_manager.py" line_range="225" />
<code_context>

     @asynccontextmanager
     async def get_session(self):
-        """获取异步数据库会话(上下文管理器)"""
+        """获取异步数据库会话(上下文管理器)
+
+        自动处理会话的创建、提交和回滚。
+        使用 ``async with session`` 确保事务完整性,
+        不再在 finally 中重复调用 close 以避免连接状态异常。
+        """
+        if self.engine is None or self.engine.engine is None:
+            raise RuntimeError("数据库引擎未初始化或已关闭")
         session = self.engine.get_session()
         try:
             async with session:
                 yield session
-        finally:
-            await session.close()
+        except Exception:
+            raise

</code_context>
<issue_to_address>
**suggestion:** Similar to the facade, the try/except that only re-raises in get_session can be removed to simplify control flow.

The session context manager already manages the transaction lifecycle and will propagate exceptions to callers. You can rely on `async with session: yield session` without the redundant `try/except` wrapper.
</issue_to_address>

### Comment 3
<location path="services/core_learning/progressive_learning.py" line_range="255-264" />
<code_context>
+            saved_count = 0
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Swallowing all exceptions when saving filtered messages hides potential systemic issues; consider at least logging them.

Given the best-effort semantics, it’s fine not to fail the whole flow, but a silent `except Exception: pass` makes persistent issues (schema changes, connectivity problems) undetectable. Consider catching `Exception` but logging at debug/warning level so repeated failures in the filtered-message path are visible while keeping the operation non-blocking.

Suggested implementation:

```python
            saved_count = 0
            for msg in filtered_messages:
                try:
                    await self.message_collector.add_filtered_message({
                        "raw_message_id": msg.get("id"),
                        "message": msg.get("message", ""),
                        "sender_id": msg.get("sender_id", ""),
                        "group_id": msg.get("group_id", group_id),
                        "timestamp": msg.get("timestamp", int(time.time())),
                        "confidence": msg.get("relevance_score", 1.0),
                        "filter_reason": msg.get("filter_reason", "batch_learning"),
                    })
                    saved_count += 1
                except Exception:
                    logger.warning(
                        "保存筛选消息失败 raw_message_id=%s,已跳过,不影响整体流程",
                        msg.get("id"),
                        exc_info=True,
                    )
```
</issue_to_address>
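
The log-and-continue pattern recommended above can be shown in isolation (`save_all` and `flaky_save` are hypothetical stand-ins for the collector call, not the plugin's API): one failing message is logged with its traceback via `exc_info=True` while the rest of the batch still persists.

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("progressive_learning_demo")

def save_all(messages, save_one):
    """Best-effort save loop (hypothetical helper): each failure is
    logged with its traceback but never aborts the batch."""
    saved = 0
    for msg in messages:
        try:
            save_one(msg)
            saved += 1
        except Exception:
            logger.warning(
                "failed to save message id=%s, skipping",
                msg.get("id"),
                exc_info=True,  # keep the traceback visible in the log
            )
    return saved

def flaky_save(msg):
    # Simulated persistence call that rejects empty bodies.
    if not msg.get("message"):
        raise ValueError("missing message body")

count = save_all(
    [
        {"id": 1, "message": "hi"},
        {"id": 2, "message": ""},   # fails, is logged, loop continues
        {"id": 3, "message": "ok"},
    ],
    flaky_save,
)
```

Logging at warning level keeps the operation non-blocking while making repeated failures greppable in the logs, which is the whole point of the suggestion.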

### Comment 4
<location path="tests/unit/test_config.py" line_range="34" />
<code_context>
+        assert config.enable_realtime_learning is False
+        assert config.enable_web_interface is True
+        assert config.web_interface_port == 7833
+        assert config.web_interface_host == "0.0.0.0"
+
+    def test_create_default_classmethod(self):
</code_context>
<issue_to_address>
**suggestion (testing):** Add a config-driven test for `web_interface_host` to cover the new option end-to-end

This suite only checks the default `web_interface_host`. Since it’s now configurable, please add a `create_from_config` test that sets `Self_Learning_Basic['web_interface_host']` (e.g. `'127.0.0.1'`) and asserts the value is honored. This will cover the config parsing path and help prevent regressions as the config schema changes.

Suggested implementation:

```python
    def test_create_default_classmethod(self):
        """Test the create_default classmethod."""
        config = PluginConfig.create_default()

        assert isinstance(config, PluginConfig)
        assert config.learning_interval_hours == 6
        assert config.min_messages_for_learning == 50
        assert config.max_messages_per_batch == 200

    def test_create_from_config_web_interface_host(self):
        """Test that web_interface_host is loaded from config."""
        # Arrange: build a config section with an overridden web_interface_host
        config_parser = configparser.ConfigParser()
        config_parser["Self_Learning_Basic"] = {
            "web_interface_host": "127.0.0.1",
        }

        # Act: create PluginConfig from the config section
        config = PluginConfig.create_from_config(
            config_parser["Self_Learning_Basic"]
        )

        # Assert: the configured host is honored
        assert config.web_interface_host == "127.0.0.1"

    def test_default_learning_parameters(self):
        """Test default learning parameter values."""

```

1. Ensure `configparser` is imported at the top of `tests/unit/test_config.py`, e.g. `import configparser`, or reuse any existing config/fixture helper already present in this test module to construct the `"Self_Learning_Basic"` section instead of instantiating `ConfigParser` directly.
2. If `PluginConfig.create_from_config` expects a full `ConfigParser` rather than a section mapping, adjust the call accordingly, e.g. pass the entire `config_parser` or use the existing calling convention already used elsewhere in the test file.
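
The parsing path under test can be sketched with stdlib `configparser` alone (`host_from_config` and `DEFAULT_HOST` are hypothetical; the real `PluginConfig.create_from_config` may read the section differently): an explicit `web_interface_host` is honored, and a missing section or key falls back to the documented default.

```python
import configparser

DEFAULT_HOST = "0.0.0.0"

def host_from_config(parser: configparser.ConfigParser) -> str:
    """Read web_interface_host from [Self_Learning_Basic], falling
    back to the default when the section or key is absent."""
    if parser.has_section("Self_Learning_Basic"):
        return parser["Self_Learning_Basic"].get("web_interface_host", DEFAULT_HOST)
    return DEFAULT_HOST

overridden = configparser.ConfigParser()
overridden["Self_Learning_Basic"] = {"web_interface_host": "127.0.0.1"}
host = host_from_config(overridden)      # override is honored

empty = configparser.ConfigParser()
default_host = host_from_config(empty)   # falls back to the default
```

Asserting both branches (override and fallback) is what gives the end-to-end coverage the suggestion asks for.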
</issue_to_address>

### Comment 5
<location path="tests/unit/test_interfaces.py" line_range="244" />
<code_context>
+
+    def test_enum_count(self):
+        """Test the total number of analysis types."""
+        assert len(AnalysisType) == 5
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding unit tests for the new admin command null-guards to prove the user-facing behavior

These changes add null-guards to all 6 admin commands (e.g. `learning_status_command`, `start_learning_command`, `set_mood_command`) so they return a friendly "插件服务未就绪"-style message when `_command_handlers` is `None`, but this behavior isn’t tested. Please add a focused test module (e.g. `tests/unit/test_admin_commands_null_guard.py`) that:
- Constructs the relevant entry/handler with `_command_handlers = None`.
- Calls each admin command and asserts the first yielded result is the expected fallback message, rather than raising or touching the handler.
This will ensure the intended UX is preserved when services fail to bootstrap.

Suggested implementation:

```python
import inspect
from typing import Any, Callable

import pytest

EXPECTED_NULL_GUARD_MESSAGE = "插件服务未就绪"


try:
    # NOTE: Adjust this import to match your real admin commands module/classes.
    # The tests will be skipped entirely if this import fails, so it will not
    # break your suite until you wire it up correctly.
    from app.admin_commands import (
        LearningStatusAdminEntry,
        StartLearningAdminEntry,
        SetMoodAdminEntry,
        StopLearningAdminEntry,
        ResetConversationAdminEntry,
        DebugInfoAdminEntry,
    )
except ImportError:  # pragma: no cover
    pytest.skip("Admin command implementations not available for null-guard tests.", allow_module_level=True)


async def _get_first_yielded(command_callable: Callable[..., Any]) -> Any:
    """Invoke a command and return the first yielded result, supporting sync/async generators."""
    result = command_callable()

    # Async generator: e.g. async def foo(...): yield ...
    if inspect.isasyncgen(result):
        return await result.__anext__()

    # Sync generator: e.g. def foo(...): yield ...
    if inspect.isgenerator(result):
        return next(result)

    # Fall back: function returns a plain value instead of a generator
    return result


@pytest.mark.asyncio
async def test_learning_status_command_null_guard() -> None:
    entry = LearningStatusAdminEntry()
    # Force the null-guard path
    entry._command_handlers = None  # type: ignore[attr-defined]

    first = await _get_first_yielded(entry.learning_status_command)
    assert first == EXPECTED_NULL_GUARD_MESSAGE


@pytest.mark.asyncio
async def test_start_learning_command_null_guard() -> None:
    entry = StartLearningAdminEntry()
    entry._command_handlers = None  # type: ignore[attr-defined]

    first = await _get_first_yielded(entry.start_learning_command)
    assert first == EXPECTED_NULL_GUARD_MESSAGE


@pytest.mark.asyncio
async def test_set_mood_command_null_guard() -> None:
    entry = SetMoodAdminEntry()
    entry._command_handlers = None  # type: ignore[attr-defined]

    first = await _get_first_yielded(entry.set_mood_command)
    assert first == EXPECTED_NULL_GUARD_MESSAGE


@pytest.mark.asyncio
async def test_stop_learning_command_null_guard() -> None:
    entry = StopLearningAdminEntry()
    entry._command_handlers = None  # type: ignore[attr-defined]

    first = await _get_first_yielded(entry.stop_learning_command)
    assert first == EXPECTED_NULL_GUARD_MESSAGE


@pytest.mark.asyncio
async def test_reset_conversation_command_null_guard() -> None:
    entry = ResetConversationAdminEntry()
    entry._command_handlers = None  # type: ignore[attr-defined]

    first = await _get_first_yielded(entry.reset_conversation_command)
    assert first == EXPECTED_NULL_GUARD_MESSAGE


@pytest.mark.asyncio
async def test_debug_info_command_null_guard() -> None:
    entry = DebugInfoAdminEntry()
    entry._command_handlers = None  # type: ignore[attr-defined]

    first = await _get_first_yielded(entry.debug_info_command)
    assert first == EXPECTED_NULL_GUARD_MESSAGE

```

To wire these tests into your actual codebase, you’ll need to:
1. Update the `from app.admin_commands import ...` statement to point at the real module and classes that expose the 6 admin commands with the `_command_handlers` attribute.
2. Adjust the class names (`LearningStatusAdminEntry`, `StartLearningAdminEntry`, etc.) to match your actual entry/handler types.
3. Ensure the method names (`learning_status_command`, `start_learning_command`, `set_mood_command`, `stop_learning_command`, `reset_conversation_command`, `debug_info_command`) match your implementations.
4. Confirm the expected null-guard message; if your implementation uses a slightly different string than `"插件服务未就绪"`, update `EXPECTED_NULL_GUARD_MESSAGE` accordingly.
5. If your commands are strictly synchronous (not async generators), you can drop `pytest.mark.asyncio` and simplify `_get_first_yielded` to only handle sync generators.
</issue_to_address>

### Comment 6
<location path="tests/unit/test_json_utils.py" line_range="243" />
<code_context>
+
+@pytest.mark.unit
+@pytest.mark.utils
+class TestSafeParseLlmJson:
+    """Test end-to-end safe JSON parsing."""
+
</code_context>
<issue_to_address>
**suggestion (testing):** Add WebUI API tests for persona review endpoints to cover the new request-body validation paths

The new `request.get_json()` handling in `webui/blueprints/persona_reviews.py` for `review_persona_update`, the `batch_*` routes, and `revert_persona_update` isn’t currently covered by tests. Please add async WebUI API tests (e.g. `tests/unit/test_persona_reviews_api.py`) using the test client to verify:
- `POST /review_persona_update/<id>` returns 400 with `{ "error": "Request body is required" }` when the JSON body is missing.
- `POST /batch_delete_persona_updates` and `POST /batch_review_persona_updates` show the same behavior for missing or malformed bodies.
- `POST /revert_persona_update/<id>` accepts an empty body and still uses the default reason.
This will validate the crash fix and request-body validation paths end-to-end.

Suggested implementation:

```python
import pytest


@pytest.mark.unit
@pytest.mark.asyncio
class TestPersonaReviewRequestBodyValidation:
    async def test_review_persona_update_missing_body_returns_400(
        self, webui_api_client
    ):
        """POST /review_persona_update/<id> returns 400 when JSON body is missing."""
        response = await webui_api_client.post("/review_persona_update/123")

        assert response.status_code == 400
        data = await response.get_json()
        assert data == {"error": "Request body is required"}

    async def test_batch_delete_persona_updates_missing_body_returns_400(
        self, webui_api_client
    ):
        """POST /batch_delete_persona_updates returns 400 when JSON body is missing."""
        response = await webui_api_client.post("/batch_delete_persona_updates")

        assert response.status_code == 400
        data = await response.get_json()
        assert data == {"error": "Request body is required"}

    async def test_batch_review_persona_updates_missing_body_returns_400(
        self, webui_api_client
    ):
        """POST /batch_review_persona_updates returns 400 when JSON body is missing."""
        response = await webui_api_client.post("/batch_review_persona_updates")

        assert response.status_code == 400
        data = await response.get_json()
        assert data == {"error": "Request body is required"}

    async def test_batch_delete_persona_updates_malformed_body_returns_400(
        self, webui_api_client
    ):
        """POST /batch_delete_persona_updates returns 400 for malformed JSON body."""
        # Send a body that is not valid JSON or not a valid object
        response = await webui_api_client.post(
            "/batch_delete_persona_updates",
            data="this is not json",
            headers={"Content-Type": "application/json"},
        )

        assert response.status_code == 400
        data = await response.get_json()
        assert data == {"error": "Request body is required"}

    async def test_batch_review_persona_updates_malformed_body_returns_400(
        self, webui_api_client
    ):
        """POST /batch_review_persona_updates returns 400 for malformed JSON body."""
        response = await webui_api_client.post(
            "/batch_review_persona_updates",
            data="this is not json",
            headers={"Content-Type": "application/json"},
        )

        assert response.status_code == 400
        data = await response.get_json()
        assert data == {"error": "Request body is required"}

    async def test_revert_persona_update_allows_empty_body_and_uses_default_reason(
        self, webui_api_client
    ):
        """
        POST /revert_persona_update/<id> accepts an empty body and still uses the
        default reason on the server side.
        """
        # No JSON body at all
        response = await webui_api_client.post("/revert_persona_update/123")

        # The exact status code and response payload depend on the implementation;
        # we only assert that it does not fail request-body validation.
        assert response.status_code != 400
        data = await response.get_json()
        # The handler should have used the default reason internally; if the API
        # echoes it back, we can assert on that key. Adjust as needed:
        # assert data.get("reason") == DEFAULT_REVERT_REASON

```

1. Replace the `webui_api_client` fixture name with the actual async WebUI test client fixture used in your test suite (for example, `async_client`, `client`, or similar). Ensure that this fixture returns an object with `post(...)`, `status_code`, and `get_json()` (or equivalent) async methods.
2. If your project does not use `pytest-asyncio` and instead wraps async tests differently, adjust the `@pytest.mark.asyncio` marker and method definitions accordingly (e.g., make tests sync and use a sync Flask test client).
3. Align endpoint paths (`/review_persona_update/<id>`, `/batch_delete_persona_updates`, `/batch_review_persona_updates`, `/revert_persona_update/<id>`) and the exact error payload (`{"error": "Request body is required"}`) with the actual routes and error messages in `webui/blueprints/persona_reviews.py`.
4. For `test_revert_persona_update_allows_empty_body_and_uses_default_reason`, if the API explicitly returns the default reason in the response, replace the commented assertion with a concrete check against the actual response schema (e.g., a `reason` or `message` field).
</issue_to_address>

@NickCharlie NickCharlie merged commit d0ecc02 into main Mar 2, 2026
2 checks passed