Conversation
Signed-off-by: Jordan Dubrick <jdubrick@redhat.com>
fyi @yangcao77 @tisnik
Walkthrough

Adds profile-driven system prompt support. Introduces a CustomProfile model, profile validation/loading utilities, a new RHDH profile, and updates system prompt resolution to prioritize profile defaults after query-level prompts. Expands README with multiple configuration avenues and updates tests to cover profile loading and prompt precedence.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant App as AppConfig
    participant Cfg as Customization
    participant Prof as CustomProfile
    participant Checks as utils.checks
    participant Mods as utils.profiles...profile
    User->>App: load_configuration(config.yaml)
    App->>Cfg: instantiate with fields
    alt profile_name provided
        Cfg->>Prof: __init__(name=profile_name)
        Prof->>Checks: profile_check(profile_name)
        Prof->>Checks: read_profile_file("src.utils.profiles", profile_name, logger)
        Checks-->>Prof: module or None
        alt module loaded
            Prof->>Mods: import {profile}.profile
            Mods-->>Prof: PROFILE_CONFIG
            Prof->>Prof: store prompts (e.g., default)
            Cfg->>Cfg: set system_prompt from profile default
        else load failed
            Prof-->>Cfg: prompts empty
        end
    else no profile_name
        Cfg->>Cfg: optional file-based system_prompt load
    end
    App-->>User: configuration ready
```
```mermaid
sequenceDiagram
    autonumber
    participant EP as get_system_prompt()
    participant QR as query_request
    participant C as config.customization
    participant P as C.custom_profile
    EP->>QR: check system_prompt in request
    alt query system_prompt present and not disabled
        EP-->>QR: return query system_prompt
    else
        alt P exists and has "default"
            EP-->>P: return profile default prompt
        else
            alt C.system_prompt set
                EP-->>C: return customization system_prompt
            else
                EP-->>EP: return built-in default
            end
        end
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/models/config.py (1)
310-323: Avoid mutating system_prompt when a profile is present; let endpoints enforce precedence

Auto-overwriting system_prompt with the profile's default can silently discard a configured literal/path prompt and makes the precedence split across layers. Keep profile loading here, but resolve final precedence in utils/endpoints.get_system_prompt (which already prefers the profile).
```diff
 def check_customization_model(self) -> Self:
-    """Load customizations."""
-    if self.profile_name:  # custom profile overrides all
-        self.custom_profile = CustomProfile(name=self.profile_name)
-        self.system_prompt = self.custom_profile.get_prompts().get(
-            "default"
-        )  # set to default prompt from the profile
-    elif self.system_prompt_path is not None:
+    """Load customizations."""
+    if self.profile_name:
+        self.custom_profile = CustomProfile(name=self.profile_name)
+
+    # Preserve configured prompts; endpoints will decide precedence.
+    if self.system_prompt_path is not None and self.system_prompt is None:
         checks.file_check(self.system_prompt_path, "system prompt")
         self.system_prompt = checks.get_attribute_from_file(
             dict(self), "system_prompt_path"
         )
     return self
```

This keeps system_prompt intact (if provided) and still allows file-based loading; get_system_prompt will continue to return the profile default first.
🧹 Nitpick comments (20)
src/utils/profiles/rhdh/__init__.py (1)
1-1: Package init is fine; consider exporting the public surface.

Optional: expose key symbols (e.g., PROFILE_CONFIG) via `__all__` to make discoverability/introspection easier when importing the package.
```diff
-"""Init of RHDH custom profile."""
+"""Init of RHDH custom profile."""
+
+# Optional explicit exports for discoverability
+__all__ = ["profile"]
```

src/constants.py (1)
61-61: Good addition; add typing and brief comment to avoid drift.Adding an explicit frozenset is solid. Two small improvements:
- Add a type hint for better editor/static type support.
- Document intent and maintenance strategy (e.g., keep in sync with utils/profiles/*).
```diff
-CUSTOM_PROFILES = frozenset({"rhdh"})
+from typing import FrozenSet
+
+# Supported customization profile names. Keep in sync with utils/profiles/* packages.
+CUSTOM_PROFILES: FrozenSet[str] = frozenset({"rhdh"})
```

src/utils/profiles/rhdh/profile.py (3)
106-151: Template placeholder approach is OK; consider unifying style to reduce confusion.

QUESTION_VALIDATOR_PROMPT_TEMPLATE uses an f-string with {{query}} to emit a literal {query}, whereas TOPIC_SUMMARY_PROMPT_TEMPLATE uses a plain string with {query}. Both produce a literal {query}, but the inconsistency plus the comment might confuse future editors.

Optional: remove the f-string and inject constants via format_map so both templates consistently use {query} placeholders.

```diff
-# {{query}} is escaped because it will be replaced as a parameter at time of use
-QUESTION_VALIDATOR_PROMPT_TEMPLATE = f"""
+# {query} is a runtime parameter; keep as a literal placeholder in the template
+QUESTION_VALIDATOR_PROMPT_TEMPLATE = """
 Instructions:
 ...
-- If a question appears to be related to Backstage, RHDH, Kubernetes, Openshift, or any of their features, answer with the word {SUBJECT_ALLOWED}
-...
-- If a question is not related to Backstage, RHDH, Kubernetes, Openshift, or their features, answer with the word {SUBJECT_REJECTED}
+- If a question appears to be related to Backstage, RHDH, Kubernetes, OpenShift, or any of their features, answer with the word {SUBJECT_ALLOWED}
+...
+- If a question is not related to Backstage, RHDH, Kubernetes, OpenShift, or their features, answer with the word {SUBJECT_REJECTED}
 ...
 Question:
-{{query}}
+{query}
 Response:
-"""
+""".format_map({"SUBJECT_ALLOWED": SUBJECT_ALLOWED, "SUBJECT_REJECTED": SUBJECT_REJECTED})
```

Note: This change also fixes "Openshift" capitalization (see next comment).
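To make the two placeholder styles concrete, here is a small self-contained sketch (template text abbreviated; the constant names stand in for the real ones in the profile module). One caveat worth noting: `format_map` with a plain dict raises `KeyError` for any placeholder not in the mapping, so keeping `{query}` literal requires a forgiving mapping with `__missing__`:

```python
# Hypothetical, abbreviated stand-ins for the profile's constants.
SUBJECT_ALLOWED = "ACCEPTED"
SUBJECT_REJECTED = "REJECTED"


class KeepMissing(dict):
    """Leave unknown placeholders (like {query}) intact for later substitution."""

    def __missing__(self, key: str) -> str:
        return "{" + key + "}"


TEMPLATE = "Answer {SUBJECT_ALLOWED} or {SUBJECT_REJECTED}.\nQuestion: {query}"

# Inject the constants once; {query} survives as a literal placeholder.
rendered = TEMPLATE.format_map(
    KeepMissing(SUBJECT_ALLOWED=SUBJECT_ALLOWED, SUBJECT_REJECTED=SUBJECT_REJECTED)
)
print(rendered)

# The surviving {query} placeholder is filled at call time:
final = rendered.format(query="How do I import a template?")
print(final)
```

This keeps both templates in the plain-string `{query}` style while still baking in the constants up front.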
109-114: Fix "Openshift" capitalization to "OpenShift".

Brand capitalization is inconsistent in the validator template. Correcting it avoids test brittleness and user-visible typos.

```diff
-- You are an expert in Backstage, Red Hat Developer Hub (RHDH), Kubernetes, Openshift, CI/CD and GitOps Pipelines
+- You are an expert in Backstage, Red Hat Developer Hub (RHDH), Kubernetes, OpenShift, CI/CD and GitOps Pipelines
 ...
-- If a question appears to be related to Backstage, RHDH, Kubernetes, Openshift, or any of their features, answer with the word {SUBJECT_ALLOWED}
+- If a question appears to be related to Backstage, RHDH, Kubernetes, OpenShift, or any of their features, answer with the word {SUBJECT_ALLOWED}
 ...
-- If a question is not related to Backstage, RHDH, Kubernetes, Openshift, or their features, answer with the word {SUBJECT_REJECTED}
+- If a question is not related to Backstage, RHDH, Kubernetes, OpenShift, or their features, answer with the word {SUBJECT_REJECTED}
```
127-131: Correct grammar in example prompt.

"create import" looks accidental. Use "import" (or "create or import").

```diff
-Example Question:
-How do I create import an existing software template in Backstage?
+Example Question:
+How do I import an existing software template in Backstage?
```

src/utils/endpoints.py (2)
43-45: Update docstring to reflect new precedence including profile default.

The function now considers a profile default between query request and configured system prompt.

```diff
-def get_system_prompt(query_request: QueryRequest, config: AppConfig) -> str:
-    """Get the system prompt: the provided one, configured one, or default one."""
+def get_system_prompt(query_request: QueryRequest, config: AppConfig) -> str:
+    """Resolve system prompt with precedence: query request > profile default > configuration > global default."""
```
62-65: Fix outdated inline comment (option name).

The comment mentions disable_system_prompt but the actual option is disable_query_system_prompt.

```diff
-    # disable query system prompt altogether with disable_system_prompt.
+    # disable query system prompt altogether with disable_query_system_prompt.
```

src/utils/checks.py (3)
33-41: Unify exception type and normalize input to avoid case-sensitivity surprises.

Raising KeyError for None is inconsistent with other validators using InvalidConfigurationError. Also, normalizing case will make config more user-friendly.

```diff
-def profile_check(profile: str | None) -> None:
+def profile_check(profile: str | None) -> None:
     """Check that a profile exists in the list of profiles."""
-    if profile is None:
-        raise KeyError("Missing profile_name.")
-    if profile not in constants.CUSTOM_PROFILES:
+    if profile is None:
+        raise InvalidConfigurationError("Missing profile_name.")
+    candidate = profile.strip()
+    if not candidate:
+        raise InvalidConfigurationError("Missing profile_name.")
+    # Case-insensitive match for robustness
+    candidate_lc = candidate.lower()
+    if candidate_lc not in constants.CUSTOM_PROFILES:
         raise InvalidConfigurationError(
-            f"Profile {profile} not present. Must be one of: {constants.CUSTOM_PROFILES}"
+            f"Profile '{profile}' not present. Must be one of: {constants.CUSTOM_PROFILES}"
         )
```

If you adopt case-normalization here, ensure the caller also uses the normalized value when loading the module (e.g., by lowercasing profile_name before import).
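The normalized check above can be exercised in isolation with a minimal sketch (the constants module and error type here are illustrative stand-ins; the real ones live in src/constants.py and src/utils/checks.py):

```python
# Hypothetical stand-ins for the project's constants and error type;
# only the normalization logic itself is the point of this sketch.
CUSTOM_PROFILES = frozenset({"rhdh"})


class InvalidConfigurationError(Exception):
    """Raised when the service configuration is invalid."""


def profile_check(profile):
    """Validate a profile name case-insensitively against the known set."""
    if profile is None or not profile.strip():
        raise InvalidConfigurationError("Missing profile_name.")
    if profile.strip().lower() not in CUSTOM_PROFILES:
        raise InvalidConfigurationError(
            f"Profile '{profile}' not present. Must be one of: {CUSTOM_PROFILES}"
        )


profile_check("  RHDH ")  # passes: whitespace and case are normalized away
try:
    profile_check("ols")
except InvalidConfigurationError as exc:
    print(exc)
```

With this shape, both the "missing" and "unknown" paths raise the same domain-specific error, which keeps the calling code's error handling uniform.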
43-55: Improve error logging for dynamic import failures.

Current log loses context and stack trace. Including the module path and exception details will speed up debugging without changing behavior.

```diff
 def read_profile_file(
     profile_path: str,
     profile_name: str,
     logger: Logger,
 ) -> ModuleType | None:
     """Import Python module related to a custom profile."""
     try:
-        data = importlib.import_module(f"{profile_path}.{profile_name}.profile")
-    except (FileNotFoundError, ModuleNotFoundError):
-        logger.error("Profile .py file not found.")
-        data = None
-    return data
+        module_qualname = f"{profile_path}.{profile_name}.profile"
+        return importlib.import_module(module_qualname)
+    except ModuleNotFoundError as exc:
+        # Include path and stack for diagnostics
+        logger.exception("Profile module not found: %s", f"{profile_path}.{profile_name}.profile")
+        return None
```
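A runnable sketch of the failure path (names are illustrative, not the project's actual API; `logger.exception` records the traceback because it is called inside the except block):

```python
import importlib
import logging
from types import ModuleType
from typing import Optional

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("profiles")


def read_profile_module(package: str, name: str) -> Optional[ModuleType]:
    """Import '<package>.<name>.profile'; return None (with a logged traceback) on failure."""
    qualname = f"{package}.{name}.profile"
    try:
        return importlib.import_module(qualname)
    except ModuleNotFoundError:
        # Logs the message plus the full stack trace of the failed import.
        logger.exception("Profile module not found: %s", qualname)
        return None


# A package that does not exist yields None instead of raising:
missing = read_profile_module("no_such_pkg", "rhdh")
print(missing)  # prints None
```

The caller can then branch on `None` exactly as `_validate_and_process` does, while the log retains enough context to tell a typo'd profile name from a genuinely missing module.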
25-31: Minor: os.access may be misleading on some platforms.

Not a blocker, but note that os.access can behave unexpectedly with ACLs/capabilities. If you ever hit false negatives/positives, consider attempting an open() inside a try/except to check readability.
tests/unit/utils/test_checks.py (3)
3-3: Prefer getLogger over directly constructing Logger in tests

Constructing Logger("test") bypasses common handler/level setup and can drop messages in some CI setups. Use logging.getLogger("test") for consistency and to exercise actual logging paths.

```diff
-from logging import Logger
+import logging
 ...
-    logger = Logger("test")
+    logger = logging.getLogger("test")
 ...
-    logger = Logger("test")
+    logger = logging.getLogger("test")
```

Also applies to: 97-101, 104-110
85-89: Exception type consistency: consider using a single domain-specific error

profile_check(None) currently raises KeyError while missing/unknown profiles raise InvalidConfigurationError. Consider making both cases raise InvalidConfigurationError for a clean, predictable API surface (and update this test accordingly).
If you want, I can open a follow-up to align utils/checks.py and adjust tests.
104-110: Strengthen the positive-path assertion

Verifying ModuleType is good, but asserting presence/shape of PROFILE_CONFIG (and its system_prompts.default) will better guard regressions in profile modules.

```diff
-    assert isinstance(data, ModuleType)
+    assert isinstance(data, ModuleType)
+    assert hasattr(data, "PROFILE_CONFIG")
+    assert isinstance(data.PROFILE_CONFIG, dict)
+    assert "system_prompts" in data.PROFILE_CONFIG
+    assert "default" in data.PROFILE_CONFIG["system_prompts"]
```

tests/unit/utils/test_endpoints.py (3)
78-86: Docstring contradicts fixture behavior (enabled vs. disabled)

The fixture sets disable_query_system_prompt=False (enabled), but the docstring says "disabled". Fix wording to prevent confusion.

```diff
-    """Configuration with custom profile loaded for prompt and disabled query system prompt set."""
+    """Configuration with custom profile loaded for prompt and enabled query system prompt."""
```
215-226: Avoid duplicating source-of-truth for expected prompts

Instead of instantiating a new CustomProfile in the test, read the prompt from the loaded configuration's custom_profile. This keeps the test resilient to profile location/refactorings.

```diff
-    custom_profile = CustomProfile("rhdh")
-    prompts = custom_profile.get_prompts()
+    prompts = (
+        config_with_custom_profile_prompt_and_disable_query_system_prompt
+        .customization.custom_profile.get_prompts()
+    )
```
74-109: Add a test for: profile present, query has no system_prompt, and disable_query_system_prompt=False

Edge case parity: when the query omits system_prompt and the feature isn't disabled, get_system_prompt should still fall back to the profile default. Adding this test will mirror the disabled-path test and fully cover precedence.
I can draft the additional test if you’d like.
README.md (2)
309-315: Document explicit precedence order

Make the order of resolution unambiguous: Query > Profile default > Configured system_prompt > Default. This matches utils/endpoints.get_system_prompt.
```diff
-Additionally, an optional string parameter `system_prompt` can be specified in `/v1/query` and `/v1/streaming_query` endpoints to override the configured system prompt. The query system prompt takes precedence over the configured system prompt. You can use this config to disable query system prompts:
+Precedence:
+1) Query `system_prompt` (unless disabled),
+2) Profile default (if `customization.profile_name` is set),
+3) Configured `customization.system_prompt` (or `system_prompt_path`),
+4) Default system prompt.
+
+You can disable query-time overrides with:
```
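The four-step resolution order above can be sketched as a small pure function (names and the default prompt text are illustrative, not the service's actual API):

```python
from typing import Optional

DEFAULT_SYSTEM_PROMPT = "You are a helpful assistant."  # stand-in for the built-in default


def resolve_system_prompt(
    query_prompt: Optional[str],
    disable_query_system_prompt: bool,
    profile_default: Optional[str],
    configured_prompt: Optional[str],
) -> str:
    """Query prompt (unless disabled) > profile default > configured prompt > built-in default."""
    if query_prompt and not disable_query_system_prompt:
        return query_prompt
    if profile_default:
        return profile_default
    if configured_prompt:
        return configured_prompt
    return DEFAULT_SYSTEM_PROMPT


print(resolve_system_prompt("from query", False, "from profile", "from config"))  # from query
print(resolve_system_prompt("from query", True, "from profile", "from config"))   # from profile
print(resolve_system_prompt(None, False, None, "from config"))                    # from config
print(resolve_system_prompt(None, False, None, None))                             # built-in default
```

Keeping the whole chain in one function like this is exactly the point of the earlier review comment: precedence lives in one place instead of being split across config loading and endpoint code.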
300-307: Minor YAML indentation nit

For consistency with surrounding snippets, indent the YAML under "Custom Profile".

```diff
 customization:
-profile_name: <your-profile>
+  profile_name: <your-profile>
```

tests/unit/test_configuration.py (1)
479-522: Broaden assertions or trim unused options in the "all customizations" test

You set both system_prompt and system_prompt_path along with profile_name but only assert on profile prompts. Either:
- Assert that the resolved prompt precedence is as intended, or
- Remove the unused fields for a tighter, purpose-built test.
If you keep all fields, I can help add assertions to validate precedence end-to-end via endpoints.get_system_prompt.
src/models/config.py (1)
354-356: Optional: avoid mutable default for lists in BaseModel

mcp_servers: [] can be expressed with default_factory to avoid shared defaults, though pydantic typically copies defaults. Consider this change for consistency.

```diff
-    mcp_servers: list[ModelContextProtocolServer] = []
+    mcp_servers: list[ModelContextProtocolServer] = Field(default_factory=list)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (10)
- README.md (2 hunks)
- src/constants.py (1 hunks)
- src/models/config.py (2 hunks)
- src/utils/checks.py (2 hunks)
- src/utils/endpoints.py (1 hunks)
- src/utils/profiles/rhdh/__init__.py (1 hunks)
- src/utils/profiles/rhdh/profile.py (1 hunks)
- tests/unit/test_configuration.py (2 hunks)
- tests/unit/utils/test_checks.py (2 hunks)
- tests/unit/utils/test_endpoints.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (5)

tests/unit/utils/test_checks.py (1)
- src/utils/checks.py (3)
  - profile_check (33-40)
  - InvalidConfigurationError (12-13)
  - read_profile_file (43-54)

src/utils/endpoints.py (2)
- src/models/config.py (2)
  - config (121-127)
  - get_prompts (296-298)
- src/configuration.py (1)
  - customization (108-113)

tests/unit/utils/test_endpoints.py (4)
- src/models/config.py (3)
  - config (121-127)
  - CustomProfile (277-298)
  - get_prompts (296-298)
- src/models/requests.py (1)
  - QueryRequest (70-220)
- src/utils/endpoints.py (1)
  - get_system_prompt (43-83)
- src/configuration.py (2)
  - AppConfig (27-129)
  - init_from_dict (50-52)

tests/unit/test_configuration.py (2)
- src/models/config.py (4)
  - config (121-127)
  - CustomProfile (277-298)
  - ModelContextProtocolServer (150-155)
  - get_prompts (296-298)
- src/configuration.py (3)
  - AppConfig (27-129)
  - load_configuration (42-48)
  - customization (108-113)

src/models/config.py (1)
- src/utils/checks.py (2)
  - profile_check (33-40)
  - read_profile_file (43-54)
🪛 LanguageTool
README.md
[grammar] ~281-~281: Ensure spelling is correct
Context: ...text before the question is sent to the selceted LLM. The default system prompt is desig...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[style] ~281-~281: This phrase is redundant. Consider writing “various” or “different”.
Context: ...o use a different system prompt through various different avenues available in the `customization...
(VARIOUS_DIFFERENT)
🪛 markdownlint-cli2 (0.17.2)
README.md
283-283: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4
(MD001, heading-increment)
🔇 Additional comments (8)
src/utils/profiles/rhdh/profile.py (2)
97-103: Instructions strings look good.

The context/history instructions are concise and clear. No issues spotted.
205-216: PROFILE_CONFIG shape looks consistent and complete.

Keys align with described usage (system_prompts.default/validation/topic_summary, query_responses.invalid_resp, instructions.context/history). No issues.
src/utils/endpoints.py (1)
67-75: Precedence addition looks correct and safe.

Profile default is checked after honoring query-provided prompt and before configuration fallback. Guarding for None avoids KeyError. Behavior aligns with the PR objective.
tests/unit/utils/test_endpoints.py (1)
229-239: Nice coverage of profile vs. query precedence

This test correctly verifies that a query-provided system_prompt overrides the profile default when not disabled. LGTM.
tests/unit/test_configuration.py (2)
5-5: Import changes look good

Importing CustomProfile here is consistent with the new test coverage that validates profile-driven customization.
440-477: Good validation of profile-driven customization

Solid check that a profile name in YAML results in a loaded CustomProfile and that prompts are populated. LGTM.
src/models/config.py (2)
22-22: Logger setup is correct

Module-level logger is appropriately initialized. LGTM.
276-323: Invalid “Field(init=…)” warning is incorrect
Pydantic's `Field` function does support an `init` parameter when used in conjunction with `@pydantic.dataclasses.dataclass`, allowing you to exclude fields from the generated `__init__`. The existing usage

```python
prompts: Dict[str, str] = Field(default={}, init=False)
custom_profile: Optional[CustomProfile] = Field(default=None, init=False)
```

is valid and behaves as intended in Pydantic dataclasses (docs.pydantic.dev). You can remove the proposed repository-wide scan and the related comment.
Likely an incorrect or invalid review comment.
> The service uses the, so called, system prompt to put the question into context before the question is sent to the selceted LLM. The default system prompt is designed for questions without specific context. It is possible to use a different system prompt through various different avenues available in the `customization` section:

Fix typo and tighten phrasing

- "selceted" → "selected"
- "various different" → "various" (redundant wording)

```diff
-The service uses the, so called, system prompt to put the question into context before the question is sent to the selceted LLM. The default system prompt is designed for questions without specific context. It is possible to use a different system prompt through various different avenues available in the `customization` section:
+The service uses the so-called system prompt to put the question into context before the question is sent to the selected LLM. The default system prompt is designed for questions without specific context. It is possible to use a different system prompt through various avenues available in the `customization` section:
```
🤖 Prompt for AI Agents
In README.md around lines 281 to 282, fix typos and tighten wording: change
“selceted” to “selected” and replace “various different avenues” with “various
avenues” (or simply “different avenues” / “various options”) so the sentence
reads smoothly and without redundancy.
> The service uses the, so called, system prompt to put the question into context before the question is sent to the selected LLM. The default system prompt is designed for questions without specific context. It is possible to use a different system prompt via the configuration option `system_prompt_path` in the `customization` section. That option must contain the path to the text file with the actual system prompt (can contain multiple lines). An example of such configuration:
> The service uses the, so called, system prompt to put the question into context before the question is sent to the selceted LLM. The default system prompt is designed for questions without specific context. It is possible to use a different system prompt through various different avenues available in the `customization` section:
> #### System Prompt Path
🛠️ Refactor suggestion
Fix heading level increments (h4 → h3)
Headings under “## System prompt” should be level-3. This also satisfies markdownlint MD001.
```diff
-#### System Prompt Path
+### System Prompt Path
 ...
-#### System Prompt Literal
+### System Prompt Literal
 ...
-#### Custom Profile
+### Custom Profile
```

Also applies to: 291-291, 300-300
🤖 Prompt for AI Agents
In README.md around lines 283, 291, and 300, headings under the "## System prompt" section are incorrectly using level-4 (####); change those headings to level-3 (###) so they increment properly and satisfy markdownlint MD001 by replacing the leading "####" with "###" at those specific lines.
```python
@dataclass
class CustomProfile:
    """Custom profile customization for prompts and validation."""

    name: str
    prompts: Dict[str, str] = Field(default={}, init=False)
```
Do not use pydantic.Field in a dataclass; avoid mutable defaults
In a pydantic dataclass, use dataclasses.field with default_factory. Also, passing init=False to pydantic.Field is invalid and a mutable {} default is unsafe.
Apply this diff to the dataclass field:
```diff
-    prompts: Dict[str, str] = Field(default={}, init=False)
+    prompts: Dict[str, str] = field(default_factory=dict, init=False)
```

And add the import (outside this hunk):

```python
from dataclasses import field
```

🤖 Prompt for AI Agents
In src/models/config.py around lines 276 to 282, the dataclass field 'prompts'
is incorrectly using pydantic.Field with a mutable default and init=False;
replace it with dataclasses.field using default_factory=dict (i.e., prompts:
Dict[str, str] = field(default_factory=dict)) and remove init=False and the
pydantic.Field usage, and add the import "from dataclasses import field" at the
top of the file.
```python
def _validate_and_process(self) -> None:
    """Validate and load the profile."""
    checks.profile_check(self.name)
    opened_profile = checks.read_profile_file("utils.profiles", self.name, logger)
    if opened_profile is None:
        logger.debug("Profile is empty or does not exist.")
    else:
        self.prompts = opened_profile.PROFILE_CONFIG.get("system_prompts", {})
```
🛠️ Refactor suggestion
Harden against missing/invalid PROFILE_CONFIG in profile modules
If PROFILE_CONFIG is absent or not a dict, this will raise at runtime. Guard and log instead of failing.
```diff
-    else:
-        self.prompts = opened_profile.PROFILE_CONFIG.get("system_prompts", {})
+    else:
+        cfg = getattr(opened_profile, "PROFILE_CONFIG", {})
+        if not isinstance(cfg, dict):
+            logger.warning(
+                "Profile module %r missing PROFILE_CONFIG dict; skipping.", self.name
+            )
+            return
+        self.prompts = cfg.get("system_prompts", {}) or {}
```
🤖 Prompt for AI Agents
In src/models/config.py around lines 287 to 295, the code assumes
opened_profile.PROFILE_CONFIG exists and is a dict which can raise at runtime;
change the logic to check that opened_profile is not None and that
hasattr(opened_profile, "PROFILE_CONFIG") and
isinstance(opened_profile.PROFILE_CONFIG, dict) before assigning to
self.prompts, otherwise set self.prompts to an empty dict and use logger.warning
or logger.debug to record that PROFILE_CONFIG was missing or invalid (include
the profile name) so the code continues safely without exceptions.
```python
custom_profile: Optional[CustomProfile] = Field(default=None, init=False)
```
Invalid Field argument on BaseModel; hide computed field from schema instead
Field(init=False) is not a valid pydantic Field kwarg. If you want this attribute to be computed/populated internally and excluded from serialization/schema, mark it as exclude=True.
```diff
-    custom_profile: Optional[CustomProfile] = Field(default=None, init=False)
+    custom_profile: Optional[CustomProfile] = Field(default=None, exclude=True)
```

🤖 Prompt for AI Agents
+ custom_profile: Optional[CustomProfile] = Field(default=None, exclude=True)🤖 Prompt for AI Agents
In src/models/config.py around lines 308-309, the Field currently uses an
invalid keyword init=False; replace it so the attribute is treated as an
internally-populated/computed field excluded from schema/serialization by using
Field(default=None, exclude=True) (or, if you intend a truly private attribute
not tracked by pydantic at all, convert it to a PrivateAttr) to remove the
invalid kwarg and hide it from the schema.
tisnik left a comment
we'll talk about this approach on today's OLS arch call. It is not clear whether this approach is the right way to configure things in LCORE. Please stay tuned
@Jdubrick we'll prefer to use mature configuration and not to use any customer-specific code, if possible. We hope it is manageable this way, as many specific things can be done via llama stack providers (guardrails etc.). WDYT?
@tisnik in regards to hitting parity with how RCS was with the custom profiles (https://github.com/road-core/service/tree/main/ols/customize), to not use any customer-specific code, what would you recommend? Is it because the profile would be located in the codebase, would it be preferred if a path was provided instead?
@tisnik If I understand correctly, the goal is to reach feature parity first, and LCS is currently missing the question validation and topic summary, so we need a way to provide additional prompt instructions besides the system prompt for query.
@yangcao77 yes, we talked about this on the arch meeting (I did not want to make the decision on my own) and the preferred way is to use a config file (that can be big, but whatever). Q validator and topic summary are on the plan; we are going to focus on being compatible in version 0.3.0 (to be delivered at end of September).
We, Ansible Lightspeed, have an interest in profiles or however the ability to switch between configurations at runtime is to be termed/defined/implemented. We have a use-case to support customers switching between, for example, WatsonX or vllm or Azure inference providers at runtime; together with different models. I.e. without having to rebuild

We know this will require the ability to define different System Prompts as, in our experience, the wording of it can differ between providers and models to optimise the LLMs response and optimise Tool Call selection. So, having some way to say "Use X profile" or "Use Y profile" etc would be very welcome.

IIRC
@ldjebran @TamiTakamiya @jameswnl JFYI

We can probably overload our container image and

This will relate to https://issues.redhat.com/browse/AAP-44959
Ok that is very good use case. And I think it should be added into LCORE, but IMHO it does not require project-specific code (which we'd like to avoid, if possible). So +100 to have customization profiles fully configurable + the ability to specify/select which profile to use. We have
I wholeheartedly agree.. I'd prefer not to write project-specific code. We use

Each profile is in essence an amalgamation of:
We have the additional consideration of possibly supporting different profile settings for external providers too: I think profile support in the external providers is out of the initial scope.. it's not straight forward.

This PR suggests @Jdubrick may have had similar thoughts; but it's not clear how they were intended to be used.
@tisnik it sounds like we want to keep this profile type of customization out of the project code so I can close this PR. While it seems like there is a desire for my team and others to have this kind of customization however, I know you mentioned via config file. Is it okay if I rework these changes and open a new PR where you pass a profile via a config file? cc @yangcao77
That would be awesome, thank you in advance!
Description
In Road Core there was a way to have a profile that allowed you to have customized prompts, validation, etc. See: https://github.com/road-core/service/tree/main/ols/customize
This PR aims to add that same functionality, starting with prompts. It adds my team's profile (RHDH) and updates the way you can customize prompts, allowing you to choose a profile instead.
Also this PR updates the documentation to outline this new way to add prompts and adds ample tests.
Type of change
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit