add additional fields to output for non-error scenarios #114
asamal4 merged 2 commits into lightspeed-core:main from
Conversation
Walkthrough
Reorders CSV columns, replaces two numeric token fields on EvaluationResult with six optional string fields, serializes additional turn_data fields into JSON when building results, and updates tests and configuration to align with the new CSV schema and model fields.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20–30 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
📜 Recent review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
✅ Files skipped from review due to trivial changes (1)
🧰 Additional context used
📓 Path-based instructions (2)
config/system.yaml
📄 CodeRabbit inference engine (AGENTS.md)
Files:
config/*.yaml
📄 CodeRabbit inference engine (AGENTS.md)
Files:
🧠 Learnings (2)
📓 Common learnings
📚 Learning: 2025-11-24T16:59:21.420Z
Applied to files:
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
🔇 Additional comments (2)
Actionable comments posted: 0
🧹 Nitpick comments (1)
src/lightspeed_evaluation/pipeline/evaluation/evaluator.py (1)
169-184: Consider populating additional fields in error results for debugging.

The `_create_error_result` method currently only populates `query` and `response`. For better debuggability, consider also populating `contexts`, `expected_response`, and other fields when available, so users can see what input data led to the error. This is optional since error results are intentionally minimal, but having context can help diagnose issues.
```diff
 def _create_error_result(
     self, request: EvaluationRequest, reason: str, execution_time: float
 ) -> EvaluationResult:
     """Create an ERROR result for failed evaluation."""
+    turn_data = request.turn_data
     return EvaluationResult(
         conversation_group_id=request.conv_data.conversation_group_id,
         turn_id=request.turn_id,
         metric_identifier=request.metric_identifier,
         result="ERROR",
         score=None,
         threshold=None,
         reason=reason,
-        query=request.turn_data.query if request.turn_data else "",
-        response=request.turn_data.response or "" if request.turn_data else "",
+        query=turn_data.query if turn_data else "",
+        response=turn_data.response or "" if turn_data else "",
+        contexts=_to_json_str(turn_data.contexts) if turn_data else None,
+        expected_response=turn_data.expected_response if turn_data else None,
         execution_time=execution_time,
     )
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- src/lightspeed_evaluation/core/constants.py (1 hunks)
- src/lightspeed_evaluation/core/models/data.py (1 hunks)
- src/lightspeed_evaluation/pipeline/evaluation/evaluator.py (4 hunks)
- tests/unit/core/output/test_generator.py (3 hunks)
- tests/unit/pipeline/evaluation/test_evaluator.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
src/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.py: Use type hints for all public functions and methods
Use Google-style docstrings for all public APIs
Use custom exceptions from `core.system.exceptions` for error handling
Use structured logging with appropriate levels
Files:
src/lightspeed_evaluation/core/constants.py
src/lightspeed_evaluation/core/models/data.py
src/lightspeed_evaluation/pipeline/evaluation/evaluator.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
tests/**/*.py: Use pytest with pytest-mock (mocker fixture), not unittest.mock, for all mocking
Test files should use naming convention `test_*.py` for files, `test_*` for functions, and `Test*` for classes
Files:
tests/unit/pipeline/evaluation/test_evaluator.py
tests/unit/core/output/test_generator.py
🧠 Learnings (7)
📓 Common learnings
Learnt from: CR
Repo: lightspeed-core/lightspeed-evaluation PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-24T16:59:21.420Z
Learning: All new evaluation features should be added to `src/lightspeed_evaluation/` core framework
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 47
File: src/lightspeed_evaluation/core/output/generator.py:140-145
Timestamp: 2025-09-11T12:47:06.747Z
Learning: User asamal4 prefers that non-critical comments are sent when actual code changes are pushed, not on unrelated commits.
📚 Learning: 2025-07-29T05:15:39.782Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 22
File: lsc_agent_eval/src/lsc_agent_eval/core/agent_goal_eval/evaluator.py:87-100
Timestamp: 2025-07-29T05:15:39.782Z
Learning: In the lsc_agent_eval framework, the substring evaluation logic in the `_evaluate_substring` method requires ALL expected keywords to be present in the agent response (logical AND), not just any keyword (logical OR). This is a stricter evaluation condition that was intentionally changed and may be subject to future modifications.
Applied to files:
tests/unit/pipeline/evaluation/test_evaluator.py
📚 Learning: 2025-09-19T12:32:06.403Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 55
File: src/lightspeed_evaluation/pipeline/evaluation/errors.py:18-31
Timestamp: 2025-09-19T12:32:06.403Z
Learning: When analyzing method calls, always examine the complete call site including all parameters before suggesting fixes. In the lightspeed-evaluation codebase, mark_all_metrics_as_error in processor.py correctly passes both resolved_turn_metrics and resolved_conversation_metrics parameters.
Applied to files:
tests/unit/pipeline/evaluation/test_evaluator.py
📚 Learning: 2025-11-24T16:59:21.420Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-evaluation PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-24T16:59:21.420Z
Learning: All new evaluation features should be added to `src/lightspeed_evaluation/` core framework
Applied to files:
src/lightspeed_evaluation/core/models/data.py
tests/unit/core/output/test_generator.py
📚 Learning: 2025-07-16T13:20:45.006Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 19
File: lsc_agent_eval/tests/core/agent_goal_eval/test_evaluator.py:0-0
Timestamp: 2025-07-16T13:20:45.006Z
Learning: In the lsc_agent_eval package, evaluation results use distinct values: "FAIL" means the evaluation ran successfully but the result was negative, while "ERROR" means there was an issue executing the evaluation itself (e.g., setup script failed, API connection failed).
Applied to files:
tests/unit/core/output/test_generator.py
📚 Learning: 2025-07-16T13:20:40.632Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 19
File: lsc_agent_eval/tests/core/agent_goal_eval/test_evaluator.py:0-0
Timestamp: 2025-07-16T13:20:40.632Z
Learning: In the lsc_agent_eval package, evaluation results use "FAIL" for evaluations that ran but didn't pass the criteria, and "ERROR" for errors in the evaluation process itself (like setup script failures, API errors, etc.).
Applied to files:
tests/unit/core/output/test_generator.py
📚 Learning: 2025-07-16T13:21:53.225Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 19
File: lsc_agent_eval/tests/core/agent_goal_eval/test_evaluator.py:190-214
Timestamp: 2025-07-16T13:21:53.225Z
Learning: In the lsc_agent_eval framework, there's a distinction between FAIL and ERROR results:
- FAIL: evaluation failed (e.g., script verification failed, agent response doesn't match expected criteria) - result.error is None
- ERROR: error running eval (e.g., setup script failed, agent API error) - result.error contains error message
Applied to files:
tests/unit/core/output/test_generator.py
🧬 Code graph analysis (3)
tests/unit/pipeline/evaluation/test_evaluator.py (1)
src/lightspeed_evaluation/core/api/client.py (1)
query(71-105)
src/lightspeed_evaluation/pipeline/evaluation/evaluator.py (1)
src/lightspeed_evaluation/core/api/client.py (1)
query(71-105)
tests/unit/core/output/test_generator.py (1)
src/lightspeed_evaluation/core/models/data.py (1)
EvaluationResult(370-428)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: tests (3.12)
- GitHub Check: tests (3.13)
- GitHub Check: tests (3.11)
- GitHub Check: mypy
🔇 Additional comments (8)
src/lightspeed_evaluation/core/constants.py (1)
35-40: LGTM! The new CSV columns are correctly added and align with the new optional fields in `EvaluationResult`. The naming is consistent and the ordering (actual values followed by expected values) is logical.
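For reference, a minimal sketch of how the expanded column list might look in `constants.py`; the names mirror the `csv_columns` example shown in the config review further below, so treat this as illustrative rather than the actual source:

```python
# Illustrative sketch of the expanded CSV schema (not the actual constants.py).
SUPPORTED_CSV_COLUMNS = [
    "conversation_group_id",
    "turn_id",
    "metric_identifier",
    "result",
    "score",
    "threshold",
    "reason",
    "execution_time",
    "query",
    "response",
    "api_input_tokens",
    "api_output_tokens",
    "judge_llm_input_tokens",
    "judge_llm_output_tokens",
    # New turn-data fields: actual values first, then expected values
    "tool_calls",
    "contexts",
    "expected_response",
    "expected_intent",
    "expected_keywords",
    "expected_tool_calls",
]
```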
tests/unit/pipeline/evaluation/test_evaluator.py (2)
156-158: LGTM! The test correctly validates that the new fields (`query`, `response`, `contexts`) are populated from turn data in successful evaluation paths. The JSON serialization of `contexts` as `'["Context"]'` aligns with the `_to_json_str` helper behavior.
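A hedged, pytest-style sketch of the kind of check described here; `make_result` is a hypothetical helper that evaluates a single turn and returns an `EvaluationResult`, not a fixture from the actual test file:

```python
# Illustrative only; the real test lives in tests/unit/pipeline/evaluation/test_evaluator.py.
import json

def test_turn_result_carries_turn_data(make_result):
    # make_result is a hypothetical helper standing in for the real test setup.
    result = make_result(query="q", response="r", contexts=["Context"])
    assert result.query == "q"
    assert result.response == "r"
    assert json.loads(result.contexts) == ["Context"]  # stored as a JSON string
```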
367-370: LGTM! The test correctly expects `contexts` and `expected_response` to be `None` in the exception handling path, which aligns with `_create_error_result` not populating these optional fields.
tests/unit/core/output/test_generator.py (2)
328-331: LGTM! The test correctly uses pre-serialized JSON strings for `contexts` and `expected_keywords`, matching the expected format after `_to_json_str` processing in the evaluator.
375-390: LGTM! Comprehensive assertions validating CSV content with new fields. The test correctly expects:
- JSON-serialized strings for `contexts` and `expected_keywords`
- Empty string for `contexts` in the ERROR row (CSV representation of `None`)
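A rough sketch of how such CSV assertions can be written; the function name, row values, and `csv_path` are assumptions for illustration, not the actual test code:

```python
# Illustrative sketch: read the generated CSV and check the new columns.
import csv
import json

def check_csv_rows(csv_path: str) -> None:
    with open(csv_path, newline="", encoding="utf-8") as handle:
        rows = list(csv.DictReader(handle))
    assert len(rows) == 3
    assert json.loads(rows[0]["contexts"]) == ["Context"]           # JSON string round-trips
    assert json.loads(rows[0]["expected_keywords"]) == ["keyword"]  # placeholder expected value
    assert rows[1]["expected_response"] != ""                       # row 2 carries expected_response
    assert rows[2]["result"] == "ERROR"
    assert rows[2]["contexts"] == ""                                # CSV representation of None
```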
src/lightspeed_evaluation/pipeline/evaluation/evaluator.py (2)
26-33: LGTM! The `_to_json_str` helper correctly handles serialization with appropriate fallbacks:
- Returns `None` for `None`, empty lists, and empty dicts
- Uses `json.dumps` with `default=str` for robust serialization
- Falls back to `str()` on serialization errors

As per coding guidelines, the function has type hints for all parameters and return type.
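As a quick illustration of the behavior described above (the helper itself appears in the diff in the later review; the import path is assumed from the file layout):

```python
# Expected behavior of _to_json_str per the notes above (import path assumed).
from lightspeed_evaluation.pipeline.evaluation.evaluator import _to_json_str

assert _to_json_str(None) is None
assert _to_json_str([]) is None
assert _to_json_str({}) is None
assert _to_json_str(["Context"]) == '["Context"]'
assert _to_json_str({"tool": "search"}) == '{"tool": "search"}'
# Non-JSON-serializable values fall back to default=str inside json.dumps,
# and serialization errors fall back to str(value).
```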
138-159: LGTM! The result construction correctly populates the new fields:
- Extracts `turn_data` for cleaner repeated access
- Applies `_to_json_str` to complex fields (`tool_calls`, `contexts`, `expected_keywords`, `expected_tool_calls`)
- Preserves string fields (`expected_response`, `expected_intent`) directly
- Handles missing `turn_data` gracefully with appropriate defaults
turn_datagracefully with appropriate defaultssrc/lightspeed_evaluation/core/models/data.py (1)
400-420: LGTM!The new optional fields are well-defined with:
- Appropriate types (
Optional[str]for JSON-serialized data)- Clear descriptions documenting their purpose
- Consistent default values (
None)The field additions align with the evaluator's usage and CSV column definitions in constants. As per coding guidelines, the model uses type hints appropriately.
Actionable comments posted: 0
🧹 Nitpick comments (1)
src/lightspeed_evaluation/pipeline/evaluation/evaluator.py (1)
27-35: Consider edge case: nested objects with circular references.

The `_to_json_str` helper uses `default=str` as a fallback for non-serializable types, which is good. However, if `value` contains circular references or deeply nested objects, `json.dumps` could raise `RecursionError` (not caught) or produce unexpectedly large strings. Consider adding a recursion depth limit or handling `RecursionError`:

```diff
 def _to_json_str(value: Any) -> Optional[str]:
     """Convert any value to JSON string. Returns None for empty values."""
     if value is None or value == [] or value == {}:
         return None
     try:
         return json.dumps(value, indent=None, default=str)
-    except (TypeError, ValueError):
+    except (TypeError, ValueError, RecursionError):
         return str(value)
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- src/lightspeed_evaluation/core/constants.py (1 hunks)
- src/lightspeed_evaluation/core/models/data.py (1 hunks)
- src/lightspeed_evaluation/pipeline/evaluation/evaluator.py (4 hunks)
- tests/unit/core/output/test_generator.py (3 hunks)
- tests/unit/pipeline/evaluation/test_evaluator.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
tests/**/*.py: Use pytest with pytest-mock (mocker fixture), not unittest.mock, for all mocking
Test files should use naming convention `test_*.py` for files, `test_*` for functions, and `Test*` for classes
Files:
tests/unit/core/output/test_generator.py
tests/unit/pipeline/evaluation/test_evaluator.py
src/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.py: Use type hints for all public functions and methods
Use Google-style docstrings for all public APIs
Use custom exceptions from `core.system.exceptions` for error handling
Use structured logging with appropriate levels
Files:
src/lightspeed_evaluation/core/models/data.py
src/lightspeed_evaluation/pipeline/evaluation/evaluator.py
src/lightspeed_evaluation/core/constants.py
🧠 Learnings (7)
📓 Common learnings
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 47
File: src/lightspeed_evaluation/core/output/generator.py:140-145
Timestamp: 2025-09-11T12:47:06.747Z
Learning: User asamal4 prefers that non-critical comments are sent when actual code changes are pushed, not on unrelated commits.
📚 Learning: 2025-07-16T13:20:45.006Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 19
File: lsc_agent_eval/tests/core/agent_goal_eval/test_evaluator.py:0-0
Timestamp: 2025-07-16T13:20:45.006Z
Learning: In the lsc_agent_eval package, evaluation results use distinct values: "FAIL" means the evaluation ran successfully but the result was negative, while "ERROR" means there was an issue executing the evaluation itself (e.g., setup script failed, API connection failed).
Applied to files:
tests/unit/core/output/test_generator.py
📚 Learning: 2025-07-16T13:20:40.632Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 19
File: lsc_agent_eval/tests/core/agent_goal_eval/test_evaluator.py:0-0
Timestamp: 2025-07-16T13:20:40.632Z
Learning: In the lsc_agent_eval package, evaluation results use "FAIL" for evaluations that ran but didn't pass the criteria, and "ERROR" for errors in the evaluation process itself (like setup script failures, API errors, etc.).
Applied to files:
tests/unit/core/output/test_generator.py
📚 Learning: 2025-07-16T13:21:53.225Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 19
File: lsc_agent_eval/tests/core/agent_goal_eval/test_evaluator.py:190-214
Timestamp: 2025-07-16T13:21:53.225Z
Learning: In the lsc_agent_eval framework, there's a distinction between FAIL and ERROR results:
- FAIL: evaluation failed (e.g., script verification failed, agent response doesn't match expected criteria) - result.error is None
- ERROR: error running eval (e.g., setup script failed, agent API error) - result.error contains error message
Applied to files:
tests/unit/core/output/test_generator.py
📚 Learning: 2025-07-28T14:26:03.119Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 22
File: lsc_agent_eval/src/lsc_agent_eval/core/agent_goal_eval/eval_data.py:146-153
Timestamp: 2025-07-28T14:26:03.119Z
Learning: In the lsc_agent_eval framework, evaluations are identified by a composite key of (conversation_group, eval_id). This design allows the same eval_id to exist across different conversation groups (logged as warning) but prevents duplicates within the same conversation group (validation error). This supports logical separation and reusable eval_ids across different conversation contexts.
Applied to files:
tests/unit/core/output/test_generator.py
📚 Learning: 2025-11-24T16:59:21.420Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-evaluation PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-24T16:59:21.420Z
Learning: All new evaluation features should be added to `src/lightspeed_evaluation/` core framework
Applied to files:
src/lightspeed_evaluation/core/models/data.py
tests/unit/pipeline/evaluation/test_evaluator.py
📚 Learning: 2025-09-09T14:58:10.630Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 47
File: src/lightspeed_evaluation/pipeline/evaluation/amender.py:32-41
Timestamp: 2025-09-09T14:58:10.630Z
Learning: In the lightspeed-evaluation framework, when API is enabled, every turn should make a fresh API call regardless of whether the turn already has response or tool_calls data. This ensures consistency and fresh responses for each evaluation run.
Applied to files:
src/lightspeed_evaluation/pipeline/evaluation/evaluator.py
🧬 Code graph analysis (2)
tests/unit/core/output/test_generator.py (2)
src/lightspeed_evaluation/core/api/client.py (1)
query(72-106)
src/lightspeed_evaluation/core/models/data.py (1)
EvaluationResult(378-446)
tests/unit/pipeline/evaluation/test_evaluator.py (1)
src/lightspeed_evaluation/core/api/client.py (1)
query(72-106)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: mypy
- GitHub Check: tests (3.12)
- GitHub Check: tests (3.13)
- GitHub Check: tests (3.11)
🔇 Additional comments (8)
tests/unit/pipeline/evaluation/test_evaluator.py (2)
157-160: LGTM! Test validates new fields correctly. The test appropriately verifies that turn-level evaluation results include the new `query`, `response`, and `contexts` fields populated from the input turn data. The `contexts` field is correctly expected as a JSON-serialized string.
368-372: LGTM! Exception path test correctly validates minimal field population. The test appropriately verifies that when an exception occurs during evaluation, the result still carries the basic `query` and `response` from the turn data, while optional fields like `contexts` and `expected_response` are set to `None`. This aligns with the error handling behavior in the evaluator.
src/lightspeed_evaluation/pipeline/evaluation/evaluator.py (2)
165-195: LGTM! Result construction properly populates new fields. The updated `EvaluationResult` construction correctly:
- Uses a local `turn_data` variable for consistent access
- Populates new fields (`tool_calls`, `contexts`, `expected_response`, `expected_intent`, `expected_keywords`, `expected_tool_calls`) using `_to_json_str` for serialization
- Maintains null-safety with `if turn_data` guards
- Retains existing token tracking fields
206-227: Verify field defaults in EvaluationResult model for optional fields. The `_create_error_result` method omits the new optional fields (`tool_calls`, `contexts`, `expected_response`, `expected_intent`, `expected_keywords`, `expected_tool_calls`), while the success path populates them. Check whether these fields have `default=None` in the `EvaluationResult` model definition. If they do, the current implementation is safe; if not, explicitly set them to `None` for consistency with test expectations (lines 368-372 in test_evaluator.py expect these as `None`).
tests/unit/core/output/test_generator.py (2)
331-373: LGTM! Test coverage expanded to validate new fields. The test appropriately validates:
- Row 1: New fields `query`, `response`, `contexts`, and `expected_keywords` are populated and correctly serialized
- Row 2: The `expected_response` field is included
- Row 3: ERROR scenario with minimal fields (query populated, contexts empty)
This aligns with the PR objective to add additional fields for non-error scenarios while keeping error output minimal.
390-406: LGTM! Assertions validate CSV structure and new fields. The assertions correctly verify:
- Three rows in the CSV output
- Row 1 contains the new fields with proper values
- Row 2 includes `expected_response`
- Row 3 (ERROR) has minimal data with empty contexts
src/lightspeed_evaluation/core/constants.py (1)
25-46: LGTM! CSV columns reordered and expanded logically. The updated `SUPPORTED_CSV_COLUMNS` list:
- Reorders `execution_time` to follow `threshold` (more logical grouping with evaluation metadata)
- Places token-related fields together (`api_input_tokens`, `api_output_tokens`, `judge_llm_input_tokens`, `judge_llm_output_tokens`)
- Adds six new fields at the end for turn data (`tool_calls`, `contexts`, `expected_response`, `expected_intent`, `expected_keywords`, `expected_tool_calls`)

The ordering is logical and groups related fields together.
src/lightspeed_evaluation/core/models/data.py (1)
410-438: Note: Inconsistency between AI summary and actual implementation. The AI summary states that `api_input_tokens` and `api_output_tokens` were "replaced" with the new fields, but the actual code shows these fields are retained (lines 410-413) alongside the new optional string fields (lines 420-438). This is the correct approach, as token tracking is independent of the new turn data fields.

The field definitions are well-structured:
- Existing token fields retained for backward compatibility
- Six new optional string fields added with clear descriptions
- All new fields default to `None` for optional population
Actionable comments posted: 0
🧹 Nitpick comments (1)
config/system.yaml (1)
163-183: Clarify data field vs. metric distinction in documentation.

The new CSV columns include output data fields (tokens, contexts, expected values) that are distinct from metrics defined in `metrics_metadata`. Consider adding a brief comment in the YAML to clarify that `csv_columns` includes both evaluation metrics and raw data fields from turn execution, to aid future maintainers.

```yaml
# CSV columns to include (includes both metrics and raw turn execution data)
csv_columns:
  # Identifiers
  - "conversation_group_id"
  - "turn_id"
  # Metric evaluation results
  - "metric_identifier"
  - "result"
  - "score"
  - "threshold"
  - "reason"
  # Turn execution metadata
  - "execution_time"
  # Turn input/output
  - "query"
  - "response"
  # API call token metrics
  - "api_input_tokens"
  - "api_output_tokens"
  - "judge_llm_input_tokens"
  - "judge_llm_output_tokens"
  # Tool and context data
  - "tool_calls"
  - "contexts"
  # Expected values for comparison
  - "expected_response"
  - "expected_intent"
  - "expected_keywords"
  - "expected_tool_calls"
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
config/system.yaml(1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
config/system.yaml
📄 CodeRabbit inference engine (AGENTS.md)
Add new metrics metadata to `config/system.yaml` in the metrics_metadata section
Files:
config/system.yaml
config/*.yaml
📄 CodeRabbit inference engine (AGENTS.md)
Add sample evaluation data YAML file when adding new features
Files:
config/system.yaml
🧠 Learnings (2)
📓 Common learnings
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 26
File: lsc_agent_eval/src/lsc_agent_eval/core/utils/api_client.py:124-151
Timestamp: 2025-08-22T09:16:29.070Z
Learning: In lsc_agent_eval project, the maintainer (asamal4) prefers reactive error handling - adding support for additional error response fields only when they occur in practice, rather than preemptively handling all possible error formats.
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 47
File: src/lightspeed_evaluation/core/output/generator.py:140-145
Timestamp: 2025-09-11T12:47:06.747Z
Learning: User asamal4 prefers that non-critical comments are sent when actual code changes are pushed, not on unrelated commits.
📚 Learning: 2025-11-24T16:59:21.420Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-evaluation PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-24T16:59:21.420Z
Learning: Applies to config/system.yaml : Add new metrics metadata to `config/system.yaml` in the metrics_metadata section
Applied to files:
config/system.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: tests (3.11)
- GitHub Check: mypy
- GitHub Check: tests (3.12)
- GitHub Check: tests (3.13)
🔇 Additional comments (1)
config/system.yaml (1)
163-183: Verify all new CSV columns are supported by the data model and output generator. The CSV configuration adds 12 new columns including token counts, contexts, and expected values. Before merging, ensure these columns are:
- Defined in the data model (EvaluationResult or similar)
- Properly serialized in the output generator
- Consistently named across the codebase
Additionally, confirm that a sample evaluation data YAML file has been added per project guidelines.
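Part of that verification could be automated with a small consistency check between the config and the framework constants; the config key path (`output.csv_columns`) and test name below are assumptions based on this PR, not confirmed repository details:

```python
# Hypothetical consistency check: every configured CSV column must be a
# supported column in the framework constants.
import yaml

from lightspeed_evaluation.core.constants import SUPPORTED_CSV_COLUMNS

def test_configured_csv_columns_are_supported():
    with open("config/system.yaml", encoding="utf-8") as handle:
        system_config = yaml.safe_load(handle)
    configured = system_config["output"]["csv_columns"]  # assumed config key path
    assert set(configured) <= set(SUPPORTED_CSV_COLUMNS)
```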
VladimirKadlec
left a comment
Please add the description of the new output columns to the table in docs/configuration.md
Other than that LGTM.
@VladimirKadlec done. thank you !! |
Description
Added additional fields to the output. This will help us do quick comparisons for non-error scenarios. For error scenarios, the data remains minimal, as per the existing logic.
Type of change
Tools used to create PR
Identify any AI code assistants used in this PR (for transparency and review context)
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit
New Features
Changed Behavior
Tests
Documentation