make agents.md more concise #117
Walkthrough

The PR restructures AGENTS.md documentation, removing legacy and deprecation notes in favor of a condensed, modernized layout with sections for Key Components, environment setup, architecture overview, and testing guidelines. No executable code is modified.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10–15 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 0
🧹 Nitpick comments (1)
AGENTS.md (1)
17-20: Minor style refinement: lead with action before context. The preamble "Before making changes..." can be tightened to strengthen directness. Consider consolidating:
```diff
-**Before making changes to deprecated/transitional directories:**
-- Inform the user about the directory's status
-- Get explicit confirmation before proceeding
-- For new features: suggest adding to `src/lightspeed_evaluation/` instead
+**Always inform the user and confirm before changes to deprecated/transitional directories.** For new features, suggest `src/lightspeed_evaluation/` instead.
```

This is a minor polish and entirely optional.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- AGENTS.md (4 hunks)
- CLAUDE.md (0 hunks)
- CLAUDE.md (1 hunks)
🧰 Additional context used
🧠 Learnings (12)
📓 Common learnings
Learnt from: CR
Repo: lightspeed-core/lightspeed-evaluation PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-24T16:59:09.244Z
Learning: Review AGENTS.md - this file contains agent-related guidelines and documentation
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 47
File: src/lightspeed_evaluation/core/output/generator.py:140-145
Timestamp: 2025-09-11T12:47:06.747Z
Learning: User asamal4 prefers that non-critical comments are sent when actual code changes are pushed, not on unrelated commits.
📚 Learning: 2025-11-24T16:59:09.244Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-evaluation PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-24T16:59:09.244Z
Learning: Review AGENTS.md - this file contains agent-related guidelines and documentation
Applied to files:
CLAUDE.md
AGENTS.md
📚 Learning: 2025-11-24T16:59:21.420Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-evaluation PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-24T16:59:21.420Z
Learning: Do not add features to the legacy `lsc_agent_eval/` directory; use `src/lightspeed_evaluation/` instead
Applied to files:
AGENTS.md
📚 Learning: 2025-07-16T09:42:00.691Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 19
File: lsc_agent_eval/data/script/eval3/setup.sh:1-3
Timestamp: 2025-07-16T09:42:00.691Z
Learning: Scripts in the lsc_agent_eval/data directory are meant to be simple examples/samples for teams to customize according to their needs, not production-ready code.
Applied to files:
AGENTS.md
📚 Learning: 2025-11-24T16:59:21.420Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-evaluation PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-24T16:59:21.420Z
Learning: All new evaluation features should be added to `src/lightspeed_evaluation/` core framework
Applied to files:
AGENTS.md
📚 Learning: 2025-07-16T10:41:09.399Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 19
File: lsc_agent_eval/tests/core/agent_goal_eval/test_evaluator.py:274-297
Timestamp: 2025-07-16T10:41:09.399Z
Learning: In the lsc_agent_eval package, the team prefers to focus on core functionality testing first and considers testing cleanup script execution after setup failure as early optimization, noting that there's no guarantee cleanup scripts will run successfully anyway.
Applied to files:
AGENTS.md
📚 Learning: 2025-08-26T11:17:48.640Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 28
File: lsc_eval/runner.py:99-103
Timestamp: 2025-08-26T11:17:48.640Z
Learning: The lsc_eval generic evaluation tool is intended to become the primary evaluation framework, replacing an existing evaluation tool in the lightspeed-evaluation repository.
Applied to files:
AGENTS.md
📚 Learning: 2025-09-10T15:48:14.671Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 47
File: src/lightspeed_evaluation/core/output/generator.py:43-49
Timestamp: 2025-09-10T15:48:14.671Z
Learning: In the lightspeed-evaluation framework, system configuration uses Pydantic data models (SystemConfig, OutputConfig, LoggingConfig, etc.) rather than plain dictionaries. Components like OutputHandler receive properly structured Pydantic models, so direct attribute access (e.g., system_config.output.enabled_outputs) is the correct approach.
Applied to files:
AGENTS.md
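
To make the pattern in this learning concrete, here is a minimal sketch; aside from `system_config.output.enabled_outputs`, which the learning names, every field and default below is invented for illustration and does not reflect the framework's actual model definitions:

```python
from pydantic import BaseModel


class OutputConfig(BaseModel):
    enabled_outputs: list[str] = []  # attribute named in the learning above


class LoggingConfig(BaseModel):
    level: str = "INFO"  # hypothetical field


class SystemConfig(BaseModel):
    output: OutputConfig = OutputConfig()
    logging: LoggingConfig = LoggingConfig()


system_config = SystemConfig(output=OutputConfig(enabled_outputs=["csv", "json"]))

# Structured Pydantic models allow direct attribute access instead of dict lookups:
assert system_config.output.enabled_outputs == ["csv", "json"]
```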
📚 Learning: 2025-09-09T08:08:40.654Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 47
File: src/lightspeed_evaluation/core/api/__init__.py:3-3
Timestamp: 2025-09-09T08:08:40.654Z
Learning: API_KEY and OPENAI_API_KEY are distinct environment variables with different purposes in the lightspeed-evaluation codebase - they should not be treated as interchangeable fallbacks.
Applied to files:
AGENTS.md
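
A short sketch of what "not interchangeable" means in practice; the review does not say which service each key authenticates, so only the no-fallback behavior is being illustrated:

```python
import os

# Read each credential independently; per the learning above, neither
# variable may be used as a fallback for the other.
openai_api_key = os.environ.get("OPENAI_API_KEY")
api_key = os.environ.get("API_KEY")

if openai_api_key is None:
    # Fail loudly rather than silently substituting API_KEY.
    raise RuntimeError("OPENAI_API_KEY is not set")
if api_key is None:
    raise RuntimeError("API_KEY is not set")
```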
📚 Learning: 2025-11-24T16:59:21.420Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-evaluation PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-24T16:59:21.420Z
Learning: Applies to src/lightspeed_evaluation/core/metrics/custom/**/*.py : Custom metrics should be added to `src/lightspeed_evaluation/core/metrics/custom/` directory
Applied to files:
AGENTS.md
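
As a rough illustration of where such a metric would live, a hypothetical module follows; the framework's real metric interface and registration mechanism are not shown in this review, so the function shape and file name are assumptions:

```python
# Hypothetical file: src/lightspeed_evaluation/core/metrics/custom/response_length.py


def response_length_score(response: str, max_length: int = 500) -> float:
    """Toy metric: 1.0 for responses within max_length, decaying linearly after."""
    if len(response) <= max_length:
        return 1.0
    overshoot = len(response) - max_length
    return max(0.0, 1.0 - overshoot / max_length)
```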
📚 Learning: 2025-09-18T23:59:37.026Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-evaluation PR: 55
File: src/lightspeed_evaluation/core/system/validator.py:146-155
Timestamp: 2025-09-18T23:59:37.026Z
Learning: In the lightspeed-evaluation project, the DataValidator in `src/lightspeed_evaluation/core/system/validator.py` is intentionally designed to validate only explicitly provided user evaluation data, not resolved metrics that include system defaults. When turn_metrics is None, the system falls back to system config defaults, and this validation separation is by design.
Applied to files:
AGENTS.md
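
A minimal sketch of the validation separation this learning describes; all names here are hypothetical and only illustrate the None-means-defaults contract:

```python
KNOWN_METRICS = {"answer_relevancy", "faithfulness"}  # hypothetical registry


def validate_user_metrics(metrics: list[str]) -> None:
    """Validates only explicitly provided user data, mirroring DataValidator."""
    unknown = set(metrics) - KNOWN_METRICS
    if unknown:
        raise ValueError(f"Unknown metrics: {sorted(unknown)}")


def resolve_turn_metrics(turn_metrics, system_defaults):
    if turn_metrics is None:
        # By design, falling back to system config defaults skips user-data
        # validation; defaults are trusted configuration, not user input.
        return list(system_defaults)
    validate_user_metrics(turn_metrics)
    return turn_metrics
```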
📚 Learning: 2025-11-24T16:59:21.420Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-evaluation PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-24T16:59:21.420Z
Learning: Applies to config/*.yaml : Add sample evaluation data YAML file when adding new features
Applied to files:
AGENTS.md
🪛 LanguageTool
AGENTS.md
[style] ~17-~17: Consider shortening or rephrasing this to strengthen your wording.
Context: ... Active | ✅ Yes | ✅ Yes | Before making changes to deprecated/transitional directories: ...
(MAKE_CHANGES)
[style] ~67-~67: Consider using a different verb for a more formal wording.
Context: ...e, # pylint: disable`). Always try to fix the underlying issue. If a fix becomes ...
(FIX_RESOLVE)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Pylinter
- GitHub Check: tests (3.11)
- GitHub Check: tests (3.13)
- GitHub Check: mypy
- GitHub Check: tests (3.12)
🔇 Additional comments (6)
CLAUDE.md (1)
1-1: File appears to serve as a reference pointer only. Line 1 contains just a filename reference. If this file is intended to provide context or an overview of AGENTS.md, consider whether additional descriptive content would be helpful for navigation.
AGENTS.md (5)
6-67: Comprehensive and well-organized guidelines; documentation quality is strong. The layout clearly separates critical guardrails (directory deprecation, pytest-mock requirement, quality checks) from supporting context. All key learnings are accurately reflected: directory statuses, pytest-mock enforcement, Pydantic models, API_KEY distinction, and custom metrics locations.
44-67: Quality checks section is thorough and actionable. The Makefile targets and enforcement flow (fix → notify on pre-existing issues → re-run → complete) are clear and set appropriate expectations. The note about avoiding lint suppressions without justification is valuable.
96-101: Environment variables section appropriately defers to README.md. Listing key variables (OPENAI_API_KEY, API_KEY, KUBECONFIG) with brief context works well. Cross-reference to README.md for the full list is reasonable, assuming README.md is actively maintained with this information.
Can you confirm that `README.md` contains a comprehensive environment variables section that covers all variables mentioned here and others? If README.md is out of sync or incomplete, this section should either be expanded or note which variables are essential vs. optional.
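
For what it's worth, a quick illustrative check of the three variables named here; whether each is required or optional is exactly the open question above:

```python
import os

for var in ("OPENAI_API_KEY", "API_KEY", "KUBECONFIG"):
    print(f"{var}: {'set' if os.environ.get(var) else 'missing'}")
```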
104-118: Architecture diagram is clear and current. The module structure accurately reflects the `src/lightspeed_evaluation/` layout and aligns with learnings about core framework organization (api, llm, metrics, models, output, script, system, pipeline, runner).
159-165: Adding new features guidance accurately reflects learnings. Custom metrics location and registration pattern are correct. Reference to updating `config/system.yaml` with metrics metadata aligns with learnings about YAML-based configuration.
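
To picture what "metrics metadata" in `config/system.yaml` might look like, a hedged sketch follows; the key names and schema are invented, since the actual schema is defined by the framework's Pydantic config models, and the snippet assumes PyYAML is available:

```python
import yaml

# Hypothetical fragment of config/system.yaml; the real keys may differ.
fragment = """
metrics:
  custom:response_length:
    threshold: 0.8
    description: Penalizes overly long responses
"""

print(yaml.safe_load(fragment)["metrics"])
```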
Description
Improve AGENTS.md (make it more concise and highlight key guidelines)
Type of change
Tools used to create PR
Identify any AI code assistants used in this PR (for transparency and review context)
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit
Documentation