tools: add first draft of AGENTS.md (tested with Gemini, edited by Claude, and Codex) #26502

spytheman wants to merge 22 commits into vlang:master from
Conversation
Force-pushed from 70aac80 to 9bc0eda
…Variables sections

Adds three high-value sections to help AI agents work more effectively with the V compiler:

- Error Reporting: API for c.error(), c.warn(), c.note() in checker/parser
- Option/Result Types: Syntax, common bugs, test locations, and cgen pitfalls
- Environment Variables: VFLAGS, VAUTOFIX, VEXE, and V2-specific variables

These additions provide timeless guidance on compiler internals without time-specific references.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
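As a side illustration of what the Option/Result section covers (not part of the commit itself): a minimal V sketch of the Option (`?`) and Result (`!`) return types; all function names here are hypothetical.

```v
// Minimal sketch of V's Option (?) and Result (!) types.
// Names are illustrative, not taken from AGENTS.md.
fn find_user(id int) ?string {
	if id == 1 {
		return 'admin'
	}
	return none // Option: signal absence of a value
}

fn read_config(path string) !string {
	if path == '' {
		return error('empty path') // Result: signal an error
	}
	return 'config for ' + path
}

fn main() {
	name := find_user(2) or { 'guest' } // `or` block handles `none`
	println(name)
	cfg := read_config('v.mod') or { panic(err) }
	println(cfg)
}
```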
…nually, thanks Brother Richard)
Split two lines exceeding the 100-character limit to pass markdown linting.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Make it clear that check-md must be run before committing .md files:

- Added "(required for .md files before commits)" to Tools section
- Updated Gotchas to mention both fmt and check-md requirements

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
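For instance, the pre-commit checks the commit refers to would look like this (file names are placeholders):

```sh
v fmt -w vlib/v/checker/checker.v   # format changed .v files
v check-md AGENTS.md                # required for .md files before commits
```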
…ler models to benefit from the AGENTS.md file.
I think it works well enough as a first draft.

@medvednikov what do you think?
Great job. I'd also add that it should generally avoid the
Just hit this one... need a rule to tell the agent to put all v command line options between `v` and the subcommand/file.

Which fails... will also help with

I wish V had always enforced this, but... we are where we are.
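A small sketch of the failure mode being described (`hello.v` is a hypothetical file):

```sh
v -cc gcc run hello.v    # works: -cc is parsed as a compiler option
v run hello.v -cc gcc    # fails: everything after hello.v goes to the compiled program
```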
…nd before the subcommand/file. Edit the rest for clarity.
The current rules say to build

I think some apps may hardcode
@kbkpbot thanks. I was worried about ensuring self-compilation at all times (as a kind of additional test, and to make sure the AI does not get deceived by a stale binary), but you are right: saving it may be faster in more cases, especially for smaller fixes to examples/tools that do not need to change the compiler. I'll test changing the policy. For the record, with the current one from this PR, from 2532e0f , that uses

The review after that, which also ran tests, took 3m 42s (but this time without
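One way the relaxed policy could look (a sketch, assuming the standard `v -o` rebuild from CONTRIBUTING.md; `v2` is an arbitrary name):

```sh
# Rebuild only when compiler sources changed; keep ./v untouched otherwise.
v -o v2 cmd/v             # build a fresh compiler as ./v2
./v2 test vlib/v/tests/   # verify with the fresh binary, avoiding a stale ./v
```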
Hey @spytheman, went through this pretty heavily with

The AIs basically said it needed more information / guardrails, etc. to be useful to themselves properly, and here's what shook out (I shared part of this with @medvednikov already, but this is a bit more updated from what I shared when he asked me): AGENTS.md
@ylluminate, thank you 🙇🏻. I've compared the version from here and your V_AGENTS.md, first by length: and then with

The result:

```md
─ Worked for 4m 25s ──────────────────────

• Overall Scores

Assessment basis: I read both files fully, cross-checked against
CONTRIBUTING.md, TESTS.md, and doc/docs.md, and spot-validated commands
locally.

• Scoring Breakdown
• Why V_AGENTS.md scores higher
• Critical weaknesses in V_AGENTS.md (why not 90+)
• Why PR_26502_AGENTS.md is decent but weaker
• Main gaps in PR_26502_AGENTS.md
• Recommendation for a future AGENTS.md
```
* `-printfn <n> -o file.c` emits only the named C function to the
  output file. The name uses the `modulename__fnname` format (e.g.
  `main__main`). This flag can be repeated to print multiple
  functions. Methods/generics may use more complex C names; use
  `-keepc` to confirm exact symbols.

this should become:

* `-printfn <name> -o file.c` emits only the named C function to
  standard output. The `name` uses the `modulename__fnname` format (e.g.
  `main__main`). This flag can be repeated to print multiple
  functions. Methods/generics may use more complex C names; use
  `-keepc` to confirm exact symbols.
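For example, a hypothetical invocation (paths illustrative), following the option-ordering rule discussed above:

```sh
# Print the generated C code of main__main while compiling hello_world.v.
v -printfn main__main -o /tmp/hello.c examples/hello_world.v
```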
If we allow AI to create PRs, perhaps we should have a rule that the first comment must include "Created by ". It would save time wondering if it was clever AI or a human that created it. Of course it's very obvious sometimes, but as the AI gets better...
I've repeated the same test with the modified version of V_AGENTS.md that @ylluminate linked. The results are better (essentially the same bug fix, but discovered faster -
I am a bit concerned by the size of the AGENTS.md file. According to the models I've tried, anything above ~250 lines and ~10KB may be a problem for smaller models 🤔. codex itself with 5.2 has no issues with it.
I'll fix that separately. It is indeed a problem for some tools:

edit: done in 11bf3dd .
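A quick way to check a file against that guideline (a trivial sketch):

```sh
wc -l -c AGENTS.md   # line and byte counts vs the ~250 line / ~10KB guideline
```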
The ghostty project has https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.md as a separate file. |
@spytheman in my experience the

We are, very quickly, getting to the point where context windows will not be an issue any longer for things like this. I know I do not use any tools at all anymore that would present such a problem, and any LLM whose context window can't work with this kind of size probably isn't something that can be counted on as reliable.
Just so you know: you will never be able to enforce this. Making such a document is 100% wishful thinking.

We have passed the threshold where this is actually discernible now for those that are adept at using such tools properly, or that invest time to curate the results properly when anything is slightly off. I have repeatedly seen that even the most advanced AI detection tools available at the highest price on the market are beaten handily, and rank AI-generated content as "100% human"... 😆

Creating such a policy simply creates more headaches for the humans, and is not dissimilar to having a robust CoC that creates pain and trouble. If you really MUST have a policy, it should be simply this:
No one gets a reward for not using it. There's no prize. Just maybe an imaginary high five for being a smart person. Fabricated ephemeral pats on the back.

A side note: crying and pointing "AI!!!!" is the "new thing" going forward. I'm actually friends with two people on both sides of this fence presently, where one has been accused of genuine content being AI generated, and another is claiming it about another party. Now they are gearing up for court litigation... There is simply no way to prove things, and there will never be a way to truly do it, regardless of any amount of effort that goes into it.

Intelligence is intelligence, and whether or not we call it "artificial" doesn't matter, since it is still probably going to be more intelligent than most who grace the surface of this world... Anyway, pontifications over, it's just the state we're in now.
I did not do any significant work on these - but they should have the things discussed thus far integrated as examples.
The point of such a policy file is not to declare a stance that AI is good or that AI is bad. It is to create an easy way for me, or anyone in my place (even a bot eventually), to quickly dismiss and close really low-effort PRs (slop), whose authors did not bother to even format the files / run the basic tests before submitting. AIs just increase the chance of those happening a lot.
I agree completely. Intent and free will however matter a lot, and unless AIs develop those qualities (and ethics ultimately), they will be tools that are used by an increasingly wide variety of people for all kinds of purposes. Most people like to help but are not good developers themselves. Some also just do random things for the lulz. What motivated me to create this PR is, in part, the desire to help AIs produce a bit higher quality results by default, regardless of the intent of the people driving said AI models. I am well aware that skillful users can already make very high quality PRs - I've seen them here, just like I have seen the pure garbage too.
@spytheman you make a really good point about slop, and I think we're actually saying the same thing from different angles. Your concern isn't really about AI vs human - it's about quality standards. And you're right that AI lowers the barrier for low-effort submissions.

But here's the thing: the AGENTS.md file we've been working on together IS the answer to that problem, and it's a far better one than any policy document could be. A policy file says "label yourself and we'll judge you." An AGENTS.md says "here's exactly how to do it right - no excuses." It raises the floor. If someone (human or AI) submits a PR that didn't run fmt, didn't run tests, didn't follow the workflow - the AGENTS.md made the expectations explicit and discoverable. The PR gets rejected on merit, not on provenance. That's cleaner and actually enforceable.

Your second comment really resonates - "to help AIs produce a bit higher quality results by default, regardless of the intent of the people driving said AI models." That's exactly right. The AGENTS.md is a quality multiplier that works regardless of who or what is reading it. A policy file is a speed bump that only honest people stop for.

I think what you've built here with this PR is genuinely more valuable than any AI policy could be. You're not trying to gatekeep - you're trying to raise the bar... So yeah, that's the right move.



