This repository compares Robot Framework test suites generated by different AI tools (GitHub Copilot, Claude Code, GitLab Duo, Amazon Q). Place each tool’s outputs here, then use the provided comparison prompt and template to generate a single, evidence-based comparison report.
Create one folder per tool (named after the AI tool), for example `Tools/GitHub Copilot/` or `Tools/AmazonQ/`. Inside that folder include:
- `chat/` — the chat transcript(s) with the assistant
- `robot_tests/` — Robot Framework suites and resources produced by the tool
- `robot_results/` — execution artifacts and Robocop reports (latest timestamp preferred)
- Tool-specific RF standards files for adherence scoring: one of `.github/`, `.claude/`, `.amazonq/rules`, or `.gitlab/duo`
Example:

```
Tools/
  GitHub Copilot/
    chat/
    robot_tests/
    robot_results/
    .github/
  AmazonQ/
    chat/
    robot_tests/
    robot_results/
    .amazonq/rules
```
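As a sketch, the per-tool layout can be scaffolded from the shell before copying in each tool’s outputs (the tool name and standards folder below are just one example):

```bash
# Scaffold the expected folders for a single tool (example name: "AmazonQ").
mkdir -p "Tools/AmazonQ/chat" \
         "Tools/AmazonQ/robot_tests" \
         "Tools/AmazonQ/robot_results" \
         "Tools/AmazonQ/.amazonq/rules"
```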
- `AI tools comparison-TEMPLATE.md`: The scoring matrix the assistant will fill.
- Documentation reference (validation): `RF-docs-MCP-server/rf_docs_server.py` and `RF-docs-MCP-server/generate_library_docs.sh`.
- Prepare folders: Add one folder per tool under `Tools/` with `chat/`, `robot_tests/`, `robot_results/`, and the tool’s standards files.
- Create your comparison round file:
  `cp "AI tools comparison-TEMPLATE.md" "AI tools comparison - Round x - Model y.md"`
- Initialize RF docs and services (MCP-ready): run the fast start to build the container, generate library docs, and prepare the MCP config for your IDE. The MCP server is stdio-based and is spawned by clients when needed.
  `./fast-start.sh`
  After it completes, reload VS Code so it picks up the MCP configuration (Command Palette → Developer: Reload Window), or restart VS Code.
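  As a rough sketch only, a stdio MCP server entry for an IDE generally looks like the JSON below; the server name, command, and file location here are assumptions for illustration, not necessarily what `fast-start.sh` generates:

  ```json
  {
    "servers": {
      "rf-docs": {
        "type": "stdio",
        "command": "python",
        "args": ["RF-docs-MCP-server/rf_docs_server.py"]
      }
    }
  }
  ```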
- Start the comparison in your AI assistant:
  - Get the comparison prompt from Confluence and provide it to your assistant in this repository’s context.
  - The assistant analyzes the per-tool folders, latest Robocop results, and chat transcripts, then fills in your "Round x" file.
- Evidence and scoring:
  - Cite evidence via file paths and line ranges from each tool’s `robot_tests/` and `robot_results/`.
  - Use chat transcripts in `chat/` for “Prompt Responsiveness & Control”.
  - Use the latest Robocop report per tool from `robot_results/` for the static analysis category.
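  - For example, a single evidence citation might point to `Tools/AmazonQ/robot_tests/login_tests.robot`, lines 12–25 (a hypothetical path, shown only to illustrate the expected format).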
- The AI assistant will use the documentation references in this repo to validate versions, syntax, and keyword usage during scoring:
  - Robot Framework: version 7.4.1 (see `RF-docs-MCP-server/rf_docs_server.py`).
  - Browser library: version 19.12.3; RequestsLibrary: version 0.9.7 (see `RF-docs-MCP-server/generate_library_docs.sh`).
- This repo’s purpose is comparison only. Any steps to run environments, apps, or test generation live elsewhere (in the source tool projects).