Add sample prompt for creating tailpipe table #59
base: main
Conversation
Pull Request Overview
This PR adds a new documentation section that provides a sample prompt for creating Tailpipe tables using AI tools, along with instructions on how to build and validate the changes.
- Added a new sidebar entry for "Using AI" in the docs configuration
- Created a detailed guide in docs/develop/using-ai.md explaining the process and best practices for using AI to create Tailpipe tables
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| docs/sidebar.json | Added a new sidebar entry for the "Using AI" doc |
| docs/develop/using-ai.md | Introduced a comprehensive guide for AI usage |
Preview Available 🚀
Commit Author: Karan Popat
Preview Link: tailpipe-io-git-docs-add-ai-doc-section-turbot.vercel.app
A few points/questions:
### Prerequisites

1. Open the plugin repository in your IDE (Cursor, VS Code, Windsurf, etc.) to give AI tools access to all existing code and documentation.
2. Ensure you have Tailpipe installed (`brew install turbot/tap/tailpipe` for MacOS or the installation script for Linux/WSL).
Might be worth making `Linux` and `WSL` links to the page containing the script if we're not going to provide it.
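For reference, the doc could also inline both commands. A minimal sketch; the Linux script URL below follows the pattern used by other Turbot CLIs and is an assumption to verify against the Tailpipe install page:

```sh
# MacOS
brew install turbot/tap/tailpipe

# Linux / WSL (assumed install-script URL; confirm on tailpipe.io)
sudo /bin/sh -c "$(curl -fsSL https://tailpipe.io/install/tailpipe.sh)"
```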
1. Open the plugin repository in your IDE (Cursor, VS Code, Windsurf, etc.) to give AI tools access to all existing code and documentation.
2. Ensure you have Tailpipe installed (`brew install turbot/tap/tailpipe` for MacOS or the installation script for Linux/WSL).
3. Set up access credentials for the cloud provider (e.g., AWS credentials).
4. Configure test log sources (e.g., S3 buckets with sample logs, CloudWatch Log Groups).
For artifact-based sources, it's actually better to have some samples locally when developing; that doesn't require a complex partition config like an S3 bucket, and it also allows the AI tool to `cat` the file(s) and learn the actual structure.
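For example, staging a couple of samples on disk is enough for the tool to read real records (paths and filenames below are illustrative, not from the PR):

```sh
# Stage a few representative log samples locally; an AI tool can then
# read them to learn the actual field structure before writing the table.
mkdir -p ~/tailpipe-samples/<log_type>
cp ./testdata/sample-*.jsonl ~/tailpipe-samples/<log_type>/
cat ~/tailpipe-samples/<log_type>/sample-001.jsonl
```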
2. Create the table implementation with:
   - Proper source metadata configuration
   - Row enrichment logic for standard and log-specific fields
   - Extractor implementation for parsing logs
`Extractor` isn't required 99% of the time; usually a `Mapper` is all that is required.
1. Build the plugin using the `make` command.
2. Verify the table is registered using `tailpipe plugin list`.
`tailpipe plugin show <plugin_name>` will list out the Tables; `plugin list` merely shows configured partitions.
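A quick sketch of the difference, assuming the plugin is named `aws`:

```sh
# Lists the tables the plugin provides
tailpipe plugin show aws

# Merely shows configured partitions, not the plugin's tables
tailpipe plugin list
```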
3. Check the table schema and structure using the Tailpipe MCP server.
4. Test basic querying functionality with `tailpipe query "select * from aws_<log_type> limit 1"`.
`tailpipe table show <table_name>` will output the schema (which can also be done via the MCP server). A query will not work until there has been a collection of a partition, though?
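In that case the doc's validation steps could be sketched as below (table and partition names are placeholders; `collect` runs before `query`):

```sh
# Inspect the schema without any collected data
tailpipe table show aws_<log_type>

# Collect a configured partition first...
tailpipe collect aws_<log_type>.s3_logs

# ...then the smoke-test query can return rows
tailpipe query "select * from aws_<log_type> limit 1"
```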
```md
Your goal is to configure log sources for <log_type> to validate your table implementation.

1. Configure appropriate source in ~/.tailpipe/config/aws.tpc:

   For S3 logs:

   partition "aws_<log_type>" "s3_logs" {
     source "aws_s3_bucket" {
       connection = connection.aws.test_account
       bucket     = "test-logs-bucket"
     }
   }

   For CloudWatch logs:

   partition "aws_<log_type>" "cloudwatch_logs" {
     source "aws_cloudwatch_log_group" {
       connection     = connection.aws.test_account
       log_group_name = "/aws/my-log-group"
     }
   }

2. Ensure test logs are available in your configured source with sufficient data variety to test all table columns and features.
```
I'm not sure how well this will work; how does the AI know where your logs are?
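One way to address that, echoing the local-samples suggestion above, is to make the location explicit in the partition config. A sketch, assuming Tailpipe's core `file` source (the source name and arguments should be verified against the Tailpipe docs):

```hcl
# Hypothetical local partition; the path is illustrative
partition "aws_<log_type>" "local_samples" {
  source "file" {
    paths = ["~/tailpipe-samples/<log_type>"]
  }
}
```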