Monitoring #53
Conversation
- Added AuditError method to logger for structured error logging.
- Introduced AuditLog type for storing audit log entries.
- Created audit_logs table migration for persistent storage of audit events.
- Implemented audit worker for processing and flushing audit logs in batches.
- Enhanced AuthMiddleware and various services to log errors using AuditError.
- Added health check endpoint for audit logging status.
- Updated configuration to include audit settings such as batch size and retention days.
- Implemented bulk insert functionality for database operations.
- Added tests for bulk insert and audit logging functionalities.
Pull Request Overview
This PR introduces a comprehensive audit logging system that captures logs to a database with a background flushing mechanism and cleanup features. The system ensures logs are not lost while providing a smooth developer experience through asynchronous processing and automatic retention management.
- Implements background audit worker with batching and retry mechanisms for reliable log persistence
- Adds bulk insert functionality to support efficient batch operations
- Integrates cleanup scheduler for automatic audit log retention management
Reviewed Changes
Copilot reviewed 21 out of 21 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| apps/server/workers/audit_logs.go | Core audit worker implementation with batching, retry logic, and cleanup scheduling |
| apps/server/types/queries.go | Extends query parameters to support bulk insert operations |
| apps/server/types/config.go | Adds audit configuration structure |
| apps/server/types/app.go | Defines AuditLog type with UUID, timestamp, and hash fields |
| apps/server/database/actions.go | Implements bulk insert functionality with SQL generation |
| apps/server/config/logger.go | Adds AuditError method with hash generation for deduplication |
| apps/server/config/config.go | Integrates audit configuration loading from environment variables |
| apps/server/main.go | Wires up audit system initialization and graceful shutdown |
| apps/server/migrations/audit_logs.sql | Creates audit_logs table with constraints and indexes |
| apps/server/api/internal/audit.go | Provides endpoint for retrieving audit logs |
| apps/server/tests/ | Comprehensive test coverage for bulk insert and cleanup functionality |
```go
auditWg.Go(func() {
	defer func() {
		workerMutex.Lock()
		workerRunning = false
		workerMutex.Unlock()
	}()
	runAuditWorker(auditCtx)
})
```
Copilot AI · Sep 29, 2025
The method Go does not exist on sync.WaitGroup. Use auditWg.Add(1) before starting the goroutine and auditWg.Done() inside the goroutine.
Suggested change:

```go
auditWg.Add(1)
go func() {
	defer func() {
		workerMutex.Lock()
		workerRunning = false
		workerMutex.Unlock()
	}()
	defer auditWg.Done()
	runAuditWorker(auditCtx)
}()
```
```go
flushBatch(batch)
batch = batch[:0]
```
Copilot AI · Sep 29, 2025
Consider using batch = make([]types.AuditLog, 0, cfg.Audit.BatchSize) instead of batch[:0] to ensure the underlying array capacity is maintained and avoid potential memory reallocations.
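For reference, Go's slice semantics here, as a standalone sketch (not from the PR): re-slicing with `batch[:0]` resets the length to zero while the backing array and its capacity are retained, so subsequent appends up to capacity do not reallocate.

```go
package main

import "fmt"

func main() {
	batch := make([]int, 0, 8)
	batch = append(batch, 1, 2, 3)

	batch = batch[:0] // length resets to 0; capacity and backing array are retained
	fmt.Println(len(batch), cap(batch)) // prints: 0 8
}
```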
```go
auditEntry := map[string]any{
	"timestamp":  entry.Timestamp,
	"level":      entry.Level,
	"message":    entry.Message,
	"attrs":      entry.Attrs,
	"entry_hash": entry.EntryHash,
}
```
Copilot AI · Sep 29, 2025
The manual mapping from struct to map is error-prone and duplicates field names. Consider using struct tags and reflection or a serialization library to automatically convert the AuditLog struct.
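One way to remove the duplication is to round-trip through `encoding/json` so the map keys come from the struct tags rather than hand-written strings. The field set below is an assumption based on the entry fields shown above, not the PR's actual `AuditLog` definition:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Hypothetical shape of AuditLog; field names and tags are assumptions.
type AuditLog struct {
	Timestamp time.Time      `json:"timestamp"`
	Level     string         `json:"level"`
	Message   string         `json:"message"`
	Attrs     map[string]any `json:"attrs"`
	EntryHash string         `json:"entry_hash"`
}

// toMap converts the struct to a map using its json tags, so renaming a
// field (or its tag) never leaves a stale hand-written key behind.
func toMap(e AuditLog) (map[string]any, error) {
	b, err := json.Marshal(e)
	if err != nil {
		return nil, err
	}
	var m map[string]any
	if err := json.Unmarshal(b, &m); err != nil {
		return nil, err
	}
	return m, nil
}

func main() {
	m, _ := toMap(AuditLog{Level: "error", Message: "db timeout"})
	fmt.Println(m["level"], m["message"]) // prints: error db timeout
}
```

The JSON round-trip costs an extra serialization per entry; for a hot path, a reflection-based mapper or code generation would avoid that.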
```go
ErrNoWhereCondition           = fmt.Errorf("no WHERE condition provided for update/delete operation")
ErrNoRawSQL                   = fmt.Errorf("no raw SQL provided for raw operation")
ErrInvalidOperation           = fmt.Errorf("invalid operation specified")
ErrBothDataAndEntriesProvided = fmt.Errorf("cannot provide both Data and Entries for insert operation - use one or the other")
```
Copilot AI · Sep 29, 2025
The error message could be more specific about which field to use in which scenario. Consider: 'cannot provide both Data and Entries for insert operation - use Data for single inserts or Entries for bulk inserts'
Suggested change:

```go
ErrBothDataAndEntriesProvided = fmt.Errorf("cannot provide both Data and Entries for insert operation - use Data for single inserts or Entries for bulk inserts")
```
```go
// Store query for debugging
result.Query = sql
result.Args = values
```
Copilot AI · Sep 29, 2025
Query and Args should be stored for both bulk insert and single insert paths for consistency. Currently only the single insert path stores these debugging values.
```go
	return fmt.Sprintf("%x", hash)
}
return fallbackHash
}

// Generate SHA256 hash
hash := sha256.Sum256(jsonData)
return fmt.Sprintf("%x", hash)
```
Copilot AI · Sep 29, 2025
[nitpick] Consider truncating the hash to reduce storage overhead since the full 64-character SHA256 hash may be unnecessarily long for deduplication purposes. A 16-32 character prefix would likely provide sufficient uniqueness.
Suggested change:

```go
	return fmt.Sprintf("%x", hash)[:32] // Truncate to 32 hex chars (16 bytes)
}
return fallbackHash
}

// Generate SHA256 hash
hash := sha256.Sum256(jsonData)
return fmt.Sprintf("%x", hash)[:32] // Truncate to 32 hex chars (16 bytes)
```
Description & Motivation
This PR introduces audit logs that are written to the database in the background through a batched flushing mechanism.
This ensures logs don't get lost while giving developers a smooth experience.
Related Issues
Closes #31
Scope of Change
(Mark or list the relevant ones)