
Improve acceptance tests logging #3098

Open
st3penta wants to merge 1 commit into conforma:main from st3penta:improve-acceptance-output

Conversation


@st3penta st3penta commented Feb 5, 2026

Make the logs generated by the acceptance tests more readable: log only the failed scenarios, and add a summary of the failed tests at the end of the suite so that failures are easy to identify.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@qodo-code-review

Review Summary by Qodo

Improve acceptance tests logging and output readability

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• Add scenario failure tracking with summary output at test end
• Suppress verbose container operation logs to reduce noise
• Log scenario start/end status directly to TTY for real-time visibility
• Exit with code 1 on test failure without verbose Go test output
Diagram
flowchart LR
  A["Test Execution"] -->|Track Failures| B["scenarioTracker"]
  A -->|Log to TTY| C["Real-time Status"]
  A -->|Filter Messages| D["shouldSuppress"]
  B -->|Print Summary| E["Failed Scenarios Report"]
  D -->|Reduce Noise| F["Cleaner Logs"]
  E --> G["Test Output"]
  C --> G
  F --> G

File Changes

1. acceptance/acceptance_test.go ✨ Enhancement +80/-2

Add failure tracking and real-time scenario logging

• Add failedScenario struct and scenarioTracker to track failed test scenarios
• Implement addFailure() and printSummary() methods for failure tracking and reporting
• Log scenario start/end status to /dev/tty for real-time visibility during test execution
• Capture scenario failures in After hook and print summary before test exit
• Change exit behavior to use os.Exit(1) instead of t.Fatal() to avoid verbose output

acceptance/acceptance_test.go


2. acceptance/log/log.go ✨ Enhancement +65/-6

Filter verbose container operation logs

• Add shouldSuppress() function with patterns for verbose container operation logs
• Apply suppression filter to all logging methods: Log(), Logf(), Printf(), Warn(), Warnf(), Error(), Errorf(), Info(), Infof()
• Suppress messages containing patterns like "Creating container", "Container started", "Waiting for container", etc.
• Refactor logging methods to format messages before the suppression check, for consistency

acceptance/log/log.go
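A substring-based filter of this shape is straightforward to sketch. The pattern strings below are the examples quoted in the review; the PR's actual list is longer and may differ.

```go
package main

import (
	"fmt"
	"strings"
)

// suppressedPatterns lists substrings of noisy container-lifecycle
// messages. These three are the examples named in the review; the
// real list in acceptance/log/log.go may contain more entries.
var suppressedPatterns = []string{
	"Creating container",
	"Container started",
	"Waiting for container",
}

// shouldSuppress reports whether an already-formatted log message
// matches one of the noisy patterns and should be dropped. Formatting
// the message first (fmt.Sprintf) is what lets a single check cover
// both Log and Logf-style methods.
func shouldSuppress(msg string) bool {
	for _, p := range suppressedPatterns {
		if strings.Contains(msg, p) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shouldSuppress("Creating container for image foo:latest")) // true
	fmt.Println(shouldSuppress("assertion failed: unexpected status"))     // false
}
```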



@qodo-code-review
Copy link
Contributor

Code Review by Qodo

🐞 Bugs (2) 📘 Rule violations (2) 📎 Requirement gaps (0)



Action required

1. scenarioErr printed to stderr 📘 Rule violation ⛨ Security
Description
• The acceptance test output prints the raw scenarioErr value (%v) and writes scenario
  names/URIs directly to /dev/tty and os.Stderr.
• This output is unstructured (not JSON) and may inadvertently expose sensitive information embedded
  in error messages or scenario metadata in CI logs.
• This conflicts with secure logging/error-output expectations that logs avoid sensitive data and
  that detailed internal error details should not be exposed broadly.
Code

acceptance/acceptance_test.go[R95-104]

+	fmt.Fprintf(os.Stderr, "\n")
+	fmt.Fprintf(os.Stderr, "========================================\n")
+	fmt.Fprintf(os.Stderr, "FAILED SCENARIOS SUMMARY (%d)\n", len(st.failedScenarios))
+	fmt.Fprintf(os.Stderr, "========================================\n")
+	for i, fs := range st.failedScenarios {
+		fmt.Fprintf(os.Stderr, "%d. %s\n", i+1, fs.Name)
+		fmt.Fprintf(os.Stderr, "   Location: %s\n", fs.Location)
+		if fs.Error != nil {
+			fmt.Fprintf(os.Stderr, "   Error: %v\n", fs.Error)
+		}
Evidence
PR Compliance ID 5 requires logs to avoid sensitive data and be structured for auditing, and PR
Compliance ID 4 requires detailed internal error information to be limited to secure/internal logs.
The new summary and TTY logging prints scenario identifiers and the raw error string directly to
terminal/CI output, without structuring or redaction.

Rule 5: Generic: Secure Logging Practices
Rule 4: Generic: Secure Error Handling
acceptance/acceptance_test.go[95-104]
acceptance/acceptance_test.go[135-153]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The acceptance tests print raw `scenarioErr` and scenario metadata to `/dev/tty` and `os.Stderr` in a free-form format. This can leak sensitive details that may appear in error messages and produces unstructured logs that are harder to audit.

## Issue Context
Acceptance test logs commonly end up in CI systems and shared build logs. Error values may include internal details (and sometimes secrets), so output should be minimized/redacted or gated behind an explicit debug option.

## Fix Focus Areas
- acceptance/acceptance_test.go[95-110]
- acceptance/acceptance_test.go[135-153]



2. os.Exit bypasses TestMain 🐞 Bug ⛯ Reliability
Description
• TestFeatures calls os.Exit(1) when the Godog suite fails, terminating the whole test binary
  immediately.
• This bypasses TestMain’s post-run logic (e.g., snaps.Clean), and can skip any remaining
  tests/cleanup in the package.
• It also degrades go test failure reporting to a raw “exit status 1”, making failures harder to
  interpret in CI.
Code

acceptance/acceptance_test.go[R214-222]

+	exitCode := suite.Run()
+
+	// Print summary of failed scenarios
+	tracker.printSummary(t)
+
+	if exitCode != 0 {
+		// Exit directly without t.Fatal to avoid verbose Go test output
+		os.Exit(1)
	}
Evidence
The test now exits the process directly from inside TestFeatures on non-zero suite exit code. Since
TestMain only runs cleanup after t.Run returns, this early os.Exit prevents snaps.Clean (and
anything after t.Run) from executing.

acceptance/acceptance_test.go[214-222]
acceptance/acceptance_test.go[225-232]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`TestFeatures` calls `os.Exit(1)` on failure. This aborts the entire test process immediately and bypasses `TestMain` cleanup (including `snaps.Clean`).

### Issue Context
The package defines a `TestMain` that expects to run after `t.Run()` completes. Calling `os.Exit` inside `TestFeatures` prevents this.

### Fix
Replace `os.Exit(1)` with a normal Go test failure (e.g., `t.Fatalf(...)` or `t.FailNow()` after printing the summary). If you want minimal noise, keep the custom summary printing but still fail via the testing API.

### Fix Focus Areas
- acceptance/acceptance_test.go[214-222]
- acceptance/acceptance_test.go[225-232]
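The difference the review describes can be shown with a small stand-alone sketch. `runSuite` and `cleanup` below are stand-ins for `suite.Run()` and `TestMain`'s post-run work (`snaps.Clean`); they are not the project's real functions.

```go
package main

import "fmt"

// cleanup stands in for TestMain's post-run work (e.g. snaps.Clean),
// which only runs if the process is not killed by os.Exit first.
func cleanup() { fmt.Println("cleanup ran") }

// runSuite stands in for godog's suite.Run(); 1 simulates a failed suite.
func runSuite() int { return 1 }

func main() {
	defer cleanup()
	if runSuite() != 0 {
		fmt.Println("suite failed")
		// Calling os.Exit(1) here would terminate the process
		// immediately and the deferred cleanup above would never run —
		// the bug the review describes. Returning normally (or calling
		// t.Fatalf in the real test) lets deferred/post-run work execute.
		return
	}
}
```

Running this prints "suite failed" followed by "cleanup ran"; swapping the `return` for `os.Exit(1)` would drop the second line.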




Remediation recommended

3. /dev/tty errors silently ignored 📘 Rule violation ⛯ Reliability
Description
• The code attempts to write to /dev/tty but ignores the failure case (err != nil) entirely,
  which is a silent failure when running in environments without a TTY (common in CI).
• This can cause start/stop diagnostics to go missing with no indication of why, reducing debuggability and
  making failures harder to understand.
• Close errors are also ignored (tty.Close()), further reducing visibility into I/O problems.
Code

acceptance/acceptance_test.go[R135-153]

+		// Log scenario start - write to /dev/tty to bypass all output capture
+		if tty, err := os.OpenFile("/dev/tty", os.O_WRONLY, 0); err == nil {
+			fmt.Fprintf(tty, "\n▶ STARTING: %s (%s)\n", sc.Name, sc.Uri)
+			tty.Close()
+		}
+
		return context.WithValue(ctx, testenv.Scenario, sc), nil
	})

	sc.After(func(ctx context.Context, scenario *godog.Scenario, scenarioErr error) (context.Context, error) {
+		// Log scenario end with status - write to /dev/tty to bypass capture
+		if tty, err := os.OpenFile("/dev/tty", os.O_WRONLY, 0); err == nil {
+			if scenarioErr != nil {
+				fmt.Fprintf(tty, "✗ FAILED: %s (%s)\n\n", scenario.Name, scenario.Uri)
+			} else {
+				fmt.Fprintf(tty, "✓ PASSED: %s (%s)\n\n", scenario.Name, scenario.Uri)
+			}
+			tty.Close()
+		}
Evidence
PR Compliance ID 3 requires potential failure points to be handled with meaningful context rather
than silently ignored. The /dev/tty open and close operations can fail, but the code only logs
when err == nil and otherwise drops the event without any fallback or diagnostic message.

Rule 3: Generic: Robust Error Handling and Edge Case Management
acceptance/acceptance_test.go[135-153]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Writes to `/dev/tty` are best-effort, but currently failures to open/close the TTY are silently ignored.

## Issue Context
In CI or non-interactive runs there is often no TTY; without a fallback or warning, scenario start/end messages disappear with no explanation.

## Fix Focus Areas
- acceptance/acceptance_test.go[135-153]
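A minimal fix in the spirit of this finding is to fall back to stderr when `/dev/tty` cannot be opened, warning once so CI runs explain where the output went. The helper name `statusWriter` and the warn-once behavior are illustrative, not part of the PR.

```go
package main

import (
	"fmt"
	"io"
	"os"
	"sync"
)

var warnOnce sync.Once

// statusWriter returns a writer for scenario status lines: /dev/tty
// when available, otherwise os.Stderr with a one-time warning, so
// non-interactive environments (CI) still get the diagnostics and an
// explanation of the fallback. Returns the writer and a close func.
func statusWriter() (io.Writer, func()) {
	tty, err := os.OpenFile("/dev/tty", os.O_WRONLY, 0)
	if err != nil {
		warnOnce.Do(func() {
			fmt.Fprintf(os.Stderr, "note: /dev/tty unavailable (%v); scenario status goes to stderr\n", err)
		})
		return os.Stderr, func() {}
	}
	return tty, func() { _ = tty.Close() }
}

func main() {
	w, closeFn := statusWriter()
	defer closeFn()
	fmt.Fprintf(w, "▶ STARTING: example scenario\n")
}
```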



4. Errors can be suppressed 🐞 Bug ✧ Quality
Description
• The new suppression filter drops messages by substring match for Log/Info/Warn/Error paths.
• This can suppress error-level and teardown/wait lifecycle logs that are important for diagnosing
  hangs/leaks (e.g., kind global cluster destroy/wait messages).
• Because suppression happens before emitting, debugging information can disappear silently with no
  alternative signal.
Code

acceptance/log/log.go[R129-142]

func (l logger) Error(message string) {
+	if shouldSuppress(message) {
+		return
+	}
	l.Logf("[ERROR] %s", message)
}

func (l logger) Errorf(format string, args ...any) {
-	l.Logf("[ERROR] "+format, args...)
+	msg := fmt.Sprintf(format, args...)
+	if shouldSuppress(msg) {
+		return
+	}
+	l.Logf("[ERROR] %s", msg)
}
Evidence
The logger now conditionally returns early for Error/Errorf when shouldSuppress matches. The
suppression patterns include substrings present in kind cluster teardown logs (e.g., “Destroying
global cluster”, “Waiting for all consumers to finish”), so those logs will be dropped even though
they are valuable operational signals.

acceptance/log/log.go[62-77]
acceptance/log/log.go[129-142]
acceptance/kubernetes/kind/kind.go[412-425]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`shouldSuppress` is applied to `Error`/`Errorf` (and other levels), which can hide important failure/cleanup diagnostics.

### Issue Context
Suppression patterns include substrings that appear in kind cluster teardown/wait logs, so critical lifecycle signals can be dropped.

### Fix
Options (pick one):
1. Apply suppression only to low-level Info/Printf/Logf messages, never to Warn/Error.
2. Add an allowlist override: if the message is prefixed with `[ERROR]`/`[WARN ]` or contains teardown markers like `[Destroy]`, do not suppress.
3. Gate suppression behind a `-quiet-logs` flag or env var so CI/debug runs can keep full logs.

### Fix Focus Areas
- acceptance/log/log.go[62-85]
- acceptance/log/log.go[114-157]




@codecov

codecov bot commented Feb 5, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.

Flag          Coverage     Δ
acceptance    55.47% <ø>   (ø)
generative    18.59% <ø>   (ø)
integration   27.60% <ø>   (ø)
unit          68.35% <ø>   (ø)

Flags with carried-forward coverage won't be shown.

