
Testing the gh actions workflow for contribution violations (no discussion)#5901

Closed
gh-action-test wants to merge 1 commit into getsentry:master from gh-action-test:patch-3

Conversation

@gh-action-test

Description

Issues

fake issue

  • resolves: LIN-1234

real issue without discussion

Reminders

@gh-action-test gh-action-test requested a review from a team as a code owner March 27, 2026 12:06
@sdk-maintainer-bot sdk-maintainer-bot bot added the missing-maintainer-discussion and violating-contribution-guidelines labels (used for automated community contribution checks) Mar 27, 2026
@sdk-maintainer-bot

This PR has been automatically closed. The referenced issue does not show a discussion between you and a maintainer.

To avoid wasted effort on both sides, please discuss your proposed approach in the issue first and wait for a maintainer to respond before opening a PR.

Please review our contributing guidelines for more details.

@github-actions
Contributor

github-actions bot commented Mar 27, 2026

Semver Impact of This PR

None (no version bump detected)

📋 Changelog Preview

This is how your changes will appear in the changelog.
Entries from this PR are highlighted with a left border (blockquote style).


New Features ✨

Langchain

  • Set gen_ai.operation.name and gen_ai.pipeline.name on LLM spans by ericapisani in #5849
  • Broaden AI provider detection beyond OpenAI and Anthropic by ericapisani in #5707
  • Update LLM span operation to gen_ai.generate_text by ericapisani in #5796

Bug Fixes 🐛

Ci

  • Use gh CLI to convert PR to draft by stephanie-anderson in #5874
  • Use GitHub App token for draft PR enforcement by stephanie-anderson in #5871

Openai

  • Always set gen_ai.response.streaming for Responses by alexander-alderman-webb in #5697
  • Simplify Responses input handling by alexander-alderman-webb in #5695
  • Use max_output_tokens for Responses API by alexander-alderman-webb in #5693
  • Always set gen_ai.response.streaming for Completions by alexander-alderman-webb in #5692
  • Simplify Completions input handling by alexander-alderman-webb in #5690
  • Simplify embeddings input handling by alexander-alderman-webb in #5688

Other

  • (google-genai) Guard response extraction by alexander-alderman-webb in #5869
  • (workflow) Fix permission issue with github app and PR draft graphql endpoint by Jeffreyhung in #5887

Documentation 📚

  • Update CONTRIBUTING.md with contribution requirements and TOC by stephanie-anderson in #5896

Internal Changes 🔧

Langchain

  • Add text completion test by alexander-alderman-webb in #5740
  • Add tool execution test by alexander-alderman-webb in #5739
  • Add basic agent test with Responses call by alexander-alderman-webb in #5726
  • Replace mocks with httpx types by alexander-alderman-webb in #5724
  • Consolidate span origin assertion by alexander-alderman-webb in #5723
  • Consolidate available tools assertion by alexander-alderman-webb in #5721

Openai

  • Replace mocks with httpx types for streaming Responses by alexander-alderman-webb in #5882
  • Replace mocks with httpx types for streaming Completions by alexander-alderman-webb in #5879
  • Move input handling code into API-specific functions by alexander-alderman-webb in #5687

Other

  • (ai) Rename generate_text to text_completion by ericapisani in #5885
  • (asyncpg) Normalize query whitespace in integration by ericapisani in #5855
  • Merge PR validation workflows and add reason-specific labels by stephanie-anderson in #5898
  • Add workflow to close unvetted non-maintainer PRs by stephanie-anderson in #5895
  • Exclude compromised litellm versions by alexander-alderman-webb in #5876
  • Reactivate litellm tests by alexander-alderman-webb in #5853
  • Add note to coordinate with assignee before PR submission by sentrivana in #5868
  • Temporarily stop running litellm tests by alexander-alderman-webb in #5851

Other

  • Testing the gh actions workflow for contribution violations (no discussion) by gh-action-test in #5901
  • ci+docs: Add draft PR enforcement by stephanie-anderson in #5867

🤖 This preview updates automatically when you update the PR.


@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.



class LogBatcher(Batcher["Log"]):
-    MAX_BEFORE_FLUSH = 100
+    MAX_BEFORE_FLUSH = 1_000


Flush threshold equals drop threshold, eliminating burst headroom

Medium Severity

MAX_BEFORE_FLUSH is now equal to MAX_BEFORE_DROP (both 1_000), which eliminates the buffer between flushing and dropping. In Batcher.add(), the flush event is set only when the buffer reaches 1_000 items, but any new items arriving before the flush thread clears the buffer are immediately dropped. Previously, flushing triggered at 100, leaving 900 items of headroom. The _span_batcher.py comment explicitly states "MAX_BEFORE_FLUSH should be lower than MAX_BEFORE_DROP" for exactly this reason, and MetricsBatcher uses a 10x gap between the two thresholds.
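The headroom principle described here can be sketched as follows. Note that `SketchBatcher`, its lock, and the event-based flush signal are simplified assumptions for illustration only, not the actual `sentry_sdk` batcher implementation:

```python
import threading

class SketchBatcher:
    MAX_BEFORE_FLUSH = 100    # wake the flush thread at this buffer size
    MAX_BEFORE_DROP = 1_000   # hard cap: items beyond this are dropped

    def __init__(self):
        self._buffer = []
        self._lock = threading.Lock()
        # A background flush thread would wait on this event and drain the buffer.
        self._flush_event = threading.Event()

    def add(self, item):
        with self._lock:
            if len(self._buffer) >= self.MAX_BEFORE_DROP:
                return False  # buffer full: the new item is silently dropped
            self._buffer.append(item)
            if len(self._buffer) >= self.MAX_BEFORE_FLUSH:
                # Signal the (slower) flush thread to drain. Items arriving
                # before the drain completes land in the headroom between
                # MAX_BEFORE_FLUSH and MAX_BEFORE_DROP.
                self._flush_event.set()
            return True
```

With `MAX_BEFORE_FLUSH < MAX_BEFORE_DROP`, items that arrive between the flush signal and the actual drain are still accepted. If the two thresholds are equal, the drop check fires on the very next `add()` after the flush is signaled, so any burst during the drain window is lost.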



class LogBatcher(Batcher["Log"]):
-    MAX_BEFORE_FLUSH = 100
+    MAX_BEFORE_FLUSH = 1_000


Test-only code change accidentally included in production code

High Severity

The PR title explicitly states this is "Testing the gh actions workflow for contribution violations," and the description references a "fake issue." The MAX_BEFORE_FLUSH change from 100 to 1_000 appears to be a throwaway modification made solely to produce a diff for workflow testing, not an intentional production change. Merging this would silently alter log batching behavior in production.


Comment on lines +13 to 14:

    MAX_BEFORE_FLUSH = 1_000
    MAX_BEFORE_DROP = 1_000


Bug: Setting MAX_BEFORE_FLUSH equal to MAX_BEFORE_DROP eliminates the safety buffer, causing logs to be dropped immediately when the flush threshold is met under load.
Severity: HIGH

Suggested Fix

Increase MAX_BEFORE_DROP to a value higher than MAX_BEFORE_FLUSH to restore the safety margin. For example, set MAX_BEFORE_DROP to 2_000 or 10_000, following the pattern of other batchers in the codebase.

Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.

Location: sentry_sdk/_log_batcher.py#L13-L14

Potential issue: The change sets `MAX_BEFORE_FLUSH` to `1_000` but leaves
`MAX_BEFORE_DROP` at the same value. In the `Batcher.add()` method, when the buffer size
reaches this threshold, a flush is triggered. However, any new log items that arrive
before the flush thread can drain the buffer will be immediately dropped because the
buffer size is still `>= MAX_BEFORE_DROP`. This eliminates the safety margin intended to
handle items arriving during a flush operation, a principle explicitly documented in
other parts of the codebase like `SpanBatcher`. Under any meaningful log burst, this
will cause logs to be silently dropped.




Development

Successfully merging this pull request may close these issues.

gRPC aio ServerInterceptor missing isolation_scope per request
