Add proactive memory monitoring to prevent Lambda OOM deaths #2272
Merged
hiroshinishio merged 2 commits into main on Feb 18, 2026
Conversation
Summary
- is_lambda_oom_approaching() checks peak RSS memory on each agent loop iteration via resource.getrusage. When usage crosses 1792 MB (87.5% of the 2048 MB limit), the agent bails out gracefully, following the same pattern as the existing timeout detection.
- should_bail() evaluates the bail conditions in priority order: timeout > OOM > PR closed > branch deleted.
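A minimal sketch of what the check and the priority ordering could look like, assuming Python and the 2048 MB / 1792 MB figures quoted above; the constant names and the should_bail() parameters are illustrative, not necessarily the PR's actual code:

```python
import resource
import sys
from typing import Optional

# Illustrative constants; the PR's actual names and values may differ.
LAMBDA_MEMORY_LIMIT_MB = 2048
OOM_BAIL_THRESHOLD_MB = 1792  # 87.5% of the limit


def is_lambda_oom_approaching() -> bool:
    """Return True once peak RSS gets close enough to the Lambda limit to bail."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    # ru_maxrss is reported in kilobytes on Linux (what Lambda runs), bytes on macOS.
    divisor = 1024 * 1024 if sys.platform == "darwin" else 1024
    peak_rss_mb = usage.ru_maxrss / divisor
    return peak_rss_mb >= OOM_BAIL_THRESHOLD_MB


def should_bail(timed_out: bool, pr_closed: bool, branch_deleted: bool) -> Optional[str]:
    """Check bail reasons in priority order: timeout > OOM > PR closed > branch deleted."""
    if timed_out:
        return "timeout"
    if is_lambda_oom_approaching():
        return "oom_approaching"
    if pr_closed:
        return "pr_closed"
    if branch_deleted:
        return "branch_deleted"
    return None
```

Note that ru_maxrss is a high-water mark, so the check trips on gradual growth between iterations but, as noted below, cannot catch a sudden spike within a single iteration.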
Social Media Post (GitAuto)
When Lambda runs out of memory, AWS kills the process instantly. No cleanup runs, no CI gets triggered, no comment gets posted. The PR just goes silent. We added memory monitoring to the agent loop, the same pattern as our existing timeout check. If RSS usage crosses 87% of the limit, the agent stops gracefully, pushes what it has, and lets CI run.
Social Media Post (Wes)
Lambda OOM is the worst failure mode. There's no signal handler, no graceful shutdown, no "almost out of memory" warning. AWS just kills your process. For us that meant the PR went silent - no CI, no comment, nothing. Borrowed our own timeout detection pattern: check resource.getrusage each loop iteration, bail at 87% of the limit. Simple but it only catches gradual growth, not sudden spikes. We'll see if that's enough.