Remove the default output buffer limit #45
```diff
@@ -46,7 +46,7 @@ public protocol OutputProtocol: Sendable, ~Copyable {
 #endif
 extension OutputProtocol {
     /// The max amount of data to collect for this output.
-    public var maxSize: Int { 128 * 1024 }
+    public var maxSize: Int { .max }
 }

 /// A concrete `Output` type for subprocesses that indicates that
```
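The hunk above relies on a protocol extension supplying a default `maxSize` that individual conformers can still override. A minimal, self-contained Swift sketch of that pattern (the protocol and type names here are illustrative, not the library's source):

```swift
// Illustrative sketch only: a protocol extension supplies a default
// that conformers may override.
protocol OutputLimiting {
    var maxSize: Int { get }
}

extension OutputLimiting {
    // After this PR, the library's default is unlimited rather than 128 KiB.
    var maxSize: Int { .max }
}

struct UnlimitedOutput: OutputLimiting {}   // inherits the .max default

struct CappedOutput: OutputLimiting {       // opts back into a fixed cap
    var maxSize: Int { 128 * 1024 }
}
```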
```diff
@@ -240,10 +240,12 @@ extension OutputProtocol where Self == FileDescriptorOutput {
 @available(SubprocessSpan, *)
 #endif
 extension OutputProtocol where Self == StringOutput<UTF8> {
-    /// Create a `Subprocess` output that collects output as
-    /// UTF8 String with 128kb limit.
+    /// Create a `Subprocess` output that collects output as UTF8 String
+    /// with an unlimited buffer size. The memory requirement for collecting
+    /// output is directly proportional to the size of the output
+    /// emitted by the child process.
     public static var string: Self {
-        .init(limit: 128 * 1024, encoding: UTF8.self)
+        .init(limit: .max, encoding: UTF8.self)
     }
 }

```

Review comment: I think the documentation comment here should say that the buffer is unlimited, so the amount of memory required is proportional to the size of the output.

Reply: Sounds good.
```diff
@@ -265,9 +267,11 @@ extension OutputProtocol {
 @available(SubprocessSpan, *)
 #endif
 extension OutputProtocol where Self == BytesOutput {
-    /// Create a `Subprocess` output that collects output as
-    /// `Buffer` with 128kb limit.
-    public static var bytes: Self { .init(limit: 128 * 1024) }
+    /// Create a `Subprocess` output that collects output as a `Buffer`
+    /// with an unlimited buffer size. The memory requirement for collecting
+    /// output is directly proportional to the size of the output
+    /// emitted by the child process.
+    public static var bytes: Self { .init(limit: .max) }

     /// Create a `Subprocess` output that collects output as
     /// `Buffer` up to limit it bytes.
```
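For callers, the practical difference is whether a limit must now be passed explicitly. A hedged usage sketch, assuming a `run(_:arguments:output:)` entry point and a `bytes(limit:)` factory matching the doc comment in the hunk above (neither signature appears in this diff, so treat both as assumptions):

```swift
import Subprocess  // module name assumed

// Collects the entire output; memory use grows with the child's output size.
let everything = try await run(
    .name("cat"),
    arguments: ["large-file"],
    output: .bytes
)

// Opts back into an explicit cap, e.g. 1 MiB.
let capped = try await run(
    .name("cat"),
    arguments: ["large-file"],
    output: .bytes(limit: 1024 * 1024)
)
```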
Review comment: @iCharlesHu Why are we fixing a hang by increasing the buffer size to infinity? That's not quite the right fix, I think. If you're not streaming and you blow through the max size, you should get an error, not a hang; that's also what AsyncProcess does. In fact, you will now see a hang (followed by an OOM kill) if you run, for example, `cat /dev/zero`.

Reply: I understand your point, but I'm struggling a bit to find the right answer here. If we put a limit in, and during testing someone only uses file sizes < 128k (for example), then a customer of the app is the one who hits the error. If we are unlimited, then it'll just use infinite memory, which is also a problem.
Reply: Safe and unexpected is better than unsafe and unexpected, IMHO. So a limit is, IMHO, a must.
Reply: If you get an error telling you "too much data to collect", it's immediately actionable. If your process just grinds to a halt or gets OOM killed, you need to do much more work to even figure out what the problem is.
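The behavior the reviewer is asking for, failing fast with a descriptive error instead of buffering without bound, can be sketched independently of the library (all names here are hypothetical, not part of Subprocess):

```swift
// Hypothetical sketch of a bounded collector that throws instead of hanging.
struct OutputLimitExceeded: Error {
    let limit: Int
}

struct BoundedCollector {
    let limit: Int
    private(set) var buffer: [UInt8] = []

    // Appends a chunk, or throws once the cap would be exceeded.
    mutating func append(_ chunk: [UInt8]) throws {
        guard buffer.count + chunk.count <= limit else {
            // Immediately actionable: "too much data to collect".
            throw OutputLimitExceeded(limit: limit)
        }
        buffer.append(contentsOf: chunk)
    }
}
```

An error like this surfaces during development and testing, whereas an unbounded buffer only fails in production, under exactly the inputs the tests never covered.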