
chore: Enable G110 rule for gosec #13044

Merged: 5 commits, Apr 14, 2023
Conversation

zak-pawel
Collaborator

resolves #12897

@powersj powersj added the ready for final review This pull request has been reviewed and/or tested by multiple users and is ready for a final review. label Apr 6, 2023
Member

@srebhan srebhan left a comment

I don't think those changes are actual fixes. In all cases you still copy an unlimited amount of data. You are just silencing the (valid) warnings by replacing the triggering io.Copy() with an infinite loop that copies chunks via io.CopyN(), which doesn't trip the dumb linter.

I think the only fix in all those spots is to either determine the _actual_ upper limit of the source's content size or to apply a user-set upper limit, either through code or via config options.

Comment on lines 258 to 268
// Removed:
_, err = io.Copy(d.buf, r)
if err != nil && !errors.Is(err, io.EOF) {
	return nil, err
}

// Added:
for {
	_, err = io.CopyN(d.buf, r, 1024*1024)
	if err != nil {
		if errors.Is(err, io.EOF) {
			break
		}
		return nil, err
	}
}

Member

How is this different from the code before? You can still exhaust memory with a so-called ZIP bomb. The only valid defense is to provide an upper decompression limit, i.e. to specify the n value passed to io.CopyN.

Collaborator Author

For tests it is easy, we can hardcode the n value.

But ContentDecoders from internal/content_coding.go are used in:

  • AMQP Consumer Input Plugin
  • Socket Listener Input Plugin

For these two, a new configuration field should be added (with a reasonable default value) to hold the upper size limit.

@srebhan Will that be ok?

Member

Yes I think that's OK. We should set this to unlimited somehow... @powersj any opinion?

Contributor

Sounds good to add, but how do we keep the unlimited behavior? Maybe a setting of 0 == unlimited, and if so we use io.Copy?

Collaborator Author

Setting it to unlimited by default won't really fix the issue ;)

@srebhan @powersj Please, review current implementation.

Contributor

Please, review current implementation.

Understood, but what risk do we run by putting in a new default and breaking users?

Collaborator Author
@zak-pawel zak-pawel Apr 12, 2023

@powersj For both plugins the default was set to 500MB.
It will not change anything for users operating on payloads of smaller size.

Users with bigger payloads will need to set a larger value in the config. We may call it "breaking users" or "fixing security gaps" ;)

Contributor

OK, we chatted and agree with the new default; we will want to call this out in the changelog for the next release.

Member

@srebhan srebhan left a comment

@zak-pawel I like the approach. I do have some more suggestions though... Sorry!

Review threads (resolved):
internal/content_coding.go
internal/content_coding_test.go
plugins/inputs/amqp_consumer/README.md
plugins/inputs/socket_listener/socket_listener.go
@zak-pawel
Collaborator Author

@zak-pawel I like the approach. I do have some more suggestions though... Sorry!

@srebhan All addressed.

Member

@srebhan srebhan left a comment

Awesome work @zak-pawel! Thank you for addressing this issue!

@srebhan srebhan merged commit ba16eeb into influxdata:master Apr 14, 2023
powersj pushed a commit that referenced this pull request Apr 24, 2023
Co-authored-by: Pawel Zak <Pawel Zak>
(cherry picked from commit ba16eeb)
Labels: chore, linter, ready for final review

Successfully merging this pull request may close these issues.

Linter: gosec, Rule: G110 - Potential DoS vulnerability via decompression bomb. Should we enable it?
3 participants