
runtime error: index out of range [4294967295] with length 16 in github.com/klauspost/compress@v1.15.1/flate #630

Open
noxer opened this issue Jun 23, 2022 · 14 comments

Comments


noxer commented Jun 23, 2022

We've seen a sentry crash in version 1.15.1 of klauspost/compress. The code seems to be essentially the same on master as on v1.15.1 and the history doesn't indicate a fix for this, so I'm opening a new issue.

runtime.boundsError: runtime error: index out of range [4294967295] with length 16
  File "github.com/klauspost/compress@v1.15.1/flate/huffman_code.go", line 270, in (*huffmanEncoder).bitCounts
  File "github.com/klauspost/compress@v1.15.1/flate/huffman_code.go", line 351, in (*huffmanEncoder).generate
  File "github.com/klauspost/compress@v1.15.1/flate/huffman_bit_writer.go", line 823, in (*huffmanBitWriter).generate
  File "github.com/klauspost/compress@v1.15.1/flate/huffman_bit_writer.go", line 722, in (*huffmanBitWriter).writeBlockDynamic
  File "github.com/klauspost/compress@v1.15.1/flate/deflate.go", line 688, in (*compressor).storeFast
  File "github.com/klauspost/compress@v1.15.1/flate/deflate.go", line 704, in (*compressor).write
  File "github.com/klauspost/compress@v1.15.1/flate/deflate.go", line 865, in (*Writer).Write
  File "github.com/klauspost/compress@v1.15.1/gzip/gzip.go", line 212, in (*Writer).Write
  File "github.com/mattermost/gziphandler@v0.0.1/gzip.go", line 104, in (*GzipResponseWriter).Write
  File "io/io.go", line 428, in copyBuffer
  File "io/io.go", line 385, in Copy
  File "io/io.go", line 361, in CopyN
  File "net/http/fs.go", line 337, in serveContent
  File "net/http/fs.go", line 664, in serveFile
  File "net/http/fs.go", line 850, in (*fileHandler).ServeHTTP
  File "net/http/server.go", line 2127, in StripPrefix.func1
  File "net/http/server.go", line 2084, in HandlerFunc.ServeHTTP
  File "github.com/mattermost/mattermost-server/v6/web/static.go", line 91, in staticFilesHandler.func1
  File "net/http/server.go", line 2084, in HandlerFunc.ServeHTTP
  File "github.com/mattermost/gziphandler@v0.0.1/gzip.go", line 337, in GzipHandlerWithOpts.func1.1
  File "net/http/server.go", line 2084, in HandlerFunc.ServeHTTP
  File "github.com/gorilla/mux@v1.8.0/mux.go", line 210, in (*Router).ServeHTTP
  File "net/http/server.go", line 2084, in HandlerFunc.ServeHTTP
  File "net/http/server.go", line 2916, in serverHandler.ServeHTTP
  File "net/http/server.go", line 1966, in (*conn).serve

OS / Arch

Linux / AMD64

Go version

1.18.1


noxer commented Jun 23, 2022

The problem may be on our side, sorry. I'll reopen once we've investigated further and it turns out not to be on our side after all.

noxer closed this as not planned Jun 23, 2022
klauspost (Owner) commented

👍🏼 It could be more than one goroutine writing at once. Let me know what you find.
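For what it's worth, one way to rule that failure mode out is to serialize all writes to the shared writer. A minimal sketch, assuming the hypothesis above (the safeWriter type and the payload are made up for illustration):

package main

import (
	"bytes"
	"fmt"
	"sync"

	"github.com/klauspost/compress/gzip"
)

// safeWriter is a hypothetical guard: the flate/gzip writer keeps internal
// state between calls, so two goroutines calling Write on the same
// *gzip.Writer at once can corrupt that state. Serializing the calls with a
// mutex rules that failure mode out.
type safeWriter struct {
	mu sync.Mutex
	gw *gzip.Writer
}

func (s *safeWriter) Write(p []byte) (int, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.gw.Write(p)
}

func main() {
	var buf bytes.Buffer
	sw := &safeWriter{gw: gzip.NewWriter(&buf)}

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sw.Write([]byte("some payload\n")) // safe: writes are serialized
		}()
	}
	wg.Wait()

	sw.mu.Lock()
	sw.gw.Close()
	sw.mu.Unlock()
	fmt.Println("compressed bytes:", buf.Len())
}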


noxer commented Jun 23, 2022

Will do, thanks.


hanzei commented Aug 17, 2023

Heads up that this is still happening with v1.15.14:

runtime.boundsError: runtime error: index out of range [4294967295] with length 16
  File "github.com/klauspost/compress@v1.15.14/flate/huffman_code.go", line 287, in (*huffmanEncoder).bitCounts
  File "github.com/klauspost/compress@v1.15.14/flate/huffman_code.go", line 368, in (*huffmanEncoder).generate
  File "github.com/klauspost/compress@v1.15.14/flate/huffman_bit_writer.go", line 825, in (*huffmanBitWriter).generate
  File "github.com/klauspost/compress@v1.15.14/flate/huffman_bit_writer.go", line 722, in (*huffmanBitWriter).writeBlockDynamic
  File "github.com/klauspost/compress@v1.15.14/flate/deflate.go", line 688, in (*compressor).storeFast
  File "github.com/klauspost/compress@v1.15.14/flate/deflate.go", line 704, in (*compressor).write
  File "github.com/klauspost/compress@v1.15.14/flate/deflate.go", line 864, in (*Writer).Write
  File "github.com/klauspost/compress@v1.15.14/gzip/gzip.go", line 212, in (*Writer).Write
  File "github.com/mattermost/gziphandler@v0.0.1/gzip.go", line 104, in (*GzipResponseWriter).Write

klauspost (Owner) commented

@hanzei Upgrade and (more likely) make sure you don't do concurrent writes.


hanzei commented Aug 17, 2023

We use compress as part of our gziphandler. AFAIK there should be no concurrent writes, as it's just a stack of http.Handlers.

klauspost (Owner) commented

You can easily do concurrent writes to a ResponseWriter.

Also check out https://github.com/klauspost/compress/tree/master/gzhttp#gzip-middleware
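For reference, the gzhttp wiring is roughly this (a minimal sketch along the lines of that README; the handler body and port are placeholders):

package main

import (
	"net/http"

	"github.com/klauspost/compress/gzhttp"
)

func main() {
	// Plain handler; gzhttp wraps it and negotiates gzip with the client.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		w.Write([]byte("Hello, World"))
	})

	http.Handle("/", gzhttp.GzipHandler(handler))
	http.ListenAndServe(":8080", nil)
}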

klauspost (Owner) commented

There are literally millions of uses of this per day, so the chance that you are doing concurrent writes is a bit bigger than that of an unseen bug that hasn't been caught or shown up in fuzz testing.


hanzei commented Aug 17, 2023

Thanks for the swift response. 👍

agnivade commented

Hey @klauspost - apologies for commenting on a closed issue, but I took a good look at the code from our side, at least the staticFilesHandler from which these panics are originating, and I could not see anything wrong.

Looking a bit more closely at the huffman_code.go file, however, I see that the panic is happening here:

} else {
	// If we stole from below, move down temporarily to replenish it.
	for levels[level-1].needed > 0 { // <--- here
		level--
	}
}

And every time it's the same error, runtime error: index out of range [4294967295] with length 16, which points to an integer overflow issue.

I am wondering if there could be a case where all the needed fields are greater than zero, so this for loop keeps decrementing until level reaches 0, at which point indexing with level - 1 crashes. Both the levels and level variables are strictly local, so from my very brief look it doesn't look like memory corruption due to concurrent writes.
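For reference, the panic value itself is consistent with that reading: 4294967295 is -1 viewed as an unsigned 32-bit value. A standalone illustration of the arithmetic (not the library's actual types, just the wraparound):

package main

import "fmt"

func main() {
	var levels [16]int // stand-in for the 16-entry levels table

	level := uint32(0) // hypothetical: the loop has walked level all the way down
	idx := level - 1   // unsigned wraparound: 0 - 1 == 4294967295

	// Panics with: index out of range [4294967295] with length 16
	fmt.Println(levels[idx])
}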

I have also been running some tests with the binary built in -race mode, and that does not show anything either.

Curious to know your thoughts on this.

klauspost (Owner) commented

Without a reproducer it is quite hard to get further. Would it be feasible to wrap the encoder in a defer func() { if r := recover(); r != nil {... dump file... }} or similar?
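For concreteness, such a wrapper might look roughly like this (a hypothetical sketch: the dump path and function name are made up, and only the single block being written would be captured, not the whole stream):

package main

import (
	"bytes"
	"os"

	"github.com/klauspost/compress/gzip"
)

// writeWithDump wraps gw.Write so that if the encoder panics while
// compressing p, the offending input is written to disk for later replay,
// and the panic is then re-raised so it still shows up in crash reporting.
func writeWithDump(gw *gzip.Writer, p []byte) (n int, err error) {
	defer func() {
		if r := recover(); r != nil {
			_ = os.WriteFile("flate-panic-input.bin", p, 0o644) // best effort
			panic(r)
		}
	}()
	return gw.Write(p)
}

func main() {
	var buf bytes.Buffer
	gw := gzip.NewWriter(&buf)
	if _, err := writeWithDump(gw, []byte("example payload")); err != nil {
		panic(err)
	}
	_ = gw.Close()
}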

klauspost reopened this Aug 5, 2024

agnivade commented Aug 5, 2024

Without a reproducer it is quite hard to get further.

Couldn't agree more.

Would it be feasible to wrap the encoder in a defer func() { if r := recover(); r != nil {... dump file... }} or similar?

The issue is that there's no way for us to actually get the dump file back from the people running into this. We only get the stack trace in the Sentry dashboard, and that's about the only info we have. There is no way for us to reach out to the affected people.

I apologize for not being able to give you anything more solid than this. I spent quite a few hours trying to get it to repro but failed. But we do keep seeing this crash from time to time. :(

klauspost (Owner) commented

I know the issue. My hunch is that if this is triggered, simply slapping an "if level == 0 { something }" on it will just cause something else to crash - or worse - be wrong. I will see what I can cook up.


agnivade commented Aug 5, 2024

Thanks. I will leave it to your best judgement.
