Seeing memory leak under normal usage. #92
Interesting. And just to ask a dumb question which you sort of already answered, but I'll ask anyway... you're not using the in-memory cache, right? (by passing …)
We didn't pass any additional caching options; memory is the default, right? Anyway, we're only pulling the same image with the same parameters. I'd expect the in-memory cache to fill up if I were requesting many different images, but this is just the same image over and over.
Hi, we saw that the image was properly saved to the cache (tested on S3 and the file system) but it was never loaded back: with every reload, the image was re-saved to the cache. After those tests, we configured an nginx proxy cache in front, and the server went from being constantly overwhelmed and killed twice a day to almost zero utilization. Weird... :)
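The nginx front-cache described above can be sketched roughly as follows. This is a hypothetical minimal configuration, not the reporter's actual one; the cache path, zone name, upstream address, and TTLs are all placeholders:

```nginx
# Cache storage for proxied imageproxy responses (paths/names are examples).
proxy_cache_path /var/cache/nginx/imageproxy levels=1:2 keys_zone=imageproxy:10m
                 max_size=1g inactive=7d use_temp_path=off;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;   # assumed imageproxy listen address
        proxy_cache imageproxy;
        proxy_cache_valid 200 7d;           # serve cached transforms for a week
        proxy_cache_use_stale error timeout updating;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

With a setup like this, repeated requests for the same transformed URL are served from nginx's cache and never reach imageproxy at all, which explains the drop to near-zero utilization.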
Default is no caching at all. Based on @Shimmi's additional info, this definitely sounds like an issue. I'll try to look into it.
okay, I'm pretty sure I've figured out what is going on here. The short answer is that transformation was being performed on already cached images, resulting in a bunch of extra memory allocations and CPU usage. A fix will be pushed shortly. A slightly longer explanation is below.

HTTP Caching

First, some background which you probably already know. There are two main ways of doing caching in HTTP. The …

httpcache

Imageproxy uses the gregjones/httpcache package to handle caching. For connections between imageproxy and remote servers, httpcache takes care of everything for us, including both flavors of caching mentioned above. It enforces all the right validation checks, sends etags to the server when needed, etc. If the server responds with a 304, then httpcache will return its cached copy with a 200 status. Unless you inspect the …

For "downstream" connections (those between the browser and imageproxy), we reuse the same caching headers as the upstream resource. That is, we use the same values for … So, for a remote image (like this one) that serves an …
Steps 1-3 still need to happen to make sure the remote image hasn't changed, but steps 4 and 5 should be flipped. That was just an oversight on my part when I originally implemented this, and I never noticed. We were still returning a 304 response, so from the client's perspective everything looks fine. But inside the server, we're doing a transformation in step 4 that isn't needed. Anyway, like I said... a fix will be pushed shortly. I'll leave this bug open until you confirm that the new version seems to have fixed the problem for you.
oh, also meant to add that I was able to easily replicate imageproxy using hundreds of megs of memory when requesting the same cached image thousands of times (rakyll/hey is great for testing this, by the way). After the forthcoming fix, it stayed constant at 65 MB even with 20,000 requests.
If the caching headers in the request are valid, return a 304 response instead of doing the transformation. Ref #92
@willnorris Many thanks! That was fast :) I've tested it and can confirm the memory consumption is now OK for me. But we are still facing high CPU load, and the images are still being re-modified (tested with the file cache). Hitting F5 30 times for the same image URL resulted in high CPU load. Also, the file in the file cache is being modified with every new request. The behaviour I would expect: if the source image and URL parameters do not change, imageproxy should simply take the image from the cache and not touch it again. Not sure what causes this...
okay, I'm going to close this as having fixed the originally reported problem, the memory leak. Please open a new issue to focus on the high CPU load, and I'll try to investigate. As for the cached file being rewritten, that will end up needing to be fixed in the httpcache package. I'd suggest opening a bug there, and I'll try to take a look at what would be involved in fixing it.
Hi, we're using image proxy in a docker container, running on our on-premise Kubernetes environment.
When load testing it with the same image, we see memory use going up until our system eventually kills the process for taking too much memory. This happens very quickly under even a moderate load. Our docker image is practically the same as https://hub.docker.com/r/willnorris/imageproxy/~/dockerfile/ except that we start it with a domain whitelist. The configuration it is started with is
CMD /go/bin/imageproxy -addr 0.0.0.0:80 -whitelist (... list of our domains )
We simply GET /400x,q80/https://cmgtcontent.ahold.com.kpnis.nl/cmgtcontent/media//001746500/000/001746560_001_superhero_BBQ_170523_(1).jpg. After about 20,000 requests the docker instance reaches its memory limit, after which it is killed. Under no usage, it stays at its normal ~30 MB.