Description
Below is a screenshot of the past 12 hours, which has been a total disaster for us, to say the least. After updating to Node 4.2 (from 0.10) in production, we immediately exceeded all memory quotas and experienced a high volume of timeouts (even with no load and memory under the 1 GB limit).
First, I apologize if this is not the place for this. I am happy to move the discussion somewhere else, and we will help diagnose whatever you guys need. We went through this same parade with Node 0.12 and had to downgrade to 0.10.
Second, and I guess the real question here: is Heroku's 512 MB of RAM simply not enough to run Node 4.x? If that is the case, cool, but the memory constraints definitely need to be made clearer.
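Part of my confusion is that I don't actually know what heap limit V8 assumes on a 512 MB dyno. My understanding is that the default old-space size on 64-bit builds is well above 512 MB, and that it can be capped with the `--max_old_space_size` flag (e.g. `node --max_old_space_size=256 server.js`, where `server.js` is just a stand-in for our entry point). A minimal sketch of the startup logging I'm planning to add so we can compare V8's view with Heroku's RSS-based quota:

```js
'use strict';
// Sketch: log V8's heap statistics and the process memory at boot.
// Assumes Node 4.x, where the built-in 'v8' module exposes getHeapStatistics().
var v8 = require('v8');

console.log('V8 heap statistics:', v8.getHeapStatistics());
console.log('Process memory usage:', process.memoryUsage());
```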
Timeline:
- Tue, Oct. 13th, at 2:00pm EST we deployed Node 4.2.0 with cluster enabled, running 2 worker processes (a rough sketch of that setup is below the timeline). Immediately hit the 512 MB memory limit, as seen in the picture below.
- Tue, Oct. 13th, at 2:15pm EST we removed cluster completely. Hit the memory limit again 30 minutes later.
- Tue, Oct. 13th, at 4:00pm EST we saw that Node 4.2.1 had been released and deployed it. Continued to hit the memory limit.
- Tue, Oct. 13th, at 6:00pm EST we doubled the dyno memory to 1 GB.
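For reference, since the first timeline entry mentions it, this is roughly what our cluster setup looked like; the worker count matches what we deployed, but `./server` is just a placeholder for our actual app module:

```js
'use strict';
// Rough sketch of the cluster setup deployed at 2:00pm: one master forking 2 workers.
var cluster = require('cluster');

var WORKERS = 2; // removed entirely at 2:15pm (see timeline)

if (cluster.isMaster) {
  for (var i = 0; i < WORKERS; i++) {
    cluster.fork();
  }
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' exited; forking a replacement');
    cluster.fork();
  });
} else {
  require('./server'); // placeholder for our actual HTTP app
}
```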
In general, you can see that memory usage is all over the place; maybe that is expected with newer versions of V8...
Although I don't have a separate screenshot, you can see in the first part of the graph, while we were running Node 0.10, that memory stays almost perfectly flat at 256 MB of RAM. That was consistent under any load.
For reference, here is a load test we did in a dev environment running Node 4.2.1, with cluster forked to 4 processes and roughly 5k requests per minute. It also immediately hit the higher 1 GB memory limit. We then dropped this down to 2 forked processes with the same result.
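In case it helps with diagnosis, this is the kind of per-process memory logging we can drop into the app while re-running the load test (the interval and log format are just placeholders):

```js
'use strict';
// Sketch: periodically log per-process memory so we can line it up with the Heroku graph.
// process.memoryUsage() reports rss/heapTotal/heapUsed in bytes on Node 4.x.
function toMB(bytes) {
  return Math.round(bytes / 1024 / 1024);
}

setInterval(function () {
  var mem = process.memoryUsage();
  console.log(
    'pid=' + process.pid +
    ' rss=' + toMB(mem.rss) + 'MB' +
    ' heapTotal=' + toMB(mem.heapTotal) + 'MB' +
    ' heapUsed=' + toMB(mem.heapUsed) + 'MB'
  );
}, 10000); // every 10 seconds; adjust as needed
```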