Memory usage
#2278
- I am trying to run httpx 1.7.1 on a Linux server with 2 GB of RAM using the following command: `./httpx -l in.txt -o out.txt -path mypath -mc 200 -ms mystring -s -sd`. The file in.txt contains approximately 30 million lines. After running for about 40 to 50 minutes, memory usage suddenly increased to 80% and the httpx process was killed. Could you please advise how much memory is required for stable operation?
Replies: 2 comments
- You'll find good advice in #2253. Additionally, batch processing can be a viable option: consider splitting your 30M-line file into smaller chunks. As a last resort, try increasing resources (one way to do that is sketched just below).
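If the kernel's OOM killer is what ended the process, and you want to try the "increase resources" route without resizing the server, swap space is the usual stopgap. A minimal sketch, assuming a typical Linux host with root access; the swap-file path and 4 GB size are illustrative values, not anything from this thread:

```bash
# Check whether the kernel OOM killer terminated httpx
sudo dmesg -T | grep -iE 'out of memory|killed process' | tail -n 5

# Create and enable a 4 GB swap file as extra headroom
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
swapon --show   # confirm the swap is active
```

Swap keeps the process alive at the cost of throughput once the working set spills to disk, so it pairs best with the chunking strategies in the next reply.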
- For 30 million URLs with httpx you will need significantly more than 2 GB of RAM. Here are some strategies to handle this:

  1. **Split your input file**

     ```bash
     split -l 100000 in.txt chunk_
     for f in chunk_*; do
         ./httpx -l "$f" -o "out_$f.txt" -path mypath -mc 200 -ms mystring -s -sd
     done
     cat out_*.txt > out.txt
     ```

  2. **Use streaming mode with rate limiting**

     ```bash
     ./httpx -l in.txt -o out.txt -path mypath -mc 200 -ms mystring -s -sd -rate-limit 500 -c 50
     ```

  3. **Memory estimates**

  4. **Alternative: stream from stdin**

     ```bash
     cat in.txt | ./httpx -o out.txt -path mypath -mc 200 -ms mystring -s -sd
     ```

     This can help with memory because URLs are processed as they arrive rather than being loaded all at once.

  The chunked approach (option 1) is your best bet for 2 GB of RAM: process in batches of 100k to 500k URLs at a time. A consolidated version of that loop follows below.
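To run the recommended chunked approach end to end, the pieces from option 1 can be combined into one script. A sketch using the same flags and 100k chunk size as above; the resume check and final cleanup are conveniences added here, not httpx features:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Split the 30M-line input into ~300 chunks of 100k lines each
split -l 100000 in.txt chunk_

for f in chunk_*; do
    # Skip chunks that already produced output, so an interrupted run
    # can resume (note: a partially processed chunk that wrote some
    # output will be skipped as well)
    [ -s "out_$f.txt" ] && continue
    ./httpx -l "$f" -o "out_$f.txt" -path mypath -mc 200 -ms mystring -s -sd
done

# Merge per-chunk results and remove the intermediates
cat out_chunk_*.txt > out.txt
rm -f chunk_* out_chunk_*.txt
```

If the server kills the process partway through, rerunning the script picks up at the first chunk without output instead of starting over.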
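Separately, if the concern is the box becoming unresponsive before the OOM kill, a hard memory ceiling makes the failure predictable. A sketch assuming a systemd-based distro with cgroup v2; the 1500M ceiling is an illustrative value:

```bash
# Run httpx in a transient systemd scope with a hard memory cap;
# if the cap is exceeded, only this scope is killed, promptly,
# rather than the whole 2 GB server thrashing first.
systemd-run --user --scope -p MemoryMax=1500M \
    ./httpx -l in.txt -o out.txt -path mypath -mc 200 -ms mystring -s -sd
```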