Description
Dear colleagues,
I'm currently dockerizing a traffic analyzer application which I developed in Python and which is already being used in production. I'm almost there; the first step I took was to create an image of my app based on Alpine 3.6. The main dependency for my code to run is the nfdump project (available at https://github.com/phaag/nfdump). This project delivers a collector for NetFlow/IPFIX UDP packets called "nfcapd", which runs on port 9995 (properly exposed in my Dockerfile).
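To make the setup concrete, a rough sketch of such an image might look like the following. This is only an illustration: the paths, the app layout, and the assumption that nfdump is already installed in the image are mine, not taken from the actual Dockerfile.

```dockerfile
# Illustrative sketch only -- names and paths are assumptions
FROM alpine:3.6

# The Python analyzer code (layout assumed)
COPY ./app /app
RUN apk add --no-cache python3

# nfdump's nfcapd is assumed to be built/installed in the image
# (build steps omitted here)

# nfcapd listens for NetFlow/IPFIX datagrams on UDP 9995
EXPOSE 9995/udp

# -p sets the listen port, -l the directory where capture files are written
CMD ["nfcapd", "-p", "9995", "-l", "/app/flows"]
```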
The thing is, if I run the container on the default network and publish port 9995, the UDP packets coming from different routers (each with a different source IP, obviously) all arrive with the same Docker gateway IP (172.18.0.1) -- but I need the original source IP address in order to determine which router the packets belong to.
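In other words, with a plain published port the collector sees every datagram as coming from the bridge gateway. A minimal reproduction of what I mean (image and container names are placeholders):

```shell
# Default bridge network, UDP port published
docker run -d --name collector -p 9995:9995/udp traffic-analyzer

# Every router's packets now show up inside the container with the
# bridge gateway address (e.g. 172.18.0.1) as their source, because
# the port publishing path (userland proxy / NAT) rewrites the
# UDP source address -- the real router IP is lost.
```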
So I searched for solutions and stumbled upon the --net=host option. If I run the container with `docker run --net=host`, the source IPs of the incoming UDP packets are not altered, which apparently solves my problem. But then again, the thing is, this container does not run alone in the service. Inside the docker-compose.yml I've created some other services, such as db (InfluxDB), which has to be reachable from the collector container (it collects the packets, processes them, and sends the results to InfluxDB).
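The compose equivalent of that option is `network_mode: host`. A fragment along these lines illustrates the stack I'm describing; the service names and the InfluxDB image tag are assumptions, not my actual file:

```yaml
# Illustrative docker-compose.yml fragment (names/tags assumed)
version: "3"
services:
  collector:
    build: .
    network_mode: host      # preserves the routers' real source IPs
  db:
    image: influxdb
    ports:
      - "8086:8086"
```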
Wrapping it up: if I use the host network option for the collector container, it receives the correct source IP addresses, but on the other hand it can't communicate with the other containers running in the stack. DNS does not work, and I can't resolve the db container name from within the collector container. For obvious reasons I can't rely on hard-coded IPs, as taught by you, professor, since I expect to run multiple instances of this service on a single machine to serve different clients (Swarm is not a requirement in this project).
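To show what I mean by DNS not working: with `network_mode: host` the collector bypasses Docker's embedded DNS for the compose network, so a name lookup for a sibling service fails (container and service names are placeholders):

```shell
# On a bridge network this resolves to the db container's address;
# with network_mode: host the name 'db' is not resolvable at all.
docker exec collector getent hosts db
```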