
Allow setting SNAT address per container #8614

Open
johanehnberg opened this issue Mar 29, 2021 · 9 comments
Labels
Documentation Documentation needs updating Feature New feature, not a bug Maybe Undecided whether in scope for the project
Milestone

Comments

@johanehnberg
Contributor

This is a feature request to allow setting source IP address on a per-container basis rather than for a whole network.

The whole network setting was implemented in https://github.com/lxc/lxd/issues/5648

Allowing per-container SNAT rules would avoid resorting to macvlan, one bridge per IP, a separate public bridge, or manual firewall scripts to achieve the same result. This is preferable since all of those have drawbacks compared to a single SNAT rule managed by LXD.

Furthermore, it would be a perfect companion for proxy rules, which already allow arbitrary listen addresses, making the container's outbound connections appear to come from the same address.

Related:
https://discuss.linuxcontainers.org/t/ubuntu-lxd-and-masquerade/5036
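For context, the manual firewall-script approach this feature would replace amounts to a single SNAT rule per container. A rough sketch (all addresses are illustrative, not from this issue):

```shell
# Hypothetical manual equivalent of a per-container SNAT rule.
# 10.0.3.5     = container's static address on the LXD bridge (example)
# 10.0.3.0/24  = the bridge subnet (example)
# 203.0.113.5  = desired public source address (example)
iptables -t nat -A POSTROUTING -s 10.0.3.5/32 ! -d 10.0.3.0/24 \
    -j SNAT --to-source 203.0.113.5
```

The request is essentially for LXD to manage a rule like this automatically from container configuration.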

@tomponline
Member

tomponline commented Mar 29, 2021

Hi @johanehnberg, have you considered using the routed NIC type? It would allow the instance NIC to be bound to the actual static external IP address you want (which would then be 'published' to the external network via proxy ARP/proxy NDP), without needing any additional bridges or SNAT setup. And unlike macvlan, it would still allow your instances to communicate with the host, meaning they would be able to reach services on the host (DNS perhaps) as well as other instances connected to lxdbr0.

This would also have the added benefit of avoiding issues with instances not being able to communicate with each other using the external IP, which would be the case if we implemented per-NIC SNAT, as that would require bridge-level SNAT (br_netfilter) as well as host-side SNAT.

See https://discuss.linuxcontainers.org/t/how-to-get-lxd-containers-get-ip-from-the-lan-with-routed-network/7280 for examples of using it with cloud-init to specify the static IP config.
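A routed NIC along these lines might be attached as follows (the container name, device name, host uplink interface, and address are all placeholders, not values from this thread):

```shell
# Sketch: attach a routed NIC bound to a specific external IP.
# c1 = container, eth0 = device name, enp5s0 = host uplink (all examples).
lxc config device add c1 eth0 nic \
    nictype=routed parent=enp5s0 ipv4.address=203.0.113.5
```

The address is then reachable externally via proxy ARP without any bridge or NAT configuration.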

@stgraber
Contributor

Alternatively a regular bridge combined with specific containers getting an external address through ipv4.routes could work too, but you'll need the addresses or subnets to be routed to the host first.
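As a sketch of that alternative (addresses are examples, and as noted they must already be routed to the host):

```shell
# Route an external /32 to a specific container on a regular bridge.
lxc config device override c1 eth0 ipv4.routes=203.0.113.5/32
# Inside the container, 203.0.113.5 is then added as a secondary
# address on its interface so it can answer for it.
```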

@johanehnberg
Contributor Author

I cannot speak for all use cases in previous discussions, but for us none of the options above seems to be a viable alternative. The key aspects are:

  • Uniformly addressing containers regardless of whether they serve networks outside the host
  • Making containers agnostic to the particular network they are in at any given time
  • For IPv4, not burning through public IPs just to get a single port

@tomponline
Member

tomponline commented Mar 29, 2021

SNAT would not allow you to address containers; that would require a combination of SNAT and an associated DNAT (to forward inbound traffic). Is that what you meant originally?

@tomponline
Member

Could you describe the scenario you are trying to configure?

@tomponline
Member

A routed NIC achieves your first two requirements, but it does require a whole IP, though the IP doesn't need to be public.

@tomponline tomponline added the Incomplete Waiting on more information from reporter label Mar 29, 2021
@johanehnberg
Contributor Author

johanehnberg commented Mar 29, 2021

OK, here is a bit more about our scenario.

By uniform addressing I mean that all containers on a host exist in the same network and do not require routing between themselves. Having two interfaces would get around this but seriously complicates routing in the container. We used to have a global L2 bridge (cross-provider live migration was pretty cool!). In that scenario it was doable since the internal network was complete without routers. Today, having moved to WireGuard and with an ever-growing broadcast domain, we opted for an L3 internal network, so two interfaces is clumsy. Keeping it simple is key, and if you ask me that is the power of containers. A routed NIC does not achieve this.

DNAT we already use - I was personally very delighted when proxy_nat devices were introduced to LXD and I could simplify a lot of our orchestration. In this context, SNAT feels like the missing piece of the puzzle.
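For reference, the NAT-mode proxy device mentioned here handles the inbound (DNAT) direction roughly like this (names, addresses, and port are examples; NAT mode requires the container to have a static ipv4.address):

```shell
# Inbound forwarding via a proxy device in NAT mode.
# The wildcard connect address resolves to the instance's static IP.
lxc config device add c1 web proxy \
    listen=tcp:203.0.113.5:443 connect=tcp:0.0.0.0:443 nat=true
```

Per-container SNAT would be the matching outbound half: replies and new outbound connections sourced from that same 203.0.113.5.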

The particular use cases in our scenario include:

  • Legacy webapps with built-in email servers => PTR is checked by recipient's host
  • Email servers, as above
  • Customers with dedicated IP addresses who use that IP in a firewall in another system not hosted by us, includes tunnels, sync and backup cases
  • A proprietary protocol of one of our customers, used for a monitoring system that works both outbound and inbound (like Zabbix does)

We are already achieving this in multiple ways; this feature would rather allow achieving it in a single, simple way. For example, right now we can place one legacy webapp behind a reverse proxy that listens on the host's main IP and set the PTR there. Most of the more customized solutions we manage on the routers or on the host, where SNAT rules are trivial. However, I realized that the introduction of per-container SNAT would allow essentially all of these use cases to be covered automatically by our orchestration.

@stgraber
Contributor

So should we go with this idea, the obvious restrictions would be:

  • A container will need a fixed ipv4.address in order to use ipv4.nat.address
  • Container-to-container communication will not use the SNAT address; it will only be used for traffic leaving the bridge
  • The parent network will need to have ipv4.nat set to true
  • The container nic will have to be of the new network= type and not an older nictype variant (so we have a fixed reference to the managed LXD network backing it)
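Under those restrictions, usage might look like the following sketch. This is hypothetical: `ipv4.nat.address` is the key proposed above, not an existing option, and all names and addresses are examples:

```shell
# Hypothetical configuration if this feature were implemented as outlined.
lxc network set lxdbr0 ipv4.nat true                 # parent network must NAT
lxc config device override c1 eth0 \
    network=lxdbr0 ipv4.address=10.0.3.5             # fixed address required
lxc config device set c1 eth0 ipv4.nat.address 203.0.113.5   # proposed key
```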

@stgraber stgraber added Documentation Documentation needs updating Feature New feature, not a bug Maybe Undecided whether in scope for the project and removed Incomplete Waiting on more information from reporter labels Apr 20, 2021
@stgraber stgraber added this to the later milestone Apr 20, 2021
@kamzar1

kamzar1 commented Dec 9, 2022

@metoo
This feature might help with having a distinct client public IP connected to a container; it makes inbound/outbound container activity traceable and allows rDNS identity.
