This repository has been archived by the owner on Nov 24, 2022. It is now read-only.

Support for private networks #120

Closed · fgrehm opened this issue Jul 29, 2013 · 31 comments

Comments

@fgrehm
Owner

fgrehm commented Jul 29, 2013

@stormbrew

Not that this is helpful to getting it done, but this is the only thing keeping me from using vagrant-lxc right now. Really excited for this to get in there.

@fgrehm
Owner Author

fgrehm commented Jul 31, 2013

@stormbrew no worries! Networking "stuff" will probably be my next focus once 0.5.0 is out ;)

@fgrehm
Owner Author

fgrehm commented Aug 2, 2013

@stormbrew 0.5.0 is out :)

So, regarding this feature, I'm thinking about how we should implement it and I'd love a second opinion. I'm not sure if you know, but Vagrant configures an additional network interface on VirtualBox guests besides the one with the IP you've specified.

Do you think we should have that same behavior here, or should this feature just mean that we set lxc.network.ipv4 when starting the container? Another idea would be to use @paneq's approach of writing to /etc/network/interfaces.
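
To make the lxc.network.ipv4 option concrete, it would boil down to something like this before starting the container (just a sketch; the container name and config path are examples):

    # Hypothetical sketch: pin the container's address via its LXC config before boot.
    # "my-container" and the Ubuntu default config path are assumptions.
    echo "lxc.network.ipv4 = 192.168.1.10/24" >> /var/lib/lxc/my-container/config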

I'm no networking expert here but I'm up for implementing the support for it. So I'd love some feedback about this :)

/cc @rcarmo

@stormbrew

The best thing would be for it to behave as much like existing (VBox) Vagrant setups as possible. So eth0 inside the container would be either NATted or bridged (or even, if my understanding of what's possible with LXC is correct, the exact same interface as outside), and then each config.vm.network would create another interface as specified (with private networks bridged so that other containers on the same subnet can communicate with each other).

@fgrehm
Owner Author

fgrehm commented Aug 5, 2013

@stormbrew thanks for the input.

Regarding the default eth0: right now the "official" boxes link eth0 to lxcbr0, as that's the default bridge that comes with Ubuntu's lxc package and (if I'm not mistaken) is NATted. That is nice for Ubuntu users, but it kinda sucks since other distros don't come with it by default. I'm planning to drop that requirement and / or add support for setting up and using a vagrantlxcbr automagically once I'm able to deal with hashicorp/vagrant#2005. This is something I want to see happen before our 1.0.

Regarding the private interfaces configuration and bridge setup: you are saying that we could set up another bridge per subnet and manually configure the interfaces? Would you be able to come up with a bash script that does that setup on the host and on the container, so that we can find our way of making it happen from the plugin? If you need a sandbox to try things out, you might want to use one of the boxes available at https://github.com/fgrehm/vagrant-lxc-vbox-hosts ;)

@rcarmo

rcarmo commented Aug 6, 2013

I did that for Debian yesterday - i.e., mimicking the default Ubuntu setup on a Debian Wheezy production server.

It boils down to adding this in /etc/network/interfaces:

auto lxcbr0
iface lxcbr0 inet static
    # NAT the containers' traffic out through eth0
    # (note the -t nat: the POSTROUTING chain lives in the nat table)
    pre-up iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    bridge_maxwait 0
    bridge_ports dummy0
    address 10.0.3.1
    netmask 255.255.255.0
    broadcast 10.0.3.255
    dns-nameservers 10.0.3.1
    dns-search local

...and setting up dnsmasq with a DHCP server on the 10.0.3.0/24 range:

$ grep 10.0 /etc/dnsmasq.conf
dhcp-range=10.0.3.10,10.0.3.250,255.255.255.0,12h

I'd really like to stick to lxcbr0, because the name makes it pretty obvious and I'd like Vagrant containers to interact with "normal" LXC ones (for instance, I keep Varnish and Redis in a standard container for re-use - no point in setting them up in every Vagrant environment).

Honestly, I prefer NAT to anything else, largely because bridging exposes the containers too much but also because of a number of practical requirements.

Pure bridging is OK if you're on a managed production network or developing on a home LAN with only your cat and a set-top box for company, but in a more exposed environment it can be a right pain, largely because it's anybody's guess what IP address a bridged container will pick up from the network, and any "ease of use" goes right out the window.

Also, bridging used to fail miserably on Wi-Fi due to the way the network and access control mechanisms work (in 802.1x networks such as those used in companies and universities you can only have ONE physical address on your interface, period).

So NAT is pretty much the only sensible solution if you're using a laptop. And, let's face it, who isn't?

Not to mention that bridging and letting people hard-code their IP addresses in LXC or OS configs is a sure recipe for IP conflicts. NAT keeps containers off the LAN in case you make a mistake (if you will).

R.


@oker1

oker1 commented Aug 27, 2013

I think the private network should work this way (pretty similar to the VBox provider with its host-only interfaces):

  1. List all lxcbr(\d+) interfaces where $1 > 0:
    • if any of them is on the same subnet as the private IP, choose it
    • otherwise create a bridge with brctl addbr lxcbrX, where X is the next free number, then ifconfig lxcbrX x.y.z.1 netmask 255.255.255.0 up
  2. Add the interface to the LXC template like this:
      lxc.customize "network.type", "veth"
      lxc.customize "network.flags", "up"
      lxc.customize "network.link", "lxcbrX"
      lxc.customize "network.ipv4", "ip/24"

Doing this manually works correctly: the VMs can ping each other on their eth1 interfaces, they can ping the host, and the host can ping them too.
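
Roughly, as a shell sketch (illustrative only, not meant as the plugin's actual code):

    # Reuse an existing lxcbrX on the requested subnet, else create the next free one.
    SUBNET=192.168.50   # first three octets of the requested private IP
    for path in /sys/class/net/lxcbr[1-9]*; do
      [ -e "$path" ] || continue        # the glob may match nothing
      br=$(basename "$path")
      if ip addr show "$br" | grep -q "inet $SUBNET\."; then
        echo "reusing $br"; exit 0
      fi
    done
    n=1
    while [ -e "/sys/class/net/lxcbr$n" ]; do n=$((n+1)); done
    brctl addbr "lxcbr$n"
    ifconfig "lxcbr$n" "$SUBNET.1" netmask 255.255.255.0 up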

@fgrehm
Owner Author

fgrehm commented Aug 30, 2013

@oker1 thanks for the input, I like the idea :) I'll be looking into hashicorp/vagrant#2005 any time "soon", but since 1.3.0 will probably take a while to be released, I think we'll need to find a way to make it happen without host capabilities if we want this working with Vagrant 1.1+.
I'll have a look into this when I have a chance, and I'd be really happy if someone is able to send a PR ;)

@oker1

oker1 commented Aug 30, 2013

Cool, one more addition: when a VM with a private network is shut down, the provider should check the bridge belonging to that network and, if it has no interfaces attached, bring it down, so we don't leave unused bridges on the host. It has no interfaces attached if the /sys/class/net/${LXC_BRIDGE}/brif/ directory is empty.
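
Something like this, presumably (the bridge name is just an example):

    # Tear the bridge down only when no interface is attached to it anymore.
    LXC_BRIDGE=lxcbr1
    if [ -z "$(ls -A /sys/class/net/${LXC_BRIDGE}/brif/ 2>/dev/null)" ]; then
      ifconfig "$LXC_BRIDGE" down
      brctl delbr "$LXC_BRIDGE"
    fi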

@jellonek

What is wrong with keeping an unused bridge ready to attach a new device when the next container starts?

BTW, LXC is not virtualization, so there is no VM.


@oker1

oker1 commented Aug 30, 2013

I think it's tidier not to leave unused interfaces around. Bringing it up does not take significant time. But you are right, it's not crucial.

@stormbrew

I wonder if taking it down may interfere with services bound to the bridge's IP (explicitly, as opposed to INADDR_ANY) between restarts of a container. That would be undesirable.

@fgrehm
Owner Author

fgrehm commented Sep 5, 2013

Thanks guys! As I said, my networking skills are pretty limited and right now I don't have a need for this, so whatever you think is best I'm up for implementing. If someone is able to send a PR I'll be more than happy to merge it in ;)

@pleddy

pleddy commented Sep 23, 2013

http://sysadminandnetworking.blogspot.com/2013/09/set-ip-on-vagrant-lxc-vm.html

Please feel free to ask me networking questions; I know a bit, and I can research quickly if I don't have direct experience.

@fgrehm
Owner Author

fgrehm commented Sep 27, 2013

@pleddy cool! I need to fix a couple of bugs related to 0.6.0 and I'll look into networking-related stuff after that :)

@subnetmarco

👍

@demonkoryu

👍

@fgrehm
Owner Author

fgrehm commented Mar 5, 2014

I've asked some of you this on Twitter, but is anyone familiar with pipework (https://github.com/jpetazzo/pipework)? Does anyone think we could use it to get this implemented?

@kikitux

kikitux commented Mar 6, 2014

I haven't used vagrant-lxc yet, but pipework looks awesome.

Will play over the weekend to see what can be done...


@fgrehm fgrehm modified the milestones: v0.9.0, v1.0.0 Mar 11, 2014
@fgrehm
Owner Author

fgrehm commented Mar 11, 2014

I gave pipework a shot and it works! For future reference, here's what it takes to create a private network for a container:

    # attach the container to bridge br1, assigning it the given address
    pipework br1 <container-id> 192.168.1.1/24
    # give the host an address on the same bridge
    ip addr add 192.168.1.254/24 dev br1

@stucki
Contributor

stucki commented Mar 11, 2014

Looks good! I had a look at Vagrant + VirtualBox, and it seems like it does a similar thing on the host. It always creates a /24 network for the given private IP.

Since pipework was originally written for use with Docker, I suggest using only the necessary commands and including them directly in vagrant-lxc...

Additionally, during shutdown, I suggest running ip link delete br1 in order to clean up again...

@stucki
Contributor

stucki commented Mar 11, 2014

Oh, and I suggest using a meaningful name for the bridge interface, e.g. "vagrant" rather than "br1".

@fgrehm
Owner Author

fgrehm commented Mar 11, 2014

@stucki thanks for the feedback!

As for getting rid of the Docker-related code in pipework, I'm not sure I'll have the time to get that into 1.0. My plan is actually to not even ship with pipework and to ask users to install it by hand for 0.9.0 (which will probably be the 1.0.0 feature set without dropping support for Vagrant < 1.5), and to look into polishing things up afterwards.
The reason is that this is a feature I'm not going to use on a daily basis, and I'd love some feedback from the community on whether it works before going in that direction ;-)

In regard to ip link delete br1, I'm not 100% sure we can do that; we might have other vagrant-lxc containers attached to the same bridge from a different Vagrantfile (or even "vanilla" containers) that would stop working. IIRC, (at least) Ubuntu will not keep those bridges around on the host, and pipework takes care of "garbage collecting" containers' veths. I think that's good enough for now.

And for the bridge name, I'm actually planning to make that configurable but defaulting it to brX, as we might end up with many bridges for different networks, as pointed out by @oker1 in #120 (comment).

As I said many times, I'm not a networking guru, so please correct me if I'm wrong about anything; this is the best time to 👊 me before the feature ends up on master 😃

@stucki
Contributor

stucki commented Mar 11, 2014

You can easily find out if the bridge is still used by checking if it has any interfaces attached to it using brctl show.
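
For example (a sketch; "br1" stands in for whichever bridge the network uses):

    # Delete the bridge only if `brctl show` lists no attached interface
    # (the bridge's own line is the 2nd line; attached interfaces start in column 4).
    if [ -z "$(brctl show br1 | awk 'NR==2 {print $4}')" ]; then
      ip link delete br1
    fi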

Bridge name: as it looks to me, VirtualBox creates vboxnet<number> interfaces and increments the number for each distinct network. That seems like a good way of dealing with it and could save another config option...

Regarding the integration of pipework, I think my Ruby skills are not good enough to code that myself. However, I assume that if you get pipework running through an external command, I would be able to do the rest, so that we don't depend on having pipework installed...

While we're at it, make sure to add a note that the whole pipework setup depends on the brctl utility. On Ubuntu it is included in the bridge-utils package.

Greetings, and thanks for your amazing work!

@fgrehm
Owner Author

fgrehm commented Mar 11, 2014

> You can easily find out if the bridge is still used by checking if it has any interfaces attached to it using brctl show.

Awesome! Thanks for sharing!

> While we're at it, make sure to add a note that the whole pipework setup depends on the brctl utility. On Ubuntu it is included in the bridge-utils package.

Sure!

> Greetings, and thanks for your amazing work!

Thank you for helping out :-)

@cbanbury

bump!

Apologies for nagging, but is this still being worked on? I'm currently implementing this manually, but I'm wondering if it's worth waiting for it to be included before doing the same to all my Vagrantfiles.

Thanks 😺

@fgrehm
Owner Author

fgrehm commented Apr 16, 2014

This is not being worked on right now but will be a feature of the first 1.0.0 beta when that comes out ;)

@bbinet

bbinet commented Apr 22, 2014

+1, this would be a really useful addition.

@bpaz

bpaz commented May 14, 2014

@cbanbury can you show how you are implementing this manually?

@cbanbury

cbanbury commented Jun 3, 2014

@bpaz sorry for not replying sooner. I don't know if this is still of any use to you, but here's what I'm doing:

    # in the Vagrantfile's provider block (assumes an lxcbr1 bridge already exists on the host)
    lxc.customize "network.type", "veth"
    lxc.customize "network.link", "lxcbr1"
    lxc.customize "network.ipv4", "10.0.3.x/24"

@fgrehm fgrehm mentioned this issue Jun 9, 2014
10 tasks
@PierrePaul PierrePaul mentioned this issue Jul 4, 2014
@fgrehm fgrehm removed this from the v1.0.0 milestone Sep 23, 2014
@fgrehm fgrehm modified the milestone: v1.1.0 Jan 8, 2015
@fgrehm
Owner Author

fgrehm commented Jan 11, 2015

Experimental support for this has landed in git and will be part of 1.1.0 🎆

More info on #298 (comment)

@fgrehm fgrehm closed this as completed Jan 11, 2015