Support for private networks #120
Comments
Not that this is helpful to getting it done, but this is the only thing keeping me from using vagrant-lxc right now. Really excited for this to get in there.
@stormbrew no worries! Networking "stuff" will probably be my next focus once 0.5.0 is out ;)
@stormbrew 0.5.0 is out :) So, regarding this feature, I'm thinking about how we should implement it and I'd love a second opinion on something related to this. I'm not sure you know, but Vagrant configures an additional network interface for VBox hosts besides the one with the IP you've specified. Do you think we should have that same behavior here, or should it just mean that we'll set the specified IP on the container's interface directly? I'm no networking expert here, but I'm up for implementing the support for it. So I'd love some feedback about this :) /cc @rcarmo
The best thing would be for it to be as much like existing (VBox) vagrant setups as possible. So eth0 inside the container is either NAT or bridge (or even, if my understanding of what's possible with lxc is correct, the exact same interface as outside), and then each config.network creates another interface as specified (with private being bridged such that other containers on the same subnet can communicate with it).
@stormbrew tks for the input. Regarding the private interfaces configuration and bridge setup: what you are saying is that we could set up another bridge per subnet and manually configure the interfaces? Would you be able to come up with a bash script that does that setup on the host and on the container, so that we can find our way of making it happen from the plugin? If you need a sandbox to try things out you might want to use one of the boxes available at https://github.com/fgrehm/vagrant-lxc-vbox-hosts ;)
I did that for Debian yesterday - i.e., mimicking the default Ubuntu setup on a Debian Wheezy production server. It boils down to adding this in /etc/network/interfaces:
...and setting up dnsmasq with a DHCP server on the 10.0.3.0/24 range:
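The actual snippets did not survive in this thread, but a configuration mimicking Ubuntu's stock lxcbr0 setup (a sketch, not the commenter's exact files; addresses follow LXC's 10.0.3.0/24 default) typically looks like this in `/etc/network/interfaces`:

```shell
# /etc/network/interfaces — NAT bridge for containers (sketch)
auto lxcbr0
iface lxcbr0 inet static
    address 10.0.3.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_fd 0
    # masquerade container traffic heading to the outside world
    post-up iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
```

with a dnsmasq instance serving DHCP on that range, along the lines of:

```shell
dnsmasq --strict-order --bind-interfaces \
        --listen-address=10.0.3.1 --interface=lxcbr0 --except-interface=lo \
        --dhcp-range=10.0.3.2,10.0.3.254 --dhcp-lease-max=253
```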
I'd really like to stick to lxcbr0, because the name makes it pretty obvious and I'd like Vagrant containers to interact with "normal" LXC ones (for instance, I keep Varnish and Redis in a standard container for re-use - no point in setting them up in every Vagrant environment).

Honestly, I prefer NAT to anything else, largely because bridging exposes the containers too much, but also because of a number of practical requirements. Pure bridging is OK if you're on a managed production network or developing on a home LAN with only your cat and a set-top box for company, but in a more exposed environment it can be a right pain - largely because it's anybody's guess what IP address a bridged container will pick up from a network, and any "ease of use" goes right out the window.

Also, bridging used to fail miserably on Wi-Fi due to the way the network and access control mechanisms work (in 802.1x networks such as those used in companies and universities you can only have ONE physical address on your interface, period). So NAT is pretty much the only sensible solution if you're using a laptop. And, let's face it, who isn't?

Not to mention that bridging and letting people hard-code their IP addresses in LXC or OS configs is a sure recipe for IP conflicts. NAT keeps containers off the LAN in case you make a mistake, if you will.

R.
I think the private network should work this way (pretty similar to the vbox provider with the host-only interfaces):
Doing this manually works correctly: the VMs can ping each other on their eth1s, they can ping the host, and the host can ping them too.
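The step-by-step list from this comment was not preserved; a manual setup consistent with the description (bridge name, subnet, and addresses are made-up examples) would be along these lines:

```shell
# On the host: one bridge per private subnet
sudo brctl addbr vlxcbr1
sudo ip addr add 192.168.50.1/24 dev vlxcbr1
sudo ip link set vlxcbr1 up

# In each container's config (e.g. /var/lib/lxc/<name>/config),
# add a second veth attached to that bridge:
#   lxc.network.type  = veth
#   lxc.network.link  = vlxcbr1
#   lxc.network.flags = up
# This shows up as eth1 inside the container.

# Inside each container: assign the static private IP
sudo ip addr add 192.168.50.10/24 dev eth1
sudo ip link set eth1 up
```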
@oker1 tks for the input, I like the idea :) I'll be looking into hashicorp/vagrant#2005 any time "soon", but since 1.3.0 will probably take a while to be released, I think we'll need to find our own way to make it happen without host capabilities if we want to have this working with 1.1+.
Cool, one more addition: when a VM with a private network is shut down, the provider should check the bridge belonging to the network, and if it has no interfaces attached to it, the bridge should be downed, so we don't leave unused bridges on the host. It has no interfaces attached if the /sys/class/net/${LXC_BRIDGE}/brif/ directory is empty.
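As a sketch, the check described above could look like this (the bridge name is a placeholder):

```shell
# Tear the bridge down only when no interfaces remain attached to it;
# the kernel lists attached ports under /sys/class/net/<bridge>/brif/
LXC_BRIDGE=vlxcbr1
if [ -z "$(ls -A "/sys/class/net/${LXC_BRIDGE}/brif" 2>/dev/null)" ]; then
    sudo ip link set "${LXC_BRIDGE}" down
    sudo brctl delbr "${LXC_BRIDGE}"
fi
```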
What is wrong with keeping an unused bridge around, ready to attach new devices to, btw? Also, LXC is not virtualisation, so there is no VM.
I think it's tidier not to leave unused interfaces around. Bringing it up does not take significant time. But you are right, it's not crucial.
I wonder if taking it down may interfere with services bound to the bridge's IP (explicitly, as opposed to INADDR_ANY) between restarts of a container. That would be undesirable.
tks guys! As I said, my networking skills are pretty limited and right now I don't have a need for this. Whatever you guys think is best, I'm up for implementing it. If someone is able to send a PR I'll be more than happy to merge it in ;)
http://sysadminandnetworking.blogspot.com/2013/09/set-ip-on-vagrant-lxc-vm.html Please, feel free to ask me networking questions; I know a bit, and can research quickly if I don't have direct experience.
@pleddy cool, I need to fix a couple of bugs related to 0.6.0 and I'll look into networking related stuff after that :)
👍
I've asked this for some of you on twitter, but is anyone familiar with pipework? Does anyone think we could use it to get this implemented?
I haven't used vagrant-lxc yet, but pipework looks awesome. I will play with it over the weekend to see what can be done.
I gave a shot at using pipework and it works! Here's what it would take to create a private network for a container, for future reference:

```shell
pipework br1 <container-id> 192.168.1.1/24
ip addr add 192.168.1.254/24 dev br1
```
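For context, what pipework does under the hood is roughly the following (a sketch, not pipework's actual code; container and interface names are made up):

```shell
# Find the container's init PID so we can reach its network namespace
CONTAINER_PID="$(lxc-info -n my-container -p -H)"

# Create the bridge if it does not exist yet, and bring it up
brctl addbr br1 2>/dev/null || true
ip link set br1 up

# Create a veth pair; one end stays on the host, the other goes into the container
ip link add veth-host type veth peer name veth-guest
brctl addif br1 veth-host
ip link set veth-host up
ip link set veth-guest netns "$CONTAINER_PID"

# Inside the container's namespace: rename, address, and bring up the interface
nsenter -t "$CONTAINER_PID" -n ip link set veth-guest name eth1
nsenter -t "$CONTAINER_PID" -n ip addr add 192.168.1.1/24 dev eth1
nsenter -t "$CONTAINER_PID" -n ip link set eth1 up
```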
Looks good! I had a look at Vagrant + VirtualBox, and it seems like it does a similar thing on the host: it always creates a /24 network for the given private IP. Since pipework was originally written for use with Docker, I suggest using only the necessary commands and including them directly in vagrant-lxc. Additionally, during shutdown, I suggest running the corresponding commands to clean the interface up again.
Oh, and I suggest using a meaningful name for the bridge interface, e.g. "vagrant" rather than "br1".
@stucki thanks for the feedback! As per getting rid of Docker's related code from pipework, agreed. In regard to the cleanup during shutdown, that makes sense to me too. And for the bridge name, I'm actually planning to make that configurable, but defaulting it to a meaningful name. As I said many times, I'm not a networking guru, so please correct me if I'm wrong about anything; this is the best time to 👊 me before the feature ends up on master 😃
You can easily find out if the bridge is still used by checking if it has any interfaces attached to it, for example with brctl show.

Bridge name: as it looks to me, VirtualBox creates one host-only interface per network, so a configurable name with a recognizable default sounds good to me.

Regarding the integration of pipework: while we're at it, make sure to add a note that the whole pipework stuff depends on the "brctl" utility. On Ubuntu it is the bridge-utils package which includes that.

Greetings, and thanks for your amazing work!
Awesome! Thanks for sharing!
Sure!
Thank you for helping out :-)
bump! Apologies for nagging, is this still being worked on? I'm currently implementing this manually, but wondering if it's worth waiting for this to be included before doing the same to all my Vagrantfiles. Thanks 😺
This is not being worked on right now, but it will be a feature of the first 1.0.0 beta when that comes out ;)
+1, this would be a really useful addition.
@cbanbury can you show how you are implementing this manually?
@bpaz sorry for not replying sooner. I don't know if this is still of any use to you, but here's what I'm doing:
Experimental support for this has landed on git and will be part of 1.1.0 🎆 More info on #298 (comment)