Can't start VM on Fedora 19 #113

Closed
ku1ik opened this issue Jul 16, 2013 · 26 comments

ku1ik commented Jul 16, 2013

So I switched to Fedora (19) and tried to use the raring64 lxc box. This is what I got when I ran vagrant up:

~/scratch % vagrant up --provider=lxc
Bringing machine 'default' up with 'lxc' provider...
[default] Importing base box 'raring64'...
[default] Setting up mount entries for shared folders...
[default] -- /vagrant
[default] Starting container...
/home/kill/.vagrant.d/gems/gems/vagrant-lxc-0.3.4/lib/vagrant-lxc/driver/cli.rb:101:in `transition_to': Target state 'running' not reached, currently on 'stopped' (Vagrant::LXC::Driver::CLI::TargetStateNotReached)
        from /home/kill/.vagrant.d/gems/gems/vagrant-lxc-0.3.4/lib/vagrant-lxc/driver.rb:69:in `start'
        from /home/kill/.vagrant.d/gems/gems/vagrant-lxc-0.3.4/lib/vagrant-lxc/action/boot.rb:15:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
        from /home/kill/.vagrant.d/gems/gems/vagrant-lxc-0.3.4/lib/vagrant-lxc/action/forward_ports.rb:14:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/set_hostname.rb:16:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
        from /home/kill/.vagrant.d/gems/gems/vagrant-lxc-0.3.4/lib/vagrant-lxc/action/share_folders.rb:13:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/env_set.rb:19:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/provision.rb:45:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/runner.rb:61:in `block in run'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/util/busy.rb:19:in `busy'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/runner.rb:61:in `run'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/call.rb:51:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/call.rb:57:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/warden.rb:34:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/builder.rb:116:in `call'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/runner.rb:61:in `block in run'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/util/busy.rb:19:in `busy'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/action/runner.rb:61:in `run'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/machine.rb:147:in `action'
        from /opt/vagrant/embedded/gems/gems/vagrant-1.2.3/lib/vagrant/batch_action.rb:63:in `block (2 levels) in run'

Where should I look to give you more information?

ku1ik (Author) commented Jul 16, 2013

More info:

When I tried vagrant up --provider=lxc for the second time my laptop hung (only the hardware power button helped). I tried again and the same thing happened. So it seems the first run is not "clean", leaving some state (or a process) in the system that causes the laptop to hang on the second run.

fgrehm (Owner) commented Jul 16, 2013

@sickill do you mind getting hold of a Vagrant / VBox VM so that we have a common ground for reproducing and discussing the issues? ;)

ku1ik (Author) commented Jul 16, 2013

@fgrehm sure. Do you have a similar script for setting up vagrant/vagrant-lxc on a Fedora VM, like the one you had for Ubuntu?

fgrehm (Owner) commented Jul 16, 2013

@sickill unfortunately not, I've actually never used Fedora before =/ Can you gist me the steps to install Vagrant and the LXC packages there? I might be able to get to this tonight before releasing 0.4.0 ;)

fgrehm (Owner) commented Jul 17, 2013

@sickill I was able to reproduce an error, but I'm not sure if it is the same one you are experiencing over there. Here's a Fedora 19 Vagrant VBox VM ready for vagrant-lxc usage, and this is my debugging session.
If you pay attention to the last lines, you'll notice the problem is related to sysfs. The VBox VM didn't crash, but running lxc-destroy against the container vagrant-lxc created gives me some trouble as well.
Do you have any clue about what might be happening?

adam-stokes (Contributor) commented

I can verify the problem is reproducible on a Precise host using the following versions:

Vagrant 1.2.4
lxc 0.7.5-3ubuntu67
kernel version 3.2.0-45-generic #70

Tested with the current base boxes found on the wiki page; Precise/Quantal/Raring all experience the above error. My current Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :
#
# Enable guestadditions with: vagrant plugin install vagrant-vbguest
# Enable lxc with: vagrant plugin install vagrant-lxc
#
# To CHANGE the golden image: sudo schroot -c precise-amd64-source -u root
# To ENTER an image snapshot: schroot -c precise-amd64
# To BUILD within a snapshot: sbuild -A -d precise-amd64 PACKAGE*.dsc
path = File.expand_path(File.join(File.dirname(__FILE__), 'lib'))
$LOAD_PATH << path

Vagrant.require_plugin('vagrant-sbuild')

Vagrant.configure("2") do |config|
  # Set to true if you wish to have GuestAdditions updated for cloud image
  config.vbguest.auto_update = false

  # Every Vagrant virtual environment requires a box to build off of.
  # VirtualBox
  # config.vm.box = "precise64"
  # config.vm.box_url = "http://goo.gl/xZ19a"
  # LXC
  config.vm.box = "lxcraring64"
  config.vm.box_url = "http://goo.gl/HMCKT"

  # config.vm.network :hostonly, "192.168.33.10"
  # config.vm.network :bridged
  # config.vm.network :forwarded_port, guest: 80, host: 8080

  # Experimental: (aka doesn't work yet)
  # Configure where you'd like successful sbuild packages to be
  config.vm.synced_folder "scratch", "/home/vagrant/ubuntu/scratch"
  # Share logs and repo from host machine so you can easily get to the
  # builds done on the vagrant box
  config.vm.synced_folder "logs", "/home/vagrant/ubuntu/logs"
  config.vm.synced_folder "repo", "/home/vagrant/ubuntu/repo"

  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.module_path = "puppet/modules"
    puppet.manifest_file  = "init.pp"
    # Uncomment for extended information
    # puppet.options="--verbose --debug"
    puppet.facter = {
      "debemail" => ENV['DEBEMAIL'] || "Rod Piper <wwf@4life.com>",
      "debsign_key" => ENV['DEBSIGN_KEYID'] || "123456",
    }
   end

  # Configure max cpus
  config.vm.provider :virtualbox do |vb|
     vb.customize ["modifyvm", :id, "--cpus",
      `awk "/^processor/ {++n} END {print n}" /proc/cpuinfo 2> /dev/null || sh -c 'sysctl hw.logicalcpu 2> /dev/null || echo ": 2"' | awk \'{print \$2}\' `.chomp ]
  end
  config.vm.provider :lxc do |lxc|
    lxc.customize 'cgroup.memory.limit_in_bytes', '1024'
  end
end

Output from vagrant up --provider=lxc

-> % vagrant up --provider=lxc
Bringing machine 'default' up with 'lxc' provider...
[default] Importing base box 'lxcraring64'...
[default] Setting up mount entries for shared folders...
[default] -- /vagrant
[default] -- /home/vagrant/ubuntu/scratch
[default] -- /home/vagrant/ubuntu/logs
[default] -- /home/vagrant/ubuntu/repo
[default] -- /tmp/vagrant-puppet/manifests
[default] -- /tmp/vagrant-puppet/modules-0
[default] Starting container...
/home/zef/.vagrant.d/gems/gems/vagrant-lxc-0.3.4/lib/vagrant-lxc/driver/cli.rb:101:in `transition_to': Target state 'running' not reached, currently on 'stopped' (Vagrant::LXC::Driver::CLI::TargetStateNotReached)
    from /home/zef/.vagrant.d/gems/gems/vagrant-lxc-0.3.4/lib/vagrant-lxc/driver.rb:69:in `start'
    from /home/zef/.vagrant.d/gems/gems/vagrant-lxc-0.3.4/lib/vagrant-lxc/action/boot.rb:15:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/warden.rb:34:in `call'
    from /home/zef/.vagrant.d/gems/gems/vagrant-lxc-0.3.4/lib/vagrant-lxc/action/forward_ports.rb:14:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/builtin/set_hostname.rb:16:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/warden.rb:34:in `call'
    from /home/zef/.vagrant.d/gems/gems/vagrant-lxc-0.3.4/lib/vagrant-lxc/action/share_folders.rb:13:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/builtin/env_set.rb:19:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/builtin/provision.rb:45:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/runner.rb:61:in `block in run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/util/busy.rb:19:in `busy'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/runner.rb:61:in `run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/builtin/call.rb:51:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/builtin/call.rb:57:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/warden.rb:34:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/builder.rb:116:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/runner.rb:61:in `block in run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/util/busy.rb:19:in `busy'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/action/runner.rb:61:in `run'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/machine.rb:147:in `action'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.2.4/lib/vagrant/batch_action.rb:63:in `block (2 levels) in run'

My ifconfig output for lxcbr0

lxcbr0    Link encap:Ethernet  HWaddr 00:00:00:00:00:00  
          inet addr:10.0.3.1  Bcast:10.0.3.255  Mask:255.255.255.0
          inet6 addr: fe80::64ea:94ff:fec5:1455/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:3478974 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9747488 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:306221823 (306.2 MB)  TX bytes:12656743482 (12.6 GB)

I'll try to dig more into this problem tonight, but I'm putting this up here for reference.

fgrehm (Owner) commented Jul 18, 2013

@battlemidget thanks for the info, I'll try it out later. It's really weird though, since I haven't had an issue with Precise hosts in a while and I always run the sanity check tests from a Precise VBox VM before releasing new boxes.

One thing you can try in order to debug is to manually run lxc-start -n `cat .vagrant/machines/default/lxc/id` and check the output (I'll actually add that to the troubleshooting section of the wiki :)

adam-stokes (Contributor) commented

One thing you can try in order to debug is to manually run lxc-start -n `cat .vagrant/machines/default/lxc/id` and check the output (I'll actually add that to the troubleshooting section of the wiki :)

Running this command as a normal user results in a permission-denied error, but when run under sudo the container starts up without a problem.

So that makes me wonder, @fgrehm: does your setup use sudo anywhere during container creation? Using the defaults for a Vagrant box under VirtualBox doesn't require sudo, so maybe I'm missing a configuration option in my Vagrantfile?

I just double checked your sanity tests and noticed everything is run with sudo, so this is probably the issue we are seeing as well.

@fgrehm second question: do you run your containers with AppArmor enabled? The default profile puts both lxc-start and lxc-container-default into enforce mode on 12.04, which I believe is where the problem lies.
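
On 12.04 a quick way to check which profiles are enforcing is aa-status (a sketch, nothing vagrant-lxc-specific; needs root):

# List AppArmor profiles in enforce mode and pick out the lxc-related entries
sudo aa-status | grep lxc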

fgrehm (Owner) commented Jul 19, 2013

Running this command as a normal user results in a permission-denied error, but when run under sudo the container starts up without a problem.

Sorry, I left out the sudo, but yeah, I also have to use it down here on my laptop.
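
Spelled out with the sudo included (a sketch, assuming the default machine name so the container id lives at .vagrant/machines/default/lxc/id):

# Start the container in the foreground (no -d) so boot output stays on this terminal
name=$(cat .vagrant/machines/default/lxc/id)
sudo lxc-start -n "$name"
# From another shell, check what state it ended up in
sudo lxc-info -n "$name"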

So that makes me wonder, @fgrehm: does your setup use sudo anywhere during container creation? Using the defaults for a Vagrant box under VirtualBox doesn't require sudo, so maybe I'm missing a configuration option in my Vagrantfile?

There is no sudo configuration involved in the Vagrantfile; in fact, I'm starting to get tired of typing in my password all the time on my machine and might get to #90 earlier :P

I just double checked your sanity tests and noticed everything is run with sudo, so this is probably the issue we are seeing as well

I'm not sure about that, but a gist with the output of VAGRANT_LOG=debug vagrant up --provider=lxc would give me some more insight into how things are being configured over there. Would you mind trying this again with 0.4.0 and a V3 box?

adam-stokes (Contributor) commented

I'm not sure about that, but a gist with the output of VAGRANT_LOG=debug vagrant up --provider=lxc would give me some more insight into how things are being configured over there. Would you mind trying this again with 0.4.0 and a V3 box?

Here is the output with the latest raring v3 box and 0.4.0

http://paste.ubuntu.com/5889411/

I manually ran lxc-info to check:

-> % sudo lxc-info --name vagrant-sbuild-1374197376
state:   STOPPED
lxc-info: 'vagrant-sbuild-1374197376' is not running
pid:        -1

but when I run lxc-start against that id, it works just fine.

fgrehm (Owner) commented Jul 19, 2013

@battlemidget thanks! If you look at line 212 of that output you'll see the command vagrant-lxc runs to start the container. If you were able to start it with lxc-start without the extra parameters, I'm almost sure the problem is related to shared folders or your lxc customizations. Would you mind tracking it down? Just make sure you don't use the -d parameter, so the container starts in the foreground.

adam-stokes (Contributor) commented

  config.vm.provider :lxc do |lxc|
    lxc.customize 'cgroup.memory.limit_in_bytes', '1024'
  end

Sooo that was the offending part. Can you spot the issue? :) (hint: it was a particular letter that needed to go at the end of 1024 — without a unit suffix the value is taken as bytes, and no container can boot inside a 1 KB memory limit)
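
Once fixed, the applied limit can be double-checked from the host (a sketch; the cgroup path assumes the stock 12.04 mount under /sys/fs/cgroup):

# Should print the limit in bytes, e.g. 1073741824 for '1024M'
sudo cat /sys/fs/cgroup/memory/lxc/$(cat .vagrant/machines/default/lxc/id)/memory.limit_in_bytes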

So everything works now. Thanks for the fresh set of eyes!

fgrehm (Owner) commented Jul 19, 2013

@battlemidget 🎆 glad to help :D

fgrehm (Owner) commented Jul 21, 2013

hey @sickill, yesterday I came across this comment on the GH lxc repo and it seems that lxc is broken on F18 and F19 hosts =/

ku1ik (Author) commented Jul 21, 2013

@fgrehm oh, this looks bad :/

jalberto commented

I'm trying it on Fedora 19, with Vagrant 1.3.1 + lxc 0.9 + vagrant-lxc 0.6.0 + kernel 3.10.11.

One of the problems is lxcbr0: it is an Ubuntu thing and is not created by default on other distros. One possible solution is to check for it and auto-create it if not found; a sketch of that check follows.
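
A minimal sketch, assuming Ubuntu's default 10.0.3.1/24 addressing (which is what the official boxes expect) and bridge-utils installed; run as root:

# Create and bring up lxcbr0 if it doesn't exist yet
if [ ! -d /sys/class/net/lxcbr0 ]; then
    brctl addbr lxcbr0
    ifconfig lxcbr0 10.0.3.1 netmask 255.255.255.0 up
fi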

fgrehm (Owner) commented Sep 17, 2013

@jalberto hum... are you able to manually create and boot containers without vagrant-lxc? Last time I tried, it didn't work out =/
Regarding lxcbr0, at some point we might drop that requirement, but for now it is needed. If things are working fine outside of vagrant-lxc, I think we can work out the lxcbr0 bit easily ;)

jalberto commented

@fgrehm

[root@olive ~]# lxc-info --name healandgo_default-1379435452
state: RUNNING
pid: 1605

I can access it with lxc-console, but I cannot connect to it using ssh; I think it's because lxcbr0 needs to have a specific address.

This howto has some clues: http://blog.bodhizazen.net/linux/lxc-linux-containers/
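
A quick sanity check for that theory (10.0.3.1 is the address the Ubuntu setup, and therefore the official boxes, assume):

ip addr show lxcbr0    # should report inet 10.0.3.1/24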

fgrehm (Owner) commented Sep 17, 2013

@jalberto awesome! I'm not familiar with Fedora, so would you be able to create a gist with the steps required to get to the point where you are? Even better would be if you could set things up on a Vagrant VirtualBox VM, so that I can reproduce the steps from here and we can collaborate better on making this work ;)

jalberto commented

@fgrehm with this Vagrantfile you will get an environment similar to mine :)

VAGRANTFILE_API_VERSION = "2"

$script = <<SCRIPT
echo "* yum update"
yum -y update --skip-broken

echo "* yum install latest lxc"
yum -y --enablerepo=updates-testing install lxc lxc-extra

echo "* add lxcbr0"
yum -y install bridge-utils
brctl addbr lxcbr0
SCRIPT

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "fedora19x64"
  config.vm.box_url = "https://dl.dropboxusercontent.com/u/86066173/fedora-19.box"

  config.vm.provision "shell", inline: $script
end

fgrehm (Owner) commented Sep 19, 2013

@jalberto awesome! Just to double-check: have you seen this wiki page on setting things up for Debian? I think that if you are able to replicate a NAT setup similar to Ubuntu's, you should be good to go too :) Just LMK how it goes so we can create a new wiki page for Fedora ;)

We might automatically do these configs at some point, but I'm not looking into it before I'm able to deal with hashicorp/vagrant#2005

jalberto commented

@fgrehm This is an init file modified to work with Fedora; just small changes, but the interfaces are created. I still cannot connect via ssh, though.

#!/bin/sh

### BEGIN INIT INFO
# Provides:             lxc-net
# Required-Start:       $syslog $remote_fs lxc
# Required-Stop:        $syslog $remote_fs lxc
# Should-Start:
# Should-Stop:
# Default-Start:        2 3 4 5
# Default-Stop:         0 1 6
# Short-Description:    Linux Containers Network Configuration
# Description:          Linux Containers Network Configuration
# X-Start-Before:
# X-Stop-After:
# X-Interactive:        true
### END INIT INFO

# Taken from ubuntu's lxc-net upstart config and adopted to init script
# original author: Serge Hallyn <serge.hallyn@canonical.com>

USE_LXC_BRIDGE="false"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
LXC_DHCP_CONFILE=""
varrun="/var/run/lxc"
LXC_DOMAIN=""

. /lib/lsb/init-functions

start() {
    [ -f /etc/default/lxc ] && . /etc/default/lxc

    [ "x$USE_LXC_BRIDGE" = "xtrue" ] || { exit 0; }
    echo $"* LXC bridge = true"

    if [ -d /sys/class/net/${LXC_BRIDGE} ]; then
        if [ ! -f ${varrun}/network_up ]; then
            echo $"bridge exists, but we didn't start it"
            exit 0;
        fi
        exit 0;
    fi

    cleanup() {
        echo $"dnsmasq failed to start, clean up the bridge"
        iptables -t nat -D POSTROUTING -s ${LXC_NETWORK} ! -d ${LXC_NETWORK} -j MASQUERADE || true
        ifconfig ${LXC_BRIDGE} down || true
        brctl delbr ${LXC_BRIDGE} || true
    }

    echo $"set up the lxc network"
    brctl addbr ${LXC_BRIDGE} || { echo "Missing bridge support in kernel"; exit 0; }
    echo 1 > /proc/sys/net/ipv4/ip_forward
    mkdir -p ${varrun}
    ifconfig ${LXC_BRIDGE} ${LXC_ADDR} netmask ${LXC_NETMASK} up
    iptables -t nat -A POSTROUTING -s ${LXC_NETWORK} ! -d ${LXC_NETWORK} -j MASQUERADE

    LXC_DOMAIN_ARG=""
    if [ -n "$LXC_DOMAIN" ]; then
        LXC_DOMAIN_ARG="-s $LXC_DOMAIN"
    fi
    dnsmasq $LXC_DOMAIN_ARG --strict-order --bind-interfaces --pid-file=${varrun}/dnsmasq.pid --conf-file=${LXC_DHCP_CONFILE} --listen-address ${LXC_ADDR} --dhcp-range ${LXC_DHCP_RANGE} --dhcp-lease-max=${LXC_DHCP_MAX} --dhcp-no-override --except-interface=lo --interface=${LXC_BRIDGE} --dhcp-leasefile=/var/lib/misc/dnsmasq.${LXC_BRIDGE}.leases --dhcp-authoritative || cleanup
    touch ${varrun}/network_up
}

stop() {
    [ -f /etc/default/lxc ] && . /etc/default/lxc
    [ -f "${varrun}/network_up" ] || exit 0;
    # if $LXC_BRIDGE has attached interfaces, don't shut it down
    ls /sys/class/net/${LXC_BRIDGE}/brif/* > /dev/null 2>&1 && exit 0;

    if [ -d /sys/class/net/${LXC_BRIDGE} ]; then
        ifconfig ${LXC_BRIDGE} down
        iptables -t nat -D POSTROUTING -s ${LXC_NETWORK} ! -d ${LXC_NETWORK} -j MASQUERADE || true
        pid=`cat ${varrun}/dnsmasq.pid 2>/dev/null` && kill -9 $pid || true
        rm -f ${varrun}/dnsmasq.pid
        brctl delbr ${LXC_BRIDGE}
    fi
    rm -f ${varrun}/network_up
}

case "${1}" in
    start)
        echo $"Starting Linux Containers"

        start
        ;;

    stop)
        echo $"Stopping Linux Containers"

        stop
        ;;

    restart|force-reload)
        echo $"Restarting Linux Containers"

        stop
        start
        ;;
esac
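
If anyone wants to try the script, a sketch of how I'd expect it to be wired up — note that USE_LXC_BRIDGE defaults to "false" above, so it has to be overridden (the script reads /etc/default/lxc) or start() exits without doing anything:

sudo cp lxc-net /etc/init.d/lxc-net && sudo chmod +x /etc/init.d/lxc-net
echo 'USE_LXC_BRIDGE="true"' | sudo tee -a /etc/default/lxc
sudo chkconfig --add lxc-net
sudo service lxc-net start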

jalberto commented

Using a static IP config I can log in with ssh:

config.vm.provider :lxc do |lxc|
  lxc.customize 'network.ipv4', '10.0.3.15/24'
end

but the container doesn't have an internet connection.
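
Probably the NAT part is missing on the host; a sketch of the two pieces the init script above sets up for that (assuming the 10.0.3.0/24 network, run as root):

# Allow forwarding and masquerade container traffic leaving the 10.0.3.0/24 network
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE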

jalberto commented

great news: http://blog.docker.io/2013/09/red-hat-and-docker-collaborate/

I think it is a good strategy to use libvirt for the networking side of things.

fgrehm (Owner) commented Sep 27, 2013

@jalberto someone else suggested using libvirt for networking before, but I have no idea what that would look like. Would you mind sharing your thoughts on #120 and #119?
Regarding usage on Fedora, I'll try to reproduce your setup down here as soon as possible and will report back once I'm able to :)

fgrehm (Owner) commented Oct 29, 2013

I think we can close this in favor of GH-171 and GH-166 :)

fgrehm closed this as completed Oct 29, 2013