This repository has been archived by the owner on Jul 24, 2019. It is now read-only.

Developer environment fails to launch as a result of CNI issues within minikube. #192

Closed
intlabs opened this issue Feb 13, 2017 · 4 comments · Fixed by #304


intlabs commented Feb 13, 2017

Is this a bug report or feature request? (choose one): BUG REPORT

Kubernetes Version (output of kubectl version): v1.5.1 and v1.5.2

Helm Client and Tiller Versions (output of helm version): v2.1.3

Development or Deployment Environment?: Development

Release Tag or Master: All

Expected Behavior: Happy developers, rainbows, unicorns, etc.

What Actually Happened: Minikube blows up.

How to Reproduce the Issue (as minimally as possible):
Follow the minikube developer documentation to the letter (as of 13/Feb/2017).

Any Additional Comments:

There appear to be some bugs in minikube when using CNI networking. In particular, there looks to be a race condition involving ipset that causes Calico to crash when loading a large number of pods: https://paste.fedoraproject.org/556865/94945148/raw/

As a workaround we can develop without CNI support, as we are not yet making use of network policy:

# Clone the project:
git clone https://github.com/att-comdev/openstack-helm.git && cd openstack-helm

# Get a list of the current tags:
git tag -l

# Checkout the tag you want to work with (if desired, or use master for development):
git checkout 0.1.0

# Start a local Helm Server:
helm serve &
helm repo add local http://localhost:8879/charts

# You may need to change these params for your environment. Look up use of --iso-url if needed:
minikube start \
        --kubernetes-version v1.5.2 \
        --disk-size 40g \
        --memory 16384 \
        --cpus 4 \
        --vm-driver kvm \
        --iso-url=https://storage.googleapis.com/minikube/iso/minikube-v1.0.6.iso

# Initialize Helm/Deploy Tiller:
helm init

# Package the Openstack-Helm Charts, and push them to your local Helm repository:
make

# Label the Minikube node as an Openstack Control Plane node (nodes are cluster-scoped, so no namespace flag is needed):
kubectl label nodes --all openstack-control-plane=enabled

# Deploy each chart:
helm install --name=mariadb --set development.enabled=true local/mariadb --namespace=openstack
helm install --name=memcached local/memcached --namespace=openstack
helm install --name=rabbitmq local/rabbitmq --namespace=openstack
helm install --name=keystone local/keystone --namespace=openstack
helm install --name=cinder local/cinder --namespace=openstack
helm install --name=glance local/glance --namespace=openstack
helm install --name=heat local/heat --namespace=openstack
helm install --name=nova local/nova --namespace=openstack
helm install --name=neutron local/neutron --namespace=openstack
helm install --name=horizon local/horizon --namespace=openstack
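
The per-chart installs above all follow the same pattern, so they can also be scripted as a loop; a minimal sketch (the leading echo makes it a dry run — remove it to actually install; mariadb is handled separately above because of its extra --set flag):

```shell
# Dry-run loop over the charts that share the same install pattern.
# Remove the leading "echo" to execute the installs for real.
for chart in memcached rabbitmq keystone cinder glance heat nova neutron horizon; do
  echo helm install --name="$chart" "local/$chart" --namespace=openstack
done
```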

fasaxc commented Feb 14, 2017

The Calico error is

FailedSystemCall: Failed system call (retcode : 1, args : ('ipset', 'restore'))
  stdout  : 
  stderr  : ipset v6.29: Error in line 1: Kernel error received: set type not supported

  input  : create felix-all-ipam-pools hash:net family inet maxelem 1048576 --exist
create felix-all-ipam-pools-tmp hash:net family inet maxelem 1048576 --exist
flush felix-all-ipam-pools-tmp
add felix-all-ipam-pools-tmp 192.168.0.0/16
swap felix-all-ipam-pools felix-all-ipam-pools-tmp
destroy felix-all-ipam-pools-tmp
COMMIT

It suggests that the relevant ipset kernel module isn't being loaded.
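
For anyone who wants to reproduce the failure by hand inside the minikube VM, the restore input from the traceback can be saved to a file and replayed; a sketch (the actual ipset restore call needs root and the ip_set_hash_net module, so it is left commented out):

```shell
# Recreate the restore input Felix generated (copied from the error above),
# so it can be replayed manually inside the VM with:
#   sudo ipset restore < /tmp/felix-restore.txt
cat > /tmp/felix-restore.txt <<'EOF'
create felix-all-ipam-pools hash:net family inet maxelem 1048576 --exist
create felix-all-ipam-pools-tmp hash:net family inet maxelem 1048576 --exist
flush felix-all-ipam-pools-tmp
add felix-all-ipam-pools-tmp 192.168.0.0/16
swap felix-all-ipam-pools felix-all-ipam-pools-tmp
destroy felix-all-ipam-pools-tmp
COMMIT
EOF
```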


fasaxc commented Feb 14, 2017

Yes, it looks like the v1.0.6 ISO is missing the relevant IP set kernel modules (they are present in v1.0.4).

On v1.0.6, the core IP set module is compiled in, but all of the "type" modules are disabled (so you can't actually create an IP set). In addition, there is a low limit (256) on the number of IP sets:

$ zcat /proc/config.gz  | grep IP_SET
CONFIG_IP_SET=y
CONFIG_IP_SET_MAX=256
# CONFIG_IP_SET_BITMAP_IP is not set
# CONFIG_IP_SET_BITMAP_IPMAC is not set
# CONFIG_IP_SET_BITMAP_PORT is not set
# CONFIG_IP_SET_HASH_IP is not set
# CONFIG_IP_SET_HASH_IPMARK is not set
# CONFIG_IP_SET_HASH_IPPORT is not set
# CONFIG_IP_SET_HASH_IPPORTIP is not set
# CONFIG_IP_SET_HASH_IPPORTNET is not set
# CONFIG_IP_SET_HASH_MAC is not set
# CONFIG_IP_SET_HASH_NETPORTNET is not set
# CONFIG_IP_SET_HASH_NET is not set
# CONFIG_IP_SET_HASH_NETNET is not set
# CONFIG_IP_SET_HASH_NETPORT is not set
# CONFIG_IP_SET_HASH_NETIFACE is not set
# CONFIG_IP_SET_LIST_SET is not set
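
A config dump like the one above can be checked mechanically; a minimal sketch, using a hard-coded excerpt of the v1.0.6 output in place of a live `zcat /proc/config.gz`:

```shell
# Check a kernel config dump for usable IP set "type" modules.
# On a live minikube VM, the dump would come from: zcat /proc/config.gz
config_dump='CONFIG_IP_SET=y
CONFIG_IP_SET_MAX=256
# CONFIG_IP_SET_HASH_NET is not set'
if printf '%s\n' "$config_dump" | grep -Eq '^CONFIG_IP_SET_(BITMAP|HASH|LIST)_[A-Z]+=[ym]$'; then
  echo 'at least one IP set type module is enabled'
else
  echo 'core IP_SET only: creating any set (e.g. hash:net) will fail'
fi
```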


intlabs commented Feb 14, 2017

@fasaxc Awesome! Thanks for the update. I'll try testing again against the v1.0.4 ISO that we caught this issue with last time: kubernetes/minikube#973.

@v1k0d3n, we should probably keep an eye on this issue: kubernetes/minikube#1110, as it may help us sidestep these regressions in the future, and let us add other goodies to the ISO that we would like to use moving forward.


v1k0d3n commented Mar 15, 2017

I think this issue is resolved, and we're now hitting other issues with minikube and older versions of Kubernetes under v1.5.3. Going to close out this issue @intlabs in favor of tracking this as a side Kubernetes item. I've got to think of some ways to monitor dependencies on minikube, but I'd like to think that having our own ISO would help with this some (at least for this specific problem).
