
Deploying Kubeflow on microk8s

Note: These instructions are kept up to date here:

https://github.com/juju-solutions/bundle-kubeflow/blob/master/README.md

These instructions describe how to deploy Kubeflow on microk8s.

This is a condensed version of the excellent instructions found here:

https://discourse.jujucharms.com/t/juju-kubernetes-and-microk8s/226

Run these commands to start up Kubeflow locally:

# Requires at least juju 2.5rc1
sudo snap install juju --beta --classic
sudo snap install microk8s --edge --classic

# Set up juju and microk8s to play nicely together
sudo microk8s.enable dns storage
juju bootstrap lxd
microk8s.config | juju add-k8s k8stest
juju add-model test k8stest
juju create-storage-pool operator-storage kubernetes storage-class=microk8s-hostpath

# Deploy kubeflow to microk8s with juju
juju deploy cs:~juju/kubeflow

# Make JupyterHub available on port 8081
microk8s.kubectl port-forward -n test $(microk8s.kubectl -n test get pods -l juju-application=kubeflow-tf-hub --no-headers -o custom-columns=":metadata.name") 8081:8000
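
If the port-forward command fails, the Kubeflow charms may still be starting up. A quick way to check (a sketch, assuming the model/namespace name test used above) is:

# Watch the pods until the kubeflow-tf-hub pod reports Running
microk8s.kubectl get pods -n test

# Or check deployment progress from juju's point of view
juju status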

You can now go to the JupyterHub page at http://localhost:8081/ and log in with any username and password to spawn a new Jupyter instance. In the new Jupyter instance, you can run a script that uses TensorFlow, such as this one:

# Classic MNIST softmax-regression example (TensorFlow 1.x API)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

import tensorflow as tf

# Placeholder for flattened 28x28 input images
x = tf.placeholder(tf.float32, [None, 784])

# Weights and biases for a single softmax layer
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Predicted class probabilities
y = tf.nn.softmax(tf.matmul(x, W) + b)

# True labels and cross-entropy loss
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()

# Train for 1000 steps on mini-batches of 100 images
for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# Evaluate accuracy on the held-out test set
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

When you are done, you can clean up the created resources with these commands:

microk8s.kubectl delete ns test
juju kill-controller localhost-localhost -y -t0
juju remove-cloud k8stest
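
If you want to double-check that the cleanup succeeded, you can list the remaining clouds and namespaces; neither k8stest nor test should still appear:

juju clouds
microk8s.kubectl get ns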

For an all-in-one script that deploys Kubeflow and cleans up once you are done, run this:

#!/usr/bin/env bash

NAMESPACE=test
CLOUD=k8stest

cleanup() {
  # Clean up resources
  microk8s.kubectl delete ns $NAMESPACE
  juju kill-controller localhost-localhost -y -t0
  juju remove-cloud $CLOUD
}

trap cleanup EXIT

set -eux

# Set up juju and microk8s to play nicely together
sudo microk8s.enable dns storage
juju bootstrap lxd
microk8s.config | juju add-k8s $CLOUD
juju add-model $NAMESPACE $CLOUD
juju create-storage-pool operator-storage kubernetes storage-class=microk8s-hostpath

# Deploy kubeflow to microk8s
juju deploy cs:~juju/kubeflow

# Exposes JupyterHub at http://localhost:8081/
# When you're done, Ctrl+C will exit this script and free the created resources
TFHUB=$(microk8s.kubectl -n $NAMESPACE get pods -l juju-application=kubeflow-tf-hub --no-headers -o custom-columns=":metadata.name")
microk8s.kubectl port-forward -n $NAMESPACE $TFHUB 8081:8000
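
To use it, save the script to a file and make it executable; the filename below is just an example, use whatever name you saved it under:

chmod +x deploy-kubeflow.sh
./deploy-kubeflow.sh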