Running a Kubernetes Cluster with Raspberry Pi - Part 2


Nick Miller

Hanover, MD, 29 November 2018

Nick Miller has been a Linux enthusiast since high school. His day job focuses on Puppet, so he is new to Kubernetes and Ansible.

In the last blog post, I covered the hardware basics of my Raspberry Pi cluster. I ended up using Ansible rather than Puppet to configure my nodes, because I didn’t want to install the Puppet agent. There are existing tools, like rak8s, that use Ansible to prepare hardware nodes for Kubernetes, but using one of them would have defeated the point of this project, so I did it myself.

All of my tooling for this project is located in my GitHub repository: jeefberkey/pi-images. This project contains a Rakefile, as well as all of the provisioning, Ansible, and Kubernetes code required.


I started using Ansible as a way to run command line tools on all my nodes at once, like this:

ansible all -m shell -a "apt update"

This is a good way to lose track of what you’ve done in your shell history. Plus, you lose the readability that Ansible’s YAML structure provides.

My first playbook looked like this:

- hosts: all
  tasks:
    - apt:
        update_cache: yes
        upgrade: yes
    - apt:
        name:
          - vim
          - git
          - htop
        state: present

I essentially copied the examples from the Ansible apt module documentation. I ended up making playbooks to manage updates, install the right version of Docker, set up my NFS server, disable cloud-init, and set up the system for kubeadm. You can find these playbooks in my GitHub repo for the project under the playbooks/ directory.

I know there is a way to connect these playbooks, but I haven’t learned how to do that yet. For now, I just run them each by hand:

ansible-playbook -b playbooks/update.yaml

Sometimes I forget which order to run them in. I will cover some more of my playbooks in the next section.
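One way to solve both problems is a top-level playbook made of `import_playbook` statements, which runs the others in a fixed order. A minimal sketch (every filename except playbooks/update.yaml is a hypothetical stand-in for my actual playbooks):

```yaml
# site.yaml -- chains the playbooks so they always run in the same order
# (all filenames except playbooks/update.yaml are hypothetical)
- import_playbook: playbooks/update.yaml
- import_playbook: playbooks/docker.yaml
- import_playbook: playbooks/kubeadm-prep.yaml
```

With that in place, a single `ansible-playbook -b site.yaml` would replace running each playbook by hand.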

Bootstrapping Kubernetes

There are around 70 different Kubernetes distributions, plus a handful of official solutions for getting a cluster going from the Kubernetes team itself. There is even one built specifically for my use case, but I decided not to use it in order to learn more about Ansible.

I settled on using kubeadm. It has just enough dependencies to make prepping my systems easy, but I don’t have to worry about rolling my own certificate authority or manually curling a bunch of binaries. It’s in the sweet spot for a non-production hobby cluster.

All of the Pis needed a few things set up before I could get started with Kubernetes:

  • Add the Kubernetes repo and install kubelet, kubeadm, and kubectl
  • Load the br_netfilter kernel module
  • Disable swap
  • Set sysctl net.bridge.bridge-nf-call-iptables to 1
  • Set sysctl net.ipv4.ip_forward to 1
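Sketched as an Ansible playbook, those steps might look something like this (the module choices are illustrative rather than my exact playbooks, and the repo-and-package task is omitted for brevity):

```yaml
# kubeadm-prep.yaml -- sketch of the prep steps above
# (task details are illustrative, not the repo's exact playbook)
- hosts: all
  become: yes
  tasks:
    - name: Load the br_netfilter kernel module
      modprobe:
        name: br_netfilter
        state: present

    - name: Disable swap for the running system
      command: swapoff -a

    - name: Set the bridge and forwarding sysctls
      sysctl:
        name: "{{ item }}"
        value: "1"
        state: present
      loop:
        - net.bridge.bridge-nf-call-iptables
        - net.ipv4.ip_forward
```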

I’ve got a playbook that fills out the following Jinja2 template (Jinja2 is Ansible’s templating engine), used for configuring kubeadm:

kind: MasterConfiguration
api:
  bindPort: 443
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""

Jinja2 is not nearly as easy to use as ERB (the Ruby templating engine), which is what I’m used to and what I use in the rest of this project. The most important piece of configuration here is the podSubnet value, which most networking plugins require to be set; I used the value provided by the kubeadm documentation. The auditPolicy section was an attempt to limit the amount of writes to my SD card, even though I have never followed up and verified that it actually did anything useful. Almost all of the other settings are detected at runtime by the kubeadm init process.

After copying over that template, run the following on one node:

kubeadm init --config /etc/kubeadm.config.yaml

This is going to take a while on Pis, about five minutes to finish. At the end, it should spit out some useful information, like where to find the admin kubeconfig (a file containing the certificates and API endpoint needed by kubectl) and the join command to get the rest of your nodes into the cluster.

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options
listed at:

You can now join any number of machines by running the following on
each node as root:

    kubeadm join --token ...
      --discovery-token-ca-cert-hash sha256:...

For my cluster, I (embarrassingly) just copy that join command into a playbook and run it on every node except the master.
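That pasted-in playbook looks roughly like the following (the endpoint, token, and hash are placeholders; raspi1 is my master node):

```yaml
# join.yaml -- runs the copied join command on every node except the master
# (<master-ip>, <token>, and <hash> are placeholders)
- hosts: all:!raspi1
  become: yes
  tasks:
    - name: Join the node to the cluster
      command: >
        kubeadm join <master-ip>:443
        --token <token>
        --discovery-token-ca-cert-hash sha256:<hash>
```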

A cluster at this point will look something like this:

NAMESPACE   NAME                           READY STATUS  RESTARTS AGE
kube-system coredns-78fcdf6894-d92gk       0/1   Pending 0        15m
kube-system coredns-78fcdf6894-m52lz       0/1   Pending 0        15m
kube-system etcd-raspi1                    1/1   Running 0        15m
kube-system kube-apiserver-raspi1          1/1   Running 0        15m
kube-system kube-controller-manager-raspi1 1/1   Running 0        15m
kube-system kube-proxy-5rpj2               1/1   Running 0        15m
kube-system kube-proxy-g4dlm               1/1   Running 0        15m
kube-system kube-proxy-mnfzj               1/1   Running 0        15m
kube-system kube-proxy-qpbgf               1/1   Running 0        15m
kube-system kube-proxy-wsbl7               1/1   Running 0        15m
kube-system kube-scheduler-raspi1          1/1   Running 0        15m

This output is generated by running kubectl get pods --all-namespaces. The coredns pods will stay Pending until a network backend has been added.
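Adding a network backend is a single kubectl apply. With flannel, for example, it would look something like this (the manifest URL is the one flannel’s documentation pointed to at the time, so verify it before using it):

```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# watch the coredns pods leave Pending once the network is up
kubectl get pods -n kube-system -w
```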

I know my Ansible is fairly rough, and if anyone else were working on this with me, we would have hit a ton of issues already. I’m treating Ansible as a force multiplier: a tool that lets me run commands on all my nodes at the same time. I’m certain there are ways to avoid manually pasting the join command into a playbook, and I’m certain there are ways to make sure all my playbooks run in order and are idempotent. What is important, however, is that I did learn enough Ansible to get to the point where I can start learning Kubernetes!


Nick is an automation engineer and consultant at Onyx Point, Inc. In addition to providing on-site and remote professional services to our customers, he also works as a developer on SimpLE.

At Onyx Point, our engineers focus on Security, System Administration, Automation, Dataflow, and DevOps consulting for government and commercial clients. We offer professional services for Puppet, RedHat, SIMP, NiFi, GitLab, and the other solutions in place that keep your systems running securely and efficiently. We offer Open Source Software support and Engineering and Consulting services through GSA IT Schedule 70. As Open Source contributors and advocates, we encourage the use of FOSS products in Government as part of an overarching IT Efficiencies plan to reduce ongoing IT expenditures attributed to software licensing. Our support of and contributions to Open Source are just one of our many guiding principles:

  • Customer First.
  • Security in All We Do.
  • Pursue Innovation with Integrity.
  • Communicate Openly and Respectfully.
  • Offer Your Talents, and Appreciate the Talents of Others.

ansible, kubernetes, arm, raspberry pi, hypriot

