Running a Kubernetes Cluster with Raspberry Pi - Part 1

Nick Miller

26 September 2018

Nick Miller has been a Linux enthusiast since high school. His day job focuses on Puppet, so he is new to Kubernetes and Ansible.

Kubernetes is what everyone is talking about these days, and what’s the cheapest way to get a physical Kubernetes cluster sitting on your desk? A cluster made of Raspberry Pis! For a total cost of around $350 and too many hours to count, I have a reasonably stable and tiny learning platform.

All of my tooling for this project is located in my GitHub repository: jeefberkey/pi-images. This project contains a Rakefile, as well as all of the provisioning, Ansible, and Kubernetes code required.

Why Pi?

There are a few reasons to use Raspberry Pis to build a cluster, and about as many reasons not to. I mostly used Raspberry Pi 3 B+ for my cluster, but I started with the Pi 3 B.

The good:

  • Cheap - $35 apiece, $50 with the required accessories
  • Tiny - About the size of a credit card, plus there are some really nifty stacking cases
  • Quiet - Zero noise
  • Energy Efficient - The entire cluster draws about 10 W, excluding networking gear
  • Community - The Raspberry Pi has the same stable software repositories as desktop Debian.

The bad:

  • Slow CPUs
  • 1 GB memory
  • Slow disks - 100 MB/s max from the best MicroSD card
  • Slow Ethernet - 100 Mbit/s max, and the controller is shared with the USB bus
  • No netboot support out of the box - enabled by default starting with the 3 B+
  • ARM is still annoying and not supported by a lot of neat containerized applications

Overall, the most important things to me were price and size, and the Pi excels at both. My entire cluster takes up about as much space as a MicroATX computer case while being a lot quieter. I had considered other small form factor PCs, but they all ended up being too expensive. I also wanted to try managing many nodes at a time instead of one big Kubernetes node or a bunch of Kubernetes node VMs on the same machine.

To get started with this project, I also had to buy a bunch of other things:

  • 1ft Ethernet cables
  • 1ft Micro USB cables
  • SD Cards (Class 10 - Samsung and Sandisk)
  • Router (Ubiquiti EdgeRouter X)
  • Managed Switch (I found a used one from TP-Link on reddit)
  • USB power supply (2-3 W per Pi)

Managing the OS

There are quite a few OSs available for the Raspberry Pi these days. I chose Hypriot because it already includes Docker and a few kernel tweaks, and it comes with cloud-init preinstalled. Cloud-init made it easy to configure my user and SSH keys. As an added bonus, I was not familiar with Debian-based distros, and I wanted to give one a try. Some other notable OSs available that I didn’t end up using: Raspbian, Fedora, and CentOS.


Hypriot provides their own tool for flashing the SD cards, flash. I needed to set some kernel command line arguments, though, so I had to fork and modify it. The flash tool is more familiar to me than the flashing tool normally used by the community, and it does a few more operations on the image when it’s flashed, like setting the hostname and adding my user-data.yml for cloud-init.

I created a Ruby ERB template for the cloud-init files, and I use rake provisioning:gen_cloud_init to prepare all of the files required for the provisioning:flash_pi[dev,hostname] task.

#cloud-config
# vim: syntax=yaml
hostname: <%= host %>
manage_etc_hosts: true

users:
  - name: nick
    gecos: "Nick Miller"
    shell: /bin/bash
    groups: users,docker,video,input
    passwd: fake
    lock_passwd: false
    ssh_pwauth: true
    chpasswd: {expire: true}
    ssh_authorized_keys:
      - <%='~/.ssh/')).chomp %>

locale: "en_US.UTF-8"
timezone: "America/New_York"
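The actual rendering lives in the repository's Rakefile; as a rough sketch (the method name, the inlined template, and the hostnames below are hypothetical stand-ins, and only the `<%= host %>` placeholder comes from the template above), the generation step boils down to evaluating the ERB with a per-host binding:

```ruby
# Minimal sketch of a gen_cloud_init-style step, assuming each host's
# user-data is produced by evaluating an ERB template against a local
# variable named `host` (matching the <%= host %> placeholder above).
require "erb"

# Stand-in for the real template file shipped in the repo.
TEMPLATE = <<~ERB
  #cloud-config
  hostname: <%= host %>
  manage_etc_hosts: true
ERB

def render_user_data(template, host)
  # `binding` exposes the local `host` variable to the <%= host %> tag.
  ERB.new(template).result(binding)
end

# Hypothetical hostnames; one user-data document per Pi.
%w[k8s-master k8s-node1].each do |host|
  puts render_user_data(TEMPLATE, host)
end
```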

I did run into some serious issues with cloud-init. I had originally configured it to do a system update and a reboot. However, I couldn’t see what was going on or how long the cloud-init process was taking. As a result, I would try to run Ansible against the systems before they had finished initializing, leading to reboots later on without an obvious cause. I ended up removing that section of my cloud-init config and settled on the code above. Anything else would be better handled by Ansible, where I could see the status of each change I was requesting.
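If I had wanted to keep the heavier first-boot configuration, one option would have been an Ansible pre-task that blocks until cloud-init reports it is finished. This is only a sketch, assuming the image ships a cloud-init new enough (17.1+) to provide the `status` subcommand:

```yaml
# Sketch: block until first-boot provisioning finishes, so later
# plays don't race cloud-init. Assumes `cloud-init status --wait`
# is available on the target.
- name: Wait for cloud-init to finish
  command: cloud-init status --wait
  changed_when: false
```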

In the next post, I will go over how I learned to use Ansible to manage my cluster.

Nick is an automation engineer and consultant at Onyx Point, Inc. In addition to providing on-site and remote professional services to our customers, he also works as a developer on SimpLE.

At Onyx Point, our engineers focus on Security, System Administration, Automation, Dataflow, and DevOps consulting for government and commercial clients. We offer professional services for Puppet, RedHat, SIMP, NiFi, GitLab, and the other solutions in place that keep your systems running securely and efficiently. We offer Open Source Software support and Engineering and Consulting services through GSA IT Schedule 70. As Open Source contributors and advocates, we encourage the use of FOSS products in Government as part of an overarching IT Efficiencies plan to reduce ongoing IT expenditures attributed to software licensing. Our support of and contributions to Open Source are just one reflection of our guiding principles:

  • Customer First.
  • Security in All We Do.
  • Pursue Innovation with Integrity.
  • Communicate Openly and Respectfully.
  • Offer Your Talents, and Appreciate the Talents of Others.

ansible, kubernetes, arm, raspberry pi, hypriot
