Docker Cluster On Raspberry Pi Zeros

How do you really know how it works if you do not work it? Serverless? Clusters? Linux? The most skilled people in technology are well practiced. What if you want to push yourself to the next level? Maybe you want to experiment with some things that are a little sketchy, and you don't want them escaping to the Internet. Maybe you want to test that security thing you saw on that thing that time.

The answer is to set up a lab: something that can have a network and services but is completely disconnected from your home network and, especially, air-gapped from the Internet.

There are several routes one can go to set up a lab. To have several machines, they can be virtual or hardware. I did not plan for this project, so the hardware budget is non-existent, and I don't have any unused hardware lying around that can run VMs. So let's set up a CLUSTER OF RASPBERRY PIS! Enter…

Cluster Hat SugarPi

… OK, yes, there are plenty of tutorials out there on how to do this, but mine is somewhat special because it uses 8086's Cluster HAT to host four Raspberry Pi Zeros in USB gadget mode on a Raspberry Pi 3B, powered by a PiSugar 2 Pro. In theory it could serve up a WiFi access point to a little portable network. I am not sure how long that would run on battery, but it is still cool to have the option.

Bowl of pumpkin pie topped with ice cream, whipped cream and butterscotch syrup
VS
Cluster Hat SugarPi

Configuring the Cluster HAT from scratch isn't overly complex, but I am going to go ahead and use the furnished images as a starting point. The basic setup gives us a sequestered network that can be air-gapped. When plugged in, it is NATed so we can do updates and downloads. On top of this private network of tiny computers we will install Docker Swarm for some cluster fun. This post is organized into the following parts:

The Mix

To build my cluster, I started with a Raspberry Pi 3B+. (If you decide to use a Raspberry Pi 4B, consider the Pimoroni Fan Shim to keep it cool.) To its base I added a PiSugar 2 Pro for portable power, which also highlights the lower power consumption of the Raspberry Pi 3 compared to the 4. Then I followed the instructions to install a Cluster HAT. I was happy to find out that these two pieces of hardware do not use conflicting I2C addresses. Into the Cluster HAT I plugged four Raspberry Pi Zero W boards (note: leaving the headers off makes for a better fit). Throughout this tutorial I will refer to these Raspberry Pi Zeros as zeros, p hosts, or Zero P hosts. In the examples I will often use the #2 Zero P host so you can see that the character after the 'p' is a number and not a letter, since 'p1' is easy to misread and there are plenty of 'pi' references already (it is the default user). To top it all off I added five Samsung 32GB SD cards and ran the USB cable from the Pi 3B's USB-A ports, under the HAT, to the HAT's power supply.

Now Bake:

I followed the Intermediate instructions on the Cluster HAT site, using the standard method to flash the CNAT full image onto the SD card for the controller. Then I used the P1–P4 lite images on the Pi Zero SD cards. Note that you can boot them without SD cards, but we will keep it simple for now. I also touched the ssh file on each of the SD cards to enable SSH.
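
Enabling SSH is just a matter of creating an empty file named ssh on each card's boot partition. A minimal sketch, assuming the boot partition is mounted at /media/$USER/boot (the actual mount point will vary by system):

touch /media/$USER/boot/ssh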

On the P1–P4 images, open cmdline.txt and add the following after rootwait quiet:

ipv6.disable=1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

Note that disabling IPv6 is not required, but it is fine if you are using NAT like I am.

You can also open config.txt and uncomment the last line or add the following to enable gadget mode:

dtoverlay=dwc2

If you started from scratch with Raspberry Pi OS you may also need to add the following to the end of the line in cmdline.txt:

modules-load=dwc2
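
For reference, here is a rough sketch of what the single cmdline.txt line might end up looking like after these edits; the PARTUUID is a placeholder and the rest of your line will differ slightly depending on the image:

console=serial0,115200 console=tty1 root=PARTUUID=xxxxxxxx-02 rootfstype=ext4 fsck.repair=yes rootwait quiet ipv6.disable=1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory modules-load=dwc2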

Next I put the SD cards into their respective slots and plugged an ethernet cable into the controller Pi (the 3B). Then I connected via ssh and completed the standard Pi setup on the controller, being sure to change the pi user password. You can use sudo raspi-config or passwd or whatever you are comfortable with.

I then ran clusterctrl on and clusterctrl off a couple of times, just for the squee factor of watching the LEDs chain on. With the cluster on, I did a quick lsusb to make sure the Zeros were attached and looked at ifconfig to see that they were connected to the host.
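
If you want to follow along, the check looks roughly like this (add sudo to the clusterctrl calls if you hit a permissions error; the interface names you see in ifconfig will depend on the image):

clusterctrl on
lsusb
ifconfig
clusterctrl off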

Let's do some DNS. On the controller, execute sudo nano /etc/hosts and add the following to the end of the file:

172.19.181.1 p1
172.19.181.2 p2
172.19.181.3 p3
172.19.181.4 p4

Then we will want to copy our ssh key to each zero to make setup easy:

ssh-keygen 
ssh-copy-id -i .ssh/id_rsa.pub p1
ssh-copy-id -i .ssh/id_rsa.pub p2
ssh-copy-id -i .ssh/id_rsa.pub p3
ssh-copy-id -i .ssh/id_rsa.pub p4

Docker on Controller

While on the cluster controller, install Docker. Note that the convenience script uses apt-get, so you don't end up with unmanaged binaries lying around.

curl -sSL get.docker.com | sh 

Then you can initialize the Docker Swarm:

sudo docker swarm init --advertise-addr 172.19.181.254

In the output from init there is a line that starts with 'docker swarm join'. That is the command workers use to join the swarm; copy it to your notes so you can paste it later.
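
The worker join command follows this general shape; the token below is a placeholder, and yours will be a long unique string (prefix it with sudo on the zeros if the pi user is not in the docker group):

docker swarm join --token SWMTKN-1-<worker-token> 172.19.181.254:2377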

Now issue the following command to see that the Status is ‘Ready’ and Availability is ‘Active’. Also notice that Manager Status is ‘Leader’.

sudo docker node ls
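
At this stage the output should look something like the following illustrative sketch (your node ID will differ):

ID                  HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
abc123example... *  cnat       Ready    Active         Leader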

The first Pi Zero, P1 (pee-one), will be a manager. To get its join command we ask Docker for it with:

sudo docker swarm join-token manager

Copy that out and save it in your notes.

Connecting to the zeros is as easy as ssh pi@p2. Alternatively, if you want to set up over serial, you can sudo minicom p2, since aliases are nicely set up for you if you use the standard P1–P4 images. It may not be pretty over serial, but it gets the job done.

password changed yo!

After finishing the configuration and rebooting each Pi Zero, I make sure to get the system up to date by typing:

sudo apt-get update && sudo apt-get -y upgrade

Once this is done on each zero, you will want to disable swap.

sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall 
sudo update-rc.d dphys-swapfile remove
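
You can confirm swap is really gone with free; the Swap row should show all zeros:

free -h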

On each zero you will want to add a hosts entry for the controller, generate an SSH key, and copy it back to the controller. Do the following on each zero (p1–p4):

echo "172.19.181.254 cnat" | sudo tee -a  /etc/hosts && ssh-keygen && ssh-copy-id -i .ssh/id_rsa.pub cnat

This will generate an SSH key and copy it back to the controller after asking you a few questions, including the login password.

From the now live Pi Zero Hosts

Now let's do the Docker thing on the Zeros using the convenience script. On P1 (pee-one), use the manager join command you copied in the last section to join it as a manager. You will see "This node joined a swarm as a manager." You can issue a docker node ls to see the two nodes.

Now on P2–P4, use the first join command from docker swarm init to join them to the swarm as workers. After you issue the command it should output "This node joined a swarm as a worker." (Notice the Docker team apparently likes their punctuation.) When you are done, list the nodes and you should see something like the following:

OK, so I changed the hostnames to zero1–4 even though they are p1–4 in /etc/hosts
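
For reference, an illustrative docker node ls from the controller at this point might look roughly like this (IDs shortened, engine version column omitted):

ID        HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
abc... *  cnat       Ready    Active         Leader
def...    zero1      Ready    Active         Reachable
ghi...    zero2      Ready    Active
jkl...    zero3      Ready    Active
mno...    zero4      Ready    Active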

Then let's add a nice tool to visualize what is going on. On the cluster controller issue the following:

sudo docker service create \
  --name viz \
  --publish 8080:8080/tcp \
  --constraint node.role==manager \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  alexellis2/visualizer-arm:latest

Then you can see that it is running on one host by listing the docker services:

sudo docker service list

Now you can hit the controller's LAN-facing address on port 8080. I registered mine in my local DNS under the name cluster-hat, so I get there with http://cluster-hat:8080/

Eat

So now, for a quick demonstration. While watching the viz service, install a ping service and scale it up. To install the service use:

sudo docker service create --replicas 5 --name party alpine ping myhacker.party

When you hit enter it will start 5 replicas pinging the myhacker.party domain. You may see them bounce between the different nodes.
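
If you want to see exactly which node each replica landed on, docker service ps shows the task placement:

sudo docker service ps party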

Then scale it down:

sudo docker service scale party=1

Then remove the service with:

sudo docker service rm party

Ruminate

The Cluster HAT is very cool. After I back up images of everything, I will have a great restorable lab environment. Setting up Docker is easy enough, and it seems to be the only well-documented clustering solution that scales down to a Pi Zero; tell me if you know of something better. With the large selection of tooling and runnable apps, this seems like a fun way to experiment.

So why did I not use Kubernetes? If you were to count words on the internet, Kubernetes is rivaled only by the word 'the'. However, support for the particular processor in the Raspberry Pi Zero was dropped when a compiler switch was introduced to compile Kubernetes more securely. According to GitHub, the last version to work on the Zero was 1.5.6. Running anything newer would mean a custom kernel and a custom compile of Kubernetes itself. So k8s will be something I land on my RPi 4 cluster or my Cluster Triple (coming soon).

Azure this is not, but I can unplug it from all networks and play with networked things without leaking onto the Internet. With the Wi-Fi modules I could use this battery-powered cluster as a multi-headed pwnagotchi. The fun and learning are endless.


The Cargo Mask

So as you are well aware, a pandemic made everyone think they are remote professionals or professional streamers. I was feeling like everyone was infringing on my turf. But then I felt a little left out of the ‘new’ part of ‘new normal’. So I decided, in my way of following the herd, I would diverge from my remote software building and add a mask design to the pile of designs out there.

I don’t know the first thing about sewing. Plus, I am told there are standards for patterns and a bunch of best practices. For now I am going to take the hacker approach and slap this together with notes and drawings. Feel free to point out where I missed these things and I promise to get good if this becomes my day job. Stranger things have happened.


I hope you appreciate the end product. I tend to imitate, customize and then keep banging out details till I am satisfied. I will try to give credit wherever I can. I learned from all of you after all. At the time of this post, I am really at the beginning of my process, that actually starts in the middle.


When consultants fail

There seems to never be enough time in the day. In small shops like mine (I am the only programmer), we often seek additional muscle via contractors. One test my employer has tried is having the contractor come in on an NDA to look at a project and give us observations and suggestions. We pay them for it, so even if we decide not to use them, everyone wins. We get fresh eyes, and they get paid.

Sometimes we get breakthrough results, and sometimes we get perplexing responses. Through this process, I have collected examples of bad suggestions. When consulting on technical matters, one should take care to communicate clearly, especially when working alone without a second person for validation. Let's have some fun breaking down one suggestion that fell completely flat.


Deployment Driven Micro-Modularity

In a previous post I mentioned that I am styling my current work in ‘micro-modularity’. You may be thinking I just made that term up. I did. The reason I use this expression is that it sums up a portion of my programming style.

Like an indie band, my work tends to be inspired by many different patterns, styles and methodologies. I don't fully embrace a genre like DDD, TDD, UXDD, XP, Scrum, Hack, Portal, SOA, Microservices… Instead I use the 'tool that fits the job' (TTF if you want an acronym, or maybe TTFDD).

The point is that because of the increased accessibility of tooling and increase in Internet bandwidth, one of the most significant changes in software development in the last 10 years has been how deployment happens. This should change how we work.


Release Management Using VSTS

If you have been tracking Azure at all lately you know that it is growing at breakneck pace. Microsoft has really turned up the volume on their enterprise cloud at all levels.  Just diving in can sometimes be a rough experience. Azure is a wide open field with many paths to solve your problem. I would like to show you the path I have found to get my release management up and running for my complex micro-modularity and microservices.

In the last post we created an ASP.NET Project for Visual Studio Team Services (VSTS) Release in a minimal way. Now we will check 'the check box of continuous integration' and 'push the button of continuous deployment'. Then we will add a second deployment environment to get your creative juices flowing.


ASP.NET Project for VSTS Release

So you have discovered the intense desire to manage your infrastructure as code and continuously deploy with your eyes closed. It is true, continuous integration and continuous deployment, once implemented, open a whole new set of opportunities. For me personally, most attractive of these is micro-functionality. Microservices are all the rage, but on a more fundamental level, it is hard to manage complex modularity without CI and CD.

For this reason I am going to assemble a reference ASP.NET project that will demonstrate a common scenario in which the developer and Visual Studio are removed from the deployment process and endless deployments are made possible through Azure Resource Group template parameters. In a second post I will walk through setting up the Azure side of things.


VSTS Release “no vms found in resource group”

I have spent a little time trying to figure out why my deployment using an all-but-stock Azure Website ARM template was failing with the error “No VMs Found in Resource Group”.  You see, I am trying to set up my Visual Studio Team Services (VSTS) Release environments. Currently the ‘Release’ extension for VSTS is in preview. As I have learned and come to accept, when things are in preview it usually means the documentation isn’t complete and the workflow may not entirely make sense.

Occasionally I have issues that there are no answers for via searching or asking on Stack Overflow. I will try to record some of those here.

One thing I recently discovered on the Azure Resource Group Deployment task is this funny little box called Output that has an input box labeled 'Resource Group'. When you hover over the information icon, it tells you it is simply a place to name a variable that will hold the resource group for later use in a task. Being a forward thinker, I gave it a name since it didn't cost me more than a second or two. Later maybe I could use it… or it would cause me issues. OK, you most likely figured out that this caused the error "No VMs Found in Resource Group". I searched all over and eventually found an issue on GitHub for it. Also, now that I think of it, it doesn't make sense for this to be an output variable anyway, since it should be a variable from the start. So:

SOLVED: VSTS – Release – Azure Resource Group Deployment – “No VMs Found in Resource Group”

There seems to be a bug when you fill in the Output > 'Resource Group' field and you are not using any VMs. Simply clear this field. If you need the name, I would suggest placing it in an environment variable and using that as input from the start. Then it will also be available in other tasks.


Test & Release Control

“Our contractor just gave us a merge request for X, Y and Z. We need to release feature X and Z. Feature Y failed testing and it is holding up the rest from being released!”

Have you been in this situation before? You are responsible for a release that has several features, and one of them did not pass UAT or other testing. The business wants to release the other features because fixing the one will take too long, or it has been aborted completely. You may be using a version control system (VCS). Even so, you may have found that trying to pluck the feature out for release made a big mess.


Implementing IQueryable on a Business Logic Layer – Part 2

In Part 1 of this series I explained what sort of situation I am building towards.  In short I am building a highly modularized scaffolding system geared toward a microservice architecture.

Central to the functionality is a data object decorator and a gateway class that completely abstract away Entity Framework. It was trivial to implement Take() and Skip(), passing the values to an internal IQueryable from EF. Then I implemented ToList() to do a select as the decorator. As I mentioned in the previous post, that is great for doing LINQ against the gateway, but you can't cast it to an IQueryable as is expected in various places like WebApi.


Implementing IQueryable on a Business Logic Layer – Part 1

So you might be thinking this is a bad approach from the start, that this sort of functionality belongs on the DAL or at least in a Repository. However, in this age of microservices and in the context of complex applications, business logic will exist in several layers. Think of it in MVVM or N-Tier style, where there are complex validations and business logic that runs faster when closer to the DAL in a multi-tier environment. In this particular instance I am exposing this sort of module via OData and as an Actor via Azure Service Fabric.

Getting the internals right is particularly important for me because I am using extensive code generation via T4. If I get it right, it quickly extends to my whole infrastructure of microservices.

Early on I thought I could implement just parts of IQueryable and be able to get away with it. I tried only implementing Where(), Skip() and Take() without the host of classes needed for an actual IQueryable.  This worked great when my code is loosely coupled to the implementation and blindly executes only these operations.

The catch is that I couldn’t just cast a partial implementation to IQueryable for things like WebApi to use. It would be great to just implement these three operations and have some sort of generic implementation of a decorator that bridges the implementation to an IQueryable. Alas, there is no such bridge in native .NET.  Thus, we must help ourselves.

Poking around the web you will find an ancient Microsoft secret: Walkthrough: Creating an IQueryable LINQ Provider. Many posts about this subject fail to address the fact that you may not be using any sort of IQueryable under the hood. The MSDN post shows you how without addressing it directly.

At a high level: during the Execute() phase you will need to figure out what you can pass on, do so, and then execute the underlying query to return your list.  This list then becomes the subject of the remainder of the query.

The following post will walk through my implementation thought process.
