WARNING! This document hasn’t been fully proof-read for mistakes in the text or the commands. This is also the very first time I’ve dived this deep into Docker; my Docker experience is more accurately measured in hours than in days. If you have issues, know of better ways, or see mistakes, please let me know and I’ll update this post.
Note: If you already understand these commands, you can pretty much skip reading all this and just copy-paste them, adapting them to your network layout. One step is missing, though, because it is highly specific to your own network topology: where to connect the veth device that gets created.
Yes, you read that correctly. Some of these MikroTik routers are quite powerful, and run the same kind of ARM CPU found in everything from your Synology NAS to what Amazon uses to power their EC2 instances.
The setup is pretty simple, assuming you already have Containers enabled in RouterOS, and some sort of local storage to install and run the containers on.
I’m going to assume you already have cursory knowledge of how to use RouterOS and networking, and will simply list out the settings you’ll need to change to get this working.
You’ll need to create an environment variable in Envs to set your timezone. I’ll use mine as an example: name="envs_homeassistant" sets the name for us to reference when setting up the container, key="TZ" tells the container which environment variable we’re trying to set, and value="America/Toronto" sets our (or in this case, my) timezone.
To do this you need to use the add command in /container/envs (or the equivalent place in the GUI if you prefer).
So the full command would be something like:
/container/envs
add name="envs_homeassistant" key="TZ" value="America/Toronto"
What this does:
The first line tells the router you want to work with the envs values that containers can use.
The second line tells the router to create a new environment variable called “envs_homeassistant” (or whatever name you prefer). That variable is for the key called “TZ” (this is the key Home Assistant looks in for timezone information, so it has to be exact for it to work), with the value of “America/Toronto” (which again needs to be exact and set to your timezone, which you can look up on Wikipedia’s list of tz database time zones).
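Once an entry is added, you can double-check it with RouterOS’s print command, which works the same way in every menu. For example:

```
/container/envs
print
```

You should see your new entry listed with its name, key, and value.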
Next we need to set the mounts. This is where the container’s personal data will get stored. It gets stored separately so you can erase, rewrite, or update the container without losing your personal information.
So in my case I had name="homeassistant" as an easy way to reference it when we set up the container, src="/nvme1/containers/mounts/homeassistant" which is where I wanted to store my personal data separate from the container data on my router, and dst="/config" which is the folder in the container where your personal data gets stored. The name and src can both be whatever/wherever you want: the name is purely for your reference when setting up the container, and the src is just the path you want your personal data to get stored in.
Caution: Take note, storing your personal data, logs, or containers in "/" is not a good idea, because this is usually the very small amount of flash memory (just a few MB) that your router uses to store firmware and your basic configuration settings. This memory is slow, tiny, and wears out quickly; once it’s shot, your router will no longer boot. So make sure you put all your data on a separate drive, which in my case was "/nvme1". Whatever folder structure you choose is entirely up to you!
So the full command would be something like:
/container/mounts
add name="homeassistant" src="/nvme1/containers/mounts/homeassistant" dst="/config"
What this does:
As before, the first line tells the router you want to modify the list of mounts that the containers can use.
The second line tells the router that you want to create a mount called “homeassistant” that will get stored in the folder “mounts”, which is contained in the folder “containers”, which is on the drive called “nvme1”. Your drive’s folder name is set when you set up your internal storage, and it will show up as a folder inside “/”, which is your built-in flash memory (that we should never use). The subsequent folders are just ones I made because I like that structure. If you wanted, you could store it all in “/name-of-drive/containerzzzz/TheMoonApple/BeAnS-mAkE-GrEaT-PeTs”, or directly in your drive under “/name-of-drive/” if you really wanted. It’s up to you.
Note: This portion is VERY dependent on your personal network configuration, so you will need to figure out that portion on your own.
Now we need to add networking. You can do this a few different ways, but at the very least you’ll need to create a virtual Ethernet device, known as a “veth”, and configure it as you would any other interface on your MikroTik router. There is more than one way to skin this cat, and your network topology will heavily influence what you do. Personally, I created a bridge and added the veth, along with my other ports, to that bridge. This has knock-on effects, so I won’t be guiding you through this step.
To create a veth, you’ll need to run a command similar to this, but formatted for your network:
/interface/veth
add name="veth1" address=10.10.0.5/24 gateway=10.10.0.1
You will then need to make sure it is properly routable to the rest of your network.
The virtual Ethernet interface you create, “veth1” in this case, will be selected as your interface when you create your Docker container.
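As a rough sketch of one common layout (assuming a bridge named “bridge1”, which is a placeholder for whatever your bridge is actually called): if your LAN ports already hang off a bridge, you can attach the veth to it like any other port:

```
/interface/bridge/port
add bridge=bridge1 interface=veth1
```

With this layout, the veth’s address (10.10.0.5 in the example above) needs to sit in the same subnet as the rest of the bridge; otherwise you’d need a route or NAT rule for the veth’s subnet instead.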
The last step is configuring the actual container. This is where we’ll set where the containers will download from, how much memory they can use, and how they will be configured to run.
We need to start by setting some global settings first, like where to get packages from, how much RAM containers can use, and where temporary files should be stored while containers are downloading.
Each of these are set with their own command below:
/container/config
set ram-high=2G
set registry-url="https://registry-1.docker.io/"
set tmpdir="nvme1/containers/pull"
These are pretty self-explanatory. The “ram-high” setting limits how much RAM containers can use before being throttled (as I understand it, they can still use more, but their performance will be severely limited). Setting “ram-high” to “2G” sets the soft limit to 2 gigabytes of RAM, which on my router with 16 GB of RAM is intentionally a very small portion. You’d want to set this based on your hardware and environment. You can also define a value in megabytes by using "M" or kilobytes by using "K".
Caution: Don’t go using up most of your RAM with containers or you’re going to start having performance issues with your internet. I wouldn’t use any more than 1/4th my total RAM for containers (even that may be too much, depending on your device).
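If you’re not sure how much total RAM your router has to begin with, you can check before picking a limit:

```
/system/resource
print
```

The total-memory and free-memory lines there give you a baseline for the quarter-of-RAM rule of thumb above.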
Then we need to set up the actual container: set the remote image we’ll source the container from, tell the container manager which Envs and Mounts to use, choose a destination to save the container, and add a command to disable a feature that’s incompatible with the processor in (at least my) MikroTik devices, so the container will actually run without crashing right away.
To configure the container, we need to use:
/container
add remote-image="homeassistant/home-assistant:stable" interface="veth1" envlist="envs_homeassistant" cmd="python3 -m homeassistant --config /config" hostname="homeassistant" workdir="/config" root-dir="nvme1/containers/homeassistant" mounts="homeassistant"
What this does:
After switching into container mode in the first line, we add a container and use the “remote-image” of “homeassistant/home-assistant:stable” as the source. The source is usually broken into three parts: the publisher (“homeassistant” in this case), the package (“home-assistant”), and the branch/version, of which we’re using the “stable” branch. The publisher and package are separated by a forward slash “/”, and the branch is separated from the publisher/package by a “:”. If you feel like you need more hassle in your life, you could run another branch like “latest” and see all the new features in exchange for stability and reliability.
Following that, we have “interface”, where we’ll indicate the veth you set up and configured earlier (not part of this guide), in this case called “veth1”. Then we have to add our “envlist”, which we set as “envs_homeassistant”, as well as our special command using “cmd” to stop the container from crashing, by passing this command on to the container: “python3 -m homeassistant --config /config”, which inhibits use of a service that doesn’t work on our processors. As we wrap up, we need to add our working directory, “workdir”, as “/config”, since that’s where we want Home Assistant to store its configuration files, and tell the container where to store itself with “root-dir”, which should be on your storage drive in a folder like “nvme1”, “drive1”, “usb1”, etc., followed by the path you’ve chosen to store your containers in, which was “/containers/homeassistant” in my case, making the full path “nvme1/containers/homeassistant”. Then finally, the mount we’ve chosen for this container, which we labeled “homeassistant”.
That’s it! Unless one of us screwed up, the container should be able to start up, and you should be good to go! Just start the container, and enjoy!
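For reference, containers are addressed by number in RouterOS, so starting the container (and checking that it came up) looks something like this, assuming it landed at index 0 on your router:

```
/container
print
start 0
```

Once the print output shows status=running, you should be able to reach Home Assistant’s web interface at the veth’s address on port 8123 (its default port).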