My Docker Stack

Configuring what different components? That is kind of outside the scope of the ‘Docker’ aspect of the setup.

I agree that this is outside the scope of Docker, but I am trying to set up a stack much like yours on my Synology NAS, and I have a lot of stability issues. I was hoping that now that you have a functioning stack, I could try to replicate your setup of InfluxDB, Elasticsearch, Postgres, Portainer, and ha-dockermon.
So what I would like to see is the snippets from your config where you configure the HA interface to these :slight_smile:

I use ha-dockermon to interface with all the Docker containers. My config is much too large to show all snippets. What exactly are you looking for?

Thank you and thank you flamingm0e - I’m up and running with the NGINX reverse proxy!!

How does setting up zwave & zigbee work in a docker container?

In my old setup I had udev rules set to make the USB locations for the zwave & zigbee sticks persistent. I have transferred the udev rule file to the NUC, but how does that interact with the docker container?
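
For context, that kind of persistent-name rule generally looks something like this (the vendor/product IDs below are just placeholders; only the symlink names match my config):

# /etc/udev/rules.d/99-usb-serial.rules (example values only)
# Give the Z-Wave stick a stable name no matter which ttyACM/ttyUSB it lands on.
SUBSYSTEM=="tty", ATTRS{idVendor}=="0658", ATTRS{idProduct}=="0200", SYMLINK+="ttyUSB-ZStick-5G"
# Same idea for the Zigbee stick.
SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="8a2a", SYMLINK+="zigbee"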

How do I have to modify my docker run file to expose those USB devices to the container using the udev rules?

Do I still use ACM0, USB0 & USB1, or do I use the udev entries?

And how do you define a configuration file location? Is it even necessary to define it?

Here are my old zwave and zigbee entries:

zwave:
  config_path: /srv/homeassistant/lib/python3.6/site-packages/python_openzwave/ozw_config
  usb_path: /dev/ttyUSB-ZStick-5G
  network_key: ".........."

zha:
  usb_path: /dev/zigbee
  database_path: /home/homeassistant/.homeassistant/zigbee.db

Obviously, I need to change the zigbee db file to my new config location.

And while you’re here… :grinning:

Does the latest docker container install the latest python version (3.6)?

It’s the official Home Assistant Docker image, which indeed uses 3.6.

You can use whatever you like. You just map them through to the container using a -v bind mount to the path.
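
As a rough sketch (the image name and host paths are placeholders), passing the udev-named devices through could look like this; Docker’s --device flag does the same job as bind-mounting the device node and also grants the container access to it:

# Sketch only: map the udev symlinks straight through, so the usb_path
# entries in configuration.yaml can keep pointing at the same names.
docker run -d --name home-assistant --net=host \
  -v /docker/homeassistant/config:/config \
  --device /dev/ttyUSB-ZStick-5G \
  --device /dev/zigbee \
  homeassistant/home-assistant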

What for? I have never defined a config file at all for my zwave.

I had some issues when I first installed zwave: it would fail on setup because it couldn’t find the configuration file, so I’ve always had to define it specifically. That’s strange, because I’ve always used a standard Hassbian install.

Hopefully the default will work this time around.

And thanks for the info on the USB. I figured that was the case. I just thought I’d verify.

Move your zwcfg_*.xml file to the root of the configuration directory and it should just work out of the box.
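
With that file in the config root, the zwave block itself can stay minimal; roughly something like this (reusing the usb_path from the earlier post, key redacted):

# Minimal sketch: no config_path, so Home Assistant falls back to the
# OpenZWave config files bundled with the install.
zwave:
  usb_path: /dev/ttyUSB-ZStick-5G
  network_key: "..."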

2 questions for you.

  1. Which processor do you have in the NUC? I have an old Gigabyte ITX board running an Intel Atom D525 with 4GB RAM, a 32GB SSD, and a few TB of storage space, and it is still cutting it (it is ~8 years old). Right now I’m running a file backup/hosting server (what it was originally purchased for), HA, Node-RED, Ghost, Pi-hole, NGINX (reverse proxy and a custom HTML5 media server interface), and various other services. I’m considering getting a NUC to offload everything but the file/backup server from it, but I haven’t made up my mind on a processor (also, I still haven’t really run into situations where HA slows down, so I’m not in a rush).

  2. Why are you running Portainer inside the stack? I’m just starting to use docker (and set up Portainer to play with it), and I feel like Portainer should be in its own stack because it doesn’t depend on anything in the HA stack and HA doesn’t depend on it. If you have to restart the whole stack, you shouldn’t have to restart Portainer.

It is an i3 model. I will have to get the model number later.

I like to stay current on all my containers, so I tear everything down and restart the entire stack to pull the latest versions. I don’t have it marked as a dependency of anything in my compose file, so it’s not strictly necessary to include it there, but it’s easier to run the entire thing together.

Another thing is that this NUC is ONLY for my home automation stack. I have other servers for other purposes. This instance of portainer is ONLY for my home automation stack.
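
As a sketch of what that means in the compose file (service names, images, and paths below are placeholders, not my actual file), Portainer just sits alongside the other services with no depends_on anywhere:

# Rough sketch of a compose file with Portainer alongside the HA stack.
# Nothing depends on it, so it only rides along when the whole stack is recreated.
version: '2'
services:
  homeassistant:
    image: homeassistant/home-assistant
    network_mode: host
    volumes:
      - /docker/homeassistant/config:/config
  portainer:
    image: portainer/portainer
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /docker/portainer:/data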

Thought I’d drop this here for Docker and Node-Red users: since the official Node-Red Docker image isn’t compatible with the new 0.3.0 release of node-red-contrib-home-assistant, I created my own updated image that runs Node.js v8.9.4 to make it compatible: marthoc/node-red. It is currently for amd64/x86_64 only, as that is my use case.

It’s a drop-in replacement for the official image with the only caveat being it does not automatically install node-red-node-msgpack, node-red-node-base64, node-red-node-suncalc, or node-red-node-random (so if you use these nodes, you’ll need to install them manually). You can point it at your old config directory and it will maintain all your config, flows, and installed nodes.
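
For anyone swapping it in, the run command has the same shape as for the official image; something like this (the host path and port mapping are just examples):

# Example only: mount your existing Node-RED user directory at /data so
# flows, settings, and installed nodes carry over unchanged.
docker run -d --name node-red \
  -p 1880:1880 \
  -v /docker/node-red/data:/data \
  marthoc/node-red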

Out of curiosity, does it matter which VLAN you run a UniFi controller on, so long as you change the inform address of any APs?

I currently run an SBC (old thin client) for my UniFi controller but would rather discontinue it in favor of the Docker container… BUT I originally put the SBC on x.x.10.x and the Docker host is on x.x.30.x. The UniFi APs have access to all 4 of my VLANs, but I’m not sure.

What do you think? I may just trial and error but do not want to break anything.

The UniFi controller needs to be on the same VLAN and broadcast network as the APs.

If your firewall rules allow the traffic, it should work if you manually update the inform address via SSH.
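
For reference, that usually just means SSHing into each AP and pointing it at the controller by hand (the IP here is a placeholder; 8080 is the controller’s default inform port):

# From an SSH session on each AP; run once more after adopting it in the controller.
set-inform http://192.168.30.10:8080/inform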

My setup is similar to yours, APs on main VLAN and controller on the IoT VLAN. I have an Edge Router X though and the DNS server setup has a Unifi controller address block that lets you advertise the address to any VLAN. In my case, I just added the address, opened up traffic on the firewall between controller and APs, and it just worked.

I’ll give it a shot, thank you. I use pfSense and don’t think it has anything quite like that setting.

I was also thinking I could just change the untagged VLAN running to the APs. Not really desirable, but possible I think.

Or just switch over to an RPi controller.

The xml file has always been in the config directory, but that’s not the config directory it needs when it asks for “config_path:”.

I’m sure it will work fine now tho. :wink:

Hi it’s me again…

I’m trying to set up letsencrypt.

Here is the docker command I’m using (redacted):

docker run --cap-add=NET_ADMIN --name=letsencrypt -v /docker/letsencrypt/config:/config -e PGID=1000 -e PUID=1000 -e EMAIL=<my email> -e URL=<mydomain>.duckdns.org -e VALIDATION=http -p 80:80 -p 443:443 -e TZ=America/New_York linuxserver/letsencrypt

This is basically the command from the docs at the docker hub page for the image.

I have my router port 80 forwarded to port 80 on my docker host and router port 443 forwarded to port 443 on the host. When I run the container nothing happens.

I have my home assistant docker running on port 8124.

Shouldn’t I get some keys generated in the config directory specified above?

Is there an issue with the command?

I don’t use the letsencrypt Docker container, as I am running my reverse proxy on a VPS outside of my network and handle Let’s Encrypt with a version installed from my package manager repos.