What's up docker? How to keep your containers up to date

What it looks like on my own hass instance.


Amazing! My entire network is made up of Docker containers running on many proxmox LXC’s and a Pi4. This will make it much easier to see when updates are available.

I’ll test this shortly and help with feedback. Thanks for your hard work!


Now that I see this thread alive again: I’ve been having a problem running this for a while now but never got around to trying to fix it. I’ve actually had it disabled for a long time, so I can’t remember the original issue.

So I updated the image for wud today, and when it restarted I kept getting these errors in the container’s logs:

{"name":"whats-up-docker","hostname":"NUC","pid":1,"level":30,"msg":"What's up, docker? is starting","time":"2021-04-27T21:35:13.815Z","v":0}
{"name":"whats-up-docker","hostname":"NUC","pid":1,"level":30,"msg":"Init DB (/store/wud.json)","time":"2021-04-27T21:35:13.817Z","v":0}
{"name":"whats-up-docker","hostname":"NUC","pid":1,"level":30,"msg":"Init Prometheus module","time":"2021-04-27T21:35:13.824Z","v":0}
{"name":"whats-up-docker","hostname":"NUC","pid":1,"level":20,"msg":"Start image metrics interval","time":"2021-04-27T21:35:13.832Z","v":0}
(node:1) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'localeCompare' of undefined
    at /home/node/app/node_modules/sort-es/lib/index.cjs.js:120:46
    at /home/node/app/node_modules/sort-es/lib/index.cjs.js:76:36
    at Array.sort (<anonymous>)
    at Object.getImages (/home/node/app/store/index.js:150:22)
    at populateGauge (/home/node/app/prometheus/image.js:12:11)
    at Object.init (/home/node/app/prometheus/image.js:61:5)
    at Object.init (/home/node/app/prometheus/index.js:15:11)
    at main (/home/node/app/index.js:14:16)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:1) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
/home/node/app/node_modules/sort-es/lib/index.cjs.js:120
        return sort(fixString(first, options).localeCompare(fixString(second, options)), options);
                                               ^
TypeError: Cannot read property 'localeCompare' of undefined
    at /home/node/app/node_modules/sort-es/lib/index.cjs.js:120:46
    at /home/node/app/node_modules/sort-es/lib/index.cjs.js:76:36
    at Array.sort (<anonymous>)
    at Object.getImages (/home/node/app/store/index.js:150:22)
    at Timeout.populateGauge [as _onTimeout] (/home/node/app/prometheus/image.js:12:11)
    at listOnTimeout (internal/timers.js:554:17)
    at processTimers (internal/timers.js:497:7)

Here is my docker run command:

docker run -d --name wud \
  --network=host \
  --restart unless-stopped \
  -e WUD_LOG_LEVEL="DEBUG" \
  -v "/var/run/docker.sock:/var/run/docker.sock" \
  -v /home/finity/docker/wud/store:/store \
  -e WUD_WATCHER_DOCKER_LOCAL_CRON="1 * * * *" \
  -e WUD_TRIGGER_MQTT_MOSQUITTO_URL="mqtt://192.168.1.11:1883" \
  -e WUD_TRIGGER_MQTT_MOSQUITTO_USER="my_user" \
  -e WUD_TRIGGER_MQTT_MOSQUITTO_PASSWORD="my_password" \
  -e WUD_TRIGGER_MQTT_MOSQUITTO_TOPIC="wud/image" \
  fmartinou/whats-up-docker

Any idea what I need to do to fix it?

I’d like to start using it again and the new binary sensor looks really useful.

I see that you’re persisting the wud state in a mounted volume.
Because you just upgraded from an old version to the latest one, you’re probably hitting errors because the storage format has changed.
I suggest you clear the content of the /store volume and restart the service.

Please also note that if you want to try the new hass integration, you need to use the ‘develop’ tag and not the ‘latest’ one.

Right, I just wanted to get this running on latest before I moved to the dev version.

I think it seems to be working now.

Now I just need to figure out how to filter the containers to only watch for the correct new version types.

I believe I run the “latest” tag of every container except HA related ones and those are “versioned” containers.

My advice:

  1. Try to find immutable versions for the containers you run and get rid of latest tags as much as possible.

  2. Then using Docker labels on your containers, you’ll be able to fine-tune the expected format of the newer tags.

Below are some examples for popular services:

version: '3'

services:

    bitwarden:
        image: bitwardenrs/server:1.20.0-alpine
        labels:
            - 'wud.tag.include=^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)-alpine$$'

    duplicati:
        image: linuxserver/duplicati:v2.0.5.1-2.0.5.1_beta_2020-01-18-ls96
        labels:
            - 'wud.tag.include=^v[0-9]\d*\.[0-9]\d*\.[0-9]\d*\.[0-9]\d*-[0-9]\d*\.[0-9]\d*\.[0-9]\d*\.[0-9]\d*.*$$'

    homeassistant:
        image: homeassistant/home-assistant:2021.4.6
        labels:
            - 'wud.tag.include=^([0-9]\d*)\.([0-9]\d*)\.([0-9]\d*)$$'

    homeassistant_db:
        image: mariadb:10.5.8
        labels:
            - 'wud.tag.include=^([0-9]\d*)\.([0-9]\d*)\.([0-9]\d*)$$'

    mosquitto:
        image: eclipse-mosquitto:2.0.10
        labels:
            - 'wud.tag.include=^([0-9]\d*)\.([0-9]\d*)\.([0-9]\d*)$$'

    nextcloud:
        image: nextcloud:21.0.0
        labels:
            - 'wud.tag.include=^([0-9]\d*)\.([0-9]\d*)\.([0-9]\d*)$$'

    nextcloud_db:
        image: mariadb:10.5.8
        labels:
            - 'wud.tag.include=^([0-9]\d*)\.([0-9]\d*)\.([0-9]\d*)$$'

    photoprism:
        image: photoprism/photoprism:20210426
        labels:
            - 'wud.tag.include=^([0-9]{8})$$'

    photoprism_db:
        image: mariadb:10.5.8
        labels:
            - 'wud.tag.include=^([0-9]\d*)\.([0-9]\d*)\.([0-9]\d*)$$'

    pihole:
        image: pihole/pihole:v5.7
        labels:
            - 'wud.tag.include=^v(0|[1-9]\d*)\.(0|[1-9]\d*)$$'

    plex:
        image: linuxserver/plex:1.22.3.4392-d7c624def-ls44
        labels:
            - 'wud.tag.include=^[0-9]\d*\.[0-9]\d*\.[0-9]\d*\.[0-9]\d*-[0-9a-z]*-ls[0-9]\d*$$'

    portainer:
        image: portainer/portainer-ce:2.1.1-alpine
        labels:
            - 'wud.tag.include=^([0-9]\d*)\.([0-9]\d*)\.([0-9]\d*)-alpine$$'

    traefik:
        image: traefik:2.4.8
        labels:
            - 'wud.tag.include=^([0-9]\d*)\.([0-9]\d*)\.([0-9]\d*)$$'

    whatsupdocker:
        image: fmartinou/whats-up-docker:3.4.0
        labels:
            - 'wud.tag.include=^([0-9]\d*)\.([0-9]\d*)\.([0-9]\d*)$$'
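One note on those labels: Docker Compose treats `$` as the start of a variable interpolation, so the `$$` at the end of each pattern is an escaped literal `$`; the regex WUD actually evaluates ends with a single `$`. A quick way to sanity-check a pattern against candidate tags before deploying it (a hypothetical helper, with two of the patterns above and `$$` collapsed back to `$`):

```python
import re

# '$$' in the compose labels is Compose escaping; the effective regex uses '$'.
HA_TAGS = re.compile(r"^([0-9]\d*)\.([0-9]\d*)\.([0-9]\d*)$")  # e.g. 2021.4.6
PIHOLE_TAGS = re.compile(r"^v(0|[1-9]\d*)\.(0|[1-9]\d*)$")     # e.g. v5.7

def tag_allowed(pattern: re.Pattern, tag: str) -> bool:
    """Return True if the tag would pass a wud.tag.include filter."""
    return pattern.match(tag) is not None
```

With these patterns, `2021.4.6` and `v5.7` are accepted, while tags like `latest`, `dev`, or `v5.7.1` are filtered out.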

I don’t understand what you are saying here.

Why would I need to get rid of the “latest” tags? Why is using those bad?

And if I use the actual version for each image in my docker config wouldn’t I need to manually change that every time I update a container so it pulls the newer image?


With latest you have no guarantee that something which was working a few weeks ago is still working if you redeploy it; that’s a major issue for me.

Version immutability is the key factor to get a stable system.

Here is a good blog post describing why latest is a bad practice:

That’s the main reason why I’ve coded WUD and why I couldn’t rely on tools like Ouroboros or Watchtower.

WUD is specialized in semver version comparison so if you only use latest versions, it’s useless for you.
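To illustrate why immutable semver tags matter here, a minimal sketch (my own illustration, not WUD’s actual code) of the kind of comparison that becomes possible once tags are versions rather than latest:

```python
import re

# Matches plain semver tags like '2.0.10', optionally prefixed with 'v'.
SEMVER = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)$")

def newer(current: str, candidate: str) -> bool:
    """Return True if candidate is a strictly newer semver tag than current.
    Pre-release and build metadata are ignored in this sketch."""
    a, b = SEMVER.match(current), SEMVER.match(candidate)
    if not (a and b):
        return False  # non-semver tags (e.g. 'latest') can't be compared
    return tuple(map(int, b.groups())) > tuple(map(int, a.groups()))
```

With only a `latest` tag on both sides there is nothing to compare, which is exactly the limitation being described.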

And if I use the actual version for each image in my docker config wouldn’t I need to manually change that every time I update a container so it pulls the newer image?

Yes. If you use Docker Compose, your docker-compose.yml file can easily be tracked in Git.
So you’re always sure to have a descriptor which is working.
And you can easily roll back in case of problems…

The update process is really easy too and you’re in total control:

  1. Change a tag version in a compose.yml file
  2. Run docker-compose up -d --remove-orphans

…And let Docker Compose pull the new image, remove the old container and deploy the new one!

Wow, I guess I’ve been really lucky then.

I’ve used “latest” ever since I started using HA in Docker over two years ago. I use Portainer to occasionally pull new images for all 28 containers I’m running, and I’ve rarely had any issues.

I have had to downgrade a version of HA because of breaking changes once or twice in that time but I always revert to using the “latest” tag after HA gets their issues sorted out.

And I don’t use docker-compose. I just use a docker run command for each individual container I run.

So there is no way at all for WUD to know if there is a new “latest” image tag?

I don’t deny that latest tags can be convenient.
It’s all related to the risk-benefit balance.

Personally, one of my last bad experiences was when Mosquitto upgraded from 1.x to 2.x (a major version, so breaking changes) and I had to change a lot of things in my configuration files.

Anyway, everyone is free to operate as they wish :slight_smile:

So there is no way at all for WUD to know if there is a new “latest” image tag?

Not currently.

Actually, it was implemented in very old versions of WUD, at a time when only the Docker Hub Registry was supported.

After I added support for other registries (ACR, ECR, GCR…), I dropped this feature because it wasn’t possible for those ones.

To explain: the official Docker Registry API is the standard implemented by the Docker, Amazon, Google, and Microsoft registries.
Unfortunately, this API doesn’t expose the date when a tag was pushed, so there is no way to know when a latest tag has been overwritten.

On the other hand, there is a non-standard Docker Hub API which can return the tag’s last_pushed date.
That’s what I used in early versions of WUD, so I could put it back.
Just note that it will only work for images pulled from Docker Hub, not for images from other registries.

Let me know if you’re interested!

Yes, please.

As far as I know I’ve only ever used images from Docker Hub. But I’m really not a Docker expert (I bet you couldn’t tell :laughing:) so I guess I don’t really know the source of the images.

I just find an app I want to use recommended from here or other internet sources and I think most of them send you to docker hub to get the image.

And to add to that, Portainer somehow knows, when I tell it to pull the latest image, that the new image is newer than the old image. To be clear though, I almost never actually use the “latest” tag in my docker commands. I almost always just use the image without any version tag - for example “fmartinou/whats-up-docker”, leaving off “:latest”. So even without that tag, Portainer can figure out if there is a new version to pull.

Maybe it uses the Docker Hub API too?

I can confirm that your images come from the Docker Hub :slight_smile:

The fully qualified name of an image is

hostname[:port]/username/repo_name[:tag]

The Docker engine is configured by default to apply:

  • the latest tag when the tag is missing
  • the index.docker.io hostname when the hostname is missing
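Those defaulting rules can be sketched roughly like this (a simplified heuristic for illustration, not Docker’s full reference grammar; note that official images additionally get a `library/` namespace):

```python
DEFAULT_REGISTRY = "index.docker.io"
DEFAULT_TAG = "latest"

def normalize(ref: str) -> str:
    """Expand a short image reference to hostname/name:tag form (sketch)."""
    # Split off the tag: the part after a ':' that comes after the last '/'
    # (so a registry port like myreg:5000 is not mistaken for a tag).
    name, tag = ref, DEFAULT_TAG
    last_colon = ref.rfind(":")
    if last_colon > ref.rfind("/"):
        name, tag = ref[:last_colon], ref[last_colon + 1:]
    parts = name.split("/")
    # The first component is a registry hostname only if it contains '.' or ':'
    if len(parts) > 1 and ("." in parts[0] or ":" in parts[0]):
        host, repo = parts[0], "/".join(parts[1:])
    else:
        host = DEFAULT_REGISTRY
        # Official images (no namespace) live under 'library/'
        repo = "library/" + name if len(parts) == 1 else name
    return f"{host}/{repo}:{tag}"
```

So `fmartinou/whats-up-docker` resolves to `index.docker.io/fmartinou/whats-up-docker:latest`, which is why leaving off `:latest` behaves the same as writing it.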

We both use open-source community services (Home Assistant, for example), so it’s logical that we produce/consume images to/from Docker Hub.

But other kinds of users (companies, first and foremost) have other requirements and often rely on other registries that let them host private images.

And to add to that Portainer somehow knows that when I tell it to pull the latest image that the new image is newer than the old image.

When you pull an image:

  1. Docker gets the manifest listing all the layers defining the image.
  2. Docker determines which layers are missing on your Docker host.
  3. Docker downloads & unpacks the missing layers.

So Portainer doesn’t know ahead of time if there is a new version.
It just downloads the new layers.

=> I’ll keep you informed when latest tag management is implemented in WUD.


Thanks for the extra work you are putting into this.

If it’s too much then don’t feel you need to do it just for me.

If it’s too much then don’t feel you need to do it just for me.

To be honest, you’re not the first one to ask for that…
It’s now time to do it :laughing:


Hi fmartinou, I’m hoping you can help with accessing multiple remote Docker instances. I’d like to install WUD in a single container and have it monitor the following hosts, which are all on the same internal network:

  • Pi4 hosting seven docker containers
  • Pi3 hosting one docker container
  • Several LXC containers each hosting one or more docker containers.

It appears TLS has to be configured to communicate with remote Docker instances. I’ve found the following site that appears to help with configuring TLS.

However, I can’t find info on one client accessing multiple remote Docker daemons. Can you recommend how to do this?

Hi @mwolter ,

Thank you for your interest :slight_smile:

Currently, WUD can only watch remote Docker hosts over plain HTTP.

Please also note that TLS is not required to expose the Docker daemon socket over HTTP.
Whether you need to upgrade to HTTPS really depends on your situation: is it acceptable to expose the Docker daemon without encryption and authentication?
It’s up to you :wink:

To monitor multiple Docker hosts with WUD, you just need to declare one watcher configuration per host.
Example:

WUD_WATCHER_LOCAL_SOCKET="/var/run/docker.sock"
WUD_WATCHER_MYPI3_HOST="192.168.1.10"
WUD_WATCHER_MYPI4_HOST="192.168.1.11"
WUD_WATCHER_MYPI5_HOST="pi5.local"
...

If you want WUD to support TLS for remote Docker hosts, feel free to raise an issue on GitHub; I’ll be glad to implement it!

Hi @fmartinou,
Yes, I would be interested in this. It looks like it would be very easy for a hacker to gain root access to the Docker host without TLS.

In case anyone wants to do the same (one client to multiple servers), all you have to do is generate a CA certificate and key once, on the first server, then copy and reuse those on each server when creating the server and client keys.

This site has easy-to-follow instructions on how to generate the CA, server, and client keys: Portainer Remote Host Management - TLS protected!

Working on a script to further simplify the process and will post it here when ready.

Hi @finity ,
I have a first version supporting non-semver tags (in addition to semver tags).
You can give it a try by running the latest fmartinou/whats-up-docker:develop version.

To explain a little bit:

  • WUD now compares the digest of the running image with the digest of the image manifest from the Registry
  • If the digests are not equal, WUD reports that the image can be updated
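In other words (a minimal sketch of the idea, not WUD’s actual code): a registry digest is a sha256 over the manifest bytes, so two different images pushed under the same `latest` tag yield different digests, and inequality signals an available update:

```python
import hashlib

def manifest_digest(manifest_bytes: bytes) -> str:
    # Registry content digests are 'sha256:' + hex sha256 of the manifest body
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

def update_available(local_manifest: bytes, registry_manifest: bytes) -> bool:
    """True when the registry's manifest differs from the locally running one."""
    return manifest_digest(local_manifest) != manifest_digest(registry_manifest)
```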

Thanks for your feedback!

N.B. This version is not backward compatible => please remove /store/wud.json before upgrading.

Hi @mwolter ,
I have a first version supporting TLS
You can give it a try by running the latest fmartinou/whats-up-docker:develop version.

The documentation is not yet published but you can find the draft here

Simple example of the new env vars to configure TLS:

WUD_WATCHER_MYREMOTEHOST_HOST="myremotehost"
WUD_WATCHER_MYREMOTEHOST_PORT="2376"
WUD_WATCHER_MYREMOTEHOST_CAFILE="/certs/ca.pem"
WUD_WATCHER_MYREMOTEHOST_CERTFILE="/certs/cert.pem"
WUD_WATCHER_MYREMOTEHOST_KEYFILE="/certs/key.pem"

Thanks for your feedback!

N.B. This version is not backward compatible => please remove /store/wud.json before upgrading.

Is there a way to enable tracking of containers that aren’t running?