WTH is going on with Docker/Container installs and third-party integrations that need Python modules?

Hey folks,

WTH is happening when using the official HA Docker container and wanting to use third-party integrations, from HACS, that also need Python modules?

Currently, this seems to be broken and lots of people are hitting issues. There is an open GitHub issue here:

The consensus seems to be that something about the official container has broken many integrations that need to install their own Python modules. There unfortunately doesn’t seem to be a consistent workaround.

Apologies if this sounds harsh (it’s not meant to be), but if you choose an advanced installation method like Container, are you not responsible for maintaining the OS and dependencies?

https://www.home-assistant.io/installation/#advanced-installation-methods

Normally yes! But:

  • It isn’t a dependency related to the installation method that’s at issue; it’s the Python module dependencies within integrations, which Home Assistant itself handles internally (no user action required) when initialising an integration.
  • It’s something that worked in all versions prior to 2024.10.x. It’s not a feature that this particular installation method was ever missing.
  • It’s something that ostensibly shouldn’t be affected by the installation method: the operation is the same regardless of installation type, since the code is internal to Home Assistant and the user performs everything from the UI, with no shell or command-line access required.
10 Likes

I don’t recall the Docker image being labeled as “you get to keep the pieces if something breaks” at the time I set it up… The website has evolved since then.

3 Likes

Thanks for creating this thread. I was wondering if this was just me.

From what I could gather from that other thread, there may be a few separate problems happening together. One was the switch from pip to uv, where uv was missing some needed functionality.

The other is/was a problem with the code that detects whether HA is running on bare metal or in a container. It looks for Docker environments but not for other container runtimes.

The third is that the custom integration code is running before the path to the dependencies gets fixed.
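To illustrate that second problem: the detection boils down to a marker-file check along these lines (a simplified sketch, not Home Assistant’s actual code):

```python
from pathlib import Path

def looks_like_container() -> bool:
    """Simplified sketch of Docker detection via the /.dockerenv marker.

    Docker creates /.dockerenv inside every container it starts; podman,
    containerd, and Kubernetes runtimes generally do not, so a check like
    this reports "bare metal" for those environments.
    """
    return Path("/.dockerenv").exists()

print(looks_like_container())
```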

So there are three workarounds:

  • set the PYTHONPATH environment variable to /config/deps in the configuration that loads the container, or
  • symlink /config/deps to /usr/local/lib/python[installed python version]/site-packages, or
  • create the file that the virtual-environment check is looking for (/.dockerenv).
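For the first workaround, if you start the container with docker compose, the variable can go straight into the service definition — a minimal sketch (image tag and volume path are placeholders for your own setup):

```yaml
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    environment:
      # point Python at the directory where HA downloads integration dependencies
      - PYTHONPATH=/config/deps
    volumes:
      - ./config:/config
    restart: unless-stopped
```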

I had been using the PYTHONPATH workaround which worked to fix frigate, but there were other custom integrations that were still misbehaving and when I upgraded to 2024.12, all hell broke loose with the custom integrations.

I ended up removing that workaround, and for the dependencies that were missing, I just opened a shell into the container, pip installed them, and soft restarted HA (didn’t kill the container). I imagine things will break again if the container terminates and is respawned.

For the longer-term workaround, I think I’m going to go with the last option since that seemed to work for everyone in that thread.

1 Like

That workaround worked for me – thankfully the Docker image uses s6, which allows us to provide pre-init scripts that run before the entrypoint.

Create this script and put it in /etc/cont-init.d (you’ll need to create that folder):

#!/bin/sh
# create the marker file Docker itself would normally provide,
# so HA's container detection succeeds under other runtimes
touch /.dockerenv
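If you’re running plain Docker, one way to get the script into /etc/cont-init.d is to bind-mount a host folder containing it — a sketch, assuming you saved the script on the host as ./cont-init.d/10-dockerenv.sh and made it executable so s6 will run it:

```yaml
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./config:/config
      # scripts in here run before the entrypoint
      - ./cont-init.d:/etc/cont-init.d:ro
```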

If you’re using kubernetes, you could create a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: homeassistant-pre-script
  namespace: homeassistant
data:
  pre-script.sh: |
    #!/bin/sh
    touch /.dockerenv

Then after applying it, mount it in your deployment:

          volumeMounts:
            - mountPath: /etc/cont-init.d/
              name: pre-script
              readOnly: true
      volumes:
        - name: pre-script
          configMap:
            name: homeassistant-pre-script
            defaultMode: 0755
1 Like

Another way in kubernetes is to just mount an emptydir

        volumeMounts:
        - name: docker-env-empty
          mountPath: /.dockerenv
          subPath: .dockerenv
      volumes:
      - name: docker-env-empty
        emptyDir: {}
2 Likes

Without knowing much Python or really understanding what is going on with the libraries, the following fixed the issue for me on Podman after an HA restart. Perhaps a similar workaround can be applied on Docker:

podman exec -it homeassistant bash -c "cp -r /config/deps/* /usr/local/lib/python3.13/site-packages"

Replace the container name “homeassistant” with your container’s name.
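Note the python3.13 part of that path changes whenever the image’s Python is upgraded; rather than hard-coding it, you can ask the interpreter inside the container for its own site-packages directory (a small sketch — run it via podman exec or docker exec with the container’s Python):

```python
import sysconfig

# "purelib" is the install directory for pure-Python packages
# (the site-packages path on this interpreter)
print(sysconfig.get_paths()["purelib"])
```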

1 Like

First, I know little about Python, however:
I had similar issues, and googling the errors indicated that some of the Python packages upgraded in 2024.12 needed their dependencies built against the upgraded package, so even though they existed, they errored out. I removed the packages in my activated venv (for me it was pandas and numpy), then reinstalled them with the --no-cache-dir option to pull in the proper binaries, and rebooted. It worked for me, YMMV.

Nice, I thought about using emptyDir but didn’t think it would work, as the file wouldn’t exist.

I used:

    - hostPath:
        path: /tmp/.dockerenv
        type: FileOrCreate
      name: dockerenv

But yours is nicer.

It’s strange that setting PYTHONPATH in the container environment doesn’t work for you, as this was the simplest fix for me. Since I was already using systemd to manage the container, I was able to just add the additional environment variable to the command (-e PYTHONPATH=/config/deps) and restart the service.
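Setting it works because Python prepends every PYTHONPATH entry to sys.path, so modules HA has downloaded into /config/deps become importable in a freshly started interpreter. A quick standalone way to see the effect (a sketch, not HA code):

```python
import os
import subprocess
import sys

# launch a child interpreter with PYTHONPATH set, as the container runtime would
env = dict(os.environ, PYTHONPATH="/config/deps")
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.path)"],
    env=env, capture_output=True, text=True,
).stdout

# the directory is added to sys.path even before it exists on disk
print("/config/deps" in out)  # → True
```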

Like you, I had previously been messing about with pip installing the ‘missing’ modules from a container console, but as you noted, the second you restart the container those changes are lost.

I’m not happy that the OFFICIAL homeassistant container is broken (not sure how the breaking of an official image is somehow being spun as a user issue …), but setting the environment variable is, for now, a reasonable workaround. More reasonable to me than post-startup scripts or copying files from location to location at container boot, etc. Plus it has the advantage of working even when installing additional integrations, without the whole container needing to be restarted.

The supplied container image includes the OS, so no it’s not the end-user’s responsibility to manage the OS in this case. The supplied image should have a correctly configured and working OS and right now it does not.

There’s a PR here that’s pretty close to getting merged that I believe will address this, as it updates the “definition”, so to speak, of whether HA is running in a container or not by also checking for other container runtimes.

This also works when running in podman. It didn’t like the subPath, so I removed it and now it seems fine.

Why did the behavior change? I have been running HA in my kubernetes environment for 3 years. Never had an issue with it till recently. Was bringing my system up to date and it broke HARD. None of the custom_integrations install the requirements… this has worked for SOOO long. What changed? why? So frustrating.

For what it is worth, this looks to be resolved. I just removed my work around and upgraded to 2025.1, no issues yet, so it’s working for me anyway. Thanks to the team for resolving this!

1 Like