This might be a stupid question - if external modules are loaded/installed dynamically by components, will said components work if there is no internet connection?

So, say a component uses an external module available as a PyPI project and includes it in its REQUIREMENTS or DEPENDENCIES list. According to the development documentation, we’re supposed to keep import statements within functions because “they’re loaded on-the-fly.” Now, does that imply:

  1. That the component isn’t usable without internet, since it has to load the module from PyPI each time the component is used

OR

  2. That the PyPI module is installed on the system hosting HA when the component is first installed, and imported dynamically from the host system (e.g. Linux)?

I know this might be a really stupid question because I have a feeling the answer is obviously scenario 2, but I figured I should ask anyway for any other newcomers that might be curious when reading over the development docs.

Definitely not a stupid question. I’d like to know the answer as well to gain a better understanding of how platforms handle dependencies.

Dependencies from PyPI are installed locally into your Python environment on the device hosting Home Assistant.

Modules that connect to external systems like Ring, Dark Sky, or SmartThings will fail to work when your internet connection dies. Only local integrations, like Z-Wave devices or devices reporting to an MQTT server on your LAN, will keep working while your internet connection is offline.

Does this occur every time Home Assistant is restarted or only upon initial installation?

In other words, is the dependency loaded upon installation and then never handled again or is it loaded after each restart?

So, I have a custom Tuya component that forgoes their Chinese cloud (which the official HA component currently relies on) and is capable of controlling the devices locally over Wi-Fi; it’s available on PyPI. After I initially install the custom Tuya component (which depends on that PyPI module), will it work without internet?

Only once, on your first start after an update (if needed) or a new install.

Once it’s installed, it’s hosted locally on your device and shouldn’t require an internet connection unless it actually needs to make internet requests.


@firstof9
Wait, if I run hassio does this still apply? I only ask because I just read that add-ons and such are just Docker instances, which pip install the needed modules whenever the add-on is used. Doesn’t that mean it’s fetching the module from PyPI each time, or is it still retrieving it from the core system (OS)? And a follow-up question: if the add-ons are able to use pip to access the system’s modules, why can’t a user pip install or sudo apt-get on a hassio installation? Is there a way to enable access to pip/apt at the system level?

So far as I know, the container will always be running in Docker unless you stop the add-on. In fact, the add-ons work independently of hass.io anyway. So if, for instance, you have stopped the Home Assistant container, you can still SSH in, or access a terminal, or use anything else you have installed as an add-on.

That didn’t really answer my questions…

Docker containers are self-contained and designed to encapsulate and isolate an application. They are not really intended for you to connect to and install software in. Even if you can attach to them, they are not all running Debian. For example, the core SSH add-on runs Alpine Linux, so you can use apk add, but that will install the package into the SSH Docker container, not the Home Assistant one.

If you need to add custom dependencies there is an addon that I think is supposed to do that.

http://<your.hassio.ip>:8123/hassio/addon/core_custom_deps

Although it appears not to be documented very well, so I don’t know how well supported it actually is.

In that case, it sounds like I should opt for a Hassbian install instead of hassio then, right? I know hassio is suggested for beginners, but I’m just thinking about being able to pip install modules so they’re available for custom scripts that would be called via the shell_command or command_line components. Does that make sense?

Since Docker containers do not have persistent storage unless configured to do so, yes, they’d re-download every time you started the add-on container. One would hope the people designing and distributing the add-on containers would include the dependencies in the image already.

This entire conversation has convinced me to just go with the hassbian install route.