Question on the difference between running hass -c and systemctl

So I have a weird difference between running the systemd service and starting hass from the command line.
Thinking it was all down to user rights, I tried a lot of things, but I see no difference in behavior after changing things around.
Main issue:
When starting the systemd service, it fails to start integrations like Synology DSM and Pi-hole. But when I start it with hass -c /homeassistn etc (same dir as the systemd service uses), these integrations all work fine. I chowned and chmodded the hell out of all the dirs, but seemingly to no avail. I even tried running the systemd service as root, and it still doesn't work.

Anybody have an explanation or ideas of what could be wrong here on my Raspberry Pi installation that causes this difference?

A good start would be posting your setup and log files, as per “How to ask a good question”.

i.e.
O/S and version
using Docker? Which one, what version, what config?
systemd service config file
log files (syslog and Home Assistant)
how was HA installed, and by what user?
file perms?

etc
etc
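For context, a venv-based Home Assistant unit usually looks roughly like the sketch below (the user, paths, and names here are assumptions, not your actual file). The things it pins down, User=, WorkingDirectory=, and a stripped environment, are exactly what differs from an interactive shell, and are the usual source of "works from the CLI, fails under systemd":

```ini
# Hypothetical unit for illustration only -- please paste your real one.
[Unit]
Description=Home Assistant
After=network-online.target

[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi
ExecStart=/srv/homeassistant/bin/hass -c /home/pi/.homeassistant
Restart=on-failure

[Install]
WantedBy=multi-user.target
```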

Made a new Python venv as user pi… no issues anymore. I just don't get how things differ; I think I'll create a new venv with each update from now on :slight_smile:
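For anyone else hitting this, the rebuild boiled down to something like the following (the paths are illustrative, and the actual install step is shown commented out because it is a large download):

```shell
# Sketch of rebuilding the venv; run as the same user the service runs as.
VENV_DIR="$(mktemp -d)/hass-venv"        # in practice e.g. /srv/homeassistant
python3 -m venv "$VENV_DIR"              # fresh, self-contained interpreter
"$VENV_DIR/bin/python" --version         # sanity check: interpreter lives in the venv
# "$VENV_DIR/bin/pip" install homeassistant   # the actual (large) install step
```

Then point the systemd ExecStart at the hass inside the new venv.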

Honestly,
run the Home Assistant Docker image inside a local container.

To do this,

  • install docker-compose
  • create a simple docker-compose.yaml that defines the service in which the Docker Hub home-assistant image will run
  • issue the command: docker-compose -f PATH_TO_YAML_FILE up -d

JOB DONE!
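A minimal compose file along those lines might look like this (the image tag and paths are just examples, adjust to your setup):

```yaml
# docker-compose.yaml -- minimal sketch
version: "3"
services:
  homeassistant:
    image: homeassistant/home-assistant:stable
    volumes:
      - ./config:/config        # HA configuration kept on the host
    network_mode: host          # simplest way to let HA see the LAN and devices
    restart: unless-stopped
```

Upgrading is then genuinely a one-line change: bump the image tag and re-run docker-compose up -d.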

I really don't understand why std users just don't use Docker. So easy and self-contained: no need to set up a Python venv or worry about your OS config. And to keep up with the latest home-assistant, a one-line change to that yaml file does it all.

I don't consider myself a Socially Transmitted Disease, but I'm no Linux expert either, hence my initial question.
On your Docker suggestion: I have no experience at all with Docker, I do want some customisations in my installation, and I didn't see Docker as any easier than just fumbling around in a Python venv.
A few things that held me back from Docker:

  • Z-Wave USB stick, which should be addressable
  • MariaDB on a separate server
  • an NFS mount to store my configuration and logs, so I don't hammer the SD card and always have my config ready if I ever redo the whole install.

All of it should be workable with Docker, but I didn't see Docker as being as beneficial as you are picturing it.

A Docker container is, in the context of HA, just a box inside which HA runs isolated from your host system, with all the dependencies it needs. So it isn't affected by what you do on the host or in other environments, well, to some degree.

Further, anything HA detects or is configured to work with behaves just the same as if you were running inside a fiddly Python venv. Z-Wave included.

Containers communicate with your host and other containers via ports on a network, either your host network or virtual ones created by the containers and shared between them. So, crucially, anything that uses ports for access, MySQL or MariaDB etc., can be run in another container, isolated from both the HA container AND your host. Beautiful!!

Lastly, VOLUMES are virtual disks (file systems) that can be used for individual container storage or for storage shared between containers. So, for example, you can mount NFS shares in those volumes just as you would ‘normally’. Brilliant.
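To tie that back to the three concerns above, here is a hedged sketch (the device path, NFS server address, and export path are all assumptions): a Z-Wave stick passed through with devices: and an NFS-backed named volume for /config:

```yaml
# Hypothetical compose fragment: USB passthrough plus an NFS-backed volume.
services:
  homeassistant:
    image: homeassistant/home-assistant:stable
    network_mode: host
    devices:
      - /dev/ttyACM0:/dev/ttyACM0    # Z-Wave stick; check your actual device path
    volumes:
      - ha_config:/config            # NFS-backed named volume, defined below

volumes:
  ha_config:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,rw"
      device: ":/export/homeassistant"
```

MariaDB on a separate server needs nothing special at all: the recorder just points at its host and port as usual.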

So Docker is the panacea for conflicting software, compatibility issues, fiddly venvs, and evolving configuration changes where a ‘change here’ breaks ‘something there’.

We’ve all been there!

Just run key apps or services, in their own docker containers.

I've simplified things here, in layman's terms, but that is the essence of why we use Docker.

Like all great inventions throughout history, the conceptual solution is so simple!

Google is your friend!

Then, when you have a basic awareness, come back here for fixes to your attempts. I'd be happy to help.