Installing Home Assistant Supervised on a Raspberry Pi using Debian 12

Hello @Patrick010, so I simply re-run the supervised installer, follow each step from Step 1 through Step 4, and I should be good to go? Is there a risk that re-running the supervised installer can fail, or is it a basic operation that involves no risk at all? I would do it over SSH from a Terminal on macOS.

You won’t lose everything if you have a locally saved copy of your backup on your PC or Laptop.

Even if the backup won’t restore for some reason you can pull all the yaml files out of your config and manually restore them and then just reinstall all the add-ons you use.

But normally, if I just re-run the supervised installer, I should not have to re-install Home Assistant afterward? Everything should still be in its place after re-running the supervised installer? Correct? I just want to know whether I need to plan for a whole new HA setup.

Running the installer does reinstall Home Assistant - that’s what the installer does. But it won’t touch your data.

If I understand correctly, it means it’s the same as starting a fresh install, but the data remains? Which data remains - all add-ons with the configurations I made? Automations? Please clarify, I’m lost as to what re-running the supervised installer implies.

It means installing the software that runs HA again. All your configuration, settings and add-ons remain. It is totally non-destructive to your customisations.
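
For reference, re-running it is usually just a matter of downloading the current supervised package again and installing it over the top - roughly something like this (treat it as a sketch, the exact URL/file name can differ per release):

wget https://github.com/home-assistant/supervised-installer/releases/latest/download/homeassistant-supervised.deb
sudo apt install ./homeassistant-supervised.deb

The package just puts the Supervisor services back in place; the data directory (/usr/share/hassio, with your config, add-ons and database) is left alone.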

It is the most flexible type of HA install there is and it allows you to do the most - the person or people saying otherwise may not have the knowledge to support their own installation properly. I am not sure why people say “it is the worst version” - I would swear by it. Once you have learned enough, there is nothing you cannot do with it. Mine has been running like a top for a long time.

Keep in mind that with the RPI, however, people typically run into a scenario where the CPU decides to just “go on vacation” and HA stops functioning. Nothing appears in the logs. This used to happen to me about once every 15 days or so, and no known resolution exists yet. This automation resolved my issue - I have two helpers on a dashboard which let me easily select the day and time of the weekly reboot - it has been working perfectly for months now -

alias: "Host Reboot (Per Day and Time)"
description: ""
trigger:
  - platform: time
    at: input_datetime.time_for_weekly_reboot
condition:
  - condition: template
    value_template: >-
      {{ states('input_select.day_of_weekly_reboot') ==
      now().strftime('%a') }}
action:
  - service: hassio.host_reboot
    data: {}
mode: single
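
In case it helps anyone copying this: the two helpers it relies on can be created in the UI (Settings → Devices & Services → Helpers) or in YAML, roughly like below - the entity names simply mirror the ones used in the automation, and the select options need to match now().strftime('%a') (Mon, Tue, …):

input_datetime:
  time_for_weekly_reboot:
    name: Time for weekly reboot
    has_date: false
    has_time: true

input_select:
  day_of_weekly_reboot:
    name: Day of weekly reboot
    options:
      - Mon
      - Tue
      - Wed
      - Thu
      - Fri
      - Sat
      - Sun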

You are my hero! I’m getting so tired of reading that bullshit everywhere. Supervised gets punished to death here daily, just by people with no clue.
It deserves much more love than it gets around here. Sure, it’s not suitable for a noob, but by definition it is NOT BAD. Period.
It also bothers me that it does not get any defense from the mods/admins here.

So one more time: Supervised is just as good as all the other install methods. No better, no worse. It is just meant for skilled users. In their hands, it is very good.


@donparlor. Unless you are connecting hardware (wiring etc.) the wrong way, there is nothing you can break that you cannot fix, so go ahead and be a bull in a china shop, that is how people learn! There is a lot of information all over the web (which is USUALLY correct :slight_smile: ). Therefore, if you are worried, take a step back and spend some time figuring out how to do a full backup of everything you have (and test restoring it to make sure that it does work). Then whenever you try to make any dramatic changes - GO FOR IT! You can always restore and try again! It sounds tedious and it is (and you might have to buy a little more hardware such as an additional SSD, etc.), but that is the safest way to learn!

I work with computers for a living, so I am lucky to have the aptitude that if I bang my head on the wall enough times I can actually break through any issue (so to speak). I set up HA Supervised and a bunch of other daemons etc. on my RPI, having come from a software background of only writing Windows-related real-time low-latency trading systems on Wall Street (years ago). So I used this to teach myself Linux and Python. I would not suggest this for people who do not have the time and patience (or aptitude - not to imply that you don’t, I am sure you do). I went slowly - broke a lot of things - totally trashed them - but I started with other RPI projects first and learned how to back everything up as well as make copies of the SD card (you really should use an SSD instead; the RPI is glacial with an SD card, especially on bootup). Then, as I slowly built it into something useful (with daemons that turn fans on and off when/if the CPU gets hot, etc. - mine doesn’t any more, as the OS is now 64-bit and I have a nice heavy heat-sink type of case), whenever adding something trashed the thing, at least I could restore from a backup instead of starting from scratch.

I didn’t even know about HA at first. I had purchased a weather station (AmbientWeather WS-2902C) and discovered weewx to additionally transmit weather data all over the place (to the National Weather Service, Weather Underground, and Awekas, which is my favorite UI for my weather station although the web site is in Germany lol!). All of the above runs on the same RPI 4, which typically sits at about 5% CPU :slight_smile: outside of HA - even with HA running. I was just lucky that there is an integration for the AmbientWeather weather stations - it was probably better that I didn’t know about HA back then or I would have dived right in (maybe it would have been fine, I don’t know)! When I started with HA about 1 1/2 years ago it was like the wild west; HA Supervised was not very stable, but everything has improved dramatically - and I mean really dramatically - since. So those are my two cents, hope it is worth more to you than you paid me!
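
One practical tip if you go that route: with the Terminal & SSH add-on (or the ha CLI on the host) you can kick off a full HA backup right before any risky change - something along these lines (the name is just an example):

ha backups new --name "before-big-change"
ha backups list

Then copy the resulting file out of the backup folder (e.g. via the Samba share add-on) so a copy also lives somewhere off the Pi.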

FYI here is my current setup:

System Information

version core-2023.5.3
installation_type Home Assistant Supervised
dev false
hassio true
docker true
user root
virtualenv false
python_version 3.10.11
os_name Linux
os_version 5.10.0-23-arm64
arch aarch64
timezone America/New_York
config_dir /config
Home Assistant Community Store
GitHub API ok
GitHub Content ok
GitHub Web ok
GitHub API Calls Remaining 5000
Installed Version 1.32.1
Stage running
Available Repositories 1352
Downloaded Repositories 25
AccuWeather
can_reach_server ok
remaining_requests 27
Home Assistant Cloud
logged_in false
can_reach_cert_server ok
can_reach_cloud_auth ok
can_reach_cloud ok
Home Assistant Supervisor
host_os Debian GNU/Linux 11 (bullseye)
update_channel stable
supervisor_version supervisor-2023.04.1
agent_version 1.5.1
docker_version 24.0.1
disk_total 915.4 GB
disk_used 14.9 GB
healthy true
supported true
supervisor_api ok
version_api ok
installed_addons Duck DNS (1.15.0), Mosquitto broker (6.2.1), Samba share (10.0.1), AdGuard Home (4.8.7), Log Viewer (0.15.1), Home Assistant Google Drive Backup (0.110.4), File editor (5.6.0), Terminal & SSH (9.7.0), Core DNS Override (0.1.1), Studio Code Server (5.6.0)
Dashboards
dashboards 5
resources 17
views 30
mode storage
Recorder
oldest_recorder_run April 22, 2023 at 4:43 PM
current_recorder_run May 22, 2023 at 8:19 AM
estimated_db_size 1286.62 MiB
database_engine sqlite
database_version 3.40.1

100% agree. By far the most flexible method, even though the devs seem to love to do more and more things to flag it as unhealthy.


Hi Vincent. Tbh, I’m a bit confused about what you’re saying and asking. The only ‘problem’ you have is the CGroup warning, right? Everything else is working peachy? So when the warning is fixed you’d be happy? If so, all you have to do is fix the cmdline.txt on your Pi and the warning will disappear…

Good evening @KruseLuds, @DavidFW1960 and @Patrick010,

Thanks for all your answers. Let me cover your comments first.

@KruseLuds,

I agree that HA-Supervised is the most flexible type of HA. You’re probably right that the person or people saying otherwise may not have the knowledge to support their own installation properly. I chose to go with HA-Supervised because of the flexibility and because I wanted to make full use of my Raspberry Pi 4B 8GB by building a parallel NextCloud server. I haven’t done it so far, but I expect HA-Supervised to allow me to. As for heating issues, I am using an Argon One case and it’s fairly stable. Also, I’m impressed by how supportive the HA community is online, and I’ve come to appreciate how much information is out there if you take the time to read and get into dialogue with others.

@BebeMischa, it’s true that HA-Supervised is not suitable for a noob, and I certainly was one when I started. But, probably like all of you here, I’m quite persistent and willing to learn, and I have accomplished quite a lot in the HA universe so far.

@Patrick010, my main concern was the 2-3 warnings I had from HA about the journal and CGroup issues. The journal one is solved; only the CGroup issue is left. I followed the supervised installer steps up to Step 3 (yes, I managed to update OS-Agent to the latest version - I’m proud of myself). Only Step 4 is missing: Install the Home Assistant Supervised Debian Package. I see you are suggesting fixing cmdline.txt, so I will try that first. If it does not work, I will go with Step 4 and install the Home Assistant Supervised Debian package. If I must go through Step 4, I just hope I won’t have to re-install from a backup because, as I said before, I once tried to restore from a backup but my 9.5 GB backup file never fully deployed, and I concluded that it could mean a dead end for me if I ever have to restart from a backup…

I’ll keep you updated! Thanks!

Here is my update. I accessed cmdline.txt
$ sudo nano /boot/cmdline.txt

Here is what was inside my cmdline.txt

cgroup_enable=memory cgroup_memory=1 systemd.unified_cgroup_hierarchy=0

After restarting, I tested which CGroup version I had by using:
$ sudo docker info | grep -i cgroup

It returned:

Cgroup Driver: systemd
Cgroup Version: 2
cgroupns

Is there something missing in my approach? Something wrong?

Thanks for your help

@donparlor don’t bother with the cmdline.txt, I don’t even have that file on my machine. Install the Debian package (delete the old one first, because otherwise it will download one with (1) on the end and then you would probably just be executing the old file). I have done this many times and just restore the config from the latest HA backups - restoring from here ( GitHub - sabeechen/hassio-google-drive-backup: Automatically create and sync Home Assistant backups into Google Drive ). If you did not back up your entire SD card or SSD onto another duplicate one (not the same as the backup of just the HA config - link above - but the whole storage medium, OS and all - which you should do as well), which I do, then you can back it up first (even if it’s an SSD) by booting up from plain Raspbian on an SD card, plugging the items to be copied into the USB ports and just using the “SD Card Copier” app… - not so elegant but it works. Whenever I do a backup, beforehand I set HA so that it will not start up on boot, by first issuing these commands:

sudo systemctl disable hassio-supervisor.service
sudo systemctl disable hassio-apparmor.service
sudo ha host shutdown

Then when it comes back up - from telnet you would just have a prompt and HA does not start up.

Then after I have backed it up, I reboot from the SSD to a telnet command prompt and then issue these commands, which will turn HA back on at bootup:

sudo systemctl enable hassio-supervisor.service
sudo systemctl enable hassio-apparmor.service
sudo reboot now

I do the above so that when I am working with backups, the little RPI CPU isn’t overwhelmed by HA starting up on reboot, and I can do other work on the storage first without interruption if I prefer. Then I issue the second set of commands and voila - HA is back up and running.

I did look into doing a full 1TB SSD backup directly without the “SD Card Copier”; it took a huge number of steps and taught me a lot - but it wasn’t worth the hassle. Here are my old notes to myself on how to do that - I am sure others on this thread will laugh at me, but I was a noob when I did that, and that is how you learn, so…

Here are my old instructions to myself, which I no longer use (they may be slightly incorrect and are a major hassle - including the fact that Raspbian shrinks and then re-expands the image, and I learned how to do that myself manually to back up a 1TB SSD onto an SD Card, etc.), but here you can see the path I used to take (and you may learn some commands along the way):

(I run a weewx daemon on my RPI outside of HA Supervised as well, so you can ignore that part)

How to completely backup (clone) the RPI SSD:

  1. Optional - only if you want to do step #4, check the source SSD (the one to be backed up) for corruption.
    Create a clean SD Card for bootup on the RPI so we can verify the SSD has no corruption (it has to be unmounted to
    be able to do any checks/repairs to it):

    a. In Windows, use “RPI Imager” to create a simple RPI OS SD Card (don’t forget to tell the RPI Imager to create
    an SSH password so you can get into it as you are going headless!)
    b. Allow the RPI to run, we will be shutting it down later -

  2. Required - so you do not crash your whole network as it needs DNS!
    Change the network DNS back to NOT relying on the RPI.

    a. Go into Omada Controller dashboard:
    https://#############################
    b. Under Settings->Wired Networks->LAN -
    Select edit and then change DNS servers from
    **** Recheck adguard, they changed the second number on you once ****
    “Manual”:
    Primary: 192.168.0.34
    Secondary: 172.30.32.1
    to
    “Auto”
    c. Completely shut down and then restart MASTER (basement PC) so you are SURE it is not relying on
    the RPI for DNS.

  3. Change the default PI settings on the current SSD to NOT autostart Weewx and Home Assistant (in preparation for
    backing it up), so that when any backup is first restored, it can be started up slowly and carefully.

    a. Go into the RPI using VNC. You must shut down both Weewx and Home Assistant Supervised, AND stop them from
    starting up on reboot of the pi - so that when the image is restored to the other SSD you have time to
    restart things in an orderly manner. To execute all the below you can cut and paste this into a command
    window on the RPI:
    # Weewx:
    # To Stop weewx gracefully during current session:
    sudo /etc/init.d/weewx stop
    # To stop weewx from autostarting:
    sudo /lib/systemd/systemd-sysv-install disable weewx
    # Home Assistant:
    # To stop Home Assistant Supervisor gracefully during current session:
    #? sudo systemctl stop hassio-supervisor.service
    #? sudo systemctl stop hassio-apparmor.service
    # To stop Home Assistant Superviser from Autostarting:
    sudo systemctl disable hassio-supervisor.service
    sudo systemctl disable hassio-apparmor.service
    # sudo systemctl daemon-reload
    b. There are still Home Assistant related processes on the RPI (but upon another reboot this doesn’t happen).
    Therefore reboot the pi. Make sure weewx and home assistant are NOT running at all (check these logs - run
    them one at a time):
    sudo tail -f /var/log/syslog
    sudo tail -f /var/log/weewx.log
    c. Once you have made certain of the above, pick logout/shutdown from the menu. Only after the network light
    stops flashing altogether, then power the RPI off completely.

  4. Optional - check for SSD corruption before creating a backup of it -
    (1.5 Hrs - mostly for tail end of 4g.) We must now verify the file system on the SSD has no corruption as
    that is what was causing backup problems in the past. As the PI freezes or is rebooted and/or file systems
    are randomly disconnected from the PI, this causes corruption. We also must do this without having the SSD
    mounted! So -

    a. Gently power down the PI (if running) and ensure all storage is removed from it.
    b. Put the SD card from #1 into the RPI only, boot it up with ONLY that connected. It will be slow the
    first time. Be patient, then go into SSH and use sudo raspi-config to set up VNC.
    c. Then, go in with VNC. There will be updates, install them all.
    d. Shut down the RPI gracefully then - Reboot it - then it will be faster coming up the second time.
    e. Only after it is booted up and you are back into VNC, attach the SSD (the one to be checked/‘fixed’) to the USB3 port.
    f. Open a command prompt on the RPI, and type:
    sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT
    This will give you an output that shows you the name of the instance of the sd card - for example:
    NAME FSTYPE SIZE MOUNTPOINT
    sda 931.5G
    ├─sda1 vfat 256M /media/pi/boot
    └─sda2 ext4 931.3G /media/pi/rootfs
    mmcblk0 119.1G
    ├─mmcblk0p1 vfat 256M /boot
    └─mmcblk0p2 ext4 118.8G /
    g. In the above case the “sda” is the SSD you had just inserted. The verification is done by partition,
    and you can see the two partitions. To check and repair the vfat partition, (with the names provided in
    the above example), type
    sudo fsck.vfat -p /dev/sda1
    (if it is shown as mounted - it should not be because the OS sees this as bootable but won’t mount it
    because it has already booted - you could issue the command ‘sudo umount /dev/sda1’)
    Then, check and repair the ext4 partition, (with the names provided in the above example), type
    sudo fsck.ext4 -cDfty -C 0 /dev/sda2
    (note this one would most likely show as mounted, as it is not a bootable partition, but if it is shown
    as mounted you can issue the command ‘sudo umount /dev/sda2’ and then retry the above)
    h. When completed, gracefully shut down the RPI and when it is shut down, remove power. ONLY THEN - remove only
    the sd card - leave the SSD in.

  5. Optional double-check to make sure the SSD is still working as expected if there were changes made to the file
    structure (to correct errors) in #4 -
    (15 Minutes) Make sure the SSD is still in functioning shape (no file system issues) and then ready (no Weewx or
    HA on auto-restart) for a backup!
    a. Boot up the PI with only the original SSD attached as storage.
    b. Launch VNC and go to a command prompt and issue these commands:
    sudo /lib/systemd/systemd-sysv-install enable weewx
    sudo systemctl enable hassio-supervisor.service
    sudo systemctl enable hassio-apparmor.service
    c. Reboot the RPI. Go into VNC again -
    d. Check the logs, running each of the commands below separately:
    sudo tail -f /var/log/syslog
    sudo tail -f /var/log/weewx.log
    e. Then check your Home Assistant Web Site:
    https://##########
    f. Once you are Ok with the functioning RPI with corrected filesystem - issue these commands:
    # To stop weewx from autostarting:
    sudo /lib/systemd/systemd-sysv-install disable weewx
    # To stop Home Assistant Superviser from Autostarting:
    sudo systemctl disable hassio-supervisor.service
    sudo systemctl disable hassio-apparmor.service
    g. Then issue these commands -
    # To Stop weewx gracefully during current session:
    sudo /etc/init.d/weewx stop
    # To stop Home Assistant Supervisor gracefully during current session:
    sudo systemctl stop hassio-supervisor.service
    sudo systemctl stop hassio-apparmor.service
    h. AS SOON AS ‘g’ IS DONE (HA will still try to restart lol), through the vnc main menu -
    select “log out” down and then when the pop up appears, select “shutdown”, then ONLY AFTER
    all the lights on the PI stop blinking, then UNPLUG the PI.

  6. Required (Takes about 30 minutes)
    a. Reboot the PI with ONLY the MicroSD Card installed.
    b. Plug the T3 SSD and the T7 SSD both into the USB 3.1 ports
    c. Use “SD Card Copier” to make a backup from the T7 to the T3.

  7. Required
    (1Hr, 15Min -to- 1Hr, 45Min) Create an image from the SSD in Windows.
    a. Put the OLD SSD onto the Windows Machine. Make sure you check the drive letter it is assigned.
    (Make sure any Windows desktop widgets that you might have that monitor drive sizes are NOT
    running. If you still have issues, come back to this and uninstall any Samsung SSD drive software
    on your windows machine that might be interfering with the RPI SSD’s we are manipulating as
    well, and retry the below steps (ugh!))
    b. Launch win32 disk imager in windows (can’t find any newer software!):
    i. “image file” should be the name of .img on R: (a drive with more than one gig free space)
    NOTE - this utility DOES NOT put the “.img” extension on the file name for you. So, you will have to add
    that at this time,
    OR rename the file just before step 13d below…
    ii. “Device” drive letter should be the drive letter of the SSD that was mounted from the RPI now on the PC
    iii. Make sure “read only allocated partitions” is checked
    iv. click “read” - it will take about an hour and a half but the .img file will finally be created.

  8. Required
    As we are done with the old SSD for now, we must get the RPI back up and running, publishing weather
    station data etc! Delays are BAD! Therefore -

    a. Do steps 5a through 5e. When finished,
    b. then do step 2b to adjust the dns…
    c. then just leave it up and running for now, and
    d. reboot your PC to make sure the DNS is Ok…

  9. Required (this will take only about 15 minutes)
    Time to shrink that big image - from 1 TERABYTE to about 34 GB!

    a. Launch Windows Powershell as admin.
    b. type:
    ubuntu
    …and you will be at the ubuntu prompt!
    root@master:~#

Note, when you are in Ubuntu (use this later), keep in mind that if you type this it will launch an Explorer window
allowing you to just drag and drop files anywhere you want:
explorer.exe .

c. In this case I had downloaded the pishrink utility from github and just copied the pishrink.sh file to the
directory (root of “r” where I keep my RPI images for shrinking before deletion). So, at the ubuntu prompt,
then type:
cd /mnt (to access the mount points - drive letters on your machine!)
ls (will show you your drive letters!)
cd r/ (just gets you to the root directory of drive “R:”)

d. The image file I created that was already there was 2022-05-01_RPI.img, and now I had pishrink.sh in the same
directory (see number 10), so from a command prompt (which showed as ‘root@MASTER:/mnt/r’) just type:

( I only use this one, not the second version:)
./pishrink.sh <imagefile>.img

(Note: for the example below you must have enough disk space for both image files - the larger one is deleted only after the smaller one is created, and the .sh app assumes you need disk space equal to twice the size of the larger original file (which in my case is too much) -

./pishrink.sh <imagefile>.img <newimagefile>.img

  10. Required (this will take 45 minutes - 1 hour)
    Restore the shrunken image onto a smaller SD card… We need to take the shrunken image and, using the Windows Raspberry PI Imager, put it onto a smaller SD card (at least 64GB - a little more than the space that would be needed when restoring from (and expanding) the .img file). (You must use a MUCH smaller card than the 1TB SSD because there may be file corruption, and for some reason Raspberry PI Imager cannot handle creating 1TB images on 1TB drives very well at all.) So, on Windows, run the Raspberry PI
    Imager App (on the PC):
    a. “Operating System” - select “Use Custom” and then the .img file created in step 10.
    b. Select “Choose Storage” and select the MUCH SMALLER SD Card than the 1TB SSD’s you are using.
    c. click ‘write’
    d. When done, properly dismount the new backup SD Card from windows
    e. Rinse and repeat for the other cards - always have more than one (do at least three times!)

  11. Required
    Make sure you can boot up on the new small SD Card (do not do anything else with it). This will take a couple
    of reboots - and it is VERY slow coming up the first time. The only need for the reboots is to verify it is
    usable and to make sure it starts up faster on subsequent tries.

Hope that helps

If you followed the installation guide as described all the way up in this topic, then you’d have found out that cmdline.txt on a Pi 4 is located in /boot/firmware. My /boot/firmware/cmdline.txt looks like this:

console=tty0 console=ttyS1,115200 root=LABEL=RASPIROOT rw fsck.repair=yes net.ifnames=0 rootwait systemd.unified_cgroup_hierarchy=false
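
After changing that, reboot and re-run the same check as before; with systemd.unified_cgroup_hierarchy=false actually applied, Docker should report cgroup v1, which is what the Supervisor expects:

sudo reboot
# once it's back up:
sudo docker info | grep -i cgroup
# should now show: Cgroup Version: 1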


It worked!! Yes, you’re absolutely right, I wasn’t looking in the correct location! With you pointing out the correct folder I managed to correct my cmdline.txt and I’m finally running HA-Supervised without any warnings! I’ve learned how important it is to double-check file locations, because this tutorial is specifically for the RPi 4. I’m sorry if I’m not as good a student, @Patrick010, as you might expect from someone bumping into this conversation, but I’m learning and will remember the lesson.

What a teacher you are @KruseLuds! Your explanations are golden!

I am already running this add-on GitHub - sabeechen/hassio-google-drive-backup: Automatically create and sync Home Assistant backups into Google Drive and will certainly proceed with a full SSD backup this summer. I feel at risk right now because I once tried to restore from a backup after a fresh install and it never went through.

Thanks for sharing this detailed tutorial, I’ll keep this in my contingency plan folder!


You’re very welcome @donparlor. I think we’re all in the same boat here, it’s unrealistic to expect people to completely read through lengthy topics such as this one :wink:

You are welcome @donparlor !

@Patrick010 I agree - I actually wrote that for myself, but we should all share (and “give back”); it just increases the educational enrichment for all of us! It’s actually much longer, with many other steps I tried that did not work, but I kept all of it for reference (I did not share that part lol!).

To shrink a Debian .img file on my Windows PC, for example (which, @donparlor, you will find in that list of crazy steps), the only way I could find to do it was with a Linux script (pishrink), by changing my PC’s BIOS to support virtual machines and then installing Ubuntu on it:

Keep the above link. This stuff gets crazier and crazier!

I’d be totally lost without all of the resources online -

Thanks for the correction @Patrick010, I was just looking in the incorrect directory as mentioned above.