Power blackout and corrupted internal DB - new onboarding

Hello everyone,
I am quite new to this world and I know this thread is a little long; my apologies.

I started working with Portainer a few months ago and everything was working fine until yesterday, when I experienced a power blackout.
The SQLite database inside HA was corrupted, and the following message appeared in the HA log file:

“The system failed to verify that the sqlite3 database at //config/home-assistant_v2.db was shut down properly”.

The result was an automatic update to the latest version, and HA restarted straight into the onboarding page.

I tried to repair the corrupted database with sqlitebrowser without success; since the file can't even be opened, I can't run any PRAGMA statements against it.

After that, I initialized the system again, starting with the KNX entities from the HA page, and now I need to configure the KNX thermostat via the climate entry declared in configuration.yaml.

To my disappointment, after the reboot I can't see the thermostat in my dashboard, and I don't understand why.

So I moved on to another point: inside the same HA stack I added a MariaDB container and Adminer. The goal was to stop using SQLite in this project and use an external supported database instead. Pinging the MariaDB container from the HA container by IP works, but I don't see anything in the log files apart from my own intentional login error as administrator, and through Adminer I can access the schema successfully.

This is my configuration.yaml:

#########

    Loads default set of integrations. Do not remove.
    
    default_config: 
    Load frontend themes from the themes folder

    frontend:
      themes: !include_dir_merge_named themes

    automation: !include automations.yaml
    script: !include scripts.yaml
    scene: !include scenes.yaml

    knx:
      climate:
        - name: "Termostato1"
          temperature_address: "4/0/7"
          target_temperature_state_address: "4/0/5"
          setpoint_shift_address: "4/0/4"
          setpoint_shift_state_address: "4/0/6"
          target_temperature_address: "4/0/59"
          humidity_state_address: "4/1/2"
          setpoint_shift_max: 2
          setpoint_shift_min: -2
    
    recorder:
      purge_keep_days: 30
      db_url: "mysql://homeassistant:[email protected]/ha_database?charset=utf8mb4"

#############

172.25.0.3 is the IP of the MariaDB container.
172.25.0.2 is the IP of the HA container.

In the previous HA version I had Termostato1 in a specific badge/card; now it's declared but not visible in the dashboard.

Before writing here I tried renaming configuration.yaml with a .old extension; the system then recreated a basic configuration.yaml.

    #### Portainer stack editor: HA, MariaDB, Adminer
    services:
        homeassistant:
            container_name: homeassistant
            image: "ghcr.io/home-assistant/home-assistant:stable"
            volumes:
              - /Volume1/DockerAppsData/HomeAssistant/config:/config
              - /etc/localtime:/etc/localtime:ro
            restart: unless-stopped
            privileged: true
            network_mode: host

        db:
            image: mariadb
            restart: always
            environment:
              MYSQL_ROOT_PASSWORD: ***
              MYSQL_DATABASE: ha_database
              MYSQL_USER: homeassistant
              MYSQL_PASSWORD: ***
              PUID: 1000
              PGID: 1000
            volumes:
              - /Volume1/DockerAppsData/mariadb:/etc/mysql/conf.d
            ports:
              - 3306:3306

        adminer:
            image: adminer
            restart: always
            ports:
              - 8180:8080
    #####

Could someone help me understand how to get my HA Docker system working properly again?

Corruption of the SQLite database should only lead to recorder data being lost, i.e. no history of entity states. A fresh install and onboarding is quite an unexpected response.

The config you show seems to have bad YAML syntax: what look like comments are missing a #. So if this is not a copy error, the config cannot be parsed, which might lead HA to behave like a fresh install.
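
You can verify this before restarting; a minimal sketch, assuming the container is named homeassistant as in your stack:

    # parse and validate /config/configuration.yaml inside the running container
    docker exec homeassistant python3 -m homeassistant --script check_config --config /config

If the parser chokes on the missing # signs, this should point at the offending lines.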

The normal response to an event like this would be to restore last night's backup. Do you have one?
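
One more thing I noticed in your stack: the db service only mounts /etc/mysql/conf.d, so the MariaDB data files in /var/lib/mysql live in the container's writable layer and disappear if the stack is deleted. A sketch of a persistent variant (the extra host path is my assumption; adjust it to your NAS layout):

        db:
            image: mariadb
            restart: always
            environment:
              MYSQL_ROOT_PASSWORD: ***
              MYSQL_DATABASE: ha_database
              MYSQL_USER: homeassistant
              MYSQL_PASSWORD: ***
            volumes:
              - /Volume1/DockerAppsData/mariadb:/etc/mysql/conf.d
              # assumed host path; /var/lib/mysql is where the mariadb image
              # stores its data files
              - /Volume1/DockerAppsData/mariadb-data:/var/lib/mysql
            ports:
              - 3306:3306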

Thank you, Edwin.

Let me copy my configuration.yaml one more time:

    # Loads default set of integrations. Do not remove.
    default_config:

    # Load frontend themes from the themes folder
    frontend:
      themes: !include_dir_merge_named themes

    automation: !include automations.yaml
    script: !include scripts.yaml
    scene: !include scenes.yaml

    recorder:
      db_url: mysql://homeassistant:***@172.25.0.3:3306/ha_database?charset=utf8mb4
      purge_keep_days: 30

    knx:
      climate:
        - name: "Termostato1"
          temperature_address: "4/0/7"
          target_temperature_state_address: "4/0/5"
          setpoint_shift_address: "4/0/4"
          setpoint_shift_state_address: "4/0/6"
          target_temperature_address: "4/0/59"
          humidity_state_address: "4/1/2"
          setpoint_shift_max: 2
          setpoint_shift_min: -2

This afternoon I tried using the port-forwarding mapping instead, with 192.168.10.244 (the private NAS IP), without appreciable results.
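
For reference, the variant I tried looked roughly like this (a sketch; *** stands for the redacted password, and reaching 192.168.10.244:3306 relies on the 3306:3306 mapping published by the stack above):

    recorder:
      # reach MariaDB through the port published on the NAS instead of the
      # container's bridge IP, which can change when the stack is recreated
      db_url: mysql://homeassistant:***@192.168.10.244:3306/ha_database?charset=utf8mb4
      purge_keep_days: 30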

I don't have a backup because I forgot to enable them, so I can't restore to a known point. I did say I'm a newbie, and this is the result…

The behaviour of my HA is not consistent.

Yesterday I deleted the Portainer stack together with its data and created a new stack with MariaDB. When the container started I was expecting a fresh HA onboarding; instead I found the last onboarding state from after the crash.

The configuration is not inside the container. If it were, it would disappear with each update. So spinning up a new container is like reinstalling the software while keeping the config.

First of all I’d make a copy of the entire config folder to save what you have, even if it may contain flaws.

Deleting the entire content of the config folder would equate to a fresh start.
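
In shell terms it would look roughly like this (a sketch, using the host path from your stack; adjust as needed):

    # stop HA first so nothing writes to /config while you work
    docker stop homeassistant
    # keep a copy of everything, flaws included
    cp -a /Volume1/DockerAppsData/HomeAssistant/config /Volume1/DockerAppsData/HomeAssistant/config.bak
    # a truly fresh start: empty the config folder, then start the container again
    rm -rf /Volume1/DockerAppsData/HomeAssistant/config/*
    docker start homeassistant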

First of all, thank you for your tips.

Yesterday I followed your suggestions and deleted the HA container and the config dir.

I re-pulled the HA image and recreated the container, and everything is back to normal. The dashboard now shows all the KNX sensors I had before, and I have enabled scheduled backups.

The last thing I want to investigate is the old corrupted DB: I'd like to know whether the old data inside it can still be extracted, but that is another story.
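
If I ever get to it, my understanding is that the sqlite3 command-line tool can sometimes salvage rows even when GUI tools fail to open the file. A sketch, assuming I still have a copy of the old file and a sqlite3 build recent enough (3.29+) to have the .recover command:

    # check how damaged the file is
    sqlite3 home-assistant_v2.db "PRAGMA integrity_check;"
    # salvage whatever is still readable into a fresh database
    sqlite3 home-assistant_v2.db ".recover" | sqlite3 recovered.db
    # on older builds, .dump is a fallback, but it stops at the first error
    sqlite3 home-assistant_v2.db ".dump" | sqlite3 recovered.db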
