Database Corrupted

Hello to all,

I'd like some help identifying why, since the latest update to Core 2024.2.2, I keep finding copies of corrupted databases in my config folder, as in the screenshot below:
[screenshot: copies of corrupted database files in the config folder]

The screenshot shows only two because I deleted the rest; before deletion there were about ten corrupted files, which makes me believe this happens regularly.

I checked the Core logs and found the following:

Logger: homeassistant.components.recorder.core
Source: components/recorder/core.py:926
Integration: Recorder (documentation, issues)
First occurred: 08:57:18 (1 occurrences)
Last logged: 08:57:18

Unrecoverable sqlite3 database corruption detected: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely) (sqlite3.IntegrityError) FOREIGN KEY constraint failed [SQL: INSERT INTO states (entity_id, state, attributes, event_id, last_changed, last_changed_ts, last_updated, last_updated_ts, old_state_id, attributes_id, context_id, context_user_id, context_parent_id, origin_idx, context_id_bin, context_user_id_bin, context_parent_id_bin, metadata_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) RETURNING state_id] [parameters: (None, 'off', None, None, None, None, None, 1708415836.354324, 12444, 1, None, None, None, 0, <memory at 0x7f07c4034580>, None, None, 18)] (Background on this error at: https://sqlalche.me/e/20/gkpj)
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2113, in _exec_insertmany_context
    dialect.do_execute(
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
    cursor.execute(statement, parameters)
sqlite3.IntegrityError: FOREIGN KEY constraint failed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/recorder/core.py", line 926, in _process_one_task_or_event_or_recover
    return task.run(self)
           ^^^^^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/components/recorder/tasks.py", line 112, in run
    if purge.purge_old_data(
       ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/components/recorder/util.py", line 643, in wrapper
    return job(instance, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/components/recorder/purge.py", line 88, in purge_old_data
    has_more_to_purge |= _purge_states_and_attributes_ids(
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/components/recorder/purge.py", line 195, in _purge_states_and_attributes_ids
    state_ids, attributes_ids = _select_state_attributes_ids_to_purge(
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/components/recorder/purge.py", line 253, in _select_state_attributes_ids_to_purge
    for state_id, attributes_id in session.execute(
                                   ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 2308, in execute
    return self._execute_internal(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 2169, in _execute_internal
    ) = compile_state_cls.orm_pre_session_exec(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/context.py", line 551, in orm_pre_session_exec
    session._autoflush()
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 3021, in _autoflush
    raise e.with_traceback(sys.exc_info()[2])
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 3010, in _autoflush
    self.flush()
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 4312, in flush
    self._flush(objects)
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 4447, in _flush
    with util.safe_reraise():
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
    raise exc_value.with_traceback(exc_tb)
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 4408, in _flush
    flush_context.execute()
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/unitofwork.py", line 463, in execute
    n.execute_aggregate(self, set_)
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/unitofwork.py", line 759, in execute_aggregate
    persistence.save_obj(
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/persistence.py", line 93, in save_obj
    _emit_insert_statements(
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/persistence.py", line 1137, in _emit_insert_statements
    result = connection.execute(
             ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1416, in execute
    return meth(
           ^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/sql/elements.py", line 517, in _execute_on_connection
    return connection._execute_clauseelement(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1639, in _execute_clauseelement
    ret = self._execute_context(
          ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1843, in _execute_context
    return self._exec_insertmany_context(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2121, in _exec_insertmany_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2344, in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2113, in _exec_insertmany_context
    dialect.do_execute(
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely)
(sqlite3.IntegrityError) FOREIGN KEY constraint failed
[SQL: INSERT INTO states (entity_id, state, attributes, event_id, last_changed, last_changed_ts, last_updated, last_updated_ts, old_state_id, attributes_id, context_id, context_user_id, context_parent_id, origin_idx, context_id_bin, context_user_id_bin, context_parent_id_bin, metadata_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) RETURNING state_id]
[parameters: (None, 'off', None, None, None, None, None, 1708415836.354324, 12444, 1, None, None, None, 0, <memory at 0x7f07c4034580>, None, None, 18)]
(Background on this error at: https://sqlalche.me/e/20/gkpj)

To provide more information: every time HA starts, I have an automation that purges the database keeping 0 days and attempts a repack.
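
For clarity, it is essentially this (a minimal sketch; the alias and trigger wording are illustrative, the service call is the standard recorder.purge service):

automation:
  - alias: "Purge recorder on startup"
    trigger:
      - platform: homeassistant
        event: start
    action:
      - service: recorder.purge
        data:
          keep_days: 0   # discard all recorded history
          repack: true   # attempt to reclaim disk space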

I don't remember having this issue with previous versions of Core.
Could you please suggest a fix?

Thank you very much
M

Well I’d stop doing that for a start. Disable the automation and see if it helps.

There really is no need to do that, by the way. The database purges automatically every night, and repacks on the second Sunday of each month.
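
If it is disk space you are worried about, you can simply lower the retention in configuration.yaml instead (a sketch using the recorder's documented options; 3 days is just an example):

recorder:
  purge_keep_days: 3   # default is 10 days
  auto_purge: true     # nightly purge, enabled by default
  auto_repack: true    # periodic repack, enabled by default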

Are you really that short of disk space?

Hi Tom,

Thanks for your comment. I am using the VirtualBox VM installation, which has 30 GB of disk space by default. I read a few threads here on how to extend it, but that would require starting a new VM and restoring a backup, which mentally and psychologically I am not ready to do yet.

Having said that, I have a feeling that as the occupied space increases, the stability of the system decreases. This is not necessarily Core's fault, as I have a lot of third-party HACS integrations that might be causing instabilities. I noticed about a year ago that random restarts happen when the occupied space grows.

Long story short, I implemented this automation to keep things under control. I was hoping someone with more experience reading the logs could point me toward what is causing this so I can attempt a fix.

For now, I will disable the automation and check back on whether the purging is the only cause.

Thanks

If that is your concern, maybe it's an idea to use a database on your host?
I use MySQL for that purpose, as I had it running on the host for other things anyway :stuck_out_tongue:

You mean like MariaDB?

Also possible…

The supported database solutions are:

- SQLite (the default)
- MariaDB
- MySQL
- PostgreSQL

OK, I will first try skipping the purge, and if that doesn't help I will try installing a different database.
I think this started happening after the update to the default SQLite database, so that would make sense.

Around that time I also installed other add-ons from GitHub, which is why I am not sure what is actually causing the issue… For now, I have pinpointed it to the purge…

Hi again,

On second thought, a question occurred to me… How can I keep the disk from filling up if I install MariaDB, for example? Since MariaDB is an add-on, it installs inside HA, so the database file would still sit somewhere in the filesystem taking up space. How is that different from the SQLite database it currently uses?

Or can it be installed externally? Do you maybe have any links to a tutorial?

Thanks

That is what I said:

I use the MySQL server on my host computer and run HA under VMware.

It's all in the link I posted earlier, for example:

mysql://user:password@SERVER_IP/DB_NAME?charset=utf8mb4
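
In configuration.yaml, that string goes under the recorder's db_url option (a minimal sketch; SERVER_IP and DB_NAME are placeholders for your own server and database):

recorder:
  db_url: mysql://user:password@SERVER_IP/DB_NAME?charset=utf8mb4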

Just for information: suspecting this was somehow specific to my setup, I powered up a new virtual machine in which I installed only DuckDNS and NGINX, and tried the purging; the issue appears there as well.

It is not related to HACS or any other integration/add-on. There is something wrong with the latest update.

Something changed in a recent update of Home Assistant that causes problems. The database started getting corrupted several times a day for me. I don't know what it is, but it may be a similar mechanism.

Deleting history on specific events should not harm referential integrity.

Hope it gets corrected soon.

Yes, I also believe it was caused by the update to 2024.2.2.

My idea is that it could be related to recorder.purge_entities, which I call in an automation when the button turning off a light is pressed (that light shall not be recorded). Some later call (some hours afterwards) misses a key, which gets the database marked as corrupt. Just a shot in the dark from reading the message.
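
Roughly this (a hypothetical reconstruction of my automation; the light entity is a placeholder, recorder.purge_entities is the standard service):

automation:
  - alias: "Purge light history on turn-off"
    trigger:
      - platform: state
        entity_id: light.example_light
        to: "off"
    action:
      - service: recorder.purge_entities
        data:
          keep_days: 0   # drop all recorded history for this entity
        target:
          entity_id: light.example_light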

There is an INSERT attempted with entity_id and event_id both NULL; my guess is that the call is corrupt, not the database. It does not seem to make sense to insert state “off” (in my case it is “on”, but it looks very similar) with all those NULL values.

Well, if for your purpose you don't need to record a specific entity, you can exclude it from the recorder, and then you don't need to purge. Here is an example:

recorder:
  exclude:
    entities:
      - sensor.living_room_light_sensor

Replace the entity with yours and the recorder will not record anything for it…
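
You can also exclude whole domains or glob patterns, which may be handier if many similar entities are involved (a sketch using the recorder's documented exclude filters; the domain and glob below are just examples):

recorder:
  exclude:
    domains:
      - automation        # skip all automation state changes
    entity_globs:
      - sensor.*_battery  # skip every battery sensor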

Hope this helps

Thank you for the hint. I am aware of this possibility and have already used it, but sometimes there are malfunctions to debug, which my solution lets me do as long as I don't turn the light off.

Hello,

Regarding your comment above: I took your advice and installed MariaDB, but unfortunately the errors reappear in the logs. I switched back to the default HA database. At least with that, I can delete the corrupted files.

Thank you for your advice though!