Database is locked

I’m running the latest version of Home Assistant, installed with the All-in-One installer, on a Pi 1 Model B, and my config can be found on GitHub.

Nothing too fancy, but something seems to be messing things up. Everything was fine, but then I added 2 more Z-Wave wall plugs and things got weird and slow. Inputs show the wrong state, and every time I try to look at the Logbook the entire application freezes and I have to reboot the Pi.

The entire log file can be viewed on Pastebin. A small recap:

These errors were there all the time, even before the current problems started:

  • homeassistant.components.sensor: Error while setting up platform zwave
  • homeassistant.components.zwave: zwave not ready after 30 seconds, continuing anyway

These problems came with the new devices:

  • homeassistant.core: WorkerPool:All 16 threads are busy and 49 jobs pending
  • homeassistant.components.recorder: Ended unfinished session
  • homeassistant.components.recorder: Found unfinished sessions
  • homeassistant.components.recorder: Error during query: (sqlite3.OperationalError) database is locked

I tried removing the pyozw.sqlite file and letting HASS create a new one, but that didn’t fix it. The file is not locked either.
Could it be because of the Pi 1, that it is simply too slow to process all the data?

Seems to be fixed with update 0.29.6.

I’m running under 0.30 but still have this problem.

I removed the database and had HASS create a new one, but after a few days the error was back again.
It happens after I restart HASS, and after 5 minutes it is gone again.

Looks like it is checking the DB or something.

Any suggestions?

I’m seeing the following constantly on version 0.31.1:

homeassistant.components.recorder: Error during query: (sqlite3.OperationalError) database is locked

I fixed it by switching to a MySQL database.
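For anyone who wants to try the same: the switch is just a db_url under recorder: in configuration.yaml. A minimal sketch with placeholder credentials (you need to create the empty database and a MySQL user with rights on it first; HASS creates the tables itself):

recorder:
  # Placeholder user/password/host/database name -- adjust to your MySQL server.
  db_url: mysql://hassuser:hasspass@localhost/hass_db?charset=utf8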

Post a bug report here: https://github.com/home-assistant/home-assistant/issues/new

I have been getting this for the last few days. I have tried restarting HA and rebooting the Pi. Any thoughts?

17-01-05 03:23:21 aiohttp.server: Error handling request
Traceback (most recent call last):
  File "/home/hass/.homeassistant/deps/sqlalchemy/engine/base.py", line 1182, in _execute_context
    context)
  File "/home/hass/.homeassistant/deps/sqlalchemy/engine/default.py", line 469, in do_execute
    cursor.execute(statement, parameters)
sqlite3.OperationalError: database is locked

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/hass/.homeassistant/deps/aiohttp/web_server.py", line 61, in handle_request
    resp = yield from self._handler(request)
  File "/home/hass/.homeassistant/deps/aiohttp/web.py", line 249, in _handle
    resp = yield from handler(request)
  File "/usr/lib/python3.4/asyncio/coroutines.py", line 143, in coro
    res = yield from res
  File "/usr/lib/python3.4/asyncio/coroutines.py", line 143, in coro
    res = yield from res
  File "/srv/hass/lib/python3.4/site-packages/homeassistant/components/http/__init__.py", line 427, in handle
    result = yield from result
  File "/srv/hass/lib/python3.4/site-packages/homeassistant/components/history.py", line 240, in get
    self.filters)
  File "/usr/lib/python3.4/asyncio/futures.py", line 388, in __iter__
    yield self  # This tells Task to wait for completion.
  File "/usr/lib/python3.4/asyncio/tasks.py", line 286, in _wakeup
    value = future.result()
  File "/usr/lib/python3.4/asyncio/futures.py", line 277, in result
    raise self._exception
  File "/usr/lib/python3.4/concurrent/futures/thread.py", line 54, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/srv/hass/lib/python3.4/site-packages/homeassistant/components/history.py", line 87, in get_significant_states
    return states_to_json(states, start_time, entity_id, filters)
  File "/srv/hass/lib/python3.4/site-packages/homeassistant/components/history.py", line 157, in states_to_json
    for state in get_states(start_time, entity_ids, filters=filters):
  File "/srv/hass/lib/python3.4/site-packages/homeassistant/components/history.py", line 112, in get_states
    run = recorder.run_information(utc_point_in_time)
  File "/srv/hass/lib/python3.4/site-packages/homeassistant/components/recorder/__init__.py", line 95, in run_information
    (recorder_runs.end > point_in_time)).first()
  File "/home/hass/.homeassistant/deps/sqlalchemy/orm/query.py", line 2697, in first
    ret = list(self[0:1])
  File "/home/hass/.homeassistant/deps/sqlalchemy/orm/query.py", line 2489, in __getitem__
    return list(res)
  File "/home/hass/.homeassistant/deps/sqlalchemy/orm/query.py", line 2797, in __iter__
    return self._execute_and_instances(context)
  File "/home/hass/.homeassistant/deps/sqlalchemy/orm/query.py", line 2820, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "/home/hass/.homeassistant/deps/sqlalchemy/engine/base.py", line 945, in execute
    return meth(self, multiparams, params)
  File "/home/hass/.homeassistant/deps/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/home/hass/.homeassistant/deps/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/home/hass/.homeassistant/deps/sqlalchemy/engine/base.py", line 1189, in _execute_context
    context)
  File "/home/hass/.homeassistant/deps/sqlalchemy/engine/base.py", line 1393, in _handle_dbapi_exception
    exc_info
  File "/home/hass/.homeassistant/deps/sqlalchemy/util/compat.py", line 202, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/home/hass/.homeassistant/deps/sqlalchemy/util/compat.py", line 185, in reraise
    raise value.with_traceback(tb)
  File "/home/hass/.homeassistant/deps/sqlalchemy/engine/base.py", line 1182, in _execute_context
    context)
  File "/home/hass/.homeassistant/deps/sqlalchemy/engine/default.py", line 469, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) database is locked [SQL: 'SELECT recorder_runs.run_id AS recorder_runs_run_id, recorder_runs.start AS recorder_runs_start, recorder_runs."end" AS recorder_runs_end, recorder_runs.closed_incorrect AS recorder_runs_closed_incorrect, recorder_runs.created AS recorder_runs_created \nFROM recorder_runs \nWHERE recorder_runs.start < ? AND recorder_runs."end" > ?\n LIMIT ? OFFSET ?'] [parameters: ('2017-01-04 09:23:14.942502', '2017-01-04 09:23:14.942502', 1, 0)]

I am getting something similar running 0.35.2.
I am wondering whether I am pushing my system too hard. I have a Raspberry Pi 3 just running HASS in a virtual environment; the MQTT broker is on a different Pi. But I have quite a bit of MQTT traffic: 8 sensing units, each transmitting temperature, humidity, barometric pressure, light, signal strength & battery values every 10 seconds. Also 2 video cameras, a Neurio interface and a couple of other minor items.
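Back-of-the-envelope, using those numbers: 8 units × 6 readings every 10 seconds is roughly 5 state changes per second, and judging by the log lines below the recorder turns each change into at least two inserts (one in events, one in states), so that is on the order of 10 SQLite writes per second hitting the SD card, before the cameras and everything else. That seems like plausible territory for lock contention.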

ERROR:homeassistant.components.recorder:Error during query: (sqlite3.OperationalError) database is locked [SQL:     'INSERT INTO states (domain, entity_id, state, attributes, event_id, last_changed, last_updated, created) VALUES (?, ?, ?, ?, ?, ?, ?, ?)'] [parameters: ('sensor', 'sensor.total_usage', '1015', '{"friendly_name": "total_usage", "icon": "mdi:flash", "unit_of_measurement": "W"}', None, '2017-01-05 11:46:32.827273', '2017-01-05 11:46:32.827273', '2017-01-05 12:12:12.984943')]
ERROR:homeassistant.components.recorder:Error during query: (sqlite3.OperationalError) database is locked [SQL: 'INSERT INTO events (event_type, event_data, origin, time_fired, created) VALUES (?, ?, ?, ?, ?)'] [parameters: ('state_changed', '{"entity_id": "sensor.utc", "new_state": {"entity_id": "sensor.utc", "last_updated": "2017-01-05T11:47:00.508956+00:00", "attributes": {"friendly_nam ... (179 characters truncated) ... 00.526070+00:00", "attributes": {"friendly_name": "UTC", "icon": "mdi:clock"}, "last_changed": "2017-01-05T11:46:00.526070+00:00", "state": "11:46"}}', 'LOCAL', '2017-01-05 11:47:00.509144', '2017-01-05 12:12:18.115967')]
ERROR:homeassistant.components.recorder:Error during query: (sqlite3.OperationalError) database is locked [SQL: 'INSERT INTO events (event_type, event_data, origin, time_fired, created) VALUES (?, ?, ?, ?, ?)'] [parameters: ('state_changed', '{"entity_id": "sensor.utc", "new_state": {"entity_id": "sensor.utc", "last_updated": "2017-01-05T11:47:00.508956+00:00", "attributes": {"friendly_nam ... (179 characters truncated) ... 00.526070+00:00", "attributes": {"friendly_name": "UTC", "icon": "mdi:clock"}, "last_changed": "2017-01-05T11:46:00.526070+00:00", "state": "11:46"}}', 'LOCAL', '2017-01-05 11:47:00.509144', '2017-01-05 12:12:24.813589')]

The only way I could fix it was by moving the database to my NAS using MariaDB… after that my problems disappeared and the performance of HASS improved dramatically.

I agree. I am using MySQL and have no errors left.

I too am having this issue. I added 5 new switches and everything went kaput. (I think that’s why.)

I’ve seen advice here and in another thread to change the DB to MySQL, but I don’t know what I’m doing with that.

Any tips?

I would stop HA, rename the old DB file (home-assistant_v2.db), and then start HA back up. The default grows into a really big file.
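Roughly, something like this (paths taken from the tracebacks above; the service name depends on how you installed, so adjust it to your setup):

sudo systemctl stop home-assistant@hass     # or whatever your HA service is called
mv /home/hass/.homeassistant/home-assistant_v2.db /home/hass/.homeassistant/home-assistant_v2.db.bak
sudo systemctl start home-assistant@hass    # HASS creates a fresh, empty DB on startup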

I had tried that as well with the same errors.

I switched to PostgreSQL because I found this walkthrough: Howto: Postgresql on Raspberry Pi All-In-One Installer

Error went away.

Thanks though
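For reference, the recorder change for PostgreSQL is the same idea as for MySQL; only the URL scheme differs. A sketch with placeholder credentials, assuming PostgreSQL runs locally on the Pi as in that walkthrough (the psycopg2 driver has to be installed too):

recorder:
  # Placeholder user/password/database -- set up per the linked walkthrough.
  db_url: postgresql://hassuser:hasspass@localhost/hass_db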

That’s great to know. I’ll keep this in mind if it comes back. The only other thing I did was to set the recorder to purge after 3 days by adding this to my configuration.yaml:

recorder:
  purge_days: 3

I also installed MySQL, thanks for the tutorial… My database was also very big (more than 1.5 GB).
Now I see a big improvement in speed.
Thanks sjorsjes :slight_smile:

Hi @ronvl, would you please explain a bit more about your setup? I think I need to do the same.

In my case, I am running my Pi 3 really overloaded (HASS in a venv, HADashboard and HAPush, an MQTT server, and a VPN server). On top of those, I have a number of automations, and several segments of the code need to run every few seconds. I have yet to see any real lag from the Pi except for this database-locking thing… I guess I can be a little generous to my Pi by moving the database to a different machine/NAS.

I have a Synology NAS; if you have the same, I can blindly follow your process…

Thanks

Hi @tarikul

My recorder setup is below… I use MariaDB on the NAS, and you need to create an empty DB via phpMyAdmin; HASS will then create your tables.
If you don’t need all the historic info, have a look at the include/exclude options, as that reduced my DB to less than 50% :slight_smile: I don’t need a log entry every second just because sensor.time changed :slight_smile: … That could also help on your Raspberry, although I believe the locked DB partly comes from the Raspberry still starting up and getting ready while HASS is already starting…

recorder:
  purge_days: 10
  db_url: mysql://user:password@<nas-ip>/hass_db?charset=utf8
  include:
    domains:
      - sensor
      - switch
      - light
  exclude:
    entities:
      - sensor.date
      - sensor.time
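(For what it’s worth: with both blocks present, my understanding is that only the included domains get recorded, minus the explicitly excluded entities, so the two lists work together rather than against each other.)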

Hi @ronvl,

Thanks a lot for the suggestion and the configuration. However, over the last 2 days I made several failed attempts. The best I got was no errors in the log, but no tables were created in my database hosted on the Synology NAS either. I suspect the config issue lies in the Synology setup; maybe it can’t be queried from outside the box. From the Synology terminal I could access the database using the command

"mysql -u homeassistant -p"

However, as suggested by another user,

mysql -u homeassistant@ip-of-nas -p

doesn’t work. So I guess some sort of adjustment is required on my NAS when creating the user/database.
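For anyone hitting the same wall: the -u flag only takes a user name; the allowed client host is a property of the MySQL account itself, and you pick the server with -h. So the test from the Pi would be something like

mysql -h ip-of-nas -u homeassistant -p

and on the NAS the account has to be created so it may connect from a remote host. A sketch of that (run as the MariaDB root user; the '%' wildcard allows any client host, so substitute the Pi’s IP for something tighter):

CREATE USER 'homeassistant'@'%' IDENTIFIED BY 'your-password';
GRANT ALL PRIVILEGES ON hass_db.* TO 'homeassistant'@'%';
FLUSH PRIVILEGES;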

Finally, I ended up reducing the DB size by excluding several entities in the recorder settings.

Best Regards