Lovelace History Graph Not Updating since 0.91.1 & 0.91.2

I updated to 0.91.1 last night and then to 0.91.2 this morning and cannot get the history graph on my temperature or humidity sensors to update.
I must be doing something stupid that I just can’t see:

sensor Livingroom:
- platform: mqtt
  name: "Temperature"
  state_topic: "tele/sonoff-02/SENSOR"
  value_template: '{{ value_json.SI7021.Temperature }}'
  unit_of_measurement: "°C"
  availability_topic: "tele/sonoff-02/LWT"
  payload_available: "Online"
  payload_not_available: "Offline"
  #refresh: 60
- platform: mqtt
  name: "Humidity"
  state_topic: "tele/sonoff-02/SENSOR"
  value_template: '{{ value_json.SI7021.Humidity }}'
  unit_of_measurement: "%"
  availability_topic: "tele/sonoff-02/LWT"
  payload_available: "Online"
  payload_not_available: "Offline"
  #refresh: 60

I took the refresh option out because Home Assistant complained that it wasn’t supported and would break future updates. Here’s my lovelace:

- type: history-graph
  title: Environment
  refresh_interval: 60
  entities:
    - entity: sensor.temperature
      name: Temperature
    - entity: sensor.humidity
      name: Humidity

And this is what I get in the Overview:

It doesn’t work in the pop-up window either when you click on the sensor.

Are the values up to date if you open the Sonoff in a browser?

Yes. Values are current on the Sonoff web page as well as on the Sensor Card. They also match on the States page. But the history graph never changes, either when clicking on the Sensor Card or with a history-graph card.

It’s working for me on 0.91.2.
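Worth double-checking the basics too: the history graph pulls its data from the recorder database, so the recorder and history integrations need to be loaded (both are part of default_config). A minimal sketch, only needed if you don’t use default_config:

recorder:
history: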

Any database errors in your log?

MQTT Log only shows this:

1554792930: Saving in-memory database to /data/mosquitto.db.
1554794731: Saving in-memory database to /data/mosquitto.db.

And Hassio System log only has this:

19-04-09 07:05:45 INFO (MainThread) [hassio.homeassistant] Updated Home Assistant API token

What about your config/home-assistant.log (also shown on the dev tools info page)?

Well, this is special. There are a TON of errors on the Hassio About page:

Error executing query: (sqlite3.DatabaseError) database disk image is malformed [SQL: INSERT INTO events (event_type, event_data, origin, time_fired, created, context_id, context_user_id) VALUES (?, ?, ?, ?, ?, ?, ?)] [parameters: ('state_changed', '{"entity_id": "sensor.xxxxxx", "old_state": {"entity_id": "sensor.xxxxxxx", "state": "107", "attributes": {" ... (1322 characters truncated) ... :00", "last_updated": "2019-04-09T07:36:12.531382+00:00", "context": {"id": "ea43c56f21cd4e2fb29f256e2764f44a", "parent_id": null, "user_id": null}}}', 'LOCAL', '2019-04-09 07:36:12.531555', '2019-04-09 07:36:12.547703', 'ea43c56f21cd4e2fb29f256e2764f44a', None)] (Background on this error at: http://sqlalche.me/e/4xp6)
3:36 PM components/recorder/util.py (ERROR)
Error saving event: <Event state_changed[L]: entity_id=camera.kitchen, old_state=<state camera.kitchen=idle; access_token=46f7d508ebc906d736b50b1298ed7e2240b2ec4d5b746e878b9d408da3a4751b, friendly_name=Kitchen, entity_picture=/api/camera_proxy/camera.kitchen?token=46f7d508ebc906d736b50b1298ed7e2240b2ec4d5b746e878b9d408da3a4751b, supported_features=2 @ 2019-04-09T14:30:54.570688+08:00>, new_state=<state camera.kitchen=idle; access_token=ee4208224c043e4876392cba18fdf5ce880aa9c5e37f445909fa1c4d74202c40, friendly_name=Kitchen, entity_picture=/api/camera_proxy/camera.kitchen?token=ee4208224c043e4876392cba18fdf5ce880aa9c5e37f445909fa1c4d74202c40, supported_features=2 @ 2019-04-09T14:30:54.570688+08:00>>
3:36 PM components/recorder/__init__.py (ERROR)
Error executing query: (sqlite3.DatabaseError) database disk image is malformed [SQL: INSERT INTO events (event_type, event_data, origin, time_fired, created, context_id, context_user_id) VALUES (?, ?, ?, ?, ?, ?, ?)] [parameters: ('state_changed', '{"entity_id": "camera.kitchen", "old_state": {"entity_id": "camera.kitchen", "state": "idle", "attributes": {"access_token": "46f7d508ebc906d736b50b1 ... (805 characters truncated) ... :00", "last_updated": "2019-04-09T07:36:07.006808+00:00", "context": {"id": "ed1caac2905149708bf5c23e7b40f0ef", "parent_id": null, "user_id": null}}}', 'LOCAL', '2019-04-09 07:36:07.006933', '2019-04-09 07:36:07.049865', 'ed1caac2905149708bf5c23e7b40f0ef', None)] (Background on this error at: http://sqlalche.me/e/4xp6)
3:36 PM components/recorder/util.py (ERROR)
Error saving event: <Event state_changed[L]: entity_id=camera.front_door, old_state=<state camera.front_door=idle; access_token=04bd7fb4a69c8ab253deafe4ad8f7f3f54ded99f399f440255f0296e03d8eb09, friendly_name=Front Door, entity_picture=/api/camera_proxy/camera.front_door?token=04bd7fb4a69c8ab253deafe4ad8f7f3f54ded99f399f440255f0296e03d8eb09, supported_features=0 @ 2019-04-09T14:30:54.569802+08:00>, new_state=<state camera.front_door=idle; access_token=2d8d25b89cbfda7d8031b205646d6248dcaba53c0ac4dbdc00b6fbd72f5b20a1, friendly_name=Front Door, entity_picture=/api/camera_proxy/camera.front_door?token=2d8d25b89cbfda7d8031b205646d6248dcaba53c0ac4dbdc00b6fbd72f5b20a1, supported_features=0 @ 2019-04-09T14:30:54.569802+08:00>>
3:36 PM components/recorder/__init__.py (ERROR)
Error executing query: (sqlite3.DatabaseError) database disk image is malformed [SQL: INSERT INTO events (event_type, event_data, origin, time_fired, created, context_id, context_user_id) VALUES (?, ?, ?, ?, ?, ?, ?)] [parameters: ('state_changed', '{"entity_id": "camera.front_door", "old_state": {"entity_id": "camera.front_door", "state": "idle", "attributes": {"access_token": "04bd7fb4a69c8ab25 ... (826 characters truncated) ... :00", "last_updated": "2019-04-09T07:36:07.005361+00:00", "context": {"id": "b928085cef69417ab015e73bb3bcc176", "parent_id": null, "user_id": null}}}', 'LOCAL', '2019-04-09 07:36:07.005505', '2019-04-09 07:36:07.020755', 'b928085cef69417ab015e73bb3bcc176', None)] (Background on this error at: http://sqlalche.me/e/4xp6)
3:36 PM components/recorder/util.py (ERROR)
Error saving event: <Event state_changed[L]: entity_id=sun.sun, old_state=<state sun.sun=above_horizon; next_dawn=2019-04-09T21:08:06+00:00, next_dusk=2019-04-09T10:42:42+00:00, next_midnight=2019-04-09T15:55:43+00:00, next_noon=2019-04-10T03:55:44+00:00, next_rising=2019-04-09T21:32:44+00:00, next_setting=2019-04-09T10:18:06+00:00, elevation=34.02, azimuth=257.22, friendly_name=Sun @ 2019-04-09T14:30:53.044492+08:00>, new_state=<state sun.sun=above_horizon; next_dawn=2019-04-09T21:08:06+00:00, next_dusk=2019-04-09T10:42:42+00:00, next_midnight=2019-04-09T15:55:43+00:00, next_noon=2019-04-10T03:55:44+00:00, next_rising=2019-04-09T21:32:44+00:00, next_setting=2019-04-09T10:18:06+00:00, elevation=33.81, azimuth=257.38, friendly_name=Sun @ 2019-04-09T14:30:53.044492+08:00>>
3:35 PM components/recorder/__init__.py (ERROR)

What happened?!?

Your database got corrupted during one of the upgrades.

Delete the config/home-assistant_v2.db file and restart. You will lose your history, but Home Assistant will create a new db file after starting up.

Theoretically you are supposed to stop Home Assistant before deleting the file and then restart, but I’ve never had an issue just deleting it and restarting. HA can’t write to it anyway.
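That file is just the recorder’s default SQLite database; unless you’ve set a db_url, it lives at /config/home-assistant_v2.db. For reference, the equivalent explicit recorder entry would look roughly like this (a sketch, assuming the default path):

recorder:
  db_url: sqlite:////config/home-assistant_v2.db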

Thank you!
I got no history, as you said, but the history graph’s end-point values match the current values on the sensor card.
The Home Assistant log now looks much better. No more DB errors. And the MQTT log is showing normal info: client connects, local users, etc.
…And I’m getting Temperature and Humidity history in the graphs now! Yay!
Thanks again! I would have never thought to look there…I’m a NOOB at this.

Consider installing the MariaDB addon. I’ve had no db issues since switching to this.

It will use more memory but is a lot faster and more stable.
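Once the add-on is running, you point the recorder at it with a db_url in configuration.yaml. A rough sketch, assuming the official MariaDB add-on’s default hostname and hypothetical credentials:

recorder:
  # hypothetical user/password: use whatever you set in the MariaDB add-on options
  db_url: mysql://homeassistant:MY_PASSWORD@core-mariadb/homeassistant?charset=utf8mb4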

I’ve got the exact same problem on my hassio 91 and I’m not using Lovelace. Saw it for the first time yesterday. I’m using MySQL as the DB. The temperature is updating and even sending correct values to Grafana, but the graph is just a flat line. It works for a few hours after a reboot, then it becomes a line again…

You should be using InfluxDB for long-term trends with Grafana.
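If you go that route, the influxdb integration just mirrors state changes into Influx, and Grafana reads from there. A minimal sketch, with a hypothetical host and database name:

influxdb:
  host: 192.168.1.50        # hypothetical InfluxDB host
  database: home_assistant  # hypothetical database name
  include:
    entities:
      - sensor.temperature
      - sensor.humidity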

Of course. I use MySQL for the hassio DB and InfluxDB for Grafana.

I mentioned my hassio DB because it may be related to the graph problems. They fetch the graph data from the DB, right? 🙂

Weird… I just let it be and now the graph is back again…

FYI for others in 2020 having this issue, this fix still works.

0.114 introduced this:

The default sqlite database (home-assistant_v2.db) is now validated on startup and if corruption is detected, the database is renamed to home-assistant_v2.db.corrupt.{ISOTIME} and startup proceeds with a fresh database.

So theoretically deleting the database should no longer be required.

Sounds like it’s not working. I went a full month with it broken, even installed updates and did multiple reboots. I had to delete the db file, and now all is working again.

Heh. Sounds like it works as well as safe mode then. I did include “theoretically” for precisely that reason.