Make sure the firewall is disabled (service firewalld stop), or that you have allowed MySQL access through the firewall on port 3306.
Figured it out, had to bind to 0.0.0.0 in my.cnf
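For anyone else hitting this: with bind-address set to 0.0.0.0 in my.cnf and port 3306 open, the recorder config just needs the db_url, something like this (user, password, host and database name are placeholders for your own setup):

recorder:
  db_url: mysql://hass_user:[email protected]:3306/hass_db?charset=utf8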
Is there a good program to quickly make plots of the data in the SQL database, kind of like HASS does but for longer periods of time?
Try the Influxdb and Grafana combination.
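HA can push states to InfluxDB with the influxdb component; a minimal sketch (the host and database names are placeholders, and this assumes InfluxDB is listening on its default port):

influxdb:
  host: 192.168.1.50
  database: home_assistant

Grafana then reads from the same InfluxDB database, which copes with long time ranges much better than the recorder database does.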
Interesting observations there regards exclude/includes - thanks for that!
However I’m still struggling with the mass of data in my database. I’m using MySQL and am seeing around 1300 records per minute going into the events table! The mysqld service is under high load, hass takes forever to show graphs, and if I don’t periodically purge my database manually then MySQL stops allowing anything to happen to the events table: it fails to lock records when updating. I then have to manually delete records in small batches until things are OK again.
My setup isn’t that big - 15 light bulbs, 10 humidity/temperature sensors, a few switches, etc.
My recorder config is like this:
exclude:
  domains:
    - mqtt
    - script
    - sun
    - updater
include:
  domains:
    - sensor
    - binary_sensor
    - switch
    - light
    - input_select
    - input_boolean
However I still see rows like this:
{"domain": "mqtt", "service": "publish", "service_data": {"topic": "tinkerforge/bricklet/lcd_20x4/ofQ/write_line/set", "payload_template": "<lots of code>"}}
This call runs quite frequently to update a screen but I don’t need it logged.
Why do I still see mqtt service call records when they are set to be excluded?
And these everywhere:
{"service_call_id": "85675568-79"}
How do I stop these filling up the log?
{"domain": "script", "service": "refresh_lcdscreen", "service_data": {}, "service_call_id": "85675568-76"}
This too - why is the script domain appearing in my logs with my config?
I just discovered that the include/exclude logic was rewritten as of 0.57, and it alters the previous behavior in certain circumstances. I’ve extracted the logic to help others understand what it now does; I used the code from 0.57.3. I’m not sure how to get this into the official documentation, but it would be helpful to have a page with examples to help people get the right config. The good thing is that this new revision is more intuitive and would have saved me a lot of time in the past…
When the recorder is initialized it builds one of six filter functions, based on what you have in your config:

- Case 1: no includes or excludes configured.
  Everything passes and is written to the recorder.
- Case 2: only includes configured.
  Nothing is recorded unless the domain or entity is specifically included.
- Case 3: only excludes configured.
  Everything is recorded unless the domain or entity is excluded.
- Case 4: at least one include:domain and at least one exclude (domain or entity) configured.
  Only records if the domain is included and the entity is not excluded, or the entity is in include:entities. Everything in exclude:domains is ignored. See the example after this list.
- Case 5: no include:domains, at least one include:entity, and at least one exclude:domain configured.
  Everything is recorded unless the domain is in exclude:domains and the entity is not in include:entities, or the entity is in exclude:entities.
- Case 6: no include:domains, no exclude:domains, at least one include:entity, and at least one exclude:entity.
  Nothing is recorded unless the entity is in include:entities; all exclude:entities are ignored. This is pretty much a misconfiguration of Case 2 and the exclude:entities should be removed.
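For example, a Case 4 config might look like this (the entity names here are made up for illustration). Only light states are recorded, except light.hallway, and the exclude:domains entry has no effect at all:

recorder:
  include:
    domains:
      - light
  exclude:
    domains:
      - sun            # ignored entirely under Case 4
    entities:
      - light.hallway  # excluded even though its domain is included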
I feel like the change was an improvement over the previous implementation and all its quirks. However this means the post I made a few weeks ago isn’t correct for HASS 0.57+.
@Bottesford - Assuming you provided your entire recorder config and you updated to 0.57, then your configuration falls under Case 4 (or Case 4a as it’s called in the code). This means your exclude:domains config is ignored, but it should still have the same effect. To help further I’d have to know what version you are running. Also make sure you have verified your config and restarted HASS (recorder includes/excludes cannot be refreshed while running). Also make sure the entries you’re finding are new ones, since the database won’t clean out the old entries once excluded (until a purge).
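In other words, since exclude:domains is ignored under Case 4 and nothing in those domains would be recorded anyway, on 0.57+ your config boils down to just the include block:

recorder:
  include:
    domains:
      - sensor
      - binary_sensor
      - switch
      - light
      - input_select
      - input_boolean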
Running Home Assistant 0.59, I realized why the db file is so big: I run the mysensors component to read energy meter readings and send them off to HA. I just browsed the SQLite database, and each row in that table looks like this:
"773194" "sensor" "sensor.energy_meter_0_2_2" "404.685" "{"V_KWH": "349.661", "V_VA": "404.685", "device": "mqtt", "node_id": 0, "unit_of_measurement": "VA", "V_VAR": "-219.044", "description": "", "V_POWER_FACTOR": "0.845", "battery_level": 0, "V_WATT": "340.271", "friendly_name": "Energy Meter 0 2", "child_id": 2}" "2017-12-05 20:03:12.046012" "2017-12-05 20:03:12.046012" "2017-12-05 20:03:12.586282"
Why on earth is the attributes field filled every time with the same gibberish data from the other sensors within the same child_id?
And why is entity_id not replaced with an index?
Hi, please allow me a related question:
I’ve made a little script to try to keep the files under control and purge on demand.
How do I know something is happening, and/or when it is finished?
purge_recorder:
  alias: Purge Recorder
  sequence:
    - service: recorder.purge
      data:
        keep_days: 5
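To run it on a schedule, I guess an automation like this would work (the trigger time is just an example; the script entity name matches the script above):

automation:
  - alias: Nightly recorder purge
    trigger:
      - platform: time
        at: '03:00:00'
    action:
      - service: script.purge_recorder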
thanks,
Marius
You can check in your logbook by going back: you will see "No logbook entries found." if you go more than 5 days back.
Well, that’s just the issue: clicking the logbook kills my Hass.io, depleting the system of memory. I can’t even get SSH to respond, leaving me only a hard system reset, as in pulling the power cable out of my Pi…
Too many times, I guess; hence the need to purge…
If you have too many log entries, the logbook will slow the Pi down. You should purge the recorder and then exclude entries from your logbook.
I have this; please have a look and see if I can get it even lighter:
logbook:
  ## https://home-assistant.io/components/logbook/
  include:
    domains:
      - script
      - sensor
      - binary_sensor
      - switch
      - media_player
      - light
    entities:
      - device_tracker.1,2,3,4,5
  exclude:
    domains:
      - sun
      - weblink
      - group
      - mqtt
    entities:
      - sensor.last_boot
      - sensor.since_last_boot
      - sensor.since_last_boot_template
      - sensor.date
      - sensor.time
      - sensor.time_template
      - sensor.sun_template
      - sensor.workday

history:
  include:
    domains:
      - sensor
      - binary_sensor
      - switch
      - media_player
      - light
      - script
    entities:
      - device_tracker.1,2,3,4,5
  exclude:
    domains:
      - automation
      - weblink
      - updater
      - mqtt
    entities:
      - sun.sun # Don't record sun data
      - sensor.last_boot # Comes from 'systemmonitor' sensor platform
      - sensor.date

recorder:
  db_url: !secret db_url
  purge_interval: 2
  purge_keep_days: 5
  include:
    domains:
      - sensor
      - binary_sensor
      - switch
      - media_player
      - light
      - script
    entities:
      - device_tracker.1,2,3,4,5
  exclude:
    domains:
      - automation
      - weblink
      - updater
      - mqtt
    entities:
      - sun.sun # Don't record sun data
      - sensor.last_boot # Comes from 'systemmonitor' sensor platform
      - sensor.date
Hey, any idea how I can get the new SQL sensor to work with the DB residing on the QNAP?
I tried the following without providing the db_url as it is optional.
"db_url
(string)(Optional)The URL which points to your database. See supported engines.
Default value: Defaults to the recorder db_url."
sensor.yaml:

- platform: sql
  queries:
    - name: HASS DB size
      query: 'SELECT table_schema "database", Round(Sum(data_length + index_length) / 1024, 1) "value" FROM information_schema.tables WHERE table_schema="hass_db" GROUP BY table_schema;'
      column: 'value'
But getting the following error:
2018-02-12 15:02:45 ERROR (Thread-22) [homeassistant.components.sensor.sql] Error executing query SELECT table_schema "database", Round(Sum(data_length + index_length) / 1024, 1) "value" FROM information_schema.tables WHERE table_schema="hass_db" GROUP BY table_schema LIMIT 1;: (sqlite3.OperationalError) no such table: information_schema.tables [SQL: 'SELECT table_schema "database", Round(Sum(data_length + index_length) / 1024, 1) "value" FROM information_schema.tables WHERE table_schema="hass_db" GROUP BY table_schema LIMIT 1;'] (Background on this error at: http://sqlalche.me/e/e3q8)
If I use the default config from the sensor docs, I get the same error.
What type of database are you using?
Here is my config:
recorder:
  purge_interval: 1
  purge_keep_days: 5
  db_url: mysql://hass_db:[email protected]:3306/hass_db?charset=utf8
Check information_schema.tables in MySQL and see if there’s anything in there.
I have no idea how to check that.
Configuration of the DB is done via phpMyAdmin.
Looks like the sensor is querying an SQLite database.
Have you tried it with the db_url?
EDIT:
Same problem here.
Works when providing the db_url.
No difference if the db_url is in configuration.yaml or provided in secrets.yaml.
The sensor does not read it.
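i.e. something like this works (same query as above, with the db_url added explicitly at the platform level; the credentials and host here are placeholders for whatever your setup uses):

- platform: sql
  db_url: mysql://hass_db:PASSWORD@HOST:3306/hass_db?charset=utf8
  queries:
    - name: HASS DB size
      query: 'SELECT table_schema "database", Round(Sum(data_length + index_length) / 1024, 1) "value" FROM information_schema.tables WHERE table_schema="hass_db" GROUP BY table_schema;'
      column: 'value'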
cc @dgomes
Yes, same when I provide the URL.
It has something to do with the table query, but I have no idea what the query should be.
**no such table:** information_schema.tables [SQL: 'SELECT table_schema "database", Round(Sum(data_length + index_length) / 1024, 1) "value" FROM information_schema.tables WHERE table_schema="hass_db" GROUP BY table_schema LIMIT 1;'] (Background on this error at: http://sqlalche.me/e/e3q8)