So I've been messing with this today and I've figured out that this is really an InfluxDB issue. InfluxDB is essentially a time-series database, and when it's queried (either manually or by something like Grafana), it returns only the measurements that fall within the requested time range. This is the way Grafana works: when you want to view the last 6 hours, Grafana queries just the values for that time period. So this could possibly be fixed in Grafana by also fetching the last result before the requested time period, or in InfluxDB by returning the last known value as the 'start' value for the requested time period.
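To illustrate (the measurement and tag names here are just assumptions, based on how the influxdb component names things): a time-bounded query like the one below returns nothing for a sensor whose most recent point landed before the window, which is why the graph starts with a gap instead of the last known value.

SELECT "value" FROM "°F" WHERE "entity_id" = 'pws_temp_f' AND time > now() - 6h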
ANYWAYS. I wrote a Python script modeled after the influxdb.py component and simplified (it may be missing some shit that I don't need right now, but it works great for my application so far).
Because I'm using a virtualenv, I had to make a shell script that activates the venv, runs the script, and then deactivates.
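The wrapper is just a few lines; something like this sketch (the venv and script paths are assumptions, point them at your own install):

#!/bin/bash
# Hypothetical paths -- adjust to wherever your venv and script live.
source /srv/homeassistant/bin/activate
python /home/homeassistant/influx_push.py
deactivate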
I made a shell_command service with this script, and then I made an automation that runs that shell_command service every 5 minutes.
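In configuration.yaml that looks roughly like this (the command name, alias, and script path are made up, name them whatever you like):

shell_command:
  influx_push: /home/homeassistant/influx_push.sh

automation:
  - alias: Push states to InfluxDB
    trigger:
      platform: time
      minutes: '/5'
      seconds: 0
    action:
      service: shell_command.influx_push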
Here's what I got:
DISCLAIMER: Run this at your own risk, I am in no way responsible for database corruption or data loss or anything like that. It has worked for me so far, for about 30 minutes.
import homeassistant.remote as ha
import datetime, re, math
from influxdb import InfluxDBClient, exceptions
from homeassistant.helpers import state as state_helper

PASSWORD = 'your password'  # if necessary
# Entity IDs whose current state you want to push:
STATES = ['sensor.hallway_thermostat_temperature', 'sensor.pws_temp_f',
          'climate.hallway', 'device_tracker.phone',
          'sensor.hallway_thermostat_hvac_state']

TIME = datetime.datetime.utcnow()  # one timestamp for the whole batch
RE_DIGIT_TAIL = re.compile(r'^[^\.]*\d+\.?\d+[^\.]*$')
RE_DECIMAL = re.compile(r'[^\d.]+')

api = ha.API('127.0.0.1', PASSWORD)  # host ip may need to be changed
entities = ha.get_states(api)
entities_to_push = [ent for ent in entities if ent.entity_id in STATES]


def event_to_json(state):
    # Decide whether the state is numeric (goes in 'value') and/or a
    # string (goes in 'state'), mirroring the influxdb.py component.
    include_state = include_value = False
    try:
        state_as_value = float(state.state)
        include_value = True
    except ValueError:
        try:
            state_as_value = float(state_helper.state_as_number(state))
            include_state = include_value = True
        except ValueError:
            include_state = True

    # Use the unit of measurement as the measurement name, falling back
    # to the entity ID when there is no unit.
    include_uom = True
    measurement = state.attributes.get('unit_of_measurement')
    if measurement in (None, ''):
        measurement = state.entity_id
    else:
        include_uom = False

    json = {
        'measurement': measurement,
        'tags': {
            'domain': state.domain,
            'entity_id': state.object_id,
        },
        'time': TIME,
        'fields': {}
    }
    if include_state:
        json['fields']['state'] = state.state
    if include_value:
        json['fields']['value'] = state_as_value

    # Copy the remaining attributes into fields, coercing to float where
    # possible and keeping a *_str copy otherwise.
    for key, value in state.attributes.items():
        if key != 'unit_of_measurement' or include_uom:
            if key in json['fields']:
                key = key + "_"
            try:
                json['fields'][key] = float(value)
            except (ValueError, TypeError):
                new_key = "{}_str".format(key)
                new_value = str(value)
                json['fields'][new_key] = new_value
                if RE_DIGIT_TAIL.match(new_value):
                    json['fields'][key] = float(RE_DECIMAL.sub('', new_value))
            # InfluxDB rejects NaN/inf, so drop non-finite values.
            try:
                if not math.isfinite(json['fields'][key]):
                    del json['fields'][key]
            except (KeyError, TypeError):
                pass
    return json


ifdb = InfluxDBClient('127.0.0.1', 8086, 'root', 'root', 'home_assistant')  # these values may need to be changed
ifdb.write_points([event_to_json(e) for e in entities_to_push])
You can see I specified only a few states to update, since updating all states every 5 minutes would probably balloon the database fairly quickly. I'm going to watch my database size and possibly set a retention policy. I also got the idea of having a short-term database that retains 5-minute data for a week or so, and a long-term database that retains daily and state-change data for a longer period, maybe a year or even forever. All that would require is essentially creating a new database in InfluxDB and then changing the database name in the script and in Grafana.
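For the retention piece, the same influxdb Python client can create the databases and policies; here's a sketch (the database and policy names are made up):

from influxdb import InfluxDBClient

client = InfluxDBClient('127.0.0.1', 8086, 'root', 'root')

# Short-term database: keep the 5-minute samples for a week.
client.create_database('home_assistant_short')
client.create_retention_policy('one_week', '7d', 1,
                               database='home_assistant_short', default=True)

# Long-term database: keep daily and state-change data for a year.
client.create_database('home_assistant_long')
client.create_retention_policy('one_year', '52w', 1,
                               database='home_assistant_long', default=True)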