For your feature request, please also put this in the issue tracker on the GitHub repo. Unfortunately, I have no previous experience with HACS and don't use it myself. If you feel up for the challenge, I am happy to accept a pull request for this.
Yes, I also think the error message is related to the recent changes in how translations are handled.
Regarding HACS support, I just thought it would help ltss become known to a wider audience and make it easier to track features and fixes.
I don't know much about the requirements myself, but it seems you'll have to restructure your repo a bit for that:
I'll see to it that I create a ticket for both things on GitHub. If you don't mind, I might actually read up on it and send you a PR for HACS support.
I am using it on 64-bit now and it works. I had it running on 32-bit, but the postgis extension was missing there.
However, is there any difference between using ltss and using the recorder with Postgres directly?
I've migrated from SQLite to TimescaleDB, and thus my Home Assistant is pushing states into Postgres already.
Now that I've activated LTSS, I wonder what's different, as I can configure which entities to store just as in the recorder component.
Don't get me wrong, there might be a use case for this for someone, but I just don't see it right now. Would love to get feedback from others who are using this addon.
Maybe it was because the recorder component was not optimized until some of the latest HA versions which implemented “FROM”/“TO” limits for the Logbook or History Graph?
@boesing
The main differences between LTSS and the regular recorder component with Postgres as backend would be the following:
The recorder component uses “vanilla” Postgres whereas LTSS makes use of two extensions, TimescaleDB and PostGIS. TimescaleDB brings advanced timeseries handling/analytics functionality to Postgres and PostGIS enables GIS-powered data types and queries.
The recorder component stores both HA states and HA events whereas LTSS stores only HA states, reducing the size of the database significantly.
The recorder component stores state attributes as JSON-encoded text strings whereas LTSS makes use of the Postgres-specific JSONB type (a binary JSON representation), which increases performance and reduces database size further.
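As an illustration of what JSONB enables, here is a sketch of querying a state attribute directly in SQL (it assumes the attributes column of the ltss table; the attribute name and entity id are made up):

SELECT
  time,
  entity_id,
  -- ->> extracts a JSONB field as text, so cast it before comparing numerically
  (attributes->>'battery_level')::FLOAT AS battery
FROM ltss
WHERE entity_id = 'sensor.some_device'
  AND (attributes->>'battery_level')::FLOAT < 20;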
In general, the recorder component is well integrated with HA and should be your go-to choice if you only want to see your sensor readings from the last couple of weeks/months using Logbook/History Graph. LTSS is tuned for efficiently storing only state data (sensor readings) over long time periods (i.e. no purging of data after a couple of weeks/months), with better query performance and more advanced analytics functionality included.
Personally I run both the recorder component (using Postgres as a backend) with a short retention time (1 week) AND LTSS to make sure I keep my sensor readings “forever” in a space-efficient manner.
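For reference, such a dual setup could look roughly like this in configuration.yaml (a sketch; the connection strings are placeholders, and the ltss options are described further down in this thread):

recorder:
  db_url: postgresql://user:password@localhost/homeassistant
  # keep only one week of history for Logbook/History Graph
  purge_keep_days: 7

ltss:
  db_url: postgresql://user:password@localhost/homeassistant
  include:
    domains:
      - sensor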
According to best practice, a chunk should not be larger than 25% of the available RAM. Since I run TimescaleDB on a Pi4 2GB, I have allocated it a maximum of 512MB; 25% of that, i.e. 128MB, would be the maximum size a chunk should have. But since I generate 150MB per day (certainly >300MB in the future), I set the interval to 12h (SELECT set_chunk_time_interval('ltss', INTERVAL '12 hours');).
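To verify the change, you can list the chunks backing the hypertable with TimescaleDB's show_chunks function (the views for inspecting chunk sizes differ between TimescaleDB versions):

-- returns one row per chunk of the ltss hypertable
SELECT show_chunks('ltss');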
However, this only affects the next chunk; the current one, with its 30-day interval, is still running. The only way to shrink an existing chunk is, as far as I can tell, to export the data, recreate the database (with the chunk interval set to 12h) and import the data again.
But how can I tell LTSS to create the database with a 12h interval?
Update:
For my local install, I have adapted __init__.py.
Line 311:
from "chunk_time_interval => interval '1 month',"
to "chunk_time_interval => interval '12 hours',".
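For context, that fragment is presumably part of the create_hypertable call that ltss issues when it sets up the table; after the edit, the statement would come out roughly like this (paraphrased from the quoted fragments, not the verbatim source):

SELECT create_hypertable(
  'ltss', 'time',
  if_not_exists => TRUE,
  chunk_time_interval => interval '12 hours'
);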
I first disabled ltss in configuration.yaml, then restarted HA, then deleted and recreated the database, then re-enabled ltss (after the code change), then restarted HA.
That worked great (as did the prior backup via CSV and the import afterwards). But maybe it's an idea to make the interval a configurable value? I think 30 days might be too much for many.
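In case it helps others: the CSV round trip can be done with psql's \copy (a sketch; the file path is made up, and the empty table has to be recreated by ltss in between):

\copy ltss TO '/tmp/ltss_backup.csv' WITH (FORMAT csv, HEADER)
-- ...drop and recreate the database, let ltss recreate the hypertable...
\copy ltss FROM '/tmp/ltss_backup.csv' WITH (FORMAT csv, HEADER)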
I must say I am a bit surprised that you manage to generate some 150-300MB of sensor data daily; that must amount to a fair number of sensors in your HA setup? In comparison, my setup generates ~300MB per month, hence I have had no problem with the default 1-month chunk interval.
Maybe a stupid question, but I see that the ltss component creates a database table called 'ltss' in the database you specify. Does this mean you can use the same database as the recorder database?
For example homeassistant with this TimeScaleDB add-on config?
Short answer: yes. Longer answer: yes, provided that the database has the extensions required by LTSS installed (timescaledb and postgis). I am assuming you are talking about this add-on (Home Assistant Add-on: PostgreSQL + TimescaleDB), in which case both the timescaledb and postgis extensions are provided in the package and hence should work out of the box.
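If you point LTSS at a database where the extensions are not installed yet, they can be added manually with standard Postgres commands (assuming the extension packages are present on the server):

CREATE EXTENSION IF NOT EXISTS timescaledb;
CREATE EXTENSION IF NOT EXISTS postgis;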
ltss
(map)(Required)
Enables the ltss integration. Only allowed once.
db_url
(string)(Required)
The URL that points to your database.
exclude
(map)(Optional)
Configure which integrations should be excluded from recordings.
domains
(list)(Optional)
The list of domains to be excluded from recordings.
entities
(list)(Optional)
The list of entity ids to be excluded from recordings.
include
(map)(Optional)
Configure which integrations should be included in recordings. If set, all other entities will not be recorded.
domains
(list)(Optional)
The list of domains to be included in the recordings.
entities
(list)(Optional)
The list of entity ids to be included in the recordings.
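A minimal configuration.yaml example combining these options (the entity ids and connection string are made up):

ltss:
  db_url: postgresql://user:password@localhost/homeassistant
  include:
    domains:
      - sensor
  exclude:
    entities:
      - sensor.noisy_update_counter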
OK, it is working now.
I now have a table called ltss, and with pgAdmin4 I can view the data.
To display the data in Grafana, I made a successful connection.
To display my temperature/humidity sensor, I made the following query (with help from Grafana):
SELECT
$__timeGroupAlias("time",$__interval),
state AS metric,
max(id) AS "id"
FROM ltss
WHERE
$__timeFilter("time") AND
entity_id = 'sensor.thgn122_123_thgn132_thgr122_228_238_268_58_01_temperature'
GROUP BY 1,2
ORDER BY 1,2
But I do not get the result I want: I see the values, but plotted against the id counter instead of against time.
You don't need any "id" in your Grafana query, but you do have to cast your state values to a numeric data type!
By default they are stored as varchar for compatibility reasons.
So your query should look more like this:
SELECT
$__timeGroupAlias("time",$__interval,previous)
, 'YourTemperatureSensor'
, MAX(state::FLOAT) AS value
FROM ltss
WHERE
$__timeFilter("time")
AND entity_id = 'sensor.thgn122_123_thgn132_thgr122_228_238_268_58_01_temperature'
GROUP BY 1,2
ORDER BY 1,2
I followed your recommendations for designing a query and could make graphs in Grafana. Thank you for your advice!
However, I am now confronted with the problem that no graph is made when there are unavailable data points. My problem can be made clear by the following example:
In the following query, made in pgAdmin, datapoint nr. 1513 is 'unavailable':
When I make a graph of the last 4 days in Grafana (Query options, Relative time: 4d), the following message is shown: pq: invalid input syntax for type double precision: "unavailable". When I choose the last 3 days, the graph is made successfully.
How can I correct this?
Sure
If we take the last query as an example, just modify it like this:
SELECT
$__timeGroupAlias("time",$__interval,previous)
, 'YourTemperatureSensor'
, MAX(state::FLOAT) AS value
FROM ltss
WHERE
$__timeFilter("time")
AND entity_id = 'sensor.thgn122_123_thgn132_thgr122_228_238_268_58_01_temperature'
AND state NOT IN ('', 'unavailable')
GROUP BY 1,2
ORDER BY 1,2
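Depending on your integrations, sensors can also report 'unknown', so it may be worth widening that filter in the same way:

AND state NOT IN ('', 'unavailable', 'unknown')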