Hello,
Does anybody have a generic downsampling job for ltss?
I see a simple one exists for InfluxDB (InfluxDB 2 data retention - my approach); can we have the same approach for TimescaleDB?
Thanks
Hey, my TimescaleDB addon for HA works for armv7, aarch64, and amd64.
You could try that one, or start it as a regular Docker image outside of HA.
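(A hedged sketch of the Docker route; the image name below is a placeholder, so substitute the addon's actual published image for your architecture, and adjust ports and volumes to taste:)

docker run -d --name timescaledb \
  -p 5432:5432 \
  -v /srv/timescaledb:/var/lib/postgresql/data \
  <addon-image>:latest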
Sure!
This creates materialized downsampled views.
The underlying data can then be auto-removed from the ltss hypertable when it's older than, say, 6 months:
SELECT add_retention_policy('ltss', INTERVAL '6 months');
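For the downsampled views themselves, a minimal sketch of a continuous aggregate could look like this (assuming the standard ltss columns "time", entity_id, and state_numeric; the view name, the 1-hour bucket, and the policy intervals are just examples):

-- Hourly averages per entity, materialized as a continuous aggregate:
CREATE MATERIALIZED VIEW ltss_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', "time") AS bucket,
       entity_id,
       avg(state_numeric) AS avg_state
FROM ltss
GROUP BY bucket, entity_id;

-- Refresh on a schedule so the view stays populated even after
-- the retention policy has dropped the raw rows:
SELECT add_continuous_aggregate_policy('ltss_hourly',
  start_offset => INTERVAL '1 day',
  end_offset => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');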
Hi, I've been using the container dekiesel/timescaledb-postgis:1.7.4-pg12 for approximately two years now.
I'd like to upgrade to TimescaleDB 2.8 + PostgreSQL 14, using this guideline:
https://docs.timescale.com/timescaledb/latest/how-to-guides/upgrades/upgrade-pg/
This guideline tells me to first upgrade to TimescaleDB 2.8, and after that to PostgreSQL 14.
However, I get stuck on the very first step, upgrading to TimescaleDB 2.8:
ALTER EXTENSION timescaledb UPDATE;
It tells me that I cannot update, since the original Docker container uses timescaledb 2.1.0-dev, and upgrading from a dev version is not supported.
Is there any migration path I can follow?
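(One possible escape hatch, offered as an untested sketch rather than a supported path: dump the database and restore it into a fresh PostgreSQL 14 + TimescaleDB 2.8 container, using TimescaleDB's pre/post-restore helpers. Hostnames and the database name are placeholders:)

pg_dump -h old-host -U postgres -Fc -f homeassistant.dump homeassistant
psql -h new-host -U postgres -c "CREATE DATABASE homeassistant;"
psql -h new-host -U postgres -d homeassistant -c "CREATE EXTENSION timescaledb;"
psql -h new-host -U postgres -d homeassistant -c "SELECT timescaledb_pre_restore();"
pg_restore -h new-host -U postgres -d homeassistant homeassistant.dump
psql -h new-host -U postgres -d homeassistant -c "SELECT timescaledb_post_restore();"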
@freol I'm looking into compression and noticed the same thing as @joostvh: with the primary key on the ltss table being (id, time), compression can only be segmented by id, which doesn't add anything in terms of storage savings. When I alter the primary key to (entity_id, time) and segment by entity_id, the savings are significant.
What's your idea on the current primary key? And can it be altered without consequences?
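(For reference, a sketch of the segmenting described above, assuming the primary key has already been changed to (entity_id, time); the 30-day compression window is only an example:)

ALTER TABLE ltss SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'entity_id',
  timescaledb.compress_orderby = 'time DESC'
);
SELECT add_compression_policy('ltss', INTERVAL '30 days');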
Hi, I would like to run two instances of ltss, each pointing to a different TimescaleDB instance. Reason: for some sensors I would like a different retention period.
Is this possible?
Sorry, missed your message until now. I have since managed to acquire a couple of Pi 4s, so I think I'll migrate the install to HassOS in the near future, hopefully without giving up my data in the process. I'll probably end up using your addon then.
You surely don't need to give up your data! Simply take a backup of the addon and restore it on your new Pi.
Coming back to this way later than I had meant to, but looking at your addon, it does not actually include Toolkit, does it?
I can run TimescaleDB perfectly fine in my current setup, the component that was missing was TimescaleDB Toolkit, which provides the functions I seemingly need, like time weighted average.
The officially suggested Timescale-HA container image that includes Toolkit does not have ARM support.
Trying to enable this extension in your addon results in an error, suggesting toolkit is not available:
ERROR: could not open extension control file "/usr/share/postgresql14/extension/timescaledb_toolkit.control": No such file or directory SQL state: 58P01
That is correct.
TimescaleDB toolkit indeed provides some extra statistical functions that users would like to see.
I am in the process of integrating TimescaleDB toolkit into the addon, and the master-branch already contains the basics for this.
It is however quite an undertaking, because (as you found out) it needs to compile on multiple platforms, and I pack in a lot of extras. (I do accept pull requests!)
Work on this has been stalled for a little while because I was abroad for my work, but I expect to finish it soon.
What is your specific use case for the time-weighted average (just curious)?
Hmm, alright, I'll keep an eye on that then. In the meantime I guess I'll see if I can get it to build myself; the instructions on the Toolkit page seem deceptively straightforward, though of course they don't mention AArch64…
As for my use case, really I'm just trying to visualize my sensor data accurately, and I'm at a loss how else to do it.
I can visualize my raw data correctly, but this is way too much data for both client and server, so I'm trying to use continuous aggregates to create lighter time averages, like 5-minute averages.
But because the data from my sensors is only recorded on state change, and states often don't change for hours, something goes wrong with the averages and the returned data is incorrect.
The screenshot in my original post explains it better than I can in text.
Basically though, it seems to boil down to Timescale extending the value of the last 5-minute average with a time_bucket_gapfill when the state stops changing, rather than extending the last datapoint, which results in wildly incorrect data.
In theory, using time-weighted averages should fix this, and if they don't, I don't know what will.
So tl;dr: without time-weighted averages, my data in TimescaleDB is useless, because it's either incorrect (averaged) or too heavy to use at the scale I want (raw).
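(For what it's worth, once the Toolkit is available, the time-weighted version of the query might look roughly like this, using LOCF weighting so that flat stretches between state changes are carried forward; a sketch based on the Toolkit docs, not a tested query:)

SELECT time_bucket('5 minutes', "time") AS bucket,
       average(time_weight('LOCF', "time", state_numeric)) AS "Phase 2 twa"
FROM ltss
WHERE entity_id = 'sensor.power_consumed_phase_2'
GROUP BY bucket
ORDER BY bucket;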
Ah! So!
As far as I know, gapfilling in Timescale works by plugging in the function you want to gapfill with.
See this doc: Timescale Documentation | Time bucket gapfill
Basically it states: "The time bucket gapfill function creates a contiguous set of time buckets but does not fill the rows with data. You can create data for the new rows using another function, such as last observation carried forward (LOCF), or interpolation."
So, if you just want to extend the last datapoint, can you not use the "Last Observation Carried Forward" method?
Unfortunately, no dice. I tried recreating the same graph as in my original post over the same timeframe, this time using the last() function with LOCF:
-- $__timeFilter is a Grafana macro that expands to a time-range predicate
SELECT
  time_bucket_gapfill('5m', "time") AS "time",
  locf(last(state_numeric, time)) AS "Phase 2 last"
FROM ltss
WHERE
  $__timeFilter("time")
  AND entity_id = 'sensor.power_consumed_phase_2'
GROUP BY time_bucket_gapfill('5m', "time")
ORDER BY 1
The result is a lot of data getting lost entirely:
I've been breaking my head over this one for a while, but indeed you are totally right.
The uneven sample rate skews the simple averages.
You definitely need the time-weighted averages for this one to be accurate.
Failing to come up with a better query on my part, I resumed work on updating the addon to support the timescaledb-toolkit.
Thanks to your analysis, I think I need this one for my own graphs too.
I'll do my best in the scarce free time I have!
Hello guys,
After upgrading to core 2023.3.1, I'm having this error with LTSS:
2023-03-04 11:21:43.885 ERROR (MainThread) [homeassistant.setup] Setup failed for custom integration ltss: Requirements for ltss not found: ['sqlalchemy>=1.0,<2.0'].
In the log, I also have this:
Logger: homeassistant.util.package
Source: util/package.py:107
First occurred: 21:33:51 (9 occurrences)
Last logged: 21:36:12
Unable to install package sqlalchemy>=1.0,<2.0: ERROR: Cannot install sqlalchemy<2.0 and >=1.0 because these package versions have conflicting dependencies. ERROR: ResolutionImpossible: for help visit Dependency Resolution - pip documentation v23.1.dev0 [notice] A new release of pip is available: 23.0 -> 23.0.1 [notice] To update, run: pip install --upgrade pip
Is anybody else having the same issue?
Thank you!
Luis
[UPDATED]: Just found a way to solve this; here is what I did, in case someone else runs into something similar:
I have the same issue, but after uncommenting, I get the error back…
I struggled with this as well, but managed to solve it.
You may need to keep ltss disabled/commented out in your config until you're done with all this.
First, make sure your ltss is fully up to date; you need at least v2.0.1.
Then, you need to update ltss' Python dependencies.
Note that Home Assistant does not update a custom component's Python dependencies automatically: if you're not installing ltss through HACS and have no new HASS update to install, the old dependencies stay in place even if you update, remove, or reinstall ltss (or any custom component).
Assuming you're running HassIO/HassOS, and not Core, you can rename (or remove, if you're feeling brave) the config/deps/ folder and reboot. Home Assistant will then automatically pull in the new deps, and afterwards ltss should work again.
If you're running Core or otherwise managing your Python deps manually, you'll have to update them through whatever method you normally use, be it pip or something else.
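(As a concrete sketch of the HassOS route, assuming SSH access to the host; the backup folder name is arbitrary:)

mv /config/deps /config/deps.bak   # or remove the folder, if you're feeling brave
ha core restart                    # HA re-resolves and reinstalls the deps on startup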
I have a configuration question, as I struggle with correctly combining include and exclude statements.
In detail, I would like temperature entities for different rooms to be included,
e.g. sensor.air_quality_monitor_bedroom_temperature,
but the corresponding device temperatures to be excluded,
e.g. sensor.air_quality_monitor_bedroom_device_temperature.
So far I have the following statements in my configuration:
include:
  entity_globs:
    - sensor.air_quality_*_temperature
exclude:
  entity_globs:
    - sensor.*_device_temperature
However, that does not seem to do the trick, as both example entities end up in my database.
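(A hedged guess at what is happening, assuming ltss uses Home Assistant's standard entity filter: when both lists are present, the include entity_globs may be evaluated before the exclude entity_globs, so the device sensors match the include glob first and the exclude never fires. Listing the offending sensors explicitly under exclude would sidestep the glob precedence:)

exclude:
  entities:
    - sensor.air_quality_monitor_bedroom_device_temperature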
Is there any news regarding the Toolkit integration?
The readme in the repo already lists Toolkit as installed (https://github.com/Expaso/hassos-addon-timescaledb/blob/b29fcc6ec5d8457b5880db50eb5126290e90388f/README.md?plain=1#L2), but none of the Toolkit functions seem to work.
Issuing CREATE EXTENSION timescaledb_toolkit also does not work:
ERROR: extension "timescaledb_toolkit" is not available
Detail: Could not open extension control file "/usr/share/postgresql15/extension/timescaledb_toolkit.control": No such file or directory.
Do you need any help here, @Expaso, e.g. with this: Home Assistant Add-on: PostgreSQL + TimescaleDB - #164 by Expaso?
Also, I just have to say that I really love having TimescaleDB running in HA. I tried InfluxDB before and was not really happy with it.
With currently 12,710,640 entries in the ltss table, it is long overdue to say thank you for your great work so far!
Hi @tldp!
Thank you so much!
If I could get some help, that would be nice! My family and work take so much of my time that there is not much left…
The GitHub readme lists the Toolkit because the code to support it is there.
The only problem I have is that the build simply crashes with a SIGTERM and a coredump once I try to compile the Toolkit…
I have tried precompiled binaries (even borrowed from the official Timescale container), but they rely on a different C library (musl vs. glibc) and simply crash.
So… I'm a little bit stuck here. I have spent many hours trying to make it work, but it simply won't compile for an Alpine Docker container.
Anybody with Docker, Alpine, and C-compiling experience is more than welcome to chip in…