So what if I have:
[homeassistant.components.recorder.util] converting 60995 rows to native objects took 32.895932s
Is there anything I need to clean up somewhere?
So what if I have:
I have extremely long conversions…
One row takes 90+ seconds, and the number of rows doesn't affect the speed much.
Any tips, anyone?
MariaDB installed as MySQL
Same problem here:
(SyncWorker_19) [homeassistant.components.recorder.util] converting 2766 rows to native objects took 3.744829s
(SyncWorker_19) [homeassistant.components.history] get_significant_states took 3.779498s
(SyncWorker_19) [homeassistant.components.recorder.util] converting 1 rows to native objects took 0.069478s
(SyncWorker_19) [homeassistant.components.history] getting 1 first datapoints took 0.170421s
(MainThread) [homeassistant.components.history] Extracted 2767 states in 4.091232s
MariaDB installed as MySQL
@OverloadUT, in your last post you indicate that the “converting N rows to native objects” line is all native Python execution. This line is really critical in my case. What can I do?
Is there any way to fix extremely slow/unresponsive History/Logger?
DB is around 100MB, free memory on system: 3GB.
My browser is totally KOed.
2018-11-13 20:11:06 DEBUG (SyncWorker_5) [homeassistant.components.recorder.util] converting 10626 rows to native objects took 1.263541s
2018-11-13 20:11:06 DEBUG (SyncWorker_5) [homeassistant.components.history] get_significant_states took 1.276190s
2018-11-13 20:11:06 DEBUG (SyncWorker_5) [homeassistant.components.recorder.util] converting 103 rows to native objects took 0.014387s
2018-11-13 20:11:06 DEBUG (SyncWorker_5) [homeassistant.components.history] getting 96 first datapoints took 0.022660s
2018-11-13 20:11:06 DEBUG (MainThread) [homeassistant.components.history] Extracted 10561 states in 1.338952s
From your log entries there, it looks like the backend is performing just fine, taking only a couple seconds to prepare all of the data for the frontend.
If it’s your browser that is choking on that view, which it most definitely does with tons of entities, then we’re talking about a totally different kind of issue that unfortunately I am very ill-equipped to help with.
My assumption is that we need someone with serious frontend chops to refactor the entire way those pages work to dynamically load the content that is necessary as you scroll. This is how webapps like Netdata can have gobs of graphs all on one page and remain extremely performant.
Exactly, as you described.
I recorded the problem on video (the server is idle, but my browser is dead):
Should I open a bug report for this?
I see the same thing.
So do I. History has been borked for me since 0.80 I think…
My History hasn’t worked for a while (a few months?) and I’ve been watching the community for possible reasons. I’ve tried excluding some domains and unneeded sensors. I’ve set my database to purge every day, keeping only a couple of days’ worth of data. Nothing seemed to help. Well, this morning I decided to just not bother using the History functionality, so I commented out my history: and recorder: entries from configuration.yaml. To my surprise, after I rebooted, History was still showing up on the sidebar. And, even more surprisingly, it worked! However, the data appeared to have been reset.
I’m running HASS.IO on an Intel NUC (obtained when an image of that was still available), on version 0.82.1, and have MariaDB add-on enabled. And I use Chrome browser on my desktop.
Does History load by default for HASS.IO? I don’t understand why it’s there, and working, when I removed it from my configuration. I do still have history_graph enabled, so perhaps that forces history and recorder to load?
Of course, I don’t know if it will keep working, or if it will perhaps have problems again as more data is collected. I guess I’ll just watch it over the next few days.
Perhaps someone can make sense of this scenario. And hopefully it helps someone else who is having trouble with History loading in their browser.
history_graph almost certainly declares history as a dependency, which itself declares recorder as one, which will cause them all to be loaded. And because you’re not configuring recorder yourself, it defaults to using the SQLite database and doesn’t touch MariaDB at all, which is why your data was reset.
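For anyone hitting the same surprise: you can keep relying on the implicit dependency loading and still point recorder at MariaDB by configuring it explicitly. A minimal sketch; the user, password, host, and database name below are placeholders you would replace with your own:

```yaml
# configuration.yaml - explicit recorder config pointing at MariaDB.
# Credentials and database name here are hypothetical examples.
recorder:
  db_url: mysql://hassuser:secret@localhost/homeassistant?charset=utf8mb4
```

With this in place, history and history_graph can still pull recorder in as a dependency, but it will connect to MariaDB instead of falling back to the default SQLite file.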
I see! Thank you for clarifying. I wasn’t aware that one component could call on another component as a dependent (makes sense, of course).
I’ll let it run as is to see if the problem returns when more data is accumulated.
Same issues here too. I’m running HA with MariaDB (both in Docker) on my RPi 3B+. Instead of breaking my browser, though, the history tab breaks my RPi. I need to unplug it because it becomes unresponsive, and I can’t even SSH into the device after opening the history tab. Don’t try to view three days of history!
Actually, the whole history tab feels like an unfinished, poorly designed product that should be disabled by default. It’s currently a completely unworkable page that tries to render all the states of all your sensors, switches, lights, and other entities for an entire day on one single page in one go. Why?
It’s fine and great that we can see history per entity/device, but this… And then still… In Domoticz (a competing? platform) I could easily record a complete year of history for devices, without a separate (proper) DB like MSSQL/MariaDB; that seems impossible for HA.
Don’t get me wrong, I recently moved from Domoticz to HA because it’s sooo much better. The community is great and support for devices is amazing. I really love it but regarding keeping history HA could still learn something from Domoticz. The lack of a proper history for at least some of my sensors like energy meters is my greatest concern for HA.
I tried setting up InfluxDB as well, but the average load on my Pi was between 4 and 5 (before I moved to MariaDB), and even with MariaDB it still exceeds 1 at times. So do I either need to move to stronger hardware, or set up InfluxDB and MariaDB on a separate Pi?
I’m late to the party but did you PR these query optimisations to HA on Github?
If you mean the original query performance fixes I made, then yes. It’s linked at the very top of the first post in this thread: https://github.com/home-assistant/home-assistant/pull/8748
I am very impressed, you are a db expert. Thanks for contributing!
So what if you’re running on the latest version and still have horrible performance?
I’m on a fairly powerful machine: an i5 with 8GB RAM, with HASS in Docker connected to MySQL on the same machine (not in Docker). Everything else on the machine is flying, but history graphs in HASS are terribly slow; I have to wait quite a few seconds (like 15-25) for them to load. The above indexes are already applied in recent versions, since the PR has been merged for a while now, right?
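If you want to verify that the indexes from that PR actually exist in your MySQL/MariaDB database, you can inspect the states table directly. A quick sketch, assuming the pre-overhaul schema where the states table has entity_id and last_updated columns, and using sensor.example as a placeholder entity:

```sql
-- List all indexes on the states table; the query-optimisation PR
-- added indexes covering entity_id and the timestamp columns.
SHOW INDEX FROM states;

-- Optionally, check how a typical one-day history query is executed
-- (it should report using an index rather than a full table scan):
EXPLAIN SELECT * FROM states
WHERE entity_id = 'sensor.example'
  AND last_updated > NOW() - INTERVAL 1 DAY;
```

If SHOW INDEX only lists the primary key, the migration that adds the indexes may not have run against your database.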
You’ll have to wait for someone that wants to take on overhauling the history page to dynamically load content as you scroll, or something similar.
I’ve fixed the database engine problems but that was just to fix the query bottleneck. Now the bottleneck is elsewhere and harder to fix unfortunately.
I made the same move and I have the same concern regarding history graphs. I was using Domoticz too, and I suspect it stores values at different granularities, a bit like a round-robin database.
Domoticz stores not only the raw values but also calculates the average value for each day as it goes, which makes it so fast to display graphs over a long period.
I had to dig into the Domoticz database to remove invalid values, and I had to recalculate the daily averages as well in order to fix the graphs.
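For what it’s worth, a daily-average rollup like the one Domoticz maintains can be approximated directly against Home Assistant’s database. A sketch, assuming the old-style states table with entity_id, state, and last_updated columns, and a hypothetical sensor.energy_meter entity:

```sql
-- Average numeric state per day for one entity.
-- Non-numeric states such as 'unknown'/'unavailable' are filtered out.
SELECT DATE(last_updated) AS day,
       AVG(CAST(state AS DECIMAL(12,3))) AS avg_value
FROM states
WHERE entity_id = 'sensor.energy_meter'
  AND state NOT IN ('unknown', 'unavailable', '')
GROUP BY DATE(last_updated)
ORDER BY day;
```

Materialising results like these into a small summary table is essentially what makes long-period graphs cheap in systems that pre-aggregate.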
I still think it’s pretty big, but it does sit around that value, I think. It’s just annoying with hassio backups; they consume a lot of space (well, it’s still tiny, but hey).
Mine was hovering around 4.5-5 GB for only 7 days’ worth of data. I’ve since trimmed pointless sensors out of the recorder and have halved that; I’ll likely trim a few more when I get a chance.
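If anyone else wants to trim their database the same way, recorder supports purge and exclude options in configuration.yaml. A minimal sketch; the domain and entity names below are just examples, not a recommendation:

```yaml
# configuration.yaml - keep only a week of data and skip noisy entities.
recorder:
  purge_keep_days: 7
  exclude:
    domains:
      - automation
      - updater
    entities:
      - sensor.example_noisy_sensor  # placeholder, use your own entity ids
```

Excluded entities are simply never written to the database, so this shrinks both the live DB and the snapshots.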