Yh for some reason when I initially started my reply then went to reply to yours but realised I was still replying to someone else it changed the target
I don’t think I can edit it. Let me see, who was it meant for?
Yeah sorry, I can’t change the reply. You can probably just delete it and copy/paste with the correct reply
I’d been using MariaDB without a problem, but then started getting warnings from Home Assistant about my MariaDB version (I’m running on Synology). I explored other options and decided to give InfluxDB a go; initially I thought it was highly redundant, but eventually I started to see the benefits.
The first benefit is that the DB and its entries won’t change, and it doesn’t hold unwanted “junk” like events and so forth, which keeps the DB size optimised. You’re also less likely to run the risk of corruption and won’t face issues during migrations on updates, etc.
Secondly, you keep the full history of your sensors and can manage what to keep and what to remove after a certain period.
From there, I’ve set MariaDB to purge every 30 days to keep it optimised.
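For anyone wanting to try the same split, a minimal sketch of what this might look like in configuration.yaml — note the host, credentials, and included domains below are placeholders you’d swap for your own setup:

```yaml
# Keep the recorder (MariaDB) lean: purge state history after 30 days
recorder:
  purge_keep_days: 30

# Mirror sensor data to InfluxDB for permanent long-term history
influxdb:
  host: localhost            # placeholder: your InfluxDB host or add-on hostname
  port: 8086
  database: homeassistant
  username: homeassistant
  password: !secret influxdb_password
  include:
    domains:
      - sensor               # only send what you actually want to keep long-term
```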
The real issue is that the data isn’t directly accessible from Home Assistant’s cards, history components, etc., but I’ve overcome this by using iframes and Grafana.
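As a sketch, embedding a Grafana dashboard on a Lovelace view can be done with the built-in iframe card; the URL here is a placeholder for your own Grafana instance and dashboard:

```yaml
type: iframe
url: http://192.168.1.10:3000/d/your-dashboard-uid/energy?kiosk  # placeholder URL
aspect_ratio: 75%
```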
I’ve just updated to 2023.4.1 and noticed that some of my Grafana charts (fed by InfluxDB) are missing data. In the logs I find (excerpt):
Logger: homeassistant.components.http.security_filter
Source: components/http/security_filter.py:66
Integration: HTTP (documentation, issues)
First occurred: 06:42:11 (18 occurrences)
Last logged: 06:44:14
Filtered a request with unsafe byte query string: /api/hassio_ingress/hg-sVagLsxo9FvO0f-jNSvkCUnzMDATA-ociaxnOUv8/api/datasources/proxy/uid/RV5IoOZgk/query?db=homeassistant&q=SELECT%20spread(%22value%22)%20FROM%20%22state%22%20WHERE%20(%22entity_id%22%20%3D%20%27boiler_burner_starts%27)%20AND%20time%20%3E%3D%20now()%20-%2010d%20and%20time%20%3C%3D%20now()%20GROUP%20BY%20time(1d)%20fill(none)%0A%0A&epoch=ms
Apparently not all charts are affected. Anyone have any idea? It worked flawlessly in the past.
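For reference, URL-decoding the filtered request shows the InfluxQL query itself is ordinary; note, though, that it ends with two encoded newlines (the trailing %0A%0A in the log), which is the kind of byte the security filter rejects:

```sql
SELECT spread("value") FROM "state"
WHERE ("entity_id" = 'boiler_burner_starts')
  AND time >= now() - 10d and time <= now()
GROUP BY time(1d) fill(none)
```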
I can reply to myself here. For some reason or other, this helped: go to each Grafana chart and, for each query, turn visual editor mode on and save the query. Rinse and repeat.
Pretty painless update for me. Thanks to all custom card developers that have updated their cards. And some help here on the forum to update my SQL sensor.
DB migration took very little time (1.6GB repacked to 1.1GB).
This is new and appears after a start up:
Logger: homeassistant.components.utility_meter.sensor
Source: components/utility_meter/sensor.py:438
Integration: Utility Meter (documentation, issues)
First occurred: 15:06:46 (8 occurrences)
Last logged: 15:06:46
Invalid state (unknown > 5910.58)
Invalid state (unknown > 234.0)
Invalid state (unknown > 64.21)
Invalid state (unknown > 132.73)
Invalid state (unknown > 125.44)
Not sure of the usefulness of logging this; that transition is actually fine. The bad transition is from 0 -> some total, and that sort of transition causes endless issues for energy dashboard users.
You’ve hit the nail on the head with your response:
Your mum needs everything to be in the UI. She’s not going to be delving into YAML or installing custom components manually or through HACS, she’s not going to be switching the default DB to MariaDB or porting her data off to influxdb etc.
HA gets closer each release to a mum-ready product, but until it’s ready (not anytime soon) us hobbyists will just need to live with breaking changes.
Although there are things you can do to relieve the pain, like waiting more than a day or two after the release to upgrade.
I’d argue that if you are a hobbyist who enjoys tinkering you should join the beta release testing to make the .0 releases even more reliable.
Hi!
I experienced some issues when doing the update from 2023.3.5 to 2023.4.1. I’m running HAOS straight on an old PC with the MariaDB add-on as the DB.
I saw the notification “DB migration in progress”, but very soon it changed to “DB migration failed”, and I noticed that the filesystem had turned read-only. I couldn’t even start phpMyAdmin to have a look. I tried several reboots of the entire machine (power off) and the migration continued to fail. So I tried to restore from my backup, but that also failed, probably because of the read-only state. I spent about two hours trying to get the backup to restore, with a lot of restarts, and then SUDDENLY, everything was fine.
Now the instance works great, and is running on the new release. But how can I be sure that the DB migration was actually successful? I have checked:
- All data appears to be there in the energy dashboard
- History and log panels work great
Any ideas on what else I could check to make sure the migration was ok?
I can, however, see that the DB size still appears to be increasing; my daily full backup hasn’t gotten any smaller.
- My oldest one from 2023-03-19 is 850MB.
- The one taken just before the update, on 2023-04-08, is 1.1 GB.
- The one taken today after the update is still 1.1GB.
Any idea on why the size still keeps increasing?
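One way to dig into this — assuming the add-on’s database is named homeassistant (adjust the schema name to match yours) — is to check per-table sizes via phpMyAdmin or the MySQL CLI; the states and events tables are usually the culprits:

```sql
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'homeassistant'
ORDER BY size_mb DESC;
```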
I’d agree, but it’s easier to come onto the forums/subreddit and lambast the developers.
That should be fixed in 2023.4.2; the utility meter now waits until HA has started.
Here we go again. Third release this month.
What’s wrong with that?
Things are getting fixed.
If you want fewer things to need fixing in the .0 release, join the beta program and help test it.
There are a very limited number of testers and a vast array of possible configurations.
One question regarding has_value: it works well in templates, but not in automations?
I tried to replace this:
condition:
- "{{ trigger.from_state.state not in ['unknown', 'unavailable'] }}"
with this:
condition:
- "{{ has_value(trigger.from_state.state) }}"
but it always returns false.
Don’t really think I lambasted the developers. Most of what you said doesn’t even seem to refer to my post… I was talking about keeping up with breaking changes, not YAML or HACS or MariaDB
Because you need to pass in the entity_id, not its value.
EDIT: since you’re trying to access the old state, note that has_value only checks against the current state. An alternative (if you’re going to reuse this) is to make your own Jinja macro.
Create the Jinja file in the custom_templates path, e.g. tools.jinja, and do something like:
{%- macro has_state_value(state) -%}
{{ state not in ['unknown', 'unavailable'] }}
{%- endmacro -%}
Then you can do
condition:
- "{% from 'tools.jinja' import has_state_value %}{{ has_state_value(trigger.from_state.state) }}"
Although I must admit having to import the macro/function does make it more cumbersome
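For completeness, when checking the current state is all you need, has_value can be used directly with an entity_id; a sketch, assuming a state trigger (so trigger.entity_id is available):

```yaml
condition:
  - "{{ has_value(trigger.entity_id) }}"
```

Just remember this checks the entity’s state now, not the state it transitioned from.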
Being part of the beta makes sure “your” configuration has been tested, and that you understand what direction the developers are going with the monthly release and what changes you need to make on “your” system to ensure it works the way you want.
I find the interaction between them and the beta testers very enjoyable. As to a .0 release having all the fixes found during beta, I find this depends on what broke, how bad it is and how many testers have the issue.
People forget how complex HA is. With the built-in integrations, Add-ons, HACS, HAOS, Debian, etc, it is close to impossible to test all the combinations.
I find it amazing how fast developers are able to respond to the community and keep HA moving forward.