Solved. Details can be found here:
Every time I reboot HA, I get this
main, [03.05.2025 Saturday 15:41]
An HA task (possibly unrelated to Alert2) died to due to an unhandled exception: <class 'asyncio.exceptions.InvalidStateError'>: invalid state. full context: {'message': 'Fatal error: protocol.data_received() call failed.', 'exception': InvalidStateError('invalid state'), 'transport': <_SelectorSocketTransport fd=44 read=polling write=<idle, bufsize=0>>, 'protocol': <pymodbus.transaction.transaction.TransactionManager object at 0x7f2d7bc1e0d0>}

main, [03.05.2025 Saturday 15:41]
undeclared event hass-event for domain=alert2 name=global_exception. Creating event alert

main, [03.05.2025 Saturday 15:41]
An HA task (possibly unrelated to Alert2) died to due to an unhandled exception: <class 'asyncio.exceptions.InvalidStateError'>: invalid state. full context: {'message': 'Fatal error: protocol.data_received() call failed.', 'exception': InvalidStateError('invalid state'), 'transport': <_SelectorSocketTransport fd=52 read=polling write=<idle, bufsize=0>>, 'protocol': <pymodbus.transaction.transaction.TransactionManager object at 0x7f2d7b384b00>}

main, [03.05.2025 Saturday 15:41]
undeclared event hass-event for domain=alert2 name=error. Creating event alert

main, [03.05.2025 Saturday 15:41]
An HA task (possibly unrelated to Alert2) died to due to an unhandled exception: <class 'asyncio.exceptions.InvalidStateError'>: invalid state. full context: {'message': 'Fatal error: protocol.data_received() call failed.', 'exception': InvalidStateError('invalid state'), 'transport': <_SelectorSocketTransport fd=49 read=polling write=<idle, bufsize=0>>, 'protocol': <pymodbus.transaction.transaction.TransactionManager object at 0x7f2d7b3848a0>}
how can I fix this?
Hi, Alert2 installs a global exception handler to alert you when other components in HA crash due to an unhandled exception. So in your case, it looks like an HA component using pymodbus is having a problem. A few options:
- If you know what component that is, try updating it.
- You could set the Alert2 global_exception alert to not notify if you don't want notifications of this sort of thing (set notifier to null).
- You could set Alert2 skip_internal_errors to true to ignore all internal errors. (Sketch of both options below.)
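A hedged sketch of both options; exactly where each goes in the alert2: config is partly an assumption (the placement of the global_exception override under tracked: in particular), so check the Alert2 docs:

alert2:
  skip_internal_errors: true        # ignore all Alert2 internal errors
  tracked:
    - domain: alert2
      name: global_exception
      notifier: null                # assumption: turn off notifications for this internal alert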
Separately, there also looks to be a bug in Alert2 where the global_exception alert is itself undeclared. I'll look into it. Are you reloading Alert2 via the UI?
I may change Alert2 to log a stack trace in these cases to help diagnose.
-Josh
Ok, so it's my problem with the modbus component and it's not Alert2's fault?
You mean there's a way to reboot from here? I haven't figured out how to do that yet.
And another question: is there any functionality to collect all the alarms into one message, and then, if one alarm is reset, to send a message about it? Or, if several alarms are cleared, for that also to come as one message?
The unhandled exception error is not Alert2’s fault. The error about undeclared alerts is Alert2’s fault.
Re reboot: I was referring to "Developer Tools" → "YAML configuration reloading" → "Alert2". Clicking on that link will cause Alert2 to reload, but it does not reboot HA.
I think there’s a way to reboot HA from the UI, but I’m not sure where it is. How are you rebooting HA?
-J
When I try "Developer Tools" → "YAML configuration reloading" → "Alert2", nothing happens.
I get the messages only when I restart all of HA from Developer Tools, not just when reloading the YAML configuration.
Is it possible to get the alarm start date? If the alarm is long, I would like to see in the repeated messages how much time has passed while the alarm has been active.
And as I understand it, Alert2 has no way to change the repeat message, to write different text like "the alarm is still going on"? The regular alert integration has repeat_message.
Hi - I’m working on a change to enable you to customize the repeat message, but at present you are correct - you can’t customize it.
The alert start date is available in the attribute last_on_time.
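A quick way to read it in a template (the entity id here is hypothetical; substitute your own alert2 entity):

{{ state_attr('alert2.binary_sensor_satel_zal_okno1', 'last_on_time') }}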
Oh and thanks for the info on how you’re restarting. I’ll look into that.
-J
And did I understand right that the card type: custom:alert2-overview cannot be split up and will display all possible alerts that are in the YAML config? I have a lot of these alerts and I would like to display them by room.
To shorten the code, I came up with this scheme: Alert2 only serves to generate alarm events, and sending notifications is done by an automation. It waits for the alert2_alert_on and alert2_alert_off events and then, based on which object triggered them, sets:
domain: "{{ trigger.event.data.domain }}"
name: "{{ trigger.event.data.name }}"
eid: "{{ domain }}.{{ name }}"
From these I generate a message to Telegram where I write the name and the room where the alarm occurred, and in the header I write whether it is the start of the alarm or the end (roughly as sketched below):
name: "{{ state_attr(eid, 'friendly_name') }}"
area: "{{ area_name(eid) }}"
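A hedged sketch of that automation, assuming notify.telegram as the notifier service name (everything else mirrors the templates above):

automation:
  - alias: "Alert2 events to Telegram"
    trigger:
      - platform: event
        event_type: alert2_alert_on
      - platform: event
        event_type: alert2_alert_off
    variables:
      # evaluated when the automation triggers, so trigger data is available
      domain: "{{ trigger.event.data.domain }}"
      name: "{{ trigger.event.data.name }}"
      eid: "{{ domain }}.{{ name }}"
    action:
      - service: notify.telegram        # assumption: your Telegram notify service
        data:
          title: >-
            {{ 'Alarm started' if trigger.event.event_type == 'alert2_alert_on'
               else 'Alarm cleared' }}
          message: "{{ state_attr(eid, 'friendly_name') }} ({{ area_name(eid) }})"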
As a result of this scheme the code is greatly reduced; in Alert2 there is no need to write messages, since for the same type of equipment they are all the same. The only problem now is that I don't know which event is responsible for repeated notifications. Is there such an event at all?
And I haven't figured out whether it's possible to call a script in the notifier? That would simplify things further and the automation would not be necessary: you could run a script that would generate the messages, although it might not work because I probably don't have access to trigger.event.data.name (I'm not very good at HA yet, I don't understand a lot of things).
Hi All,
I’m happy to release v1.11.2 of Alert2 and the UI.
Changes
- Add reminder_message config field to override the default reminder notification text (takes a template).
- Add supersede_debounce_secs config field. An alert's notifications will be suppressed if a superseding alert fires within this many seconds. Defaults to 0.5 seconds. The purpose is to avoid extra notifications when an alert and a superseding alert both turn on or off at almost the same time. Does not affect alerts that do not use supersedes.
- Fix bug where, during HA shutdown, if a task has an unhandled exception, extra errors were generated because the Alert2 internal alerts had been unloaded. Now, once HA starts shutdown, Alert2 will log errors it encounters, but it will not try to send notifications.
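A hedged illustration of the two new fields (the sensor and alert names are made up, and placing supersede_debounce_secs on the superseded alert is my reading of the note above, so check the docs):

alert2:
  alerts:
    - domain: ups
      name: battery_warning                  # hypothetical lower-priority alert
      condition: "{{ states('sensor.ups_battery') | float(100) < 40 }}"
      reminder_message: "UPS battery still below 40%, now at {{ states('sensor.ups_battery') }}%"
      supersede_debounce_secs: 2             # wait up to 2 s for a superseding alert before notifying
    - domain: ups
      name: battery_low                      # supersedes the warning above
      condition: "{{ states('sensor.ups_battery') | float(100) < 20 }}"
      supersedes:
        - domain: ups
          name: battery_warning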
@mill7: The Alert2 UI Overview card has some config parameters that let you give the card a title and filter by entity id (filter_entity_id). That should allow you to create different cards for different rooms.
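A hedged sketch of a per-room card (whether filter_entity_id takes a regex, a glob, or a list is an assumption on my part; check the card docs):

type: custom:alert2-overview
title: Windows - living room
filter_entity_id: "alert2.binary_sensor_satel_zal_.*"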
Also, check out the new reminder_message field and see if that works for you.
I’d recommend sending alert notifications from within Alert2 rather than through an automation. There is no event for reminders, snooze ending or throttling. If you want, you can trigger a script as a notifier. Check out NotiScript.
-Josh
When I have this config:

input_boolean:
  manual_sensor:
    name: "Ручной датчик"
    initial: true

alert2:
  defaults:
    annotate_messages: false
  alerts:
    - domain: binary_sensor
      annotate_messages: false
      name: satel_zal_okno1
      condition: "{{ is_state('input_boolean.manual_sensor', 'on') }}"
      title: "{{ state_attr('input_boolean.manual_sensor', 'friendly_name') }}"
      message: message
      reminder_frequency_mins: 1
      reminder_message: reminder_message
      done_message: done_message
      notifier: telegram
and when I get the reminder_message it looks like:
Alert2 binary_sensor_satel_zal_okno1: reminder_message
Why? I thought annotate_messages: false was supposed to disable the "Alert2 binary_sensor_satel_zal_okno1:" prefix.
Also I have a question about last_on_time: does it work in general for a single alert2 configuration? And if I have several sensors in the same configuration, loaded according to a template, do I have no way to know the alarm start time for a particular sensor?
alert2:
  alerts:
    - generator_name: satel_window_alerts
      generator: >-
        {{ (
          states.binary_sensor
          | selectattr('entity_id', 'match', 'binary_sensor\.satel_.*okno.*')
          | map(attribute='entity_id')
          | list
        ) }}
Your spec for reminder_message requests the string literal "reminder_message", which is what you got (plus the alert name). Did you mean to use a template, like:
reminder_message: "still on: {{ states(....) }}"
annotate_messages does not apply to reminder_message, though I could certainly make it apply.
Regarding which alert is firing from a generator, typically people use genElem or genRaw or something similar in the alert spec.
-Josh
That would be great; that's what's missing for perfection, then all the notifications would look just right.
And if I understand correctly, Alert2 has reminder_message:, done_message:, and just message:, but message: refers only to the start message, doesn't it? Wouldn't it be better to have a name like start_message:?
Hi All,
I just released v1.11.3
Changes
- reminder_message now obeys annotate_messages.
- reminder_message templates now have access to two variables: on_secs (float) is the number of seconds the alert has been on; on_time_str is the length of time the alert has been on, as a string. The default reminder_message is: on for {{ on_time_str }}
- Added boolean attribute is_acked to alert entities to simplify determining if an alert has been acked or not.
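A hedged example of the new reminder variables (the message text is illustrative; the entity id reuses the one from the earlier config):

reminder_message: "Window still open, on for {{ on_time_str }} ({{ (on_secs / 60) | round(0) }} min)"

And is_acked in a dashboard or automation template:

{{ state_attr('alert2.binary_sensor_satel_zal_okno1', 'is_acked') }}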
-J
Great job, my notifications are almost perfect now. In some cases I replaced literally thousands of lines of automation code with fewer than 100 lines when I switched to Alert2, and it became clearer and more convenient.
What about collecting all the notifications that come with a reminder message and sending them in a single message? If, for example, I have two windows open, it would be convenient to receive reminder messages simply listing which windows are still open. I think it should be done within one generator, because there will usually be one title.
@mill7 - glad Alert2 is working well for you.
Ganging together notifications from related alerts is an interesting idea. It’d require some designing. Some design questions:
- Is it just for reminders, or would it be useful for other notification types?
- Is it mostly useful for alerts from a single generator, or would it be useful across other alerts (e.g., superseded alerts)?
- How do you specify the “gang” an alert belongs to, and where do you specify how the ganged notifications behave (e.g., reminder frequency)?
-J
Just a thought.
I think combined alerts should share one header so it is clear what they refer to. The common title is the main thing; otherwise it will not be clear.
And if you do it for the first message and the last one, then you need a delay during which it waits for other alarms. I think it would be useful if, for example, sensors are in the same room and usually trigger together, such as several temperatures being exceeded or all the windows in a room being opened.
For reminder_message no delay is needed: just collect all current alarms that have the same header and send them as one message, for example timed by the first triggered alarm. That is, if the first alarm has reminder_frequency_mins of 30 minutes, then when that expires a reminder goes out for all active alarms. Or, vice versa, timed by the last triggered alarm that requires a reminder.
And yes, there is also the option not to send the first message for alarms from one group with one header when there is already an active alarm from which the first message was already sent, and instead immediately send the reminder_message including the new alarms.
I have a task: send an alert when a target sensor (the battery charge of my UPS) reaches 40%, 30% and 20%.
I have prioritized them with supersedes.
alert2:
  defaults:
    notifier: notiscript
    reminder_frequency_mins: 1
    annotate_messages: false
  alerts:
    - domain: ups_battery_low
      name: threshold_20
      condition_on: "{{ states('sensor.protsent_zariada_ab_ibp') | float < 20 }}"
      condition_off: "{{ states('sensor.protsent_zariada_ab_ibp') | float >= 20 }}"
      title: "⚠️ Тревога! Заряд ИБП Штиль STR1101LD"
      message: "Заряд ИБП ниже 20%"
      done_message: "Заряд ИБП восстановлен выше 20%"
      reminder_message: >-
        {%- from 'time_delta.jinja' import format_timedelta -%}
        {%- set alert_entity = 'alert2.ups_battery_low_threshold_20' -%}
        Заряд ИБП ниже 20% {{ format_timedelta(state_attr(alert_entity, 'last_on_time')) }}
      supersedes:
        - domain: ups_battery_low
          name: threshold_30
        - domain: ups_battery_low
          name: threshold_40
    - domain: ups_battery_low
      name: threshold_30
      condition_on: "{{ states('sensor.protsent_zariada_ab_ibp') | float < 30 }}"
      condition_off: "{{ states('sensor.protsent_zariada_ab_ibp') | float >= 30 }}"
      title: "⚠️ Тревога! Заряд ИБП Штиль STR1101LD"
      message: "Заряд ИБП ниже 30%"
      done_message: "Заряд ИБП восстановлен выше 30%"
      reminder_message: >-
        {%- from 'time_delta.jinja' import format_timedelta -%}
        {%- set alert_entity = 'alert2.ups_battery_low_threshold_30' -%}
        Заряд ИБП ниже 30% {{ format_timedelta(state_attr(alert_entity, 'last_on_time')) }}
      supersedes:
        - domain: ups_battery_low
          name: threshold_40
    - domain: ups_battery_low
      name: threshold_40
      condition_on: "{{ states('sensor.protsent_zariada_ab_ibp') | float < 40 }}"
      condition_off: "{{ states('sensor.protsent_zariada_ab_ibp') | float >= 40 }}"
      title: >-
        {%- if is_state(genEntityId, 'on') -%}
        ⚠️ Тревога {{ genRaw }} % активна ({{ genEntityId }})
        {%- else -%}
        ✅ {{ genRaw }} % в норме
        {%- endif -%}
      message: "Заряд ИБП ниже 40%"
      done_message: "Заряд ИБП восстановлен выше 40%"
      reminder_message: >-
        {%- from 'time_delta.jinja' import format_timedelta -%}
        {%- set alert_entity = 'alert2.ups_battery_low_threshold_40' -%}
        Заряд ИБП ниже 40% {{ format_timedelta(state_attr(alert_entity, 'last_on_time')) }}
But the code is very cumbersome. Am I solving my problem correctly?
If it is correct, why not just use a sequential number for each threshold, where the lower the number, the higher the priority?
priority: 1, priority: 2, priority: 3
instead of
supersedes:
  - domain: ups_battery_low
    name: threshold_40
I also think it would be better to solve this problem in a loop, so that for each condition_on: or condition_off: the threshold values are taken from a file ({%- from 'thresholds.jinja' import thresholds -%}) and then compared against the current value. But since I cannot set a priority for each value, the conditions become complex, although the code would be a little shorter than my first option.
Maybe there is another solution that I don't know about?
Hi,
A few suggestions. First is to use a generator. Second is to use condition rather than condition_on and condition_off. And lastly, supersedes is transitive. So I might write the core part of what you're describing as:
alert2:
  ...
  alerts:
    - domain: ups_battery_low
      name: "threshold_{{ genElem }}"
      generator_name: g1
      generator: [ 40, 30, 20 ]
      supersedes: "{{ genPrevDomainName }}"
      condition: "{{ states('sensor.protsent_zariada_ab_ibp') | float < genElem }}"
The above will create three alerts, firing at lower battery charge levels, with supersedes. Generator Patterns has more info on how to use generators, including with supersedes.
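If you also want the per-threshold notification text from your earlier config, a hedged extension of the sketch above (it assumes genElem can be used in the message fields the same way it is used in name and condition, and that on_time_str from v1.11.3 is available in reminder_message):

      title: "⚠️ Тревога! Заряд ИБП Штиль STR1101LD"
      message: "Заряд ИБП ниже {{ genElem }}%"            # genElem is 40, 30 or 20 per generated alert
      done_message: "Заряд ИБП восстановлен выше {{ genElem }}%"
      reminder_message: "Заряд ИБП ниже {{ genElem }}%, on for {{ on_time_str }}"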
-J