ESP8266 into existing DSC alarm system

This was a bug in the status handling. The status should now update immediately. I've pushed a fix to the "dev" branch.

Thank you for the quick feedback!

I updated the YAML configuration for the external component from main to dev.
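
For reference, the change was essentially this (the repository URL below is just a placeholder, not the actual upstream path):

```yaml
external_components:
  - source:
      type: git
      url: https://github.com/<user>/<dsc-keybus-component>  # placeholder URL
      ref: dev       # was "main"; "dev" picks up the fix
    refresh: 0s      # force ESPHome to re-fetch the branch on the next build
```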

There is a noticeable change in behavior: I now see the arming partition state instead of the pending state I had before. However, when I disarm during the exit delay, the partition state gets stuck in arming. Previously, there was a long timeout for pending, but it eventually reverted to disarmed.

Do I need to change/update something else as well?

No, this was incorrect logic handling again. I've rewritten the code to better handle partition status changes. Should be OK now. Updated on "dev".

You’re amazing, Alain! It works perfectly now!

Thank you for all your hard work on this project and for your continuous, speedy, and highly effective support! (I just bought you some well-deserved coffee.)

I have one question now that you've changed the name of the exit-delay status from "arming" to "exit_delay": when is "arming" used?

"Arming" will technically no longer be used: when arming, the system goes straight into the exit-delay status, with no in-between statuses sent. I feel showing the exit delay is more descriptive, as that's what is actually happening during that time.
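
(For reference, if you had an automation keyed on the old name, a state trigger along these lines, where the entity id is just an example, would now use "exit_delay" instead of "arming":)

```yaml
trigger:
  - platform: state
    entity_id: sensor.partition_1_status   # example entity id
    to: "exit_delay"                       # formerly "arming"
```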

Thanks for the coffees!!! Much appreciated!

I didn't notice this issue during testing yesterday, but the partition status becomes unavailable for 0.2-0.4 seconds while transitioning between states when arming or disarming. For example, when arming home (not tested with arm away), it goes from "ready" to "exit_delay" immediately, without any unavailability in between. However, from "exit_delay" it becomes unavailable for 0.2 seconds before transitioning to "ready".

Similarly, when disarming, the status becomes unavailable for 0.2 seconds between “armed_home” and “ready.”

This unavailability is causing problems for automation triggers that depend on state changes between “ready” and “armed_home” and vice versa.
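
As a workaround I could probably drop the "from:" constraint so the trigger only matches the target state and the brief blip in between doesn't break it (the entity id below is just an example), but it would be nicer not to have the gap at all:

```yaml
trigger:
  - platform: state
    entity_id: sensor.partition_1_status   # example entity id
    to: "ready"   # no "from:", so the short unavailable blip in between
                  # does not prevent the trigger from matching
```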

Is this something you can reproduce on your side as well?

Yes, I had seen that. It happens because the logic had a dead zone between set statuses when it switched from one status to another. I've made a few changes to remove some obsolete status fields and added logic to prevent the unavailable status from showing unnecessarily. Pushed to "dev". Works OK here.

Many thanks again for the quick turnaround!

However, something went wrong this time. I believe the issue isn’t related to your commit removing the redundant status field, but rather to the other two commits concerning the delayed force refresh and refresh check.

Currently, the status of my virtual zones is incorrect after the initial startup. They don't display their real states until I flip them once or twice (e.g., opening and closing a door whose sensor is configured as a virtual zone) to get the virtual zones populated.

Additionally, the partition status is ‘unavailable’ and remains stuck in that state.

What do you mean by the partition status staying stuck? I don't see that behaviour. As for the refresh, the changes I made should not impact the virtual zones, but I'll check.

Edit: I've reverted the startup refresh change to see if it fixes your issue. I can't see how it would cause the problem you are having, but let's see.

I hope it’s not my clumsiness, but the situation has worsened.

In fact, I was mistaken in saying the partition status was stuck as unavailable before the revert. The partition status sensor became completely unavailable to HA: after the initial startup it showed as unknown, and eventually (I guess after the now-reverted forced refresh) the sensor disappeared.
[screenshot]

Now, after the revert, the partition sensor no longer disappears, but it remains in the “unknown” state:
[screenshot]

I hope this makes some sense to you.

OK, this doesn't make any sense to me. Can you post esphome logs of arming and disarming your system? Just remove any "writing keys" entries from the log. I need to see whether it's the esphome code at fault or HA is having issues. FYI, status "unavailable" means the partition is not ready and you have zones open. "Unknown" means that HA has not received any info from the esphome instance yet.

Edit, just to clarify here: you will see "unknown" until HA receives the current value of the partition status. If it says "unavailable", it means you have zones open (panel not ready). If it's blank, it means that esphome has not got a status from the panel yet. I need to confirm whether your panel is not ready and showing "unavailable" (which is OK), or it is ready but still showing "unavailable" (incorrect).

Edit: The virtual zone issue is unrelated to the changes I made, so I suspect it's most likely something else, but we can deal with this later.

What I tried to show in the first screenshot is that the 'Partition 1 Status' sensor became greyed out, meaning it became completely unavailable to HA. So it is not the state that is 'unavailable'; the sensor itself became unavailable/disappeared from HA after the initial unknown state.

Edit: the virtual zones actually get populated with the correct state about 10-12 minutes after the initial startup. If I flip something during that period, they get populated sooner.

I'll have to check later; I'm on my way out of town and will look at it this evening.

OK, I checked. I think the combination of "unavailable" and greyed out is confusing. That is normal when the sensor is showing "unavailable": it means you have zones open. You can test this by bypassing all open zones by sending cmd "*199#". The partition should then become "ready". I can change it to show "not_ready" instead and see if that helps.
Anyhow, I've pushed an update that re-adds the refresh to get the current status on startup and also changes the status "unavailable" to "not_ready".
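
If you want to send the bypass from HA for the test, something like this should work from Developer Tools; the service and field names below are just my assumption of how your node is set up, so adjust them to whatever services your esphome node actually exposes:

```yaml
# Hypothetical service name; the actual name depends on your node name and
# on the keypress service the component exposes.
service: esphome.dscalarm_alarm_keypress
data:
  keys: "*199#"   # bypass all open zones; the partition should then show "ready"
```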

Thank you very much, Alain!

I have updated to the latest version, and now the states of the virtual zones are immediately available. I noticed that one of my virtual zones was stuck open (even though it was actually closed). I had to manually flip it to get it back in sync. Since then, this version seems to be working correctly. I still need to test arming and disarming, though.

I’m not sure which of the two commits fixed this issue (reverting the force refresh or adding back the refresh at startup in your last commit, or both together), but I’m available to test if you still want that force refresh change back in.

There still seems to be an issue with updating and obtaining the correct state of virtual zones during startup.

When I open a window or door associated with a virtual zone and then reboot the ESP, the correct open state isn’t reflected after startup.

However, if I open or close another door or window connected via a virtual zone, all virtual zone states immediately update to their correct statuses.

Things like that are normal with the DSC. The panel does not immediately send its zone statuses out to the bus; it does this roughly every 5 minutes or so, and as you noticed, forcing any zone change will also get the panel to send a zone update. I will investigate whether this can be polled from the ESP. It's been a while since I looked into this.

FYI, another way to force an update is to send cmd *8<installercode>##

Many thanks! Would it be good practice, then, to create an automation in HA to send this command on each HA startup and also when the ESP module of the DSC keybus interface starts? The latter could probably be triggered by listening for the dscalarm event 'ESP module start'.
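
Something like the sketch below is what I have in mind; the event type and the keypress service name are my guesses, so they would need to be adjusted to whatever the node actually fires and exposes:

```yaml
automation:
  - alias: "Force DSC status refresh on startup"
    trigger:
      - platform: homeassistant
        event: start                          # fires on HA startup
      - platform: event
        event_type: esphome.dscalarm          # hypothetical event type from the node
        event_data:
          type: "ESP module start"
    action:
      - service: esphome.dscalarm_alarm_keypress   # hypothetical keypress service
        data:
          keys: "*8<installercode>##"         # replace <installercode> with your code
```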

Hi Alain,

It looks like sometimes (not in every case, but every second or third time) I still get the "not ready" status between "arm_away" and "ready" for a very short period (0.2-0.4 sec):
[screenshot]

Shall I provide you some more details or is this something you can reproduce on your side?

I will look at it again when I return from my holidays.

I can't duplicate the issue on my systems. Please post some esphome logs showing the problem during the disarm process.