ESP8266 into existing DSC alarm system

This post has been a great resource to me, as I was able to get the WT32-Eth01 board up and running using the PCB layout on Dilbert66’s page. Did anyone happen to create a printed case to house the setup that can be mounted outside the panel? My setup is connected down the line, past a wireless receiver module that was closer to a network drop.

So far everything looks good except I’m stuck on one part and can’t find documentation on it.

My zones 1 through 6 are hardwired, with a wireless motion sensor on zone 7.
The problem starts where I have a wireless fire alarm on zone 8 and a wireless flood sensor on zone 9.
There doesn’t seem to be a device class for “fire” under binary_sensor, and the partition fire alarm section only specifies a partition, not a zone (8), as I have it programmed on the panel.
Lastly, I don’t see an option for a flood sensor.

I get the feeling I just missed something simple.

Any guidance would be greatly appreciated.
Thank You.

Use the “smoke” device class for fire and “moisture” for flood. Those are only for display anyhow, but they’re close enough. The partition fire section is a separate indicator that the panel sends at the partition level; it’s not the same as a zone indicator.
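To illustrate, the zone entries in the ESPHome yaml might end up looking something like the sketch below; the sensor names, ids, and platform are placeholders (use whatever your existing config from Dilbert66’s repo defines), and only the device_class lines matter here:

```yaml
# Illustrative sketch only: zone sensor definitions depend on your existing
# config; the relevant part is device_class, which only affects how HA
# displays the sensor.
binary_sensor:
  - platform: template
    id: zone_8
    name: "Fire Alarm (Zone 8)"
    device_class: smoke     # no "fire" class exists; smoke is the closest fit
  - platform: template
    id: zone_9
    name: "Flood Sensor (Zone 9)"
    device_class: moisture  # used for flood/water-leak sensors
```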

I’ve merged an update from fjpmeijers, who has generously provided a Python script for retrieving the installer code from a DSC panel over MQTT. See the README and associated files in the extras\crack_installer_code folder of the dev branch:

Edit: I’ve also added a couple of additional scripts that can use the ESPHome API or the web_keypad REST API and mg_lib components from my library for unlocking instead of MQTT. See the README for details.

Works!
Thank You.

Another toy for those interested…


I am wondering whether it is expected behavior for the partition status to remain “pending” for approximately the duration of the exit delay when I disarm during that period.

This was a bug in the status handling. The status should now update immediately. I’ve pushed a fix to the “dev” branch.

Thank you for the quick feedback!

I updated the yaml configuration for the external component from main to dev.
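
For reference, switching from main to dev is just a matter of changing the branch ref in the external_components block; the repository path and component name in the sketch below are placeholders rather than my exact config:

```yaml
# Illustrative only: keep the source your config already uses and change
# just the branch ref. Repository path and component name are placeholders.
external_components:
  - source: github://Dilbert66/esphome-components@dev
    components: [dscalarm]
    refresh: 10min
```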

There is a noticeable change in behavior: I now see the arming partition state instead of the pending state I had before. However, when I disarm during the exit delay, the partition state gets stuck in arming. Previously, there was a long timeout for pending, but it eventually reverted to disarmed.

Do I need to change/update something else as well?

No, this was incorrect logic handling again. I’ve rewritten the code to better handle partition status changes. It should be OK now. Updated on “dev”.

You’re amazing, Alain! It works perfectly now!

Thank you for all your hard work on this project and for your continuous, speedy, and highly effective support! (I just bought you some well deserved coffee).

I have one question now that you’ve changed the title of “exit delay status” from “arming” to “exit_delay”: when is “arming” used?

“Arming” will technically no longer be used: when arming, the system goes right into the exit delay status with no in-between statuses sent. I feel showing “exit_delay” is more descriptive, as that’s what’s happening during that time.
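
So anything that used to key off “arming” should watch for “exit_delay” instead. A minimal HA automation along those lines might look like the sketch below; the entity id and notify service are hypothetical and depend on your own setup:

```yaml
# Hypothetical example: entity_id and notify service depend on your setup.
automation:
  - alias: "Announce exit delay"
    trigger:
      - platform: state
        entity_id: sensor.partition_1_status
        to: "exit_delay"
    action:
      - service: notify.mobile_app_phone
        data:
          message: "Alarm is arming: exit delay has started."
```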

Thanks for the coffees!!! Much appreciated!


Yesterday I didn’t notice this issue during testing, but the partition status becomes unavailable for 0.2-0.4 seconds when transitioning between states while arming or disarming. For example, when arming home (not tested with arm away), it goes from “ready” to “exit_delay” immediately, without any unavailability in between. However, from “exit_delay” it becomes unavailable for 0.2 seconds before transitioning to “ready”.

Similarly, when disarming, the status becomes unavailable for 0.2 seconds between “armed_home” and “ready.”

This unavailability is causing problems for automation triggers that depend on state changes between “ready” and “armed_home” and vice versa.
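
For context, these are simple from/to state triggers along the lines of the sketch below (the entity id is hypothetical); with an “unavailable” blip in between, the transition becomes ready → unavailable → armed_home, so the from/to pair never matches and the automation never fires:

```yaml
# Hypothetical trigger: with an intermediate "unavailable" state the
# from/to pair never matches, so the automation never fires.
automation:
  - alias: "On armed home"
    trigger:
      - platform: state
        entity_id: sensor.partition_1_status
        from: "ready"
        to: "armed_home"
    action:
      - service: light.turn_off
        target:
          entity_id: all
```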

Is this something you can reproduce at your side as well?

Yes, I had seen that. It happens because the logic had a dead zone between set statuses when it switched from one status to another. I’ve made a few changes to remove some obsolete status fields and added logic to prevent the unavailable status from showing unnecessarily. Pushed to “dev”. Works OK here.

Many thanks again for the quick turnaround!

However, something went wrong this time. I believe the issue isn’t related to your commit removing the redundant status field, but rather to the other two commits concerning the delayed force refresh and refresh check.

Currently, the status of my virtual zones is incorrect after the initial startup. They don’t display their real states until I flip them once or twice (e.g., opening and closing a door with a door sensor configured as a virtual zone) to get the virtual zones populated.

Additionally, the partition status is ‘unavailable’ and remains stuck in that state.

What do you mean by the partition status staying stuck? I don’t see that behaviour. As for the refresh, the changes I made should not impact the virtual zones, but I’ll check.

Edit: I’ve reverted the refresh-update change on startup to see if it fixes your issue. I can’t see how it would cause the problem you’re having, but let’s see.

I hope it’s not my clumsiness, but the situation has worsened.

In fact, I was mistaken in saying the partition status was stuck as unavailable before the revert. The partition status sensor became completely unavailable to HA; after the initial startup, it showed as unknown, and eventually, I guess after the reverted forced refresh, the sensor disappeared.
[screenshot: the Partition 1 Status sensor greyed out in HA]

Now, after the revert, the partition sensor no longer disappears, but it remains in the “unknown” state:
[screenshot: the Partition 1 Status sensor showing “unknown”]

I hope this makes some sense to you.

Ok, this doesn’t make any sense to me. Can you post ESPHome logs of arming and disarming your system? Just remove any “writing keys” entries from the log. I need to see whether it’s the ESPHome code at fault or HA is having issues. FYI, status “unavailable” means the partition is not ready and you have zones open. “Unknown” means that HA has not received any info from the ESPHome instance yet.

Edit: just to clarify here. You will see “unknown” until HA receives the current value of the partition status. If it says “unavailable”, it means you have zones open (panel not ready). If it’s blank, it means that ESPHome has not yet received a status from the panel. I need to confirm whether your panel is not ready and showing “unavailable” (which is OK), or whether it is ready but still showing “unavailable” (which would be incorrect).

Edit: the virtual zone issue is unrelated to the changes I made, so I suspect it’s most likely something else, but we can deal with this later.

What I tried to show in the first screenshot is that the ‘Partition 1 Status’ sensor became greyed out, meaning it became completely unavailable to HA. So it’s not the state that is ‘unavailable’; the sensor itself became unavailable/disappeared from HA after the initial unknown state.

Edit: the virtual zones actually get populated with the correct state about 10-12 minutes after the initial startup. If I flip something during that period, they get populated quicker.

I’ll have to check later. I’m on my way out of town and will check later this evening.

Ok, I checked. I think the “unavailable” and greyed-out display is confusing. That is normal when the sensor is showing “unavailable”: it means you have zones open. You can test this by bypassing all open zones by sending the command “*199#”. The partition should then become “ready”. I can change it to show “not_ready” instead and see if that helps.
Anyhow, I’ve pushed an update that re-adds the refresh to get the current status on startup and also changes the “unavailable” status to “not_ready”.
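
If your ESPHome config exposes a keypad-press service, the bypass command can be sent from HA with a simple script like the sketch below; the service name and “keys” parameter are assumptions based on common configurations of this component, so check the api services defined in your own yaml first:

```yaml
# Hypothetical service call: the service name and "keys" parameter are
# assumptions; verify against the api services in your own ESPHome yaml.
script:
  bypass_open_zones:
    alias: "Bypass all open zones"
    sequence:
      - service: esphome.dscalarm_alarm_keypress
        data:
          keys: "*199#"
```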