Visonic Powermax and Powermaster Component

Sorry, forgot to answer this. No, they won’t be stuck in a buffer. It’s more likely that the act of putting the panel into downloading mode stopped them.

On the panel side it’s their “official” RS232 interface, connected via an RS232 ↔ Ethernet converter. I did double-check and all timeouts are disabled there.

One thought though, on the TCP side in your component, do you send TCP keep-alives? Maybe it’s the TCP side that’s shutting down?

Yeah it could be, although I think in that case I’d expect a Powerlink error on the panel. But perhaps at that point we haven’t got to the Powerlink state?

Apologies, I probably should have explained the logs a bit better!

So “test.py-restarted.log” was me restarting test.py without touching the panel, that’s the one with the 3 Powerlink keep-alives at the top, it does the Download but then 8 minutes or so later we’ve not got Powerlink mode, only Standard Plus.

“test.py-panel-restarted.log” is the same run of test.py, but I went to the panel and restarted it (enter Installer menu, back out). So it’s a continuation of the above log really and you can see at around the 23 minute mark (line 520 or so) where I restart the panel and as soon as it restarts the Powerlink keep-alive is sent and we get Powerlink mode.

Then “test.py-stopped-again.log” is still the same run and we get to about 50 minutes before it stops.

Yep no problem, will give that a go!

Would they come that quickly though? They all appear to be received simultaneously?

If that’s expected (or at least nothing weird about it) then thinking this through:

  1. test.py starts
  2. it gets Powerlink keep-alives (so we’re “in” Powerlink mode)
  3. we send a STOP (would that stop them?)
  4. we do a Download (we’re “in” Standard Plus mode)
  5. we don’t see any more Powerlink keep-alives, so as it stands we never get to Powerlink mode.

So do we need to do anything after the Download to make them restart?

That’s part of the operation that I’ve asked you to update: change the 3 to a 2 so these are sent more frequently. Other users have had problems with the “I’m Alive” message causing issues when in Powerlink, so when in Powerlink I only send MSG_RESTORE. Changing the 3 to a 2 alters this from a 75-second period to a 50-second period, to see if that helps.
Going back to my original question, I wondered if your PC was going to sleep and shutting off the network, that’s all.

You’re right, they do come quickly and it does confuse me. They may be buffered somewhere, but it may be specific to your setup; I’ve never seen that happen before.

To try and answer your specific items 2 to 5:

  1. It does :slight_smile:
  2. We could hijack the powerlink mode from the previous run, but then we couldn’t do a download. We need the download to get the sensor details and the user pin code. So we’re not really in powerlink mode; the previous run was in powerlink!
  3. It may be. I need to send a stop just in case there is a download in progress; a stop (and exit) is the easiest way to get all panels into a known state when my component starts. I think that:
    • the stop tells the panel to stop the downloading of any ongoing data stream
    • the exit then exits download mode
  4. I assume at the start that we’re in Standard and then attempt to download the EPROM to get to Standard Plus. Most (All?) panels will recognise the download command and do it without a problem. There isn’t much difference between that and powerlink.
  5. To go to powerlink we need to enroll with the panel. Your panel needs a manual enroll process, all later panels have auto enroll. If you remember a few days ago when we allowed auto enroll your panel had a problem.
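As a side note on item 3: the framing of those Stop and Exit commands can be inferred from the raw data in the logs (0x0D start byte, payload, checksum, 0x0A end byte). A minimal sketch, where `build_pdu` is a hypothetical helper and the checksum rule is my inference from the logged bytes:

```python
# Hypothetical helper: frame a Visonic payload the way the logged PDUs look.
# The checksum appears to be 0xFF minus the payload sum modulo 0xFF.
def build_pdu(payload: bytes) -> bytes:
    checksum = 0xFF - (sum(payload) % 0xFF)
    return bytes([0x0D]) + payload + bytes([checksum, 0x0A])

MSG_EXIT = build_pdu(bytes([0x0F]))  # 0d 0f f0 0a, as seen in the logs
MSG_STOP = build_pdu(bytes([0x0B]))  # 0d 0b f4 0a, as seen in the logs
```

The same checksum rule also matches the longer frames in the logs, e.g. the Start Download Mode command ending in 0x35.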

If you have the panel settings correct, I believe that the panel starts sending a request to enroll which my component responds to. I’m not 100% on this last part but that’s what we saw a few days ago.

Change the 3 to a 2 and see where we get to, that should keep the comms going between test.py and the panel.

Yep I’ve done that earlier this evening :slight_smile:

Yeah that’s what I was getting at, but I was talking more about TCP at a low level, when you open the TCP connection itself. This was really just based on past experience with Vera & openLuup, where there were issues with the TCP socket closing seemingly randomly due to inactivity. In Lua we fixed it like this:

sock:settimeout (OPEN_SOCKET_TIMEOUT)
sock:setoption ("tcp-nodelay", true)  -- so that alternate read/write works as expected (no buffering)
sock:setoption ("keepalive", true)    -- keepalive, thanks to @martynwendon for testing this solution

Just thought it was worth mentioning :slight_smile:

The StarTech RS232 ↔ Ethernet converter is supposed to be transparent so I guess it’s possible that if the client side disappears unexpectedly then something gets left in the buffer. Then when the client connects again that gets flushed.

I’ve just seen that test.py has stopped again at about 1hr 50min, so to check the buffer suspicion I just connected to the StarTech with a telnet client and got a bunch of data sent out. Upon disconnecting and then reconnecting again with telnet, there’s nothing there. So it does look like it has some remnants of the previous session if the client goes away.

Yeah I think this is where things are different with the Powermax+, from what I can see once I manually Define Powerlink on the panel itself, and the Powerlink “device” confirms (whether that be a real one or a pretend one), that’s it, it’s in Powerlink mode forever until you delete it on the panel. You don’t need to enroll again.

But if Powerlink stalls (comm error or client disconnect) then the panel shows a warning and stops sending the Powerlink keep-alives. A simple panel reset gets them going again.

In any case, I think we’re getting pretty close to a rough understanding of the Powermax+ behaviour so hopefully not much more to hassle you with!

I put the last 1000 lines or so of the latest log on Dropbox (the file has since been deleted)

OK, that’s useful to know, that would explain a few of the things we’re seeing.

I’ve squashed another bug on the retry and uploaded release 0.3.4.4 to Github, this includes changing the 3 to a 2 so you don’t need to edit any code.
Although the resend bug is hopefully squashed, that doesn’t explain why it needs to resend the last command.
At 1:49:46.015205 we send an “I’m Alive” message to the panel and do not receive anything from the panel, we should receive an acknowledge (as per the previous 20 or so times in that file).
At 1:51:26.264914 my component decides to re-send the message and crashes.

I think I’ve fixed the crash but that doesn’t explain why the panel doesn’t acknowledge the “I’m Alive” message. This time it’s only 25 seconds or so after the last communication so I would expect the TCP connection to be OK but I can’t be certain :frowning: . (from 1:49:21 to 1:49:46).

Regarding these, I can look in to how to do these in python using the asyncio serial library…

OK cool, it’s late here so I’ve updated and will leave this running overnight. Currently the panel still has the Powerlink error (I’ve not restarted it yet) so we’re only running in Standard Plus … hopefully it will still be up in the morning :slight_smile:

Yeah I thought that was odd too, I suppose it’s possible the panel just stops replying as it’s “supposed” to be in Powerlink mode … but it doesn’t explain why the timing seems random when it stops.

I’ve just uploaded release 0.3.4.5 to Github. The change is in the way I connect to an Ethernet gadget; I now use socket options to hopefully keep the TCP connection alive when no data is being sent, like this:

    sock = None
    try:
        log.info("Setting TCP socket Options")
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        sock.settimeout(100000.0)   # lots of seconds
        sock.connect((address, port))

        conn = loop.create_connection(protocol, sock=sock)
        return conn
    except socket.error as err:
        # str(err) here: concatenating the exception object itself would raise a TypeError
        log.info("Setting TCP socket Options Exception " + str(err))
        if sock is not None:
            sock.close()
    return None

Well, it works OK on my panel, at least I don’t see a difference :rofl:
martynwendon, please give it a try…

I’ve just uploaded release 0.3.4.6 to Github. It now also attempts to flush the receive buffer before doing the visonic protocol with the panel.
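For anyone curious, a receive-buffer flush like that could look something like the following. This is a hedged sketch under my own assumptions, not the actual 0.3.4.6 code, and `flush_receive_buffer` is a hypothetical name:

```python
import socket

def flush_receive_buffer(sock: socket.socket) -> bytes:
    """Drain and return any stale bytes already queued on the socket."""
    stale = b""
    sock.setblocking(False)
    try:
        while True:
            chunk = sock.recv(4096)
            if not chunk:        # peer closed the connection
                break
            stale += chunk
    except BlockingIOError:
        pass                     # nothing more queued right now
    finally:
        sock.setblocking(True)
    return stale
```

Called once right after connecting, this would clear out whatever the StarTech left over from the previous session before the protocol starts.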

Nice :slight_smile:

Just a quick update from this morning, test.py stopped after about 40 minutes. No crash this time but seems like the panel stopped replying again.

Final few lines of the log below:

0:37:23.719262 < 1558> DEBUG [pmSendPdu] Sending Command (Ack Long) raw data 0d 02 43 ba 0a waiting for message response
0:37:23.752470 < 1439> DEBUG [Sending ack] PowerlinkMode=False Is PM Ack Reqd=True This is an Ack for message=0XA5
0:37:23.752986 < 2631> DEBUG [handle_msgtypeA5] Parsing A5 packet 09 07 00 00 00 00 00 03 00 00 43
0:37:23.870370 < 1558> DEBUG [pmSendPdu] Sending Command (Ack Long) raw data 0d 02 43 ba 0a waiting for message response
0:37:23.903164 < 1439> DEBUG [Sending ack] PowerlinkMode=False Is PM Ack Reqd=True This is an Ack for message=0XA5
0:37:23.903707 < 2631> DEBUG [handle_msgtypeA5] Parsing A5 packet 09 08 00 00 00 00 00 00 00 00 43
0:37:24.021952 < 1558> DEBUG [pmSendPdu] Sending Command (Ack Long) raw data 0d 02 43 ba 0a waiting for message response
0:37:24.057039 < 1439> DEBUG [Sending ack] PowerlinkMode=False Is PM Ack Reqd=True This is an Ack for message=0XA5
0:37:24.057648 < 2631> DEBUG [handle_msgtypeA5] Parsing A5 packet 09 09 00 00 00 00 00 00 00 00 43
0:37:24.173036 < 1558> DEBUG [pmSendPdu] Sending Command (Ack Long) raw data 0d 02 43 ba 0a waiting for message response
0:37:48.875489 < 1622> DEBUG [pmSendPdu] Resetting expected response counter, it got to 25 Response list before 0 after 1
0:37:48.877897 < 1558> DEBUG [pmSendPdu] Sending Command (I’m Alive Message To Panel) raw data 0d ab 03 00 00 00 00 00 00 00 00 00 43 0e 0a waiting for message response [‘0X2’]
0:39:29.104921 < 1605> INFO [SendCommand] Re-Sending last message I’m Alive Message To Panel
0:39:29.106239 < 1558> DEBUG [pmSendPdu] Sending Command (I’m Alive Message To Panel) raw data 0d ab 03 00 00 00 00 00 00 00 00 00 43 0e 0a waiting for message response [‘0X2’]
0:39:29.111575 < 1066> ERROR ERROR Connection Lost : disconnected due to exception [Errno 104] Connection reset by peer
0:39:34.117447 < 1075> ERROR No Exception handler to call, terminating Component…

I’ve updated to 0.3.4.6 and have set that running - again, I’ve not touched the panel yet to clear the Powerlink error, as I think it’s good to keep going like this to see how far we get?

Buffer flush seems to work well, following last nights stop, starting fresh test.py and we can see some junk data from the previous run:

root@ahs7 /usr/src/visonic/custom_components/visonic # python3 test.py -address 192.168.1.105 -port 1001
0:00:00.001225 < 3443> INFO Setting key OverrideCode to value -1
0:00:00.001903 < 3443> INFO Setting key PluginDebug to value True
0:00:00.002148 < 3443> INFO Setting key ForceStandard to value False
DEBUG:asyncio:Using selector: EpollSelector
0:00:00.006776 < 3467> INFO Setting TCP socket Options
0:00:00.020315 < 3478> INFO Buffer Flushed and Received some data!
0:00:00.021086 < 931> INFO Initialising Protocol - ************************************
0:00:00.021450 < 932> INFO Initialising Protocol - ************ TEST MODE *************
0:00:00.021696 < 933> INFO Initialising Protocol - ************************************
0:00:00.022771 < 1048> INFO [Connection] Connected to local Protocol handler and Transport Layer
0:00:00.023117 < 1635> DEBUG [ClearList] Setting queue empty
0:00:00.023593 < 1275> INFO [sendInitCommand] ************************************* Not sending an INIT Command ************************************
0:00:00.023902 < 1649> INFO [Start_Download] Starting download mode
0:00:00.025143 < 1623> DEBUG [pmSendPdu] Resetting expected response counter, it got to 0 Response list before 0 after 1
0:00:00.026121 < 1299> DEBUG [data receiver] Ignoring garbage data: 00 00 00 00 43 15 0a 0d a5 00 02 00 00 00 00 00 00 00 00 43 15 0a 0d a5 00 02 00 00 00 00 00 00 00 00 43 15 0a 0d a5 00 02 00 00 00 00 00 00 00 00 43 15 0a 0d a5 00 02 00 00 00 00 00 00 00 00 43 15 0a
0:00:00.026801 < 1559> DEBUG [pmSendPdu] Sending Command (Exit) raw data 0d 0f f0 0a waiting for message response [‘0X3C’]
0:00:00.027112 < 1561> DEBUG [pmSendPdu] Command has a wait time after transmission 1.5
0:00:01.529578 < 1559> DEBUG [pmSendPdu] Sending Command (Stop) raw data 0d 0b f4 0a waiting for message response [‘0X3C’]
0:00:01.530012 < 1561> DEBUG [pmSendPdu] Command has a wait time after transmission 1.5
0:00:03.032339 < 1553> DEBUG [pmSendPdu] Setting Download Mode to true
0:00:03.032822 < 1559> DEBUG [pmSendPdu] Sending Command (Start Download Mode) raw data 0d 24 00 00 56 50 00 00 00 00 00 00 35 0a waiting for message response [‘0X3C’]

Just another update!

After starting 0.3.4.6 this morning it lasted just 13 minutes :frowning:

No crash, but seems the panel stopped replying or the connection closed again:

0:11:05.179327 < 1559> DEBUG [pmSendPdu] Sending Command (Ack Long) raw data 0d 02 43 ba 0a waiting for message response
0:11:05.241137 < 1440> DEBUG [Sending ack] PowerlinkMode=False Is PM Ack Reqd=True This is an Ack for message=0XA5
0:11:05.241796 < 2632> DEBUG [handle_msgtypeA5] Parsing A5 packet 09 07 00 00 00 00 00 03 00 00 43
0:11:05.331141 < 1559> DEBUG [pmSendPdu] Sending Command (Ack Long) raw data 0d 02 43 ba 0a waiting for message response
0:11:05.373453 < 1440> DEBUG [Sending ack] PowerlinkMode=False Is PM Ack Reqd=True This is an Ack for message=0XA5
0:11:05.374215 < 2632> DEBUG [handle_msgtypeA5] Parsing A5 packet 09 08 00 00 00 00 00 00 00 00 43
0:11:05.482478 < 1559> DEBUG [pmSendPdu] Sending Command (Ack Long) raw data 0d 02 43 ba 0a waiting for message response
0:11:05.523694 < 1440> DEBUG [Sending ack] PowerlinkMode=False Is PM Ack Reqd=True This is an Ack for message=0XA5
0:11:05.524363 < 2632> DEBUG [handle_msgtypeA5] Parsing A5 packet 09 09 00 00 00 00 00 00 00 00 43
0:11:05.633834 < 1559> DEBUG [pmSendPdu] Sending Command (Ack Long) raw data 0d 02 43 ba 0a waiting for message response
0:11:30.570791 < 1623> DEBUG [pmSendPdu] Resetting expected response counter, it got to 25 Response list before 0 after 1
0:11:30.571778 < 1559> DEBUG [pmSendPdu] Sending Command (I’m Alive Message To Panel) raw data 0d ab 03 00 00 00 00 00 00 00 00 00 43 0e 0a waiting for message response [‘0X2’]
0:13:10.807984 < 1606> INFO [SendCommand] Re-Sending last message I’m Alive Message To Panel
0:13:10.808738 < 1559> DEBUG [pmSendPdu] Sending Command (I’m Alive Message To Panel) raw data 0d ab 03 00 00 00 00 00 00 00 00 00 43 0e 0a waiting for message response [‘0X2’]
0:13:10.813813 < 1067> ERROR ERROR Connection Lost : disconnected due to exception [Errno 104] Connection reset by peer
0:13:15.820055 < 1076> ERROR No Exception handler to call, terminating Component…

So, as an experiment, I started test.py again and in a separate SSH session did a constant PING against the StarTech.

So far, we’re now at 5hrs and 47 minutes and counting … I’ve now stopped the constant PING, so will see what happens next.

If it drops again then I think that’s a reasonable indication that something is definitely timing out at the TCP level.

It’s possible that this might just be something specific to my setup (although doesn’t explain why it works with the openLuup / Vera plugin).

I was looking a little more at the Python socket library and think that setting the following may be useful to tweak the TCP keepalive (else it defaults to the OS defaults). If it stops again I will add these to the code and retest:

    # might be Linux-specific options!
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)    # 60 seconds before the first keepalive probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)   # 60 seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 10)     # after 10 missed probes, it's dead
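Wrapped up as a helper with hasattr() guards (since the TCP_KEEP* constants are Linux-specific and absent on some platforms), it might look like this; `tune_keepalive` is a hypothetical name:

```python
import socket

def tune_keepalive(sock, idle=30, interval=30, probes=1):
    """Enable TCP keepalive and tighten its timers where the platform allows."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # the TCP_KEEP* constants only exist on some platforms (notably Linux)
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
```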

It certainly looks that way. I looked at the vera luup code and I’m not sure how the connection gets set up so I can’t check.

I did see these when I was looking at how to use the keep alive in python.
When you run a ping command on Linux it runs every second. Did you run it like this, or did you change the interval? Perhaps set the 60 values to 30 or less to be sure, as 60 seconds may be too long. Perhaps also set TCP_KEEPCNT to 1 so that if a probe fails the connection only gets one chance and is dropped; otherwise it could be failing occasionally without telling us. Does this make sense?

Yeah just a normal 1 second ping … not really sure what to make of it now though as it’s still going @ 8hrs 44mins even though I stopped the ping command after I posted before!

Will leave it as-is for now and see how long it lasts, then make the changes to the keep alives (will use 30 seconds / 1 try as you mentioned).

Something else I was looking at is how the openLuup / Vera plugin works with the MSG_RESTORE, which seems the opposite way around. Over there it checks to see when the last Powerlink keep-alive was received and if there’s not been one for 60 seconds then it sends a MSG_RESTORE:

local delta = now - (pmLastKeepAlive or 0)
debug("Checking last alive message (delta = " .. delta .. ")")
if (delta > 60) then
    -- Let Powermax know we are alive (and reset Powerlink communication error)
    debug("Clear Powerlink communication error")
    pmSendMessage("MSG_RESTORE")
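Translated roughly into Python (where `last_keepalive` and `send_message` are placeholder names, not the component’s real API), that check becomes:

```python
import time

KEEPALIVE_TIMEOUT = 60  # seconds, matching the Lua check

def check_keepalive(last_keepalive, send_message, now=None):
    """Send MSG_RESTORE if no Powerlink keep-alive was seen within the timeout."""
    now = time.time() if now is None else now
    delta = now - (last_keepalive or 0)
    if delta > KEEPALIVE_TIMEOUT:
        # let the Powermax know we are alive (and clear any Powerlink comm error)
        send_message("MSG_RESTORE")
        return True
    return False
```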

This seems to imply that the MSG_RESTORE is a way to clear any Powerlink errors, i.e. restart the Powerlink keep-alives?

But I think that you mentioned that you don’t send a MSG_RESTORE until you see a Powerlink keep-alive.

It may explain why I hardly ever saw any issues with Powerlink mode there, perhaps because if ever the keep-alives stopped, the plugin was self-repairing as such - I think it tries a handful of times, before generating an error and stopping the plugin. I probably only ever saw that a few times a year.

So if / when test.py stops again I’d like to try sending a MSG_RESTORE after we’ve done a Download and got to Standard Plus, to see if that restarts the Powerlink keep-alives by itself. Is the end of the gotoStandardMode(self) function an appropriate place to add self.SendCommand("MSG_RESTORE")?

In the surrounding Lua code that you mention, it needs to be in the Powerlink state in order to do that. I have a similar function: when “triggerRestoreStatus” is called, if in Powerlink then MSG_RESTORE is sent; if not, then MSG_STATUS is sent to the panel. The triggerRestoreStatus function is called when needed, including when no communication has taken place for a defined period of time.

I’ve also changed the logic from the Lua and included what I call “Standard Plus”, although it is more like Powerlink than Standard. Should I call it Powerlink Minus? :rofl: :rofl: I download the EPROM first, whereas the Vera Lua code attempts to enroll and attain Powerlink first. I realised that the Visonic programmer Windows application uploads and downloads the EPROM and this was 99.9% reliable; getting into Powerlink isn’t.

I used to have a Vera and it wasn’t that reliable for me; it would lose its Powerlink connection once or twice per week and go to Standard, and that was nothing to do with the hardware (exactly the same as I have now, for lots of years). With this component, once it is in Standard Plus I have the user pin codes, the details about the sensors etc, so to me Powerlink is a nice-to-have but not essential; once I get to Standard Plus I’m happy. I also don’t have the problems I used to have with the Vera. I find that once it gets to Powerlink it’s much more solid; I can leave it alone for weeks and it stays in Powerlink. Not that it’s been left that long lately :rofl: . Anyway, I digress a little.
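The triggerRestoreStatus behaviour described there could be sketched roughly like this (the names mirror the post, not the actual pyvisonic.py code):

```python
def trigger_restore_status(powerlink_mode: bool, send_command) -> None:
    """In Powerlink send MSG_RESTORE; otherwise fall back to MSG_STATUS."""
    if powerlink_mode:
        send_command("MSG_RESTORE")
    else:
        send_command("MSG_STATUS")
```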

Almost Certainly :smile:
I’m not sure about it restarting the keep alive messages from the panel, I find sometimes yes and sometimes no, see below. It could do that all the time for a Powermax+ though.

From what I understand from reading the various forums, panel enrollment is what is supposed to start the panel sending Powerlink Keep-Alive messages. For newer panels than yours I auto-enroll, the panel starts sending Keep-Alive messages and then I respond back with MSG_RESTORE instead of MSG_STATUS. I know this works and works well so I’m not going to alter it. In your case for a Powermax+ I have assumed that you would manually enroll on the panel to start the Keep-Alive messages. See below…

Also, as a side note, if the user sets ForceStandard in the config file I need to actually prevent it going in to powerlink or downloading the EPROM. The user has explicitly said, do not do this (usually due to security reasons, their HA host may not be secure enough, it’s not for me to say or judge). So I cannot randomly send MSG_RESTORE hoping that we get to powerlink.

I have uploaded release 0.3.4.7 that should send a MSG_RESTORE at the correct point in the sequence, see lines 1205 to 1213 in pyvisonic.py. It will do this at 180 second intervals. I might change this later but lets see what happens with this change :smile:

What I’ve found is that as Visonic has developed their software over the years they have tweaked it, and together with the different speed CPUs in the different panels (I assume) we don’t get consistent results in the various experiments several users have done with me over the months and years. We just have to try to make the Component generic enough to work well enough with as many panel variants as we can.
I hope these ramblings make sense, it’s getting late here :slight_smile:
D

I realised that I had not made a way out for it to end so I’ve altered it and uploaded 0.3.4.8, the lines you’re looking for now are 1191 to 1198 :slight_smile:

EDIT:
Note that I’ve made the POWERLINK_TIMEOUT value stupidly long for now just to experiment.
Also, for info “gotoStandardMode” is giving up on getting to powerlink and settling for one of the 2 standard modes.

So Dave, this is off topic but how can we send you a small Christmas gift/donation? You gave my Visonic alarm new life and for that I want to say thank you.

So just as an update, the test.py run from my previous post lasted until about 10hr and 30min before stopping. So last night I made the changes to the keep-alive TCP timings and restarted … that lasted for about 5hr 40min.

As it looks like this is something specific to my setup (since nobody else has mentioned similar), I think at this stage I’ll just play around with the timings a little and see what happens, rather than hassling you anymore with it!

Thanks for the changes and pointers on that so far though, I’ll report back if / when I get a breakthrough. For now I’ve set the timeouts to 15 seconds and restarted.

Oh yes for sure, I wouldn’t ask or expect you to change anything that would impact yourself or others! I’m quite happy to fiddle around with the code locally, I mentioned originally that I was actually OK with Standard mode anyway, I was just trying to understand the different behaviour compared to the openLuup / Vera plugin.

Yeah and then you factor in the different connection methods, host environments, etc and it makes things even more complicated!

Cool, I will see how the current run fares with the TCP timeouts set at 15 seconds, then update to the 0.3.4.8 version and see what happens. Quite interested to see if the MSG_RESTORE clears the panel error and restores the Powerlink keep-alives, or whether I still have to restart the panel.

I want to do that too - and I have not even gotten around to connecting my HA with my Visonic panel yet :slight_smile:

Oh no, please keep me updated as to how you get on. I looked up

ERROR Connection Lost : disconnected due to exception [Errno 104] Connection reset by peer

This essentially means that the other end of the TCP connection closed the communication down. I know you’ll probably have done this already, but have you checked the settings of the device? Also, is your network getting flooded with traffic, i.e. are you streaming video like Amazon Prime or Netflix at the time when it fails? Your router may give the video stream higher priority and drop packets. Could you use Wireshark to try and see what’s going on? I realise it’s difficult as it takes hours for it to stop working. Just some thoughts :slight_smile:

Best of luck :smile:

Hi folks, it’s really kind of you to offer but I don’t do it for the money, in fact quite the opposite. I do it because I have an interest in this kind of thing and this particular component is something that I thought I could help the HA community with. I’m also a big supporter of free software in general and sharing code without royalties or payment, one of the many reasons I moved from Vera to HA (there were other reasons that we won’t go in to :slight_smile: ).

D


Yeah the device itself is pretty simple and there’s not that many settings on it. Thinking about this a little more, I wonder if it’s something that we’ve changed that’s the cause of these disconnects. I know that nobody else has seen this, but:

  1. I didn’t have any disconnects like this under openLuup / Vera plugin
  2. If you recall, I set up the HA Docker instance running the component (version 0.3.3.10) about 10 days ago and it was running fine for several days (no disconnects). My initial message wasn’t about any particular issues, just the different behaviour with regard to Powerlink
  3. When I started running test.py for testing, from a different linux machine, was when we started to see the random disconnects
  4. Today, putting the disconnects aside for now, I wanted to run the new code to test the MSG_RESTORE feature, which works (will comment on that a bit further down), but a few hours in running test.py and it disconnected again
  5. Thinking perhaps it’s something on this linux machine, or something while running under test.py, I decided to take the plunge and update my Docker HA instance to the latest HA version and load the latest 0.3.4.8 code there - after all on that, the original component (0.3.3.10) seemed to be running fine
  6. The HA Docker instance now also disconnects randomly :frowning:

Not quite sure what to make of that now, but will give it some further thought! I think perhaps I might pull down 0.3.3.10 on my test linux box and run test.py from there - I know it won’t have all the latest goodies in, but if it runs forever without stopping then that might be an indication that we’ve impacted something somewhere … again I know it only seems to affect me at the moment!

So, with the MSG_RESTORE thing - you’ll recall that my panel has had the error message on it since some time last week … we got it enrolled I think on Friday but one of the test.py crashes stopped the Powerlink keep-alives, which I know I can restart by going in and out of the Installer menu, but I’ve held off doing that. 0.3.4.8 recovers the panel every time … that is, every time test.py stopped today (either intentionally or not), upon restarting test.py it’s cleared the error on the panel and the Powerlink keep-alives have resumed. Great stuff!

I put the log of the first run of test.py this morning on Dropbox (the file has since been deleted)

Back over on the Docker HA instance (which as mentioned above is now updated) when the component stops, I do see a “… attempting reconnection” in the log … but nothing further? Should it automatically try and reconnect? If I manually call the service “visonic.alarm_panel_reconnect” then it does successfully reconnect - which is great!

I’ve put the log from the Docker HA instance at Dropbox - File Deleted - Simplify your life - this contains two stops of the component, with a manual reconnect by calling the service after the first stop.