Local Deployment for SureFlap / SurePetCare Connect using only local MQTT Broker

So … big issues here! :rage:
I needed to add a new flap, so I connected back to the cloud, configured everything, switched back to PHL (which I had updated to the latest version in the meantime) and ran a new ‘pethublocal setup’ to get the new config.

Everything ran fine for around 1.5 hours when suddenly, out of nowhere, the hub changed to flashing red. Periodically it turns to 2-3 green flashes and then back to red. During this, I see it requesting the certificate in the server log, which is sent back to it. And so on.

Any ideas? Looks like I need to switch back to the cloud tomorrow because of this :pleading_face:

It’s possible that you got the new firmware that prevents this from working. If you look upthread, @peterl posted the trick for resetting back to the old firmware. IIRC, hit the reset button and plug it back in, or something like that.

Nah, I still have 2.43 and I even reflashed the hub to be sure. With different firmware it would not have reconnected when I just reverted to the old snapshot without touching the hub.

I made three changes:

  • My extension to get the MAC of the updating device and the timestamp into the pets
  • Update from 2.0.0 to 2.0.1
  • Added the second flap

I’ll try to revert the first two one by one to see if I can find out what’s causing this strange state. Unfortunately Peter seems to be busy; he’s the only one who might have an idea of how this strange state can happen … or at least what debug info or tests are needed.

I’m tracking this in a Github issue: https://github.com/PetHubLocal/pethublocal/issues/11

It’s one of the hub’s own outgoing MQTT status messages, echoed back by Mosquitto (normal MQTT behavior, as the hub is subscribed to the topic), that killed the hub.

As the message is not acknowledged by the hub (because the hub crashes on it) and the messages are QoS 1, it stays queued in Mosquitto. Once the hub reconnects, Mosquitto immediately publishes the unacknowledged message to the hub again (as it’s the same client), which immediately crashes and restarts the hub once more. And so on, until you stop Mosquitto, delete the persistence file (mosquitto.db) and restart Mosquitto.
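
A less drastic alternative to deleting the whole persistence file, in case it helps someone: in MQTT, connecting once with the affected client ID and the clean-session flag set makes the broker discard that client’s stored session, including queued QoS 1 messages. The sketch below assumes a plain local listener on port 1883 and uses a made-up client ID; take the real one from mosquitto.log.

# Sketch: drop one client's queued "message of death" without deleting
# mosquitto.db. A connect with clean_session=True makes the broker discard
# the stored session, including unacknowledged QoS 1 messages (paho-mqtt 1.x API).
import paho.mqtt.client as mqtt

HUB_CLIENT_ID = "H010-0123456"  # hypothetical - use the ID from mosquitto.log

client = mqtt.Client(client_id=HUB_CLIENT_ID, clean_session=True)
client.connect("localhost", 1883)  # assumes a plain local listener
client.loop(timeout=2.0)           # let the CONNECT/CONNACK exchange finish
client.disconnect()                # the old session and its queue are gone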

I’ve got the message’s raw data via MQTT Explorer and, according to Peter’s register table, it’s nonsense (5 registers written starting at 71). That’s two thirds of the RSSI value and three quarters of the “last heard” field … it does not make sense.
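
If someone wants to capture the same raw data without MQTT Explorer, a minimal subscriber sketch is below. It assumes an unauthenticated local listener on port 1883 and the v2/production/… topic layout the hub publishes to:

# Sketch: dump raw hub messages for inspection (MQTT Explorer alternative).
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Topic plus raw payload; the register writes show up in the payload.
    print(f"{msg.topic}: {msg.payload!r}")

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)  # assumed broker host/port
client.subscribe("v2/production/#", qos=1)
client.loop_forever()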

When this “message of death” is gone from the queue, the hub immediately reconnects (until it happens again).

I have no clue why my hub did these stupid things after I added the second flap. The only thing that puzzled me is that the cloud import of this new flap had no “Curfew” attributes in the cloud start file. I added them manually to get past the PHL error this caused.

I don’t know if the hub itself stores some “non-volatile” configuration data. I’ve reconnected the hub to the cloud, added and removed a curfew for the new flap, let it settle for 30 mins and did a new setup from the cloud. This time the curfew attributes were in the cloud file. Let’s see if there is a difference now, or if there still are “MQTT messages of death”.

@flyize As you have a setup similar to the one I’m planning, with two flaps, check my pull request for PHL. I’ve added the update timestamp and update MAC to the pet sensor, so you’ll be able to determine the exact position via HA automations, based on which flap (MAC) reported the location when the time attribute changes. Works pretty well :slight_smile:
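
To illustrate the idea, a tiny sketch of the mapping logic such an automation would implement; the MACs and location names are hypothetical placeholders:

# Sketch: when the pet sensor's timestamp attribute changes, the update MAC
# identifies the flap that reported it, and therefore the pet's position.
FLAP_LOCATIONS = {
    "0000111122223333": "house flap",  # hypothetical MAC
    "4444555566667777": "gate flap",   # hypothetical MAC
}

def locate_pet(update_mac: str) -> str:
    """Map the reporting flap's MAC to a human-readable location."""
    return FLAP_LOCATIONS.get(update_mac, "unknown flap")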

Let’s see what @peterl says. Never did Python before :stuck_out_tongue_winking_eye:

Hi @jacotec awesome work. Having a look through it, I am not sure you want to grab the Mac_Address like that, as it won’t work for the Cat Flap.
Using the last value of the MQTT topic is far better, as that is the device that created the message:

DeviceMac = str(mqtt_topic_split[-1])

Or something similar to that would work.
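
For context, a minimal sketch of how that could look inside a paho-mqtt-style message callback; the handler is illustrative, not PHL’s actual code, and assumes the v2/production/<hub-serial>/messages/<device-mac> topic layout:

# Sketch: the last topic level is the device that published the message,
# e.g. "v2/production/H010-0123456/messages/0000111122223333".
def on_message(client, userdata, msg):
    mqtt_topic_split = msg.topic.split("/")
    DeviceMac = str(mqtt_topic_split[-1])  # works for flaps and the Cat Flap alike
    print(DeviceMac)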

It works for the flap (as I only have two flaps :wink: ) … but yours is much more elegant (and safer). I’ll test it right now (as mentioned, I’ve never done Python before and did not know about this function).

@peterl Great suggestion, thanks! It works like a charm and I’ve updated my PR to reflect your proposal.

I just realized that I could have my automations send an MQTT message to change the state of the pet, right? That means I can actually start using this!

Which is good because it seems my hub is having connection issues again…
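
On sending a state change from an automation, a publish along these lines should do it, though the topic and payload below are purely hypothetical placeholders; the real ones depend on what PHL actually subscribes to (check with mosquitto_sub first):

# Sketch: publish a pet state change from an automation or script.
import paho.mqtt.publish as publish

publish.single(
    topic="pethublocal/pet/felix/state",  # hypothetical topic
    payload="inside",                     # hypothetical payload
    hostname="localhost",                 # your broker
    port=1883,
)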

So I’m getting red flashing ears with the HA Addon.

I’ve confirmed:

  • Hub connects if I remove DNS entries
  • With DNS entries added, I can browse the pethublocal page
  • Hub shows online on pethublocal page
  • JSON seems to indicate that I have 2.43
  • No MQTT entries in HA

I also tried a manual install. It works, but there doesn’t seem to be any obvious way to relay the messages to my HA Mosquitto install. And obviously, I’d prefer to just keep it all in HA if possible.

Suggestions?

If the hub has seen the genuine surehub server, the very first thing it will do is update to the latest firmware.
Re-instate the DNS poisoning and then restart the hub with the button on the bottom held down. This will force it to ‘update’ from pethublocal.

It’s connected right now to the manual install, so it’s not the latest firmware.

Hmm not sure then.
This may sound daft, but if you disconnect the power to the hub, does the pethublocal page update to show offline? Just in case you’re getting a false positive if you see what I mean.

I’m pretty positive it’s connected. For several hours I had it pointed at the HA addon using DNS. I unplugged it, changed DNS to point to the manual install, powered it back up and ran through the manual install. Within about 5 minutes, the flashing red ears that I’d been seeing all day turned off.

Oh! I forgot that initially I saw it as offline. Then it was online and the ears turned off. Yeah, it’s 100% connected successfully to the manual install.

So you know, when using the Addon you need to point the DNS to the HA host running Supervisor / HAOS and not another host.
This might be an oversight on my part with the AddOn: the HTTPS endpoint on 443 (used to download the credentials) and the MQTT broker on 8883 need to be on the same host. You also need to reconfigure the main HAOS MQTT Broker Addon to listen on another port, i.e. 8882, as per the documentation, since it normally listens on 8883 but doesn’t work with the MTLS requests from the hub. I assume you are using the built-in HAOS MQTT Broker: the PHL addon deploys its own Mosquitto broker with the correct MTLS configuration and relays all requests to the main HAOS MQTT Broker addon.
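
A quick sanity check that the poisoned hostname really resolves to one host serving both endpoints; just a sketch, run from a machine that uses the same DNS as the hub:

# Sketch: verify the DNS override serves both endpoints the hub needs -
# 443 for the HTTPS credentials download and 8883 for the MTLS MQTT broker.
import socket

HOST = "hub.api.surehub.io"  # the hostname the hub resolves via your DNS override

print("resolves to:", socket.gethostbyname(HOST))
for port in (443, 8883):
    try:
        with socket.create_connection((HOST, port), timeout=3):
            print(f"port {port}: open")
    except OSError as err:
        print(f"port {port}: FAILED ({err})")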

Merged your PR @jacotec as it looks good (yay another PR from someone else :slight_smile: )

Yes, I have DNS pointing at the HA server running the addon. I’ve confirmed that 443 works using the CURL command you posted upthread. And I’ve also changed the HA Mosquitto port to 8882.

Actually, I must have gotten confused in my testing. With the HA Addon enabled, the hub shows OFFLINE at http://hub.api.surehub.io/index.html

While I don’t exactly know how to use Wireshark, I’m wondering if I’m seeing the same thing posted here

Not necessarily. When I changed to PHL weeks ago, my hub was still at 2.43. After adding a second flap last week, with the hub connected to the cloud for 30 mins, I’m still at 2.43.

Looks pretty much like this for me.

But if your manual install runs fine, what’s wrong with running it like that? It gives you more freedom to make changes as well. I’m running it in my NodeRed VM, so I have no need to run it in HA itself.

Project “Gate Flap” completed :blush:



