A way to loop through entities at reboot and publish a topic?

Hi.
I have many mqtt sensors/switches/binary_sensors that come from a Zipato Z-Wave controller (which publishes every single attribute separately instead of a single device with attributes…), so a lot of entities.
To get the current value without waiting for a topic published by the controller, I built a simple automation that listens for a specific topic (which the controller publishes in response to a request topic).

  • the controller publishes on zipato/zipabox/attributes/UUID/value (the state_topic of my HA entities)
  • if I publish an empty payload on zipato/zipabox/request/attributes/UUID/getValue…
  • …the controller replies on zipato/zipabox/attributes/UUID/currentValue
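
A single manual request would look something like this (a sketch of the service call; the UUID is a placeholder borrowed from later in this thread):

service: mqtt.publish
data:
  topic: zipato/zipabox/request/attributes/59e5d4bb-7f4e-4c10-b403-10dc1780b8f5/getValue
  payload: ""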

Obviously, after a reboot all the current values are lost (I have no control over the way the Zipato controller publishes the messages, so I can't set a retain flag) and I have to find a way to recover the actual values.
That part is already handled by the automation described above.
My problem now is: how do I call mqtt.publish on the topic zipato/zipabox/request/attribute/UUID/getValue for each entity without writing a script filled with manually entered UUIDs?
I have it in my mind, but as a newbie in Home Assistant and Jinja I'm unable to translate it into code.
Something like the following pseudo-code, to be launched by an automation after a reboot…
for each itm in entities
    mqtt.publish('zipato/zipabox/request/attribute/' + itm.state_topic.split('/')[3] + '/getValue', empty_payload)
end for
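
Something like this sketch is what I'm after (assuming a Home Assistant version whose scripts support repeat/for_each; as far as I know templates can't read an MQTT entity's state_topic, so the UUIDs would still need to be listed once, but a single loop replaces dozens of near-identical service calls — the UUIDs below are placeholders):

script:
  zipato_request_values:
    sequence:
      - repeat:
          for_each:
            # placeholder UUIDs; the real list would be entered once
            - "59e5d4bb-7f4e-4c10-b403-10dc1780b8f5"
            - "114ca865-6e88-43b6-a594-7f1d53574904"
          sequence:
            - service: mqtt.publish
              data:
                topic: "zipato/zipabox/request/attribute/{{ repeat.item }}/getValue"
                payload: ""
            - delay:
                milliseconds: 250  # small pause so the Zipabox is not flooded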

Do I have to do this with AppDaemon? Python scripts? Or is there a way to build this natively in HA?
@francisp already pointed me in the right direction for the automation… Can anyone else show me the correct way to follow? :pray:

@ExTrEmE
I have the same problem. As mentioned in the other post, I guess you didn't find a solution?
Would it not be much easier to solve the issue in a completely different way: have a middleman inject the retain flag into the MQTT messages which Zipato sends, before you read them in your integrations?

- alias: 'MQTT_Inject_Retain_Flag'
  trigger:
    platform: mqtt
    topic: zipato/zipabox/attributes/59e5d4bb-7f4e-4c10-b403-10dc1780b8f5/value
  action:
    service: mqtt.publish
    data_template:
      topic: zipato4HomeAssistant/zipabox/attributes/59e5d4bb-7f4e-4c10-b403-10dc1780b8f5/value
      payload: "{{ value }}"
      qos: 2
      retain: true

And your integrations would then listen on the topic ‘zipato4HomeAssistant’ instead of ‘zipato’.
To avoid creating this middleman for every UUID, we would need some logic that takes the topic on which Zipato publishes as a variable, adjusts the variable (to make it zipato4HomeAssistant), and then uses that variable as the topic to publish to.
If Home Assistant itself can't do that, I have the impression Node-RED can (see here and here).

Perhaps something like this:

- alias: 'Convert Zipato'
  trigger:
    platform: mqtt
    topic: zipato/zipabox/attributes/+/value
  action:
    service: mqtt.publish
    data:
      topic: "z2ha/{{ trigger.topic.split('/')[3] }}/value"
      payload: "{{ value }}" 
      qos: 2
      retain: true

All existing zipato entities will need to have their configuration modified to use the new topic structure. For example:

z2ha/59e5d4bb-7f4e-4c10-b403-10dc1780b8f5/value

Obviously, this technique may not be feasible if all entities are created by MQTT Discovery which sets the entity’s topics.

@123 thank you very much.
With a few tweaks to the payload, I got your code to work:

- alias: 'Add retain flag to Zipato MQTT messages'
  trigger:
    platform: mqtt
    topic: /Zipabox/attributes/+/value
  action:
    service: mqtt.publish
    data:
      topic: "/z2ha/{{ trigger.topic.split('/')[3] }}/value"
      payload: "{{ trigger.payload }}" 
      qos: 2
      retain: true

However, before pointing my integrations to this new topic, I monitored the messages on my desktop (MQTT Explorer) to check whether it works reliably.
Unfortunately it sometimes skips messages, and in a seemingly random way!
It is monitoring everything under the topic /Zipabox/attributes/, and at the moment of writing there are 244 messages in 42 topics.
However, under the destination topic /z2ha/ I only have 191 messages in 40 topics.

To give an example the message {"value":"Cloudy","timestamp":"2020-11-16T19:57:59Z"} in the topic /Zipabox/attributes/114ca865-6e88-43b6-a594-7f1d53574904/value was not forwarded.

And this message {"value":0.0,"timestamp":"2020-11-16T20:05:24Z"} in another topic was not forwarded, while in exactly the same topic this message was forwarded: {"value":0.0,"timestamp":"2020-11-16T20:06:36Z"}

That doesn't make any sense, does it? Obviously I can't use it if it randomly decides to skip messages. What could be the reason?

Probably because only “191 messages in 40 topics” have been published since you enabled the automation.

Restart the MQTT Broker. That will erase all topics that do not have retained messages. Check periodically and see if the count of messages and topics in /Zipabox/attributes/ increases to match the ones in /z2ha.

BTW, it’s not the most efficient practice to start an MQTT topic with a forward-slash. It means the topic’s root has no name. In other words, it needlessly uses an additional level of hierarchy.

From here:

Best practices

Never use a leading forward slash

A leading forward slash is permitted in MQTT. For example, /myhome/groundfloor/livingroom . However, the leading forward slash introduces an unnecessary topic level with a zero character at the front. The zero does not provide any benefit and often leads to confusion.

@123 thanks for the tip about the leading forward slash; I will see whether I can change that on my Zipabox, which publishes the original messages (limited options, as I can't even set the retain flag).

I should have specified when describing the problem that I did reboot, or at least made a fresh connection to the server.
The situation varies. Sometimes it almost immediately goes drastically out of sync; sometimes it stays in sync for many minutes.
Below I compared the test topic (z2hasdfghjkl) with the original topic (Zipabox/attributes) and compared the number of messages over a period of half an hour. For the first few minutes it did not miss any messages; only after about 5 minutes did it skip 11 messages in a fraction of a second.
Then it only missed one message on rare occasions, until about 20 minutes later, when it suddenly skipped 12 messages.


Mind you, this is a very good result; on other occasions I counted already 100 missed messages in 20 minutes or so. I get the impression it misses more messages when a lot of them come in within a short time.

Are there any warning messages in the log related to the automation?

The automation’s mode is single. This is the default setting for all automations and scripts. It means that if the automation is triggered and is busy executing its action and another trigger occurs, it ignores the second trigger (because it’s still busy finishing the action).

If there are multiple, nearly simultaneous MQTT Triggers, then mode: single will process the first trigger and ignore all others until it has finished the action. Maybe this is the reason for the ‘missed messages’.

I suggest you try changing mode to either parallel or queued (then execute Reload Automations). In parallel mode, instead of ignoring the next trigger while it’s still busy with the first one, a completely separate, parallel instance of the action is created to handle the second trigger. In queued, the second trigger is not ignored but made to wait until the automation finishes the action.
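
For reference, the earlier automation with those keys added might look like this (a sketch; the max line is optional, and the payload template uses the corrected trigger.payload):

- alias: 'Convert Zipato'
  mode: parallel  # or: queued
  max: 50         # optional; raises the cap on simultaneous runs from the default 10
  trigger:
    platform: mqtt
    topic: zipato/zipabox/attributes/+/value
  action:
    service: mqtt.publish
    data:
      topic: "z2ha/{{ trigger.topic.split('/')[3] }}/value"
      payload: "{{ trigger.payload }}"
      qos: 2
      retain: true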

@123
You made me think about this error which I keep getting (and haven’t been able to get rid of):

Login attempt or request with invalid authentication from 192.168.0.164 (192.168.0.164) (okhttp/3.7.0-SNAPSHOT)

But that is the IP of my zipabox (which sends the MQTT messages of all my z-wave devices and thus needs to connect to the MQTT server add-on of Home Assistant) so that cannot be the reason. Much obliged if you know how I could fix that error.

But then I found some other logs where I found the warnings you are referring to.
I added mode: parallel and the result is much better, but still not perfect, and I still got some warnings.
Increasing the maximum number of parallel executions from the default 10 to 50 by adding max: 50 fixed the last warnings, and my latest test shows 17000 MQTT messages without any missed messages :smile:

Unfortunately I discovered that my Zipabox sometimes only sends through 30% of the MQTT messages. This is solved by rebooting the Zipabox, so I might have to add a solution to reboot the Zipabox every night.

I also discovered that, obviously, the automation does not run while I'm rebooting Home Assistant. During that period none of the MQTT messages are forwarded to the new topic, so those will never be read by my MQTT integrations. Not ideal, but not the end of the world either, I guess.

I discovered that the battery levels of my smoke sensors are published under a different kind of topic, so I made the retain-flag automation more generic. Here is the new code:

# This automation reads all Zipabox MQTT messages and forwards them to another
# topic (used by the HA integrations) with the retain flag added. This way,
# when HA reboots, it remembers the state of my devices.
- alias: 'Add retain flag to Zipato MQTT messages'
  trigger:
    platform: mqtt
    topic: /Zipabox/+/+/+
  action:
    service: mqtt.publish
    data:
      topic: "/z2ha/{{ trigger.topic.split('/')[2] }}/{{ trigger.topic.split('/')[3] }}/{{ trigger.topic.split('/')[4] }}"
      payload: "{{ trigger.payload }}"
      qos: 2
      retain: true
  mode: parallel  # to be able to process multiple payloads simultaneously
  max: 50         # the default of 10 parallel executions still resulted in skipped messages

Thanks again for the excellent advice you've given, @123; it has really helped me solve the problems and put me on the right track to getting to know Home Assistant better.

PS: I checked on the leading ‘/’ and my Zipabox does not allow me to get rid of it, so I'm stuck with this way of working.

You’re welcome but let’s not overlook the fact that Zipato should provide the ability to publish retained messages. I recommend you contact them, explain the situation, and ask that they add the feature.

@123 Using Zipato's Zipabox has been a fun introduction to the world of domotics. However, it is crystal clear to all users that they overpromise, underdeliver, and have a strange way of treating their clients.
They consider that people who bought their Zipabox but don't buy any of their additional features are not their customers.

They have now migrated to a new platform, and older devices like my Zipabox (which went off sale about 3 years ago) do not support it. People who decide to buy a new device and migrate to the new platform have to write all their automations from scratch.

I decided that rather than buy their new solution and write everything from scratch, I'll install Home Assistant on my Synology NAS and start learning how to use that. I have much higher faith in the future of Home Assistant than in Zipato.

The MQTT integration is a safe way for me to keep the Zipabox in control in the short run and migrate my automations one by one to Home Assistant.
Once that is done and the only functionality remaining on the Zipabox is the actual connection to the Z-Wave devices, I need to buy a Z-Wave stick and figure out how to migrate my devices properly to Home Assistant. My understanding is that I need to redo the integrations (but I assume it should be easier), and I hope I can then give them the same names so that my automations etc. keep working.


@123 today I faced an issue where the automation you helped me create (to forward incoming MQTT messages to a new topic with an injected retain flag) didn't work properly.
It still forwarded some messages, but not all. I tested with some lights and it consistently refused to forward the incoming messages. It did see them: the logbook showed the automation being triggered by incoming messages on the lights' topics.
A reboot of Home Assistant solved the problem.
Does that make any sense to you? I'd like to avoid this in the future; it took me hours to understand that a rule I was designing in Node-RED was not working because the automation had malfunctioned.

Based on the performance you have described over several posts, I think we have to conclude that this technique is just a giant kludge and cannot be relied upon to mitigate the Zipabox’s lack of support for retained messages.

You said this was simply a transitional solution because you are moving away from the Zipabox. Now you have an incentive to discard it even faster.

Get the Z-Wave stick and migrate your Z-Wave devices to it (the sooner the better). Good luck!

As I wrote in another topic, I worked around the loop: I simply made a script with 80 entries, so 80 service calls to mqtt.publish.
All done manually (well, semi-manually, using Excel and Notepad++…).
Obviously any additional entries need a manual addition… but I do not plan to add other devices to the damn Zipabox.

Oh, and be careful with the retain flag; it can probably cause more damage than benefit.

I believe what you are alluding to is publishing a retained message to a command topic. That can cause undesirable behavior when the device restarts, re-subscribes to its command topic, and receives a retained command payload (re-executing the previously received command).

Publishing a retained message to a state topic is beneficial and rarely causes so-called “damage”.

@123 what I'm alluding to is something like:

  • the Z-Wave controller publishes the state to the broker
  • HA republishes it as retained (retained by the broker!)
  • HA loses its connection (power, network, etc.)
  • a device changes its state (for example a window or a door) and the Z-Wave controller publishes it without the retain flag
  • HA reconnects and the broker sends the retained state, which does not match the real state
  • anything can happen… alarms that go off, rules that fire without a real reason, and so on.
    Rare, OK. But it can happen… and it will probably happen at exactly the wrong time :slight_smile:

Rare indeed.

Replay your scenario without retained messages. When HA reconnects, it gets the status of… nothing. It won't have the correct status of anything until something gets published.

Sure, without retain there's no issue in my scenario (but also no state)… and in the case of sensors that update only when changes occur, that's not so good.
My way (calling getValue after boot for each entity) does its job and updates all the values without issues :slight_smile:

What are the sensor states during the short period between startup and when your automation executes to get their states?

All unavailable.
In less than 30 seconds I recover all the values (I placed a 250 ms delay after each call to stay safe and not “overload” the Zipabox).

Precisely. Guaranteed to happen on every startup (versus the tiny window of opportunity for the situation you postulated above).

Even Home Assistant's ability to store/restore an entity's last known state can't remedy this situation; on startup the entities' states are unavailable. One consequence is that automations with State Triggers need to be defined explicitly enough to avoid triggering when an entity's state changes from unavailable to a known state.
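
For example, giving a State Trigger an explicit from filters out that transition (a minimal sketch; the entity name is hypothetical):

- alias: 'Window opened'
  trigger:
    platform: state
    entity_id: binary_sensor.window  # hypothetical entity
    from: 'off'  # with an explicit 'from', unavailable → on will not trigger
    to: 'on'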

As I see it, the combination of the two techniques (retained messages and requesting all entity values) would cover each other's deficiencies: all entities instantly acquire states on startup (due to the retained messages), and requesting their states shortly after startup ensures the states are truly up to date.
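
The second half could be as simple as an automation that fires on startup and calls the request script (a sketch; script.zipato_request_values is an assumed name for a script that loops over the UUIDs, like the one discussed earlier in this thread):

- alias: 'Refresh Zipato values after startup'
  trigger:
    platform: homeassistant
    event: start
  action:
    service: script.zipato_request_values  # assumed name of the request-loop script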

Or replace the Zipabox with something better because everything we’ve discussed is a giant bandaid to compensate for its inability to publish retained messages natively.
