Check State of multiple entity_ids using function node to send multiple messages

I’m trying to make NR log the changes it makes to entities in their corresponding HA logbooks. In Check State I’m using {{ payload }} to pass the entity in dynamically from the function node. If I only trigger one entity in the service call, it works. If I call multiple entities in one service call, I want to get the state of each entity separately, so I take each entity_id and send it as a separate message. I expect to get the state of each, but instead I get nothing.

The function node logs them individually, and I can see that Check State is being triggered since its timestamp and state get updated, but it doesn’t return any output even though it’s the same logic as for a single entity, just in a loop. Why am I not getting any output? Is there a better way of doing this?

I’ve attached my flow below.


[{"id":"50815637d537c3c4","type":"api-call-service","z":"b1520e6b7f9039b4","name":"Turn Light Off","version":5,"debugenabled":false,"domain":"light","service":"turn_off","areaId":[],"deviceId":[],"entityId":["light.lightstrip"],"data":"","dataType":"json","mergeContext":"","mustacheAltTags":false,"outputProperties":[{"property":"data_log","propertyType":"msg","value":"","valueType":"config"},{"property":"data_sent","propertyType":"msg","value":"","valueType":"data"}],"queue":"none","x":240,"y":1160,"wires":[["4953abfe19873e35"]]},{"id":"880b5b2b1158bfd2","type":"api-current-state","z":"b1520e6b7f9039b4","name":"Check State","version":3,"outputs":1,"halt_if":"","halt_if_type":"str","halt_if_compare":"is","entity_id":"{{ payload }}","state_type":"str","blockInputOverrides":false,"outputProperties":[{"property":"payload","propertyType":"msg","value":"","valueType":"entityState"},{"property":"data","propertyType":"msg","value":"","valueType":"entity"}],"for":"0","forType":"num","forUnits":"minutes","override_topic":false,"state_location":"payload","override_payload":"msg","entity_location":"data","override_data":"msg","x":550,"y":1160,"wires":[["998c70643fb5eae8"]]},{"id":"88c93c38ea67813c","type":"inject","z":"b1520e6b7f9039b4","name":"","props":[{"p":"payload"},{"p":"topic","vt":"str"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"light.lamp","payloadType":"str","x":400,"y":1100,"wires":[["880b5b2b1158bfd2"]]},{"id":"4953abfe19873e35","type":"function","z":"b1520e6b7f9039b4","name":"function 2","func":"var entities =;\nif (Array.isArray(entities)) {\n entities.forEach(e => {\n msg.payload = e;\n node.warn(msg.payload);\n node.send(msg);\n })\n node.done();\n} else {\n msg.payload = entities;\n node.send(msg);\n}\n\nreturn msg = null;","outputs":1,"noerr":0,"initialize":"","finalize":"","libs":[],"x":400,"y":1160,"wires":[["880b5b2b1158bfd2"]]},{"id":"998c70643fb5eae8","type":"debug","z":"b1520e6b7f9039b4","name":"Full Msg","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":700,"y":1160,"wires":[]},{"id":"6d460b01ae9c0f39","type":"inject","z":"b1520e6b7f9039b4","name":"","props":[],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","x":125,"y":1160,"wires":[["50815637d537c3c4"]],"l":false}]
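For what it’s worth, the export above has lost the right-hand side of the `var entities =` line, and the loop re-sends the same `msg` object for every entity. A common cause of “no output” with this pattern is exactly that: each `node.send(msg)` passes the same object reference, so mutating `msg.payload` between sends can clobber messages already in flight. Below is a sketch of a fan-out that sends a fresh copy per entity. The `fanOut` wrapper and the stubbed `node` object are only there so the sketch runs standalone; in a real function node you’d use the body directly, and the `msg.data_sent.service_data.entity_id` path mentioned in the comment is an assumption based on the call-service node’s output properties, so confirm it with a debug node.

```javascript
// Stub Node-RED's `node` object so this sketch runs standalone; in an
// actual function node, delete the stubs and use the built-ins.
const sent = [];
const node = { send: (m) => sent.push(m), done: () => {} };

// Body of the function node (sketch).
function fanOut(msg) {
    // In the real flow this might be msg.data_sent.service_data.entity_id;
    // check a debug node for where the entity list actually lives.
    const entities = msg.payload;
    const list = Array.isArray(entities) ? entities : [entities];
    for (const e of list) {
        // Send a shallow copy per entity. Re-sending the same msg object
        // while mutating msg.payload can leave every downstream message
        // pointing at the last value.
        node.send({ ...msg, payload: e });
    }
    node.done();
    return null; // everything was already sent above
}

fanOut({ payload: ["light.lamp", "light.lightstrip"] });
```

Inside Node-RED you could also use `RED.util.cloneMessage(msg)` for a deep copy, which is safer when `msg` carries nested objects.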

I’m not sure what’s wrong with your function but I can suggest a much easier way. See here:

If you follow the approach Kermit laid out, then every time Node RED is the source of a state change, the state change event will have context.user_id set to NR’s user ID.

So instead of putting all that after every call-service node, you can just have one Events: all node that listens for the state_changed event. For every state_changed event, check msg.context.user_id:

  • If it’s your NR user, log it
  • If not, ignore it
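That check could live in a small function node right after the Events: all node. A sketch, assuming the user ID shows up at msg.context.user_id (the exact path can differ between versions of the websocket nodes, so confirm with a debug node first; the GUID below is a placeholder):

```javascript
// Placeholder: replace with the GUID you see in a debug node for a
// state change that Node-RED caused.
const NR_USER_ID = "0123456789abcdef0123456789abcdef";

// Returns true when the state_changed event was triggered by Node-RED.
function isFromNodeRed(msg) {
    // Fall back to the nested event context in case your node version
    // puts it there instead of at the top level.
    const ctx = msg.context ?? msg.payload?.event?.context ?? {};
    return ctx.user_id === NR_USER_ID;
}

// In the function node you would then do something like:
//   return isFromNodeRed(msg) ? msg : null;  // pass through only NR's changes
```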

I’m running Supervised HA in a VM, so my NodeRED is set up as an add-on. I’m a bit lost on connecting NR to HA using the ID. Could you provide more detailed instructions please? I’m assuming the new “NodeRED” user needs the ability to log in so I can go into the settings and generate the access token from there? How do I make NR use that user then?

What I’m really after is to have my log entries show a custom message, and that it was made by NodeRED instead of Supervisor.

Go to the menu in the top right of NR and click on “configuration nodes”. Double click on Home Assistant. Uncheck “I use the Home Assistant add-on” and paste in the access token you created. Don’t change anything else, just save and deploy.

context.user_id won’t actually be the username you picked; it’ll be a GUID. So make Node RED cause one state change and look at it with a debug node. Copy that user ID and use it in the logic I laid out above.
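For the custom-message part, one option is to follow that filter with a call-service node that writes to the logbook via HA’s logbook.log service (name, message, and entity_id are fields the logbook integration accepts). Here’s a sketch of a function node shaping that call; the stubbed incoming `msg`, the entity in msg.topic, and the message text are just examples, and how the call-service node consumes these values (msg.payload vs. its own config fields) depends on your node version:

```javascript
// Stand-in for the incoming message; in a flow this arrives on the wire.
let msg = { topic: "light.lightstrip", payload: "off" };

// Build the service-call details for a downstream call-service node
// (treat this as a shape sketch, not the exact node contract).
msg.payload = {
    domain: "logbook",
    service: "log",
    data: {
        name: "Node-RED",
        message: `set ${msg.topic} to ${msg.payload}`, // custom log text
        entity_id: msg.topic,
    },
};
```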

When I update it with just a token I get an error saying:

Invalid format of Base URL: http://supervisor/core

So I changed the URL to http://hassio:8123 and http://<ip_address>:8123, and also tried without the port, but it will not connect.

So when I do http://supervisor/core I get the same message you do. Which seems like a bug tbh as that’s a valid URL and would work here.

I don’t know why the other two don’t work; they seem like they should. For reference, I haven’t actually tried changing the config node like this myself — I linked @Kermit’s post above. He made the node-red-contrib-home-assistant-websocket package with all the Home Assistant nodes. Perhaps there’s a bug?

http://supervisor/core is the URL for the proxy that sits in front of HA, so it won’t work until you supply it with a valid supervisor token.

Yes, you want to use something like those. The URL should be whatever you use to connect to HA when you’re on your local network.

Yeah, good thing I wrote down the token that was there before. I noticed that after I made the changes, the new token I put in stayed in the field even when I toggled the Add-on option. All my nodes were then sitting at “Connecting”. I re-enabled the Add-on option, but they were still stuck on connecting. I restarted NR and HA — still the same. Then I noticed the token was the new one I generated (which is longer than the one it had before). After putting the old one back and setting the URL back to supervisor/core, all the nodes were sitting at “running” rather than the state they’d had. After making a change that triggered a state update, their statuses came back, though.