Node-RED keeps crashing at Home Assistant restart

Hi guys,

For a few days now, my Node-RED has been having problems. Whenever I restart Home Assistant, it fails to start Node-RED for some reason. In the Node-RED log I see this:

14 Aug 10:15:05 - [red] Uncaught Exception:
14 Aug 10:15:05 - [error] UnhandledPromiseRejection: This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). The promise rejected with the reason "#<Object>".
[10:15:06] WARNING: Node-RED crashed, halting add-on
[10:15:06] INFO: Node-RED stoped, restarting...
s6-rc: info: service legacy-services: stopping
[10:15:06] INFO: Node-RED stoped, restarting...
s6-svwait: fatal: supervisor died
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped

Right before this there are also some entity API errors like the one below, but I don’t know whether these cause the crash:

14 Aug 10:15:05 - [error] [ha-entity:*entity-name*] Entity API error. 

After this, I can just start the Node-RED add-on manually and it runs fine until I restart Home Assistant again.
Does anyone have an idea whether a faulty node could cause this, and how I could find that node?

I deleted all my flows and I’m still getting this error:

14 Aug 15:12:21 - [red] Uncaught Exception:
14 Aug 15:12:21 - [error] UnhandledPromiseRejection: This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). The promise rejected with the reason "3".
[15:12:24] WARNING: Node-RED crashed, halting add-on
[15:12:24] INFO: Node-RED stoped, restarting...
s6-rc: info: service legacy-services: stopping
[15:12:25] INFO: Node-RED stoped, restarting...
s6-svwait: fatal: supervisor died
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
[15:12:25] INFO: nginx stoped, restarting...
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped

Remove the add-on, rename the node-red folder to node-red.old, reboot the host OS, then reinstall.


Have you recently updated Node-RED? I updated my container to the new v3.x a few days ago and this broke my Node-RED. I have since rolled back to my previous working 2.x build, as I did not have any success finding the problem.

Deleting and reinstalling unfortunately didn’t help. I still have the same problem.

Yes, Node-RED was recently updated, but I have automatic updates enabled. I am now running 13.3.1. How do I roll back to a previous version?

I am not certain of your setup. I am running Node-RED in a Docker container with Compose. I just pointed to the last container build that I still had on my system from before the update.
I am running 2.2.2 in Docker. It looks like the latest is 3.0.2. The version 3 branch was released on July 14th, according to this blog.

Same problem here. Seems Node-Red crashes every day around the same time as of 3 or 4 days ago.

Yeah, I can’t seem to keep it running either.

I’m seeing the same issue too. I think it started after the 2022.8 supervisor release for me. The add-on watchdog also doesn’t appear to be working since it’s not starting it back up after the crash. Looking through the changes on GitHub it appears there were a few changes made to how the watchdog works. No idea if it’s related or not.

I think it is somehow related to the Node-RED upgrade from v2 to v3. I suspect something in existing flows is causing an issue, possibly the node-red-contrib-home-assistant-websocket node or another. I may try disabling my nodes, then re-enabling them, and see if my container will start.
I have reverted back to v2.2.3 since v3 will not start.

After reading these posts I made it my mission to get this working on the latest build, and I am happy to report some success. Here is what I did.

  1. With v2.2.3 running, I accessed the Settings menu and noticed that some of my palettes were requesting updates.
  2. I selected the option to update each of the palettes to the requested versions.
  3. After upgrading the three (3) palettes, I restarted my Node-RED container.
  4. I confirmed that these updates did not break my v2.2.3 Node-RED.
  5. I updated my Docker stack to pull the latest Node-RED.
  6. This time the container started successfully, and I am back up and running with the latest Node-RED. :+1:

Unfortunately, it did not help me:

22 Aug 00:48:34 - [info] Node-RED version: v3.0.2
22 Aug 00:48:34 - [info] Node.js version: v16.16.0
22 Aug 00:48:34 - [info] Linux 5.15.55 x64 LE

Node-RED keeps crashing, and I don’t know why. The Home Assistant OS shell is so limited; I don’t know how to get to Docker like on a normal system. Node-RED version 2 was like a tank: no problems.

22 Aug 00:55:39 - [info] [server:Home Assistant] Connecting to http://supervisor/core
22 Aug 00:55:39 - [info] [server:Home Assistant] Connecting to http://supervisor/core
22 Aug 00:55:39 - [info] [server:Home Assistant] Connected to http://supervisor/core
22 Aug 00:55:39 - [info] [server:Home Assistant] Connected to http://supervisor/core
22 Aug 05:33:36 - [info] [cast-to-client:Głośnik Googla] volume changed to 50
22 Aug 05:33:37 - [red] Uncaught Exception:
22 Aug 05:33:37 - [error] TypeError: node.error is not a function
    at errorHandler (/opt/node_modules/node-red-contrib-cast/cast-to-client.js:52:10)
    at getSpeechUrl (/opt/node_modules/node-red-contrib-cast/cast-to-client.js:192:9)
    at Timeout._onTimeout (/opt/node_modules/node-red-contrib-cast/cast-to-client.js:836:33)
    at listOnTimeout (node:internal/timers:561:11)
    at processTimers (node:internal/timers:502:7)
[05:33:38] WARNING: Node-RED crashed, halting add-on
[05:33:38] INFO: Node-RED stoped, restarting...
s6-rc: info: service legacy-services: stopping
[05:33:38] INFO: Node-RED stoped, restarting...
s6-svwait: fatal: supervisor died
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
[05:33:38] INFO: nginx stoped, restarting...
s6-rc: info: service s6rc-oneshot-runner successfully stopped

The problem is still present for me. The same error pops up and Node-RED crashes. I don’t think it can be fixed.
The crash doesn’t happen very often: maybe three times a week at the moment, and every time I restart Home Assistant. I can easily restart Node-RED and it then runs fine for a long time. Deploying doesn’t cause any problems either.
Does anyone know if it might be possible to make an automation that checks whether Node-RED is running and, if it is not, starts it? I suppose the watchdog is meant to do this, but that doesn’t seem to be working.


Yes, the watchdog should be doing the job of restarting Node-RED, but it doesn’t on mine either. After seeing a similar issue with my Node-RED, I thought about ways to detect and restart it.

First thought was a simple bash script to check the addon info output:

root@ha01-pi:~# ha addon info a0d7b954_nodered | grep state:
state: started
root@ha01-pi:~#

Then I thought about it a bit more (what if HA is intentionally down, etc.), so I created a sensor in my configuration.yaml that uses the Supervisor add-on API endpoints (Endpoints | Home Assistant Developer Docs).

This sensor checks every 60 seconds; tailor the interval to your needs (e.g. a 5-minute check).

sensor:
  - platform: rest
    name: nodered_stats_api
    scan_interval: 60
    resource: http://supervisor/addons/a0d7b954_nodered/stats
    headers: 
      Authorization: !secret supervisor_bearer_key
      Content-Type: application/json
    value_template: "{{ value_json.result }}"
    json_attributes:
      - message
      - data

Add the supervisor_bearer_key to your secrets.yaml from the environment variable SUPERVISOR_TOKEN, as seen inside your homeassistant container ( docker exec -ti homeassistant printenv SUPERVISOR_TOKEN ). For some unknown reason I couldn’t get it to work using !env_var SUPERVISOR_TOKEN, which is why I ended up copy-pasting the token as text into the secrets:

# get SUPERVISOR_TOKEN var from the homeassistant container...
supervisor_bearer_key: Bearer <Put your token here>

A quick configuration check in Developer Tools and then a restart, and hey presto: I have a sensor for Node-RED stats.

I tested stopping the Node-RED container, and the state goes to ‘error’ with a message attribute displayed.

After that I created an automation, which checks that the sensor has moved away from state ‘ok’ for 2 minutes (again, tailor it to your needs, e.g. down for 10 minutes?). It calls a service to send a notification to my Telegram channel, then calls a service to restart the Node-RED add-on.

alias: Node Red Check Stats Sensor
description: ""
trigger:
  - platform: state
    entity_id:
      - sensor.nodered_stats_api
    from: ok
    for:
      hours: 0
      minutes: 2
      seconds: 0
condition: []
action:
  - service: notify.telegram_notify
    data:
      title: Node Red Issue
      message: Node Red container is stopped, the automation is restarting it.
  - service: hassio.addon_start
    data:
      addon: a0d7b954_nodered
mode: single

For me it does the job.

All in all a nice idea, but it doesn’t solve the underlying problem of why Node-RED stops.

I solved it by removing “hue magic” from Node-RED; it was what was causing the error.

I tried to do the same, because the problem seems to come from HueMagic.
I’m only having trouble with this configuration node. As soon as I double-click on it, my entire Node-RED crashes.

29 Aug 17:28:05 - [info] [hue-bridge:BRIDGE_NAME] Initializing the bridge (HUE_IPV4)…
29 Aug 17:28:08 - [info] [hue-bridge:BRIDGE_NAME] Error: getaddrinfo ENOTFOUND hue_ipv4
29 Aug 17:28:38 - [info] [hue-bridge:BRIDGE_NAME] Initializing the bridge (HUE_IPV4)…
29 Aug 17:28:42 - [info] [hue-bridge:BRIDGE_NAME] Error: getaddrinfo ENOTFOUND hue_ipv4
29 Aug 17:29:12 - [info] [hue-bridge:BRIDGE_NAME] Initializing the bridge (HUE_IPV4)…
29 Aug 17:29:15 - [red] Uncaught Exception:
29 Aug 17:29:15 - [error] TypeError: Converting circular structure to JSON
    --> starting at object with constructor 'Object'
    |     property 'httpsAgent' -> object with constructor 'Agent'
    |     property 'sockets' -> object with constructor 'Object'
    |     ...
    |     property 'errored' -> object with constructor 'Object'
    --- property 'config' closes the circle
    at JSON.stringify (<anonymous>)
    at stringify (/opt/node_modules/express/lib/response.js:1150:12)
    at ServerResponse.json (/opt/node_modules/express/lib/response.js:271:14)
    at ServerResponse.send (/opt/node_modules/express/lib/response.js:162:21)
    at /config/node-red/node_modules/node-red-contrib-huemagic/huemagic/hue-bridge-config.js:730:9
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
2022/08/29 17:29:16 [error] 386#386: *12 upstream prematurely closed connection while reading response header from upstream, client: xxx.xx.xx.x, server: a0d7b954-nodered, request: "GET /hue/name?ip=HUE_IPV4 HTTP/1.1", upstream: "http://xxx.x.x.x:46836/hue/name?ip=HUE_IPV4", host: "xxxxxx.duckdns.org:8123", referrer: "https://xxxxxx.duckdns.org:8123/api/hassio_ingress/kyu4fi_8h88uJwiieGHkuxRBmyEJV5WfucrHvJSyAeU/"
[17:29:16] WARNING: Node-RED crashed, halting add-on
[17:29:16] INFO: Node-RED stoped, restarting...
s6-rc: info: service legacy-services: stopping
[17:29:16] INFO: Node-RED stoped, restarting...
s6-svwait: fatal: supervisor died

Maybe this has something to do with the bigger problem, but I don’t know. I just select the node and press delete to remove it, without having to open the node configuration.
Anyhow, I successfully removed the HueMagic palette and will update this post if the problem stops happening.

Yeah, there definitely is an issue with the update and the companion. I too am running it in Docker, and if I restart Home Assistant I get a crash. The timing combination of restarting both the Home Assistant and Node-RED Docker containers yields success or failure. In my Node-RED Docker logs I have:

[error] UnhandledPromiseRejection: This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). The promise rejected with the reason "#<Object>".

I’m going to roll back until a fix can be implemented.

The problem has not occurred any more since removing HueMagic.