Website down or up

I can’t find a component that shows the status of a website.

If there isn’t one already, I really need a way to show whether a website is up and running or down, so I can react to it with some automation.

A binary ping sensor is the easiest way:
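For example (untested; the host is just a placeholder for your own address):

binary_sensor:
  - platform: ping
    host: 192.168.0.1
    name: Website host reachable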

It’s also possible with a command line sensor, by making a script to check the website.

This won’t do…

This is for my router… it runs, but the built-in web service to manage the router dies.

So when the web service dies I would like to be able to restart the router, but it still answers pings the whole time.

With the command line sensor it’s possible to run any .sh script.

For the router, maybe you can connect over SSH and get a response telling you whether the web service is working?

You could probably use the scrape sensor.

I tried to get SSH access from my ISP, but that was a no-go.

In my opinion a good choice would be to use a script that returns the response code of an HTTP request to the router. If the web service crashes as you say, it will either not respond at all, or respond with something that isn’t 200. Have a look at Python’s requests module to build a script you can use to check the status.
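For example, a minimal sketch of such a script (untested; the router address and the plain 200 check are assumptions you would adapt):

import requests

ROUTER_URL = "http://192.168.0.1"  # placeholder, use your router's address

try:
    status = requests.get(ROUTER_URL, timeout=5).status_code
except requests.exceptions.RequestException:
    status = None  # no response at all: connection refused, timeout, ...

# "on" when the web service answers with HTTP 200, "off" otherwise
print("on" if status == 200 else "off")

Called from a command line binary sensor, the printed on/off then maps onto payload_on/payload_off.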

You could potentially use a command line sensor or binary sensor. I have not tested the below at all; I just wrote it as a starting point for you.

Edit: I don’t know whether curl is an accepted command inside hass.io or not; I have never tested it myself.

binary_sensor:
  - platform: command_line
    command: curl -i http://192.168.0.1
    # One possible (untested) template: take the second field of curl's status line,
    # e.g. "HTTP/1.1 200 OK" -> "200". You are looking for an HTTP 200 response or an HTTP auth required code.
    value_template: "{{ value.split()[1] }}"
    payload_on: 200

It’s not working for me. I am able to show the binary sensor as “connected”, but I just tested by turning one of the services off and it does not show as “disconnected” even after many minutes… any ideas on how to fix this?

I couldn’t figure out how to use the value_template, so I found a webpage that gave me a command to reduce the output to just the HTTP code. Using a terminal on OSX, I issued the curl lines to figure out which HTTP code each service returned when it was live. Sometimes “-I” worked and other times “-i” worked for me.

I’ve used the code below:

binary_sensor:
  - platform: command_line
    command: curl -I http://192.168.86.61:9091/gui/ 2>/dev/null | head -n 1 | cut -d$' ' -f2
    name: 'Service 1'
    device_class: connectivity
    payload_on: 400
  - platform: command_line
    command: curl -I http://192.168.86.60:8989/ 2>/dev/null | head -n 1 | cut -d$' ' -f2
    name: 'Service 2'
    device_class: connectivity
    payload_on: 200
  - platform: command_line
    command: curl -I http://192.168.86.61:8096/ 2>/dev/null | head -n 1 | cut -d$' ' -f2
    name: 'Service 3'
    device_class: connectivity
    payload_on: 302
  - platform: command_line
    command: curl -i http://192.168.86.61:81 2>/dev/null | head -n 1 | cut -d$' ' -f2
    name: 'Service 4'
    device_class: connectivity
    payload_on: 302

Have you seen this?

I have to use this on my internal network… and I don’t think Uptime can work on my network.

Very interesting… My only initial concern is that you’d be exposing the addresses and ports of all your services (hopefully not the logins and passwords) to a 3rd party. Not a bad idea, but having a Raspberry Pi do this for me means I don’t give this information to someone else to monetise.

I’m late to the party. If someone still needs this, I used the following code:

binary_sensor:
  - platform: command_line
    command: curl -I http://192.168.192.205:8989/ > /dev/null 2>&1 && echo on || echo off
    name: 'Sonarr'
    device_class: connectivity
    payload_on: "on"
    payload_off: "off"

Will try this one… thanks

FYI, I wrote a small integration a while ago to make this kind of thing a bit easier to set up (imho) for my own internally hosted websites. See this topic for more info: Custom Component: Websitechecker

I’m using this custom component currently:

Awesome. Thank you, Michel.

I’m interested in this too. I use a monitoring service for work and it uses a scraping approach, to account for cases where the website doesn’t ‘go down’ with a 5xx code but you get an HTML error page instead (in my work this is often a Sucuri or Cloudflare page). The sites still respond to pings etc. Will Michel’s component or the healthchecks.io component account for these?

My integration just looks at HTTP response codes. A response code < 500 is OK; or, the other way around, a failed HTTP request or a response code >= 500 is a problem. I updated the readme to clarify that.

I would expect that Cloudflare or other services would still return/forward the proper response codes, just with a fancy page instead of a boring error (error pages are also just normal pages, just with “error” content and a specific response code). It sounds like a really bad idea to convert error responses to “valid” responses. But I don’t have experience with Cloudflare or similar services, so I might be wrong.

If you need to look at the actual content of a page, the earlier-mentioned Scrape integration seems like a possible candidate (judging from the docs; I did not use it myself).
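A minimal sketch based on the docs (untested; the resource URL, the select value, and the name are placeholders):

sensor:
  - platform: scrape
    resource: http://192.168.0.1
    select: "title"
    name: Router page title

An automation could then react when the sensor’s state no longer matches the content you expect.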