Hassio supervisor: Cannot connect to host 172.30.32.1:8123 ssl:None [Connection refused]

Hello everyone,

Yesterday I restarted my hassio instance and couldn’t get it back up. Looking into the logs, I found the errors described in the topic title.

I’ve tried reinstalling hassio altogether (disabling the services, completely removing the Docker files, and reinstalling using Frenck’s script). A fresh instance of hassio does load and I can configure it.
When I restore my snapshot it stops working again. I’ve tried ‘wipe and restore’ as well as ‘restore selected’.
When I use ‘restore selected’ everything works fine and no errors show up in ‘check config’, but when I restart Home Assistant it breaks again.
Home Assistant itself does seem to run (Samba is accessible).

Hassio is running the latest version inside Docker on Ubuntu on a NUC.
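
For anyone wanting to check the same thing, here is a quick sanity check on the host (just a sketch, assuming a standard supervised install where the Home Assistant container is named homeassistant and uses host networking, with SSL enabled like mine):

# is anything bound to port 8123 on the host?
sudo ss -tlnp | grep 8123

# does it answer? (-k because of the DuckDNS certificate)
curl -k https://127.0.0.1:8123

# which containers are up, and what does Home Assistant itself log?
docker ps
docker logs homeassistant --tail 50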

Supervisor log:

19-09-03 14:08:54 ERROR (MainThread) [hassio.homeassistant] Error on call https://172.30.32.1:8123/api/states/binary_sensor.snapshots_stale: Cannot connect to host 172.30.32.1:8123 ssl:None [Connection refused]
19-09-03 14:08:54 ERROR (MainThread) [hassio.api.proxy] Error on API for request states/binary_sensor.snapshots_stale
19-09-03 14:13:59 ERROR (MainThread) [hassio.homeassistant] Error on call https://172.30.32.1:8123/api/states/binary_sensor.snapshots_stale: Cannot connect to host 172.30.32.1:8123 ssl:None [Connection refused]
19-09-03 14:13:59 ERROR (MainThread) [hassio.api.proxy] Error on API for request states/binary_sensor.snapshots_stale
19-09-03 14:19:04 ERROR (MainThread) [hassio.homeassistant] Error on call https://172.30.32.1:8123/api/states/binary_sensor.snapshots_stale: Cannot connect to host 172.30.32.1:8123 ssl:None [Connection refused]
19-09-03 14:19:04 ERROR (MainThread) [hassio.api.proxy] Error on API for request states/binary_sensor.snapshots_stale
19-09-03 14:24:09 ERROR (MainThread) [hassio.homeassistant] Error on call https://172.30.32.1:8123/api/states/binary_sensor.snapshots_stale: Cannot connect to host 172.30.32.1:8123 ssl:None [Connection refused]
19-09-03 14:24:09 ERROR (MainThread) [hassio.api.proxy] Error on API for request states/binary_sensor.snapshots_stale
19-09-03 14:28:52 INFO (MainThread) [hassio.store.git] Update add-on https://github.com/sabeechen/hassio-google-drive-backup repository
19-09-03 14:28:52 INFO (MainThread) [hassio.store.git] Update add-on https://github.com/home-assistant/hassio-addons repository
19-09-03 14:28:52 INFO (MainThread) [hassio.store.git] Update add-on https://github.com/hassio-addons/repository repository
19-09-03 14:28:53 INFO (MainThread) [hassio.store] Load add-ons from store: 61 all - 0 new - 0 remove
19-09-03 14:29:14 ERROR (MainThread) [hassio.homeassistant] Error on call https://172.30.32.1:8123/api/states/binary_sensor.snapshots_stale: Cannot connect to host 172.30.32.1:8123 ssl:None [Connection refused]
19-09-03 14:29:14 ERROR (MainThread) [hassio.api.proxy] Error on API for request states/binary_sensor.snapshots_stale
19-09-03 14:34:20 ERROR (MainThread) [hassio.homeassistant] Error on call https://172.30.32.1:8123/api/states/binary_sensor.snapshots_stale: Cannot connect to host 172.30.32.1:8123 ssl:None [Connection refused]
19-09-03 14:34:20 ERROR (MainThread) [hassio.api.proxy] Error on API for request states/binary_sensor.snapshots_stale
19-09-03 14:39:25 ERROR (MainThread) [hassio.homeassistant] Error on call https://172.30.32.1:8123/api/states/binary_sensor.snapshots_stale: Cannot connect to host 172.30.32.1:8123 ssl:None [Connection refused]
19-09-03 14:39:25 ERROR (MainThread) [hassio.api.proxy] Error on API for request states/binary_sensor.snapshots_stale

Home assistant log:

2019-09-03 17:43:03 ERROR (MainThread) [homeassistant.components.hassio.handler] Client error on /supervisor/options request Cannot connect to host 172.30.32.2:80 ssl:None [Connect call failed ('172.30.32.2', 80)]
2019-09-03 17:43:04 WARNING (MainThread) [homeassistant.components.sensor] Platform tautulli not ready yet. Retrying in 30 seconds.
2019-09-03 17:43:05 ERROR (MainThread) [homeassistant.components.hassio.handler] Client error on /homeassistant/info request Cannot connect to host 172.30.32.2:80 ssl:None [Connect call failed ('172.30.32.2', 80)]
2019-09-03 17:43:05 WARNING (MainThread) [homeassistant.components.hassio] Can't read last version: 
2019-09-03 17:43:05 ERROR (MainThread) [homeassistant.components.hassio.handler] Client error on /ingress/panels request Cannot connect to host 172.30.32.2:80 ssl:None [Connect call failed ('172.30.32.2', 80)]
2019-09-03 17:43:05 ERROR (MainThread) [homeassistant.components.hassio.addon_panel] Can't read panel info: 
2019-09-03 17:43:06 ERROR (MainThread) [homeassistant.components.sensor] postnl: Error on device update!
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 160, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw)
  File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", line 57, in create_connection
    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
  File "/usr/local/lib/python3.7/socket.py", line 748, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -3] Try again

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 603, in urlopen
    chunked=chunked)
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 344, in _make_request
    self._validate_conn(conn)
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 843, in _validate_conn
    conn.connect()
  File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 316, in connect
    conn = self._new_conn()
  File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 169, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x7ffb42f15dd0>: Failed to establish a new connection: [Errno -3] Try again

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
    timeout=timeout
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 641, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/local/lib/python3.7/site-packages/urllib3/util/retry.py", line 399, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='jouw.postnl.nl', port=443): Max retries exceeded with url: /mobile/api/letters/%5E1%5E550%5E20190826161234286 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7ffb42f15dd0>: Failed to establish a new connection: [Errno -3] Try again'))
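
Side note: the postnl traceback above bottoms out in socket.gaierror [Errno -3], meaning the DNS lookup itself failed, so name resolution inside the Home Assistant container appears to be broken too. A quick way to test that (assuming the default container name homeassistant):

# try the same lookup the integration makes, from inside the container
docker exec homeassistant python3 -c "import socket; print(socket.getaddrinfo('jouw.postnl.nl', 443))"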

configuration.yaml:

homeassistant:
  customize: !include customize.yaml
  auth_providers:
   - type: homeassistant
   - type: legacy_api_password
     api_password: !secret http_password
   - type: trusted_networks
     trusted_networks:
       - 192.168.1.0/24
  whitelist_external_dirs:
   - "/config/tts"

hassio:

#Home assistant cloud (Nabu Casa)
cloud:
  
# Enables the frontend
frontend:
  themes: !include themes.yaml
  javascript_version: latest
  extra_html_url:
    - /local/custom_ui/state-card-floorplan.html

#lovelace:
#  mode: yaml

system_health:
    
# Enables configuration UI
config:

http:
  base_url: !secret base_url
  ssl_certificate: /ssl/fullchain-duckdns.pem
  ssl_key: /ssl/privkey-duckdns.pem
  cors_allowed_origins:
    - https://cast.home-assistant.io

updater:
  # Optional, allows Home Assistant developers to focus on popular components.
  include_used_components: true

#Home Assistant Community Store (HACS)
#hacs:
#  token: !secret hacs_token

# Discover some devices automatically
discovery:
  ignore:
    - igd

# Enables support for tracking state changes over time
history:

# View all events in a logbook
logbook:

#HASS DB
recorder:
  db_url: !secret db_mysql
  purge_keep_days: 90
 
# Track the sun
sun:

#WOL Service
wake_on_lan:  
  
# Text to speech
tts:
  - platform: google_translate
    service_name: google_say
    language: nl
    base_url: !secret base_url
    cache: true
    cache_dir: /config/tts
    time_memory: 300
    
#Floorplan panel
panel_custom:
  - name: Floorplan
    sidebar_title: Floorplan
    sidebar_icon: mdi:home
    url_path: floorplan
    config: !include floorplan.yaml

#MQTT 
mqtt:
  broker: 172.17.0.1
  port: 1883
  discovery: true
  username: !secret mqtt_username
  password: !secret mqtt_password
  
#Media players
media_player:
  - platform: spotify
    client_id: !secret spotify_client_id
    client_secret: !secret spotify_client_secret 
    
#IFTTT
ifttt:
  key: !secret ifttt_key

#Device Trackers
device_tracker:
# TADO (HASSIO)
  - platform: tado
    username: !secret amazon_user
    password: !secret tado_password
    home_id: !secret tado_home_id
#NMAP
  - platform: nmap_tracker
    hosts: 192.168.1.50-200
    home_interval: 10
    exclude:
     - 192.168.1.1
    scan_options: " --privileged -sP "
    interval_seconds: 30
    new_device_defaults:
      track_new_devices: true
      hide_if_away: true
#Bluetooth
  - platform: bluetooth_tracker
    interval_seconds: 30
    new_device_defaults:
      track_new_devices: true
      hide_if_away: true
  - platform: bluetooth_le_tracker
    interval_seconds: 30
    new_device_defaults:
      track_new_devices: true
      hide_if_away: true
    
#SIMULATED
  - platform: mqtt
    devices:
      babysitter: 'smartthings/Oppas'
    
#Lights
light: !include config_lights.yaml
yeelight: !include config_yeelight.yaml

switch: !include config_switches.yaml
    
#Sensors
sensor: !include config_sensors.yaml
  
#Sensors
binary_sensor: !include config_binary_sensors.yaml

#Groups
group: !include groups.yaml

#Automations
automation: !include automations.yaml

#Scripts
script: !include scripts.yaml

#Persons
person: !include config_person.yaml

#RESTful commands
rest_command: !include config_REST.yaml

### Bedroom TV Commands ###
shell_command:
  bedroom_tv_turn_off: 'curl -X POST -H "Content-Type: application/json" -d "{ \"key\": \"Standby\" }" http://192.168.1.108:1925/1/input/key'

### Xiaomi Vacuum ###
vacuum:
  - platform: xiaomi_miio
    host: 192.168.1.249
    token: !secret monica_token
    name: Monica

input_select:
  vacuum_room:
    name: Choose a room to clean
    options:
      - Select Input
      - Dining Room
      - Hallway
      - Kitchen
      - Living Room
      - Bunnies
    initial: Select Input
### Xiaomi Vacuum ###

# Thermostat #
tado:
  username: !secret amazon_user
  password: !secret tado_password

# NEST Account
nest:
  client_id: !secret nest_client_ID
  client_secret: !secret nest_secret

# Weather by Buienradar
weather:
  - platform: buienradar
    name: 'hoogland'
  
#Inputs
var:
  monica_last_cleaning:
    restore: true
  monica_last_zone:
    restore: true
  simulate_presence:
    restore: false
  test_value:
    restore: false
    initial_value: 0
  tmp:
    restore: false
    initial_value: 0
    #Used for temporary values in, among others, the alarm system.
  phone_notify:
    restore: true
  rova:
    restore: true
    icon: 'mdi:delete'
  postnl_package:
    restore: true
    friendly_name: 'PostNL Pakket'
    value_template: >-
      {% if states.sensor.postnl_delivery.state|int > 0 %}
      {% set s= states.sensor.postnl_delivery.attributes.shipments[0]['status']['formatted']['short'] %}
      {% set t1 = (s|regex_findall_index('time:[^T]*T[^+-]*',0)).replace('time:','') %}
      {% set d = t1.split('T')[0] %}
      {% set t1 = t1.split('T')[1].rsplit(':',1)[0] %}
      {% set t2 = (s|regex_findall_index('time:[^T]*T[^+-]*',1)).replace('time:','').split('T')[1].rsplit(':',1)[0] %}
      Er is {{ states.sensor.postnl_delivery.state }} pakketje onderweg van {{ states.sensor.postnl_delivery.attributes.shipments[0]['title'] }}. Bezorgtijd op {{ d }} tussen {{ t1 }} en {{ t2 }}.
      {% else %}
      Er wordt geen pakket verwacht.
      {% endif %}
    icon: 'mdi:email'
input_boolean:
  vacuum_ready:
    name: Monica cleaned while away
  backdoor_light_auto_on:
    name: Backdoor turned on lights
input_number:
  pokon_latest:
    name: Plant pokon laatste melding
    initial: 250
    min: 0
    max: 250
    step: 1
  water_latest:
    name: Plant water laatste melding
    initial: 100
    min: 0
    max: 100
    step: 1 
         
#Cameras
camera: !include config_cameras.yaml

#Alarm system
alarm_control_panel:
  platform: manual
  name: Thuis
  code: !secret alarm_code
  code_arm_required: false
  pending_time: 0

#Enable streaming
stream:
    
#For calendar
google:
  client_id: !secret google_client_id
  client_secret: !secret google_client_secret

Hope someone can help me out. :pleading_face:

When I stop the ‘Google Drive Backup’ add-on in Portainer, the error disappears, but hassio is still unreachable through the web interface.

Did you solve this? I’m having the same problem.

Unfortunately no.
I did a fresh install.

Ugh, I’m running into the exact same issue. It all started when the hassio_supervisor image was updated to the latest :latest image.

What I don’t get is:

2020-01-02 11:02:42 ERROR (MainThread) [homeassistant.components.hassio.handler] Client error on /supervisor/options request Cannot connect to host 172.30.32.2:80 ssl:None [Connect call failed ('172.30.32.2', 80)]

172.30.32.2 is my DNS image, not the supervisor. I don’t know why it’s trying to hit :80 on hassio_dns. It seems like the IPs got mixed up somewhere. I’m trying to figure that one out.

Actually, not anymore. I see what’s going on. The IPs have changed, but the containers haven’t been updated to match for some reason:

From within supervisor:

# Log:
20-01-02 18:07:02 INFO (MainThread) [hassio.api] Start API on 172.30.32.2

# Within the image
pz@hermes:~$ sudo docker exec -it hassio_supervisor bash
bash-5.0# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:04  
          inet addr:172.17.0.4  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:64 errors:0 dropped:0 overruns:0 frame:0
          TX packets:25 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:16800 (16.4 KiB)  TX bytes:2665 (2.6 KiB)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:1E:21:00  
          inet addr:172.30.33.0  Bcast:172.30.33.255  Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:87 errors:0 dropped:2 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:13679 (13.3 KiB)  TX bytes:437 (437.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:386 (386.0 B)  TX bytes:386 (386.0 B)

The container is no longer on that IP. That’s why it’s failing…
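
If you want to verify this on your own box, Docker will tell you which IP each container actually got, so you can compare it against what the supervisor logs (a sketch, assuming the default network and container names of a supervised install):

# list every container on the hassio network together with its assigned IP
docker network inspect hassio

# or ask for a single container's addresses directly
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' hassio_supervisor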

FYI: all of this broke because the IP addresses of the containers changed, and the expected IPs are hard-coded in the install.

I don’t know why, or what the networking setup should be, but I can tell you this is why these things are happening.

PS: to fix all this, the easiest solution I found was to delete all containers, all of them, and then restart the box. The supervisor service will bring everything back up the way it should be, including the IP addresses.
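
In shell terms, that boils down to roughly this (a sketch; be aware it removes every container on the host, not just the hassio ones, and I’m assuming the hassio-supervisor systemd unit the supervised installer sets up):

# stop the supervisor service so it doesn't respawn containers mid-cleanup
sudo systemctl stop hassio-supervisor.service

# remove all containers, running and stopped
docker rm -f $(docker ps -aq)

# reboot; the supervisor service starts on boot and recreates its containers
sudo reboot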

Worked for me, at least. Strangely, a straight host restart didn’t fix it either; Docker must persist its existing containers and their settings across reboots.

I think I have something very similar happening to me here; it’s been driving me crazy for a few weeks now.

I was originally running hassio on Ubuntu but moved over to Docker on a Synology DiskStation. I’ve had to move shared folders around on the Synology to get to the right underlying filesystem (ext4 to btrfs), and I think somewhere along the way the connectivity and containers have become mixed up. I can only assume that the running containers have IPs which don’t match what the networking expects.

A note on this comment of mine: Docker does retain containers on host restart. This is why you have to delete all existing containers and then let everything restart itself.
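
You can see the retained containers, including stopped ones, with:

docker ps -a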

G’day,

I’m having this issue on a fresh install. Would you mind outlining the procedure to delete the containers? I’ve not had to do this before.

Me too. What containers are you referring to? I am on hassio on a Pi.

Hi all,

I’m having a similar problem to this.
I made some changes in the traefik file for some other containers and did a “docker compose up -d”.
Now all the internal IP addresses have changed, so I can’t access hassio, Visual Studio Code, the configurator, …, all the extra containers created by Home Assistant.

Is there a way to change those IP addresses, or to set them as fixed?

kind regards

Never mind, I did a container restart of hassio and it linked to Home Assistant again.

I just created a new instance: new VM, freshly downloaded image, no add-ons installed, and I still see this error in my logs.
I am just going to ignore it.

How exactly did you do that? No one has explained that.

docker restart <container name>

Hi all,

I have the same issue. Has anyone opened an issue on GitHub already?

I use Portainer, so I can do it in there; if not, then use what ieatacid said:
‘docker restart <container name>’ or even ‘docker start <container name>’ in the console.
Maybe your container isn’t running.

I’m having a similar issue in my MQTT add-on, which I had to switch to because Hive MQTT is being deprecated. But whose IP address is this? It seems to be a public IP address from LA. Could it be Nabu Casa?
New client connected from 172.30.32.1 as auto-xxxxxxx (p2, c1, k60, u’homeassistant’)
That username is different than the one I set, but it may be imported from the Home Assistant users.

Just worried about security.
Thanks

I only noticed because my MQTT refused any connections.
Thanks to the hints above, I just had to restart the supervisor container. It then restarted a few times because it auto-updated itself, and then I restarted the add-ons through the UI.

To see all containers in the shell:
docker ps
Then find the container whose image is named something like “homeassistant/armv7-hassio-supervisor” and restart it by name or ID:
docker restart 77edf8e460ff
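
If you’d rather restart it by name instead of by ID, on a default install the container is simply called hassio_supervisor:

docker restart hassio_supervisor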
