Home Assistant on Synology / connection timed out

Hi,
I’m trying to get Home Assistant running on my Synology DS918+. I used the great tutorial How to Install Home Assistant on Your Synology NAS – Marius Hosting for this.
I can’t get past step 9: my_IP:8123 does not work, the browser returns “connection timed out”. By contrast, I can reach the Synology fine via my_IP:5043. I am not asking about access from outside or through a domain yet, I just want to get HA up and running. Most likely it is about enabling a port or something like that. Could someone please help me with this?

What does the log of the HA container say?

stream	content
stderr	2023-01-16 11:26:58.172 ERROR (Thread-6) [pychromecast.socket_client] [Tv(ip):8009] Failed to connect to service ServiceInfo(type='mdns', data='BRAVIA-2015-698e8b295ab3b1cd6b40099be980ea72._googlecast._tcp.local.'), retrying in 5.0s
stderr	s6-rc: info: service legacy-services successfully started
stderr	services-up: info: copying legacy longrun home-assistant (no readiness notification)
stderr	s6-rc: info: service legacy-services: starting
stderr	s6-rc: info: service legacy-cont-init successfully started
stderr	s6-rc: info: service legacy-cont-init: starting
stderr	s6-rc: info: service fix-attrs successfully started
stderr	s6-rc: info: service fix-attrs: starting
stderr	s6-rc: info: service s6rc-oneshot-runner successfully started
stderr	s6-rc: info: service s6rc-oneshot-runner: starting
stderr	s6-rc: info: service s6rc-oneshot-runner successfully stopped
stderr	s6-rc: info: service s6rc-oneshot-runner: stopping
stderr	s6-rc: info: service fix-attrs successfully stopped
stderr	s6-rc: info: service fix-attrs: stopping
stderr	s6-rc: info: service legacy-cont-init successfully stopped
stderr	s6-rc: info: service legacy-cont-init: stopping
stderr	s6-rc: info: service legacy-services successfully stopped
stderr	[09:02:27] INFO: Home Assistant Core service shutdown
stderr	[09:02:27] INFO: Home Assistant Core finish process exit code 0
stdout	Unable to find configuration. Creating default one in /config
stderr	s6-rc: info: service legacy-services: stopping
stderr	s6-rc: info: service legacy-services successfully started
stderr	services-up: info: copying legacy longrun home-assistant (no readiness notification)
stderr	s6-rc: info: service legacy-services: starting
stderr	s6-rc: info: service legacy-cont-init successfully started
stderr	s6-rc: info: service legacy-cont-init: starting
stderr	s6-rc: info: service fix-attrs successfully started
stderr	s6-rc: info: service fix-attrs: starting
stderr	s6-rc: info: service s6rc-oneshot-runner successfully started
stderr	s6-rc: info: service s6rc-oneshot-runner: starting

Is that the last of the rows that you see?
And can you see that the config folder has been populated?

I deleted the container and the files from the install folder and made it again. Now I have a clean log:

home_assistant
date	stream	content
16.01.23 12:11	stderr	s6-rc: info: service legacy-services successfully started
16.01.23 12:11	stderr	services-up: info: copying legacy longrun home-assistant (no readiness notification)
16.01.23 12:11	stderr	s6-rc: info: service legacy-services: starting
16.01.23 12:11	stderr	s6-rc: info: service legacy-cont-init successfully started
16.01.23 12:11	stderr	s6-rc: info: service legacy-cont-init: starting
16.01.23 12:11	stderr	s6-rc: info: service fix-attrs successfully started
16.01.23 12:11	stderr	s6-rc: info: service fix-attrs: starting
16.01.23 12:11	stderr	s6-rc: info: service s6rc-oneshot-runner successfully started
16.01.23 12:11	stderr	s6-rc: info: service s6rc-oneshot-runner: starting

But the problem remains.

My configuration.yaml contains:

# Loads default set of integrations. Do not remove.
default_config:

# Load frontend themes from the themes folder
frontend:
  themes: !include_dir_merge_named themes

# Text to speech
tts:
  - platform: google_translate

automation: !include automations.yaml
script: !include scripts.yaml
scene: !include scenes.yaml
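
As far as I know this is the stock default. With no http: block, Home Assistant should listen on port 8123 over plain HTTP; the explicit equivalent would be something like this (just for reference, not something that needs adding):

# Hypothetical explicit form of the defaults; not required in configuration.yaml.
http:
  server_port: 8123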

I just tried the same as per the link you sent (needed sudo, though) from the command line; this works fine:

sudo docker run -d --name=home_assistant \
-e TZ=Europe/Bucharest \
-v /volume1/docker/ha:/config \
--net=host \
--restart always \
homeassistant/home-assistant

And then I opened http://192.168.1.131:8123, which brought me to the onboarding screen (that IP is the IP of my NAS). Could it be that you have a router that is blocking the IP and/or port?
I myself cannot remember having opened 8123 specifically on the NAS.
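
To double-check on the NAS itself that the container is up and actually listening, something like the following over SSH should work (a sketch; curl may or may not be present on DSM):

# List the container and confirm it is running (host networking, so no port mappings are shown)
sudo docker ps --filter name=home_assistant
# Tail the recent log output
sudo docker logs --tail 20 home_assistant
# Ask HA for its headers locally, bypassing any external firewall
curl -I http://127.0.0.1:8123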

My docker log shows this:

Do you mean you are not asking for access from the outside? Fairly important, this.

Does Docker show the container as running in DSM, like this?

And you’re definitely trying to connect with http:// not https://?

I am using https://192.168.88.171:5043/ to manage the NAS. It works great. But the same IP with the HA port (https://192.168.88.171:8123 / http://192.168.88.171:8123) does not work (timeout).

I have a MikroTik router. Do you know how to set it up, or how to check whether the port is blocked?

Same as me.
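
For what it’s worth, you can list the active rules from the RouterOS terminal (assuming SSH or Winbox access). Note, though, that traffic between two devices on the same LAN normally never passes through the router’s IP firewall, so I would not expect the MikroTik itself to be the culprit here:

# Print the firewall filter and NAT rules on RouterOS
/ip firewall filter print
/ip firewall nat print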

Yes, I am trying to connect from the local network, from a computer that is connected by cable to the same router as the NAS. I manage the NAS from that same computer, on the same IP and a different port. I would like to have access to HA from the outside as well in the future, but since I don’t want the rest of the NAS to be reachable from outside, that will probably be beyond my capabilities.

Yes, it appears the same to me, see screenshot.

Yes, I tried both the http and https versions. When managing the NAS, only the https version works for me; the http version returns a 404 error.

So from what I read/see it seems to be running. Did you install any firewall or other access restrictions?
Can you try to run this from another machine and see if it has port 8123 open:

nmap -Pn -p 8123 192.168.88.171

Interesting!

Starting Nmap 7.60 ( https://nmap.org ) at 2023-01-17 09:20 CET
Nmap scan report for 192.168.88.171
Host is up.

PORT     STATE    SERVICE
8123/tcp filtered polipo

Starting Nmap 7.60 ( https://nmap.org ) at 2023-01-17 09:20 CET
Nmap scan report for 192.168.88.171
Host is up (0.00031s latency).

PORT     STATE SERVICE
5043/tcp open  swxadmin

In the firewall there are these rules:

The first is an allow rule (screenshot taken 2023-01-17 09-25-55).

The second allows port 32400.

I allowed the port on the computer from which I need access:

sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
8123                       ALLOW IN    Anywhere                  
10080/udp                  ALLOW IN    Anywhere                  
22/tcp                     ALLOW IN    Anywhere                  
8123 (v6)                  ALLOW IN    Anywhere (v6)             
10080/udp (v6)             ALLOW IN    Anywhere (v6)             
22/tcp (v6)                ALLOW IN    Anywhere (v6)             

8123                       ALLOW OUT   Anywhere                  
10080/udp                  ALLOW OUT   Anywhere                  
8123 (v6)                  ALLOW OUT   Anywhere (v6)             
10080/udp (v6)             ALLOW OUT   Anywhere (v6)

But the result is the same:

Starting Nmap 7.60 ( https://nmap.org ) at 2023-01-17 09:20 CET
Nmap scan report for 192.168.88.171
Host is up.

PORT     STATE    SERVICE
8123/tcp filtered polipo
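
Side note: ufw on that computer only filters traffic arriving at or leaving the computer itself, and with the default allow (outgoing) policy it would not have blocked this connection anyway. A quicker reachability test from the client, assuming nc or curl is installed:

# Try a plain TCP connect to the HA port
nc -vz 192.168.88.171 8123
# Or fetch the page with a 5-second timeout
curl -m 5 http://192.168.88.171:8123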

So for me it shows:

PORT     STATE SERVICE
8123/tcp open  polipo

Noting that I can analyse things, but I am not an expert on networks/ports. Still, filtered means the probes are being silently dropped, which usually points at a firewall, while open means something is actually answering on the port.
You could try to briefly shut down the NAS firewall (in DSM under Control Panel > Security > Firewall) and see if that helps identify it as the cause?
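
If you would rather not switch the firewall off, you could also inspect the rules DSM generates, e.g. over SSH (a sketch; assumes iptables is accessible on your DSM version):

# Look for a rule that allows the HA port (no output would suggest it is not explicitly allowed)
sudo iptables -L -n | grep 8123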

Wow, it was indeed the firewall on the NAS. The port had to be explicitly allowed. HTTPS does not work (SSL_ERROR_RX_RECORD_TOO_LONG, which makes sense, since the container serves plain HTTP on 8123, so a TLS handshake gets an unencrypted response), but HTTP does.

Thanks a lot for your help :).

OK, well… a step in the right direction then. You would then need to review how to allow the other ports for https… but for that I would recommend using a reverse proxy, either following the Marius Hosting path or nginx… I use the Synology proxy myself.
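
If you do go the reverse proxy route, note that Home Assistant must be told to trust the proxy or it will reject the forwarded requests. A minimal sketch for configuration.yaml, assuming the proxy runs on the NAS itself:

http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 127.0.0.1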