TTS generating the voice (mp3) URL for media players with the wrong port

Hi guys.
I’m running HA on port 8443 (both external and internal), but the URL for the TTS-generated files sent to the media player uses port 8123.
It looks like the TTS integrations (I’ve tried both Google and Piper) are generating the URL with the default port instead of the configured HA port. Of course the media players can’t play the file, as it is published on port 8443 while the player is looking on port 8123.
Has anyone seen the same behaviour?

An example of the URL sent to the media player (tried with Kodi and some other DLNA devices) is:

http://10.1.1.nnn:8123/api/tts_proxy/1e4e888ac66f8dd41e00c5a7ac36a32a9950d271_el-gr_-_tts.piper.mp3
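
For context, I’m triggering the announcement with a standard tts.speak action, roughly like this (just a sketch; the entity IDs are examples from my setup, nothing special):

service: tts.speak
target:
  entity_id: tts.piper
data:
  media_player_entity_id: media_player.kodi
  message: "This is a test announcement"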

My configuration.yaml is as follows:

default_config:
    internal_url: http://10.1.1.nnn:8443
    external_url: https://xxx.xxx.xxx:8443

http:
   server_port: 8443
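
For reference, the configuration docs show internal_url / external_url nested under the homeassistant: key rather than default_config:; a minimal sketch of that layout with my values would be:

homeassistant:
  internal_url: http://10.1.1.nnn:8443
  external_url: https://xxx.xxx.xxx:8443

http:
  server_port: 8443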

Just for the sake of giving full info: I’m running HTTPS for the external address with the Nginx add-on and Cloudflare, but that shouldn’t be the cause of the issue, as I’m playing TTS only on the local network; I just need the right port in the generated file URL that is sent to the media player.
Thanks for your help.
Gerry

Somewhere, in some config, you haven’t changed your port. Locally, where the actual HA instance runs, it should be 8123, but that has to be proxied if you’re serving it from 8443. Depending on what other services you’re running at home, you’d have DNS entries that point to the correct server, and then that server would point to the correct port.

Thanks for replying. My question then would be: who generates the file URL sent to the media player? If it is the remote/cloud TTS service, I can understand that it might be reading the port from DNS to build the URL. If it is instead generated by the HA TTS integration, I would expect no DNS lookup at all: the integration should know which port HA is running on and where the cached file is published. I’ll double-check the Nginx and Cloudflare configs.

8123 is coming from HA itself, because that’s the default. Whether or not the file is cached, it still has to look somewhere, and it thinks it’s pointing to 8123.
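
Roughly speaking, the TTS integration builds the absolute URL from HA’s own configured base URL and then hands it to the player; what the player receives is the equivalent of a call like this (just an illustrative sketch, using your example URL and a Kodi target):

service: media_player.play_media
target:
  entity_id: media_player.kodi
data:
  media_content_type: music
  media_content_id: http://10.1.1.nnn:8123/api/tts_proxy/1e4e888ac66f8dd41e00c5a7ac36a32a9950d271_el-gr_-_tts.piper.mp3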

Well, I’m going to “grep” the HA code to see where it is potentially coming from; otherwise I may just configure the Nginx add-on to forward all calls arriving on port 8123 to 8443, but that would just be a workaround.

Are you not proxying already?

Only for external HTTPS, on 443. My issue is only with local addresses.

OK, your config sounds really wonky. What should happen is that your HA instance runs on 8123, and you either port-forward to 8123 or proxy to 8123. That’s why you’re getting errors with TTS.

Ideally you’d be running some kind of DNS so that the internal and external URLs point to the same location, just for simplicity.

Unless there’s some major change I haven’t seen or don’t know about, I’d try making those changes and see if it fixes things for you.
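
In other words, something like this layout (a minimal sketch, assuming the NGINX Home Assistant SSL proxy add-on handles the HTTPS side and sits on the usual add-on network; adjust the addresses to your setup):

homeassistant:
  internal_url: http://10.1.1.nnn:8123
  external_url: https://xxx.xxx.xxx

http:
  server_port: 8123
  use_x_forwarded_for: true
  trusted_proxies:
    - 172.30.33.0/24   # add-on network; change if your proxy runs elsewhere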

I found the issue and sorted it.
As I suspected, it was something weird related to an HA cache or a hidden config.
Grepping for 8123 from the SSH console, I found a hidden file in a hidden folder that still had port 8123 configured.
The file is

/homeassistant/.storage/core.config

which contained the following config:

{
  "version": 1,
  "minor_version": 3,
  "key": "core.config",
  "data": {
    "latitude": blahhhblahhh,
    "longitude": blahhhblahhh,
    "elevation": 25,
    "unit_system_v2": "metric",
    "location_name": "Home",
    "time_zone": "Pacific/Auckland",
    "external_url": "https://my.public.domain:8123",
    "internal_url": "http://my.local.ip:8123",
    "currency": "NZD",
    "country": "NZ",
    "language": "en-GB"
  }
}

I just changed the ports to 8443, restarted HA, and it works.
I don’t know what type of cache/config this is. It may be some legacy stuff that is still there, or it is just out of sync with configuration.yaml for whatever reason, and the TTS integrations are picking up this value instead of the one in configuration.yaml.
I won’t dig any further as the issue is gone now. I hope this helps someone else who runs into something similar.
Thanks for your help @forsquirel.

So glad you got it fixed!