Unable to Provision Matter devices

I’ve been trying for days to provision Matter devices (mainly the IKEA Alpstuga), without success. I’ve run out of ideas.

The steps I followed:

  1. Factory reset the device.
  2. Connect the phone to the IoT network (VLAN).
  3. Sync the Thread credentials.
  4. Scan the QR code.
  5. Go through the “Connecting to Device” stage.
  6. Go through the “Generating Credentials” stage.
  7. Get stuck at the “Checking connectivity to Thread network” stage.

Sometimes it simply hangs at step 5 as well. When that happens, I factory reset and start over.

Things I already tried:

  • Cleared the cache for Google Play Services.
  • Retried syncing the Thread credentials (always while connected to IoT).
  • Rebooted everything.

My Hardware Setup:

  • ISP: in bridge mode; IPv6 not supported.
  • Router: UniFi UCG Ultra, VLANs enabled (more details below).
  • APs: UniFi U7 Pro and U6 Pro.
  • Thread Border Router: Home Assistant Connect ZBT-1.
  • Zigbee Coordinator: SkyConnect, running Zigbee for a long time.
  • HA Platform: Mini PC + Proxmox + HA OS.
  • Phone: Google Pixel 10 Pro (accessing HA through two server entries, one for a non-admin user and one for an admin user; using the admin one for this).

HA Network Config:


Network Configs:


(both show the same link-local address for some reason)


Firewall rules (IPv6):

Where Matter = Port 5540
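One caveat worth stating here: opening UDP 5540 alone between segments usually isn’t enough for commissioning; mDNS (UDP 5353 to ff02::fb) and ICMPv6 also need to pass. As a weak sanity check that datagrams toward a node’s Matter port at least leave without a local error, here is a small stdlib-only probe (the host and port are placeholders; a silent send proves little on its own, since Matter nodes won’t answer arbitrary packets, but an immediate ICMP “port unreachable” is a clear negative):

```python
# Weak reachability probe for a Matter node's UDP port (default 5540).
# A connected UDP socket surfaces ICMP errors on the next recv(), so
# "port unreachable" shows up as an OSError instead of silence.
import socket

def udp_probe(host: str, port: int = 5540, timeout: float = 1.0) -> bool:
    """Return True if a datagram was sent without an immediate ICMP error."""
    family = socket.AF_INET6 if ":" in host else socket.AF_INET
    with socket.socket(family, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.connect((host, port))  # connected UDP lets ICMP errors propagate
        s.send(b"\x00")
        try:
            s.recv(1)            # no real reply expected from a Matter node
        except socket.timeout:
            return True          # silence: packet left without a local error
        except OSError:
            return False         # e.g. ECONNREFUSED from ICMP port unreachable
    return True
```

A `False` here points at a firewall reject or missing route; a `True` only means nothing rejected the packet locally.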


What am I missing?


OTBR Logs:

-----------------------------------------------------------
 Add-on: OpenThread Border Router
 OpenThread Border Router add-on
-----------------------------------------------------------
 Add-on version: 2.15.3
 You are running the latest version of this add-on.
 System: Home Assistant OS 16.3  (amd64 / qemux86-64)
 Home Assistant Core: 2026.1.1
 Home Assistant Supervisor: 2026.01.0
-----------------------------------------------------------
 Please, share the above information when looking for help
 or support in, e.g., GitHub, forums or the Discord chat.
-----------------------------------------------------------
s6-rc: info: service banner successfully started
s6-rc: info: service otbr-agent: starting
[22:06:45] INFO: Setup OTBR firewall...
[22:06:45] INFO: Migrating OTBR settings if needed...
2026-01-12 22:06:45 ha asyncio[230] DEBUG Using selector: EpollSelector
2026-01-12 22:06:45 ha zigpy.serial[230] DEBUG Opening a serial connection to '/dev/serial/by-id/usb-Nabu_Casa_Home_Assistant_Connect_ZBT-1_<REMOVED>-if00-port0' (baudrate=460800, xonxoff=False, rtscts=True)
2026-01-12 22:06:45 ha serialx.platforms.serial_posix[230] DEBUG Configuring serial port '/dev/serial/by-id/usb-Nabu_Casa_Home_Assistant_Connect_ZBT-1_<REMOVED>-if00-port0'
2026-01-12 22:06:45 ha serialx.platforms.serial_posix[230] DEBUG Configuring serial port: [0, 0, 2147486896, 0, 4100, 4100, [b'\x03', b'\x1c', b'\x7f', b'\x15', b'\x04', 0, 0, b'\x00', b'\x11', b'\x13', b'\x1a', b'\x00', b'\x12', b'\x0f', b'\x17', b'\x16', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00']]
2026-01-12 22:06:45 ha serialx.platforms.serial_posix[230] DEBUG Setting low latency mode: True
2026-01-12 22:06:45 ha serialx.platforms.serial_posix[230] DEBUG Setting modem pins: ModemPins[dtr rts]
2026-01-12 22:06:45 ha serialx.platforms.serial_posix[230] DEBUG Setting TIOCMBIS: 0x00000006
2026-01-12 22:06:45 ha zigpy.serial[230] DEBUG Connection made: <serialx.platforms.serial_posix.PosixSerialTransport object at 0x7fafad82fa50>
2026-01-12 22:06:45 ha universal_silabs_flasher.spinel[230] DEBUG Sending frame SpinelFrame(header=SpinelHeader(transaction_id=0, network_link_id=0, flag=2), command_id=<CommandID.RESET: 1>, data=b'\x02')
2026-01-12 22:06:45 ha universal_silabs_flasher.spinel[230] DEBUG Sending data b'~\x80\x01\x02\xea\xf0~'
2026-01-12 22:06:45 ha serialx.descriptor_transport[230] DEBUG Immediately writing b'~\x80\x01\x02\xea\xf0~'
2026-01-12 22:06:45 ha serialx.descriptor_transport[230] DEBUG Sent 7 of 7 bytes
2026-01-12 22:06:45 ha serialx.descriptor_transport[230] DEBUG Event loop woke up reader
2026-01-12 22:06:45 ha serialx.descriptor_transport[230] DEBUG Received b'~\x80\x06\x00p\xeet~'
2026-01-12 22:06:45 ha universal_silabs_flasher.spinel[230] DEBUG Decoded HDLC frame: HDLCLiteFrame(data=b'\x80\x06\x00p')
2026-01-12 22:06:45 ha universal_silabs_flasher.spinel[230] DEBUG Parsed frame SpinelFrame(header=SpinelHeader(transaction_id=0, network_link_id=0, flag=2), command_id=<CommandID.PROP_VALUE_IS: 6>, data=b'\x00p')
2026-01-12 22:06:45 ha universal_silabs_flasher.spinel[230] DEBUG Sending frame SpinelFrame(header=SpinelHeader(transaction_id=3, network_link_id=0, flag=2), command_id=<CommandID.PROP_VALUE_GET: 2>, data=b'\x08')
2026-01-12 22:06:45 ha universal_silabs_flasher.spinel[230] DEBUG Sending data b'~\x83\x02\x08\xbc\x9a~'
2026-01-12 22:06:45 ha serialx.descriptor_transport[230] DEBUG Immediately writing b'~\x83\x02\x08\xbc\x9a~'
2026-01-12 22:06:45 ha serialx.descriptor_transport[230] DEBUG Sent 7 of 7 bytes
2026-01-12 22:06:45 ha serialx.descriptor_transport[230] DEBUG Event loop woke up reader
2026-01-12 22:06:45 ha serialx.descriptor_transport[230] DEBUG Received b'~\x83\x06\x08D\xe2}\xd8\xff\xfe\x8c\x8e\x85\x963~'
2026-01-12 22:06:45 ha universal_silabs_flasher.spinel[230] DEBUG Decoded HDLC frame: HDLCLiteFrame(data=b'\x83\x06\x08D\xe2\xf8\xff\xfe\x8c\x8e\x85')
2026-01-12 22:06:45 ha universal_silabs_flasher.spinel[230] DEBUG Parsed frame SpinelFrame(header=SpinelHeader(transaction_id=3, network_link_id=0, flag=2), command_id=<CommandID.PROP_VALUE_IS: 6>, data=b'\x08D\xe2\xf8\xff\xfe\x8c\x8e\x85')
2026-01-12 22:06:45 ha serialx.descriptor_transport[230] DEBUG Closing at the request of the application
2026-01-12 22:06:45 ha zigpy.serial[230] DEBUG Waiting for serial port to close
2026-01-12 22:06:45 ha serialx.descriptor_transport[230] DEBUG Closing connection: None
2026-01-12 22:06:45 ha serialx.descriptor_transport[230] DEBUG Closing file descriptor 7
2026-01-12 22:06:45 ha serialx.descriptor_transport[230] DEBUG Calling protocol `connection_lost` with exc=None
2026-01-12 22:06:45 ha zigpy.serial[230] DEBUG Connection lost: None
Adapter settings file /data/thread/0_44e2f8fffe8c8e85.data is the most recently used, skipping
[22:06:45] INFO: Starting otbr-agent...
[NOTE]-AGENT---: Running 0.3.0-b067e5ac-dirty
[NOTE]-AGENT---: Thread version: 1.3.0
[NOTE]-AGENT---: Thread interface: wpan0
[NOTE]-AGENT---: Radio URL: spinel+hdlc+uart:///dev/serial/by-id/usb-Nabu_Casa_Home_Assistant_Connect_ZBT-1_<REMOVED>-if00-port0?uart-baudrate=460800&uart-flow-control
[NOTE]-AGENT---: Radio URL: trel://enp0s18
[NOTE]-ILS-----: Infra link selected: enp0s18
49d.17:03:19.475 [C] P-SpinelDrive-: Software reset co-processor successfully
00:00:00.053 [N] RoutingManager: BR ULA prefix: <REMOVED>::/48 (loaded)
00:00:00.053 [N] RoutingManager: Local on-link prefix: fd53:5afc:a5bd:c5c::/64
00:00:00.067 [N] Mle-----------: Role disabled -> detached
00:00:00.075 [N] P-Netif-------: Changing interface state to up.
00:00:00.081 [W] P-Netif-------: Failed to process request#2: No such process
00:00:00.081 [W] P-Netif-------: Failed to process request#6: No such process
s6-rc: info: service otbr-agent successfully started
s6-rc: info: service otbr-agent-configure: starting
[22:06:46] INFO: Enabling NAT64.
Done
00:00:00.275 [W] P-Daemon------: Failed to write CLI output: Broken pipe
00:00:00.275 [W] P-Netif-------: Failed to process request#7: No such process
Done
Done
s6-rc: info: service otbr-agent-configure successfully started
s6-rc: info: service otbr-agent-rest-discovery: starting
[22:06:46] INFO: Successfully sent discovery information to Home Assistant.
s6-rc: info: service otbr-agent-rest-discovery successfully started
s6-rc: info: service legacy-services: starting
s6-rc: info: service legacy-services successfully started
00:00:25.844 [N] Mle-----------: RLOC16 8800 -> fffe
00:00:26.523 [N] Mle-----------: Attach attempt 1, AnyPartition reattaching with Active Dataset
00:00:33.023 [N] RouterTable---: Allocate router id 34
00:00:33.023 [N] Mle-----------: RLOC16 fffe -> 8800
00:00:33.026 [N] Mle-----------: Role detached -> leader
00:00:33.026 [N] Mle-----------: Partition ID 0x1201a1eb
[NOTE]-BBA-----: BackboneAgent: Backbone Router becomes Primary!
00:00:37.973 [W] DuaManager----: Failed to perform next registration: NotFound
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::4487:XXXX:XXXX:1853/veth474037f/15
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::4487:XXXX:XXXX:1853/veth474037f/15
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::5e:XXXX:XXXX:eab3/veth3783ae9/16
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::4487:XXXX:XXXX:1853/veth474037f/15
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::5e:XXXX:XXXX:eab3/veth3783ae9/16
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::4487:XXXX:XXXX:1853/veth474037f/15
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::5e:XXXX:XXXX:eab3/veth3783ae9/16
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::4487:XXXX:XXXX:1853/veth474037f/15
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::5e:XXXX:XXXX:eab3/veth3783ae9/16
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::4487:XXXX:XXXX:1853/veth474037f/15
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::5e:XXXX:XXXX:eab3/veth3783ae9/16
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::9cba:XXXX:XXXX:4aeb/veth7e241fc/17
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::9cba:XXXX:XXXX:4aeb/veth7e241fc/17
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::5e:XXXX:XXXX:eab3/veth3783ae9/16
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::4487:XXXX:XXXX:1853/veth474037f/15
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::5e:XXXX:XXXX:eab3/veth3783ae9/16
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::4487:XXXX:XXXX:1853/veth474037f/15
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::5e:XXXX:XXXX:eab3/veth3783ae9/16
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::9cba:XXXX:XXXX:4aeb/veth7e241fc/17
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::4487:XXXX:XXXX:1853/veth474037f/15
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::4487:XXXX:XXXX:1853/veth474037f/15
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::9cba:XXXX:XXXX:4aeb/veth7e241fc/17
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0d9:XXXX:XXXX:e520/veth598aace/18
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0d9:XXXX:XXXX:e520/veth598aace/18
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::9cba:XXXX:XXXX:4aeb/veth7e241fc/17
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::9cba:XXXX:XXXX:4aeb/veth7e241fc/17
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::9cba:XXXX:XXXX:4aeb/veth7e241fc/17
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::9cba:XXXX:XXXX:4aeb/veth7e241fc/17
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0d9:XXXX:XXXX:e520/veth598aace/18
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::9cba:XXXX:XXXX:4aeb/veth7e241fc/17
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0d9:XXXX:XXXX:e520/veth598aace/18
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0d9:XXXX:XXXX:e520/veth598aace/18
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0d9:XXXX:XXXX:e520/veth598aace/18
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0d9:XXXX:XXXX:e520/veth598aace/18
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0d9:XXXX:XXXX:e520/veth598aace/18
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0d9:XXXX:XXXX:e520/veth598aace/18
// --> Here I tried to provision the device - no further logs appeared
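Incidentally, the repeated “error 99” lines above can be decoded with Python’s errno module: on Linux, 99 is EADDRNOTAVAIL, which the mDNS responder hits when it tries to multicast out the Docker vethXXXX interfaces that lack a usable IPv6 source address. That noise is generally considered harmless; the more telling symptom is that nothing new was logged when provisioning started.

```python
# Decode the "error 99" from the mDNS responder log lines.
# Assumption: the add-on runs on Linux, where errno 99 is EADDRNOTAVAIL;
# the value differs on BSD/macOS.
import errno
import os

assert errno.EADDRNOTAVAIL == 99
print(os.strerror(errno.EADDRNOTAVAIL))  # "Cannot assign requested address" on glibc
```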

Matter Server Logs:

-----------------------------------------------------------
 Add-on: Matter Server
 Matter WebSocket Server for Home Assistant Matter support.
-----------------------------------------------------------
 Add-on version: 8.1.2
 You are running the latest version of this add-on.
 System: Home Assistant OS 16.3  (amd64 / qemux86-64)
 Home Assistant Core: 2026.1.0
 Home Assistant Supervisor: 2026.01.0
-----------------------------------------------------------
 Please, share the above information when looking for help
 or support in, e.g., GitHub, forums or the Discord chat.
-----------------------------------------------------------
s6-rc: info: service banner successfully started
s6-rc: info: service matter-server: starting
s6-rc: info: service matter-server successfully started
s6-rc: info: service legacy-services: starting
[01:52:25] INFO: Starting Matter Server...
s6-rc: info: service legacy-services successfully started
[01:52:26] INFO: Using 'enp0s18' as primary network interface.
[01:52:26] INFO: Successfully send discovery information to Home Assistant.
2026-01-12 01:52:28.569 (MainThread) INFO [matter_server.server.stack] Initializing CHIP/Matter Logging...
2026-01-12 01:52:28.569 (MainThread) INFO [matter_server.server.stack] Initializing CHIP/Matter Controller Stack...
[1768182748.733405][117:117] CHIP:CTL: Setting attestation nonce to random value
[1768182748.739536][117:117] CHIP:CTL: Setting CSR nonce to random value
[1768182748.762099][117:117] CHIP:DL: ChipLinuxStorage::Init: Using KVS config file: /tmp/chip_kvs
[1768182748.767195][117:117] CHIP:DL: Wrote settings to /tmp/chip_kvs
[1768182748.767673][117:117] CHIP:DL: ChipLinuxStorage::Init: Using KVS config file: /data/chip_factory.ini
[1768182748.772403][117:117] CHIP:DL: ChipLinuxStorage::Init: Using KVS config file: /data/chip_config.ini
[1768182748.772957][117:117] CHIP:DL: ChipLinuxStorage::Init: Using KVS config file: /data/chip_counters.ini
[1768182748.782355][117:117] CHIP:DL: Wrote settings to /data/chip_counters.ini
[1768182748.782397][117:117] CHIP:DL: NVS set: chip-counters/reboot-count = 35 (0x23)
[1768182748.782872][117:117] CHIP:DL: Got Ethernet interface: enp0s18
[1768182748.783074][117:117] CHIP:DL: Found the primary Ethernet interface:enp0s18
[1768182748.783425][117:117] CHIP:DL: Failed to get WiFi interface
[1768182748.783430][117:117] CHIP:DL: Failed to reset WiFi statistic counts
[1768182748.783434][117:117] CHIP:PAF: WiFiPAF: WiFiPAFLayer::Init()
2026-01-12 01:52:28.784 (MainThread) INFO [chip.storage] Initializing persistent storage from file: /data/chip.json
2026-01-12 01:52:28.784 (MainThread) INFO [chip.storage] Loading configuration from /data/chip.json...
2026-01-12 01:52:28.883 (MainThread) INFO [chip.CertificateAuthority] Loading certificate authorities from storage...
2026-01-12 01:52:28.884 (MainThread) INFO [chip.CertificateAuthority] New CertificateAuthority at index 1
2026-01-12 01:52:28.885 (MainThread) INFO [chip.CertificateAuthority] Loading fabric admins from storage...
2026-01-12 01:52:28.885 (MainThread) INFO [chip.FabricAdmin] New FabricAdmin: FabricId: <REMOVED>, VendorId = <REMOVED>
2026-01-12 01:52:28.885 (MainThread) INFO [matter_server.server.stack] CHIP Controller Stack initialized.
2026-01-12 01:52:28.885 (MainThread) INFO [matter_server.server.server] Matter Server initialized
2026-01-12 01:52:28.885 (MainThread) INFO [matter_server.server.server] Using 'enp0s18' as primary interface (for link-local addresses)
2026-01-12 01:52:28.886 (MainThread) INFO [matter_server.server.server] Starting the Matter Server...
2026-01-12 01:52:28.889 (MainThread) INFO [matter_server.server.helpers.paa_certificates] Fetching the latest PAA root certificates from DCL.
2026-01-12 01:52:31.195 (MainThread) INFO [matter_server.server.helpers.paa_certificates] Fetched 72 PAA root certificates from DCL.
2026-01-12 01:52:31.196 (MainThread) INFO [matter_server.server.helpers.paa_certificates] Fetching the latest PAA root certificates from Git.
2026-01-12 01:52:31.807 (MainThread) INFO [matter_server.server.helpers.paa_certificates] Fetched 2 PAA root certificates from Git.
2026-01-12 01:52:31.808 (MainThread) INFO [chip.FabricAdmin] Allocating new controller with CaIndex: 1, FabricId: <REMOVED>, NodeId: <REMOVED>, CatTags: []
2026-01-12 01:52:31.853 (MainThread) INFO [matter_server.server.vendor_info] Loading vendor info from storage.
2026-01-12 01:52:31.856 (MainThread) INFO [matter_server.server.vendor_info] Loaded 383 vendors from storage.
2026-01-12 01:52:31.856 (MainThread) INFO [matter_server.server.vendor_info] Fetching the latest vendor info from DCL.
2026-01-12 01:52:32.084 (MainThread) INFO [matter_server.server.vendor_info] Fetched 385 vendors from DCL.
2026-01-12 01:52:32.085 (MainThread) INFO [matter_server.server.vendor_info] Saving vendor info to storage.
2026-01-12 01:52:32.088 (MainThread) INFO [matter_server.server.device_controller] Loaded 0 nodes from stored configuration
2026-01-12 01:52:32.092 (MainThread) INFO [matter_server.server.server] Matter Server successfully initialized.
2026-01-12 01:52:32.092 (MainThread) INFO [matter_server.server.server] Matter Server successfully initialized.
(No further log output appeared here; the log viewer returned a 504 Gateway Time-out page from openresty.)

Installed HA Add-Ons:

:grey_exclamation: Notice some are disabled, like Home-Assistant-Matter-Hub.


Thread Hardware:


Matter:


OTBR:


Thread:


:exclamation: I redacted lots of values in the logs and screenshots. I’m paranoid! :smiley:

Thanks a lot and sorry for the huge post. Just trying to avoid multiple round-trips.

The Matter device, the provisioning device (your phone), the HA server, and the Matter server must all be on the same VLAN and Wi-Fi network.

Basically, don’t try getting this to work across VLANs.

Wi-Fi because there’s nothing that lets you choose a network; the device simply joins the one the phone is currently on.

That’s really difficult for me.

I don’t trust IoT devices enough to put them on the Trusted VLAN, as that gives them access to a lot of sensitive things I want to keep tight control over.

There must be a way to do it, based on what I’ve read around. I just don’t know what I’m doing wrong. I suspect it’s something firewall-related.

At the moment:

  • Device → Should join IoT
  • Phone → In IoT when provisioning the device
  • HA server → In Trusted (but should be reachable)
  • Matter server → In Trusted as it runs in the HA server

I’ve read that the HA server should not be multi-homed, so I discarded the option of creating a second network interface in Proxmox and putting it on the IoT VLAN.

I have other devices in IoT that are integrated with the HA server (ESPHome devices, etc.), although they all use IPv4 and probably don’t rely on mDNS.

Changing this would be 1) really time-consuming and 2) insecure.

I’m no expert on this, but since your ‘Trusted’ VLAN has a gateway of:

I think your IoT network needs the same gateway IPv6 address, i.e. fd10::01 instead of fd21::01.

It is not configurable. It uses exactly the same address as entered in the IPv6 Address field.

Maybe I’m using an invalid value here, though. I’m not that experienced with IPv6.

Initially I thought I’d put fd10::, but that would be an invalid gateway address for both, right?
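On the fd10:: question, the stdlib ipaddress module can check this kind of thing quickly. A sketch, using fd10::/64 and fd21::/64 purely as stand-ins for the redacted prefixes:

```python
# Sanity-checking ULA gateway candidates with the stdlib.
# The fd10::/64 (Trusted) and fd21::/64 (IoT) prefixes are illustrative
# stand-ins, not taken from the actual (redacted) config.
import ipaddress

trusted = ipaddress.ip_network("fd10::/64")
iot = ipaddress.ip_network("fd21::/64")

# fd00::/8 ULAs count as private address space
assert trusted.is_private and iot.is_private

# Conventional gateway picks: the ::1 host address inside each prefix
gw_trusted = ipaddress.ip_address("fd10::1")
gw_iot = ipaddress.ip_address("fd21::1")
assert gw_trusted in trusted and gw_iot in iot

# "fd10::" parses fine, but it is the all-zeros network address of
# fd10::/64, which routers generally won't use as a gateway address.
assert ipaddress.ip_address("fd10::") == trusted.network_address
```

So fd10:: isn’t syntactically invalid; it’s just the subnet’s network address, and a host address like fd10::1 is what a gateway would normally carry.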

I wonder if using a domain instead of an IP for the server URL makes any difference.
I use https://ha.<redacted>.xyz/ for both the external and internal URLs. This works because my router has hairpin firewall rules to handle that correctly.

My router only forwards ports 80/443 to Nginx Proxy Manager, which proxies a few services behind subdomains:

  • https://ha.<something>.xyz → HA
  • https://plex.<something>.xyz → Plex

For this reason, my HA Config contains:

http:
  use_x_forwarded_for: true
  trusted_proxies:
    - ::1
    - 10.0.10.11
  ip_ban_enabled: true
  login_attempts_threshold: 5
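For what it’s worth, trusted_proxies matching is plain address-in-network logic. A conceptual stdlib sketch (this mirrors the idea, not HA’s actual implementation; the entries are the ones from the config above):

```python
# Conceptual mirror of trusted_proxies matching: each entry is an
# address or network, and the connecting proxy's IP must fall inside
# one of them. Illustration only -- not Home Assistant's code.
import ipaddress

TRUSTED_PROXIES = [
    ipaddress.ip_network("::1/128"),
    ipaddress.ip_network("10.0.10.11/32"),
]

def is_trusted(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    # Version-mismatched comparisons simply return False
    return any(addr in net for net in TRUSTED_PROXIES)

assert is_trusted("10.0.10.11")
assert is_trusted("::1")
assert not is_trusted("10.0.10.12")
```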

I wonder if that is the broken link in the chain: the device might try to reach HA using the internal URL, which resolves only to IPv4 with no IPv6 address, even though HA would accept forwarded proxy requests over IPv6 as well.


Edit: Just tried with http://<IP v4>:8123 as the internal URL. Same behaviour unfortunately, so it doesn’t seem to make a difference.

I expected it wouldn’t, as the device should use mDNS with a link-local IPv6 address to discover the server’s IP. If I understand correctly, no URL is used for Matter.
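To make that concrete: commissioning discovery is ordinary mDNS, a PTR question for _matterc._udp.local (operational nodes use _matter._tcp.local) multicast to ff02::fb on UDP 5353, which is exactly the traffic the firewall has to let through. A minimal query packet built by hand with the stdlib, just to illustrate the format (real queries often also set the unicast-response bit):

```python
# Build a minimal mDNS PTR query, the kind of packet a Matter
# controller multicasts to ff02::fb:5353 during discovery.
import struct

def mdns_ptr_query(service: str) -> bytes:
    # DNS header: ID=0, flags=0, QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack("!6H", 0, 0, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, zero-terminated
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in service.split(".")
    ) + b"\x00"
    question = qname + struct.pack("!2H", 12, 1)  # QTYPE=PTR(12), QCLASS=IN(1)
    return header + question

pkt = mdns_ptr_query("_matterc._udp.local")  # commissionable-node service
```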

Maybe it’s worth mentioning that at random times my phone shows the alternative provisioning flow. I suppose that’s because the device is broadcasting its presence over BLE.

I didn’t try to provision it that way, though, as I read I should favour the HA Companion App approach.


EDIT: Tried now. Same behaviour after selecting Home Assistant.

Some more logs showed up for OTBR:

(The log viewer again returned a 504 Gateway Time-out page from openresty instead of the log output.)

I just had a look at mine and yeah, they are different between the Trusted and IoT VLANs. I haven’t yet set up any Matter over Wi-Fi devices, only Matter over Thread, and my Thread router is on my Trusted VLAN. (I have some Matter over Wi-Fi devices ready to set up next week, so I’ll see soon whether they’re happy on my IoT VLAN or not.)


I just realised the IKEA Alpstuga is a Thread device, not Wi-Fi. It should reach the ZBT-1 border router directly.

So the network discussion shouldn’t be relevant.

Or am I missing something?

If it’s a valid https URL, it may be OK.

I changed mine to the true internal/external URLs because the HA voice assistant uses the local URL for serving media (I hate this and wish it were configurable).

I don’t believe Matter uses this, but it is picky about all devices being on the same network. I don’t use Thread, so I’m not sure what that adds or subtracts in this regard.


Good point. I’ve started using the internal URL now as well, to anticipate this scenario.

It didn’t make any difference for the provisioning, though.