Thread/IPv6 issues with ZBT-2 and IKEA Timmerflotte/Alpstuga

Hi folks,

So I recently took the plunge and tried introducing Thread to my HA setup, but I'm running into some difficulties and would greatly appreciate any advice/guidance on troubleshooting.

A bit about my setup: I've been running HA as a VM in Proxmox reliably for several years (now on 15.2/2026.1.1). Matter has also been working well over Wi-Fi with several Meross plugs and Sengled bulbs, alongside ESPHome with BT proxies, WLED, some legacy Wemo stuff, etc. HA runs in its own dedicated IoT VLAN/2.4 GHz Wi-Fi with all HA devices on the local network segment.

I recently decided to go with a ZBT-2 (after reading positive reviews), using USB 2 port-level passthrough in Proxmox to the VM, and added the OTBR (2.15.3) and Thread integrations, with the ZBT-2 auto-detected correctly and the latest firmware (OpenThread RCP 2.4.4.0) installed by HA. I also introduced IPv6 into the local IoT segment only, with the OPNsense gateway configured, hopefully correctly, for RAs.
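For anyone wanting to verify that RAs are actually reaching the segment, a check like this from a Linux client on the VLAN should work (`eth0` is an example interface name; `rdisc6` comes from the `ndisc6` package):

```shell
# Solicit and print Router Advertisements on the IoT VLAN interface
rdisc6 eth0

# Confirm an IPv6 default route and a global/ULA address were learned
ip -6 route show
ip -6 addr show dev eth0
```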

I had an ESP32-C6 lying around, so I flashed it with the OpenThread component and, after some fiddling, got it to work as a Thread extender (see screenshot). The ESP32 connects to the PAN via its Thread radio and is pingable on its IPv6 address from HA, so far so good.

This past weekend I decided to pick up some of the newer IKEA Matter/Thread devices, including a Timmerflotte (temp/humidity sensor), an Alpstuga (CO2), a Bilresa (switch), several Myggspray (motion sensors), and a bunch of Mygebett (door/window sensors), figuring they should work with my Thread setup.

So far I've tried dozens of times to connect the Timmerflotte (MTD) and the Alpstuga (FTD): after scanning the QR code on my iPhone (on the same IoT Wi-Fi/VLAN) it goes into the Matter setup process and fails every time. I've sent the Thread credentials to the phone, so I don't think that's the problem.

Looking at the OTBR logs, there are some obvious routing/reachability issues. I started with dual NICs in the HA VM, but disabled one in Proxmox, as I read OTBR easily gets confused by a second NIC, and it was IPv4-only in any case. I have a Nest Hub (2nd gen) and an Apple TV (non-Thread version), but I've tried with these turned off, with the same outcome. Below are the OTBR logs.

[19:20:03] INFO: Starting mDNS Responder...
Default: mDNSResponder (Engineering Build) (Dec 15 2025 09:14:53) starting
-----------------------------------------------------------
 Add-on: OpenThread Border Router
 OpenThread Border Router add-on
-----------------------------------------------------------
 Add-on version: 2.15.3
 You are running the latest version of this add-on.
 System: Home Assistant OS 15.2  (amd64 / qemux86-64)
 Home Assistant Core: 2026.1.1
 Home Assistant Supervisor: 2026.01.0
-----------------------------------------------------------
 Please, share the above information when looking for help
 or support in, e.g., GitHub, forums or the Discord chat.
-----------------------------------------------------------
s6-rc: info: service banner successfully started
s6-rc: info: service otbr-agent: starting
[19:20:19] INFO: Setup OTBR firewall...
[19:20:21] INFO: Migrating OTBR settings if needed...
2026-01-13 19:20:25 homeassistant asyncio[226] DEBUG Using selector: EpollSelector
2026-01-13 19:20:25 homeassistant zigpy.serial[226] DEBUG Opening a serial connection to '/dev/serial/by-id/usb-Nabu_Casa_ZBT-2_DCB4D910CB78-if00' (baudrate=460800, xonxoff=False, rtscts=True)
2026-01-13 19:20:25 homeassistant serialx.platforms.serial_posix[226] DEBUG Configuring serial port '/dev/serial/by-id/usb-Nabu_Casa_ZBT-2_DCB4D910CB78-if00'
2026-01-13 19:20:25 homeassistant serialx.platforms.serial_posix[226] DEBUG Configuring serial port: [0, 0, 2147486896, 0, 4100, 4100, [b'\x03', b'\x1c', b'\x7f', b'\x15', b'\x04', 0, 0, b'\x00', b'\x11', b'\x13', b'\x1a', b'\x00', b'\x12', b'\x0f', b'\x17', b'\x16', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00', b'\x00']]
2026-01-13 19:20:25 homeassistant serialx.platforms.serial_posix[226] DEBUG Setting low latency mode: True
2026-01-13 19:20:25 homeassistant serialx.platforms.serial_posix[226] DEBUG Setting modem pins: ModemPins[dtr rts]
2026-01-13 19:20:25 homeassistant serialx.platforms.serial_posix[226] DEBUG Setting TIOCMBIS: 0x00000006
2026-01-13 19:20:25 homeassistant zigpy.serial[226] DEBUG Connection made: <serialx.platforms.serial_posix.PosixSerialTransport object at 0x7fb89ecc78d0>
2026-01-13 19:20:25 homeassistant universal_silabs_flasher.spinel[226] DEBUG Sending frame SpinelFrame(header=SpinelHeader(transaction_id=0, network_link_id=0, flag=2), command_id=<CommandID.RESET: 1>, data=b'\x02')
2026-01-13 19:20:25 homeassistant universal_silabs_flasher.spinel[226] DEBUG Sending data b'~\x80\x01\x02\xea\xf0~'
2026-01-13 19:20:25 homeassistant serialx.descriptor_transport[226] DEBUG Immediately writing b'~\x80\x01\x02\xea\xf0~'
2026-01-13 19:20:25 homeassistant serialx.descriptor_transport[226] DEBUG Sent 7 of 7 bytes
2026-01-13 19:20:25 homeassistant serialx.descriptor_transport[226] DEBUG Event loop woke up reader
2026-01-13 19:20:25 homeassistant serialx.descriptor_transport[226] DEBUG Received b'~\x80\x06\x00p\xeet~'
2026-01-13 19:20:25 homeassistant universal_silabs_flasher.spinel[226] DEBUG Decoded HDLC frame: HDLCLiteFrame(data=b'\x80\x06\x00p')
2026-01-13 19:20:25 homeassistant universal_silabs_flasher.spinel[226] DEBUG Parsed frame SpinelFrame(header=SpinelHeader(transaction_id=0, network_link_id=0, flag=2), command_id=<CommandID.PROP_VALUE_IS: 6>, data=b'\x00p')
2026-01-13 19:20:25 homeassistant universal_silabs_flasher.spinel[226] DEBUG Sending frame SpinelFrame(header=SpinelHeader(transaction_id=3, network_link_id=0, flag=2), command_id=<CommandID.PROP_VALUE_GET: 2>, data=b'\x08')
2026-01-13 19:20:25 homeassistant universal_silabs_flasher.spinel[226] DEBUG Sending data b'~\x83\x02\x08\xbc\x9a~'
2026-01-13 19:20:25 homeassistant serialx.descriptor_transport[226] DEBUG Immediately writing b'~\x83\x02\x08\xbc\x9a~'
2026-01-13 19:20:25 homeassistant serialx.descriptor_transport[226] DEBUG Sent 7 of 7 bytes
2026-01-13 19:20:25 homeassistant serialx.descriptor_transport[226] DEBUG Event loop woke up reader
2026-01-13 19:20:25 homeassistant serialx.descriptor_transport[226] DEBUG Received b'~\x83\x06\x08\x982h\xff\xfe\xbdq\x16\xff\xcb~'
2026-01-13 19:20:25 homeassistant universal_silabs_flasher.spinel[226] DEBUG Decoded HDLC frame: HDLCLiteFrame(data=b'\x83\x06\x08\x982h\xff\xfe\xbdq\x16')
2026-01-13 19:20:25 homeassistant universal_silabs_flasher.spinel[226] DEBUG Parsed frame SpinelFrame(header=SpinelHeader(transaction_id=3, network_link_id=0, flag=2), command_id=<CommandID.PROP_VALUE_IS: 6>, data=b'\x08\x982h\xff\xfe\xbdq\x16')
2026-01-13 19:20:25 homeassistant serialx.descriptor_transport[226] DEBUG Closing at the request of the application
2026-01-13 19:20:25 homeassistant zigpy.serial[226] DEBUG Waiting for serial port to close
2026-01-13 19:20:25 homeassistant serialx.descriptor_transport[226] DEBUG Closing connection: None
2026-01-13 19:20:25 homeassistant serialx.descriptor_transport[226] DEBUG Closing file descriptor 7
2026-01-13 19:20:25 homeassistant serialx.descriptor_transport[226] DEBUG Calling protocol `connection_lost` with exc=None
2026-01-13 19:20:25 homeassistant zigpy.serial[226] DEBUG Connection lost: None
Adapter settings file /data/thread/0_983268fffebd7116.data is the most recently used, skipping
[19:20:25] INFO: Starting otbr-agent...
[NOTE]-AGENT---: Running 0.3.0-b067e5ac-dirty
[NOTE]-AGENT---: Thread version: 1.3.0
[NOTE]-AGENT---: Thread interface: wpan0
[NOTE]-AGENT---: Radio URL: spinel+hdlc+uart:///dev/serial/by-id/usb-Nabu_Casa_ZBT-2_DCB4D910CB78-if00?uart-baudrate=460800&uart-flow-control
[NOTE]-AGENT---: Radio URL: trel://enp6s18
[NOTE]-ILS-----: Infra link selected: enp6s18
49d.17:04:34.235 [C] P-SpinelDrive-: Software reset co-processor successfully
00:00:00.115 [N] RoutingManager: BR ULA prefix: fd8b:6c6a:b802::/48 (loaded)
00:00:00.132 [N] RoutingManager: Local on-link prefix: fd73:f5dd:df1d:940::/64
s6-rc: info: service otbr-agent successfully started
s6-rc: info: service otbr-agent-configure: starting
00:00:00.256 [N] Mle-----------: Role disabled -> detached
00:00:00.390 [N] P-Netif-------: Changing interface state to up.
00:00:00.451 [W] P-Netif-------: Failed to process request#2: No such process
00:00:00.464 [W] P-Netif-------: Failed to process request#6: No such process
Done
00:00:00.907 [W] P-Daemon------: Failed to write CLI output: Broken pipe
s6-rc: info: service otbr-agent-configure successfully started
s6-rc: info: service otbr-agent-rest-discovery: starting
00:00:01.621 [W] P-Daemon------: Daemon read: Connection reset by peer
[19:20:29] INFO: Successfully sent discovery information to Home Assistant.
s6-rc: info: service otbr-agent-rest-discovery successfully started
s6-rc: info: service legacy-services: starting
s6-rc: info: service legacy-services successfully started
00:00:27.754 [N] Mle-----------: RLOC16 c400 -> fffe
00:00:28.075 [N] Mle-----------: Attach attempt 1, AnyPartition reattaching with Active Dataset
00:00:34.575 [N] RouterTable---: Allocate router id 49
00:00:34.575 [N] Mle-----------: RLOC16 fffe -> c400
00:00:34.579 [N] Mle-----------: Role detached -> leader
00:00:34.587 [N] Mle-----------: Partition ID 0x2f418015
[NOTE]-BBA-----: BackboneAgent: Backbone Router becomes Primary!
00:00:35.843 [W] DuaManager----: Failed to perform next registration: NotFound
00:02:19.857 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:6cb9, ecn:no, sec:yes, error:AddressQuery, prio:low, radio:all
00:02:19.857 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:47250
00:02:19.857 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053
00:02:19.857 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:6891, ecn:no, sec:yes, error:AddressQuery, prio:low, radio:all
00:02:19.857 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:47250
00:02:19.857 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053
00:02:19.857 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:6491, ecn:no, sec:yes, error:AddressQuery, prio:low, radio:all
00:02:19.858 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:47250
00:02:19.858 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053
00:02:20.648 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:6091, ecn:no, sec:yes, error:Drop, prio:low, radio:all
00:02:20.649 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:47250
00:02:20.649 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053
00:02:21.673 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:5c91, ecn:no, sec:yes, error:Drop, prio:low, radio:all
00:02:21.673 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:47250
00:02:21.673 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053
00:02:22.696 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:5891, ecn:no, sec:yes, error:Drop, prio:low, radio:all
00:02:22.697 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:47250
00:02:22.697 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053
00:02:24.745 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:5091, ecn:no, sec:yes, error:Drop, prio:low, radio:all
00:02:24.745 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:47250
00:02:24.745 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053
00:02:28.777 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:40d1, ecn:no, sec:yes, error:Drop, prio:low, radio:all
00:02:28.777 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:47250
00:02:28.777 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0a4:dcff:fed9:999f/veth806b7c1/13
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0a4:dcff:fed9:999f/veth806b7c1/13
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0a4:dcff:fed9:999f/veth806b7c1/13
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0a4:dcff:fed9:999f/veth806b7c1/13
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0a4:dcff:fed9:999f/veth806b7c1/13
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0a4:dcff:fed9:999f/veth806b7c1/13
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0a4:dcff:fed9:999f/veth806b7c1/13
Default: mDNSPlatformSendUDP got error 99 (Cannot assign requested address) sending packet to ff02::fb on interface fe80::b0a4:dcff:fed9:999f/veth806b7c1/13
00:02:39.867 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:2011, ecn:no, sec:yes, error:AddressQuery, prio:low, radio:all
00:02:39.867 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:47250
00:02:39.867 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053
00:02:53.545 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:e010, ecn:no, sec:yes, error:Drop, prio:low, radio:all
00:02:53.545 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:47250
00:02:53.545 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053
00:03:21.864 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:d95e, ecn:no, sec:yes, error:AddressQuery, prio:low, radio:all
00:03:21.864 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:45030
00:03:21.864 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053
00:03:21.864 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:d56c, ecn:no, sec:yes, error:AddressQuery, prio:low, radio:all
00:03:21.865 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:45030
00:03:21.865 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053
00:03:21.865 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:d16c, ecn:no, sec:yes, error:AddressQuery, prio:low, radio:all
00:03:21.865 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:45030
00:03:21.865 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053
00:03:22.600 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:cd6c, ecn:no, sec:yes, error:Drop, prio:low, radio:all
00:03:22.601 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:45030
00:03:22.601 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053
00:03:23.624 [N] MeshForwarder-: Dropping IPv6 TCP msg, len:80, chksum:c96c, ecn:no, sec:yes, error:Drop, prio:low, radio:all
00:03:23.627 [N] MeshForwarder-:     src:[fd8b:6c6a:b802:1:bb6e:2f4e:ef14:1cbf]:45030
00:03:23.627 [N] MeshForwarder-:     dst:[fd8b:6c6a:b802:1:3e9a:df68:f3b6:6362]:6053

Gemini tells me I have a ghost Thread network different from the ZBT-2's, but the Thread integration shows only one preferred Thread network. When I turn on the Nest Hub (currently off), a second network does appear.

Any thoughts on further troubleshooting/debugging to fix the issue would be much appreciated.

Many thanks

Your OTBR is probably fine.
We need more detail on what the failure actually is, and whether the Matter logs show commissioning taking place (or not).
Make sure IPv6 is enabled in HA in “auto” mode.
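If you want to double-check the IPv6 setting from the HA CLI as well, something like this should do it (`enp6s18` is just an example; substitute your own interface name):

```shell
# Show current network configuration, including the IPv6 method per interface
ha network info

# Set IPv6 to auto on the LAN interface
ha network update enp6s18 --ipv6-method auto
```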
Are you using iOS or Android for the HA Companion App?

Hi Tommy,

Thanks for your response. Here are the Matter server logs from the initial pairing attempt by the Timmerflotte. The HA IPv6 interface is set to auto.

2026-01-14 19:58:57.618 (MainThread) INFO [matter_server.server.device_controller.mdns] <Node:2> Discovered on mDNS
2026-01-14 19:58:57.620 (MainThread) INFO [matter_server.server.device_controller] <Node:2> Setting-up node...
2026-01-14 19:58:58.414 (MainThread) INFO [matter_server.server.device_controller] <Node:2> Setting up attributes and events subscription.
2026-01-14 19:59:00.763 (MainThread) INFO [matter_server.server.device_controller] <Node:2> Subscription succeeded with report interval [1, 60]

I'm using the iOS client and perform a factory reset of the Timmerflotte before each pairing attempt, but it continues to fail.

One thing I noticed, however, is that the local IPv6 gateway is pingable from another LAN client on the same segment, but not from the HA console.

From another LAN laptop client on the same segment.

Narya:.ssh tsarath$ ping6 fd8b:6c6a:b802:1::1
PING6(56=40+8+8 bytes) fd8b:6c6a:b802:1:cc9:b2e6:3751:8442 --> fd8b:6c6a:b802:1::1
16 bytes from fd8b:6c6a:b802:1::1, icmp_seq=0 hlim=64 time=118.271 ms
16 bytes from fd8b:6c6a:b802:1::1, icmp_seq=1 hlim=64 time=164.504 ms
16 bytes from fd8b:6c6a:b802:1::1, icmp_seq=2 hlim=64 time=209.988 ms
16 bytes from fd8b:6c6a:b802:1::1, icmp_seq=3 hlim=64 time=3.163 ms
16 bytes from fd8b:6c6a:b802:1::1, icmp_seq=4 hlim=64 time=425.638 ms
^C
--- fd8b:6c6a:b802:1::1 ping6 statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 3.163/184.313/425.638/138.900 ms

While the same ping from the HA terminal produces no replies at all:

➜  ~ ping6 fd8b:6c6a:b802:1::1
PING fd8b:6c6a:b802:1::1 (fd8b:6c6a:b802:1::1): 56 data bytes

Meanwhile, the ESP32 Thread extender is pingable from HA:

➜  ~ ping6 fdda:a3b0:1c85:eac:0:ff:fe00:1c00                   
PING fdda:a3b0:1c85:eac:0:ff:fe00:1c00 (fdda:a3b0:1c85:eac:0:ff:fe00:1c00): 56 data bytes
64 bytes from fdda:a3b0:1c85:eac:0:ff:fe00:1c00: seq=0 ttl=64 time=36.552 ms
64 bytes from fdda:a3b0:1c85:eac:0:ff:fe00:1c00: seq=1 ttl=64 time=27.230 ms
64 bytes from fdda:a3b0:1c85:eac:0:ff:fe00:1c00: seq=2 ttl=64 time=37.460 ms
64 bytes from fdda:a3b0:1c85:eac:0:ff:fe00:1c00: seq=3 ttl=64 time=33.508 ms
64 bytes from fdda:a3b0:1c85:eac:0:ff:fe00:1c00: seq=4 ttl=64 time=25.167 ms
64 bytes from fdda:a3b0:1c85:eac:0:ff:fe00:1c00: seq=5 ttl=64 time=28.217 ms
^C
--- fdda:a3b0:1c85:eac:0:ff:fe00:1c00 ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 25.167/31.355/37.460 ms

I suspect IPv6 routing between the Thread network and the local LAN segment is broken, although I'm not sure why that would make the Timmerflotte fail to pair.

In the next day or two I'm going to delete the Thread and OTBR integrations and the ZBT-2 and reinstall everything, as suggested by another thread I read just earlier, to see if that helps.

Thanks

Tony

Just a thought … assuming that the ESP joined the Thread network of the OTBR, and that it is pingable from HA, then in turn HA should be able to talk to the Timmerflotte once it gets connected to the same Thread network.

Regarding the Matter server log, there is not much there. I was looking for something in the log saying the Matter server was going into commissioning mode. If nothing like that is there, then we need to understand what failure you are seeing.

Hi Tommy / all,

I got it working!! (see below :slight_smile: )

So the root cause was the IPv6 issue.

Yesterday I brought up another IPv6 VM on the same Proxmox host as HA and confirmed that IPv6 reachability/RAs etc. were working fine. That meant the issue was definitely in the HA VM and not in my network.

To resolve this, I messed around with the HA LAN interface for over an hour today with various nmcli commands, no dice. After exhausting all possibilities, I decided to delete the Thread and OTBR integrations, unplug the ZBT-2, and reboot the VM.

After that, the OPNsense IPv6 gateway became pingable from HA. Furthermore, I could now also ping HA from my laptop via IPv6, which was failing earlier, confirming that I had a functional IPv6 HA VM.

I then reconnected the ZBT-2, added OTBR and Thread back, created a new Thread network, and deleted the old one.

While the old Thread network was still around, IPv6 started failing again, but deleting the old network and creating a new one was the key step, I believe, as IPv6 then started working again on HA's LAN segment. Who knew.

I think what likely happened is that I initially set up my Thread network before enabling IPv6 on my LAN segment in OPNsense, and there was some conflict between the IPv6 addressing of the LAN and the Thread network that made the LAN interface in HA non-functional for IPv6.
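In hindsight, the OTBR logs above are consistent with this: the border router's ULA prefix was fd8b:6c6a:b802::/48, while the OPNsense LAN gateway was fd8b:6c6a:b802:1::1, so the LAN /64 sat inside the Thread border router's /48. A quick sanity check with Python's `ipaddress` module (prefixes copied from the logs; this is just an illustration of the overlap):

```python
import ipaddress

# Prefixes as they appear in the logs above
otbr_ula = ipaddress.ip_network("fd8b:6c6a:b802::/48")   # OTBR "BR ULA prefix"
lan_ula = ipaddress.ip_network("fd8b:6c6a:b802:1::/64")  # LAN (gateway fd8b:6c6a:b802:1::1)

# The LAN /64 falls inside the border router's /48, i.e. the two
# networks were claiming overlapping ULA address space
print(lan_ula.subnet_of(otbr_ula))  # True
```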

In any case, after resolving this issue the Timmerflotte paired on the first try and was added to HA. I even upgraded the Timmerflotte's firmware via HA, nice.

I'm now looking forward to connecting the rest of my IKEA Thread devices over the coming weekend. So the lesson learnt: get a functional IPv6 network with HA first, before adding your Thread network. :slight_smile:

Cheers

Tony
