HomeKit Accessory Protocol (HAP) over CoAP/UDP (was: Nanoleaf Essentials bulb via Thread/CoAP)

@setomerza really glad to hear it. While things are working, can you grab some network info and set it aside? We’ll compare later states to it if things stop working again. From HA, run:

  • ip -6 a
  • ip -6 r
  • ip -6 n
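A sketch of capturing that baseline to a file for later diffing (the filename and path are just suggestions, not anything HA-specific):

```shell
# Capture a known-good IPv6 baseline on the HA host so a later (broken)
# state can be diffed against it with `diff`.
OUT="ipv6-baseline-$(date +%Y%m%d).txt"
{
  echo "== ip -6 a =="; ip -6 a || true
  echo "== ip -6 r =="; ip -6 r || true
  echo "== ip -6 n =="; ip -6 n || true
} > "$OUT" 2>&1
echo "baseline saved to $OUT"
```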

@DaJonas thanks for testing. Can you please turn on debug logs for aiohomekit, restart HA, and DM them to me? You can use the File Editor or Visual Studio Code add-ons, or just modify configuration.yaml directly on the console. Add the following (or modify an existing logger: block):

logger:
  default: info
  logs:
    aiohomekit: debug

@lambdafunction
Thanks for pointing me in the right direction! It turns out aioairctrl, used by philips-airpurifier-coap, had aiocoap 0.4.1 as a dependency. I forked the repo, updated the library, and now my HA is using 0.4.4 and the Nanoleaf bulbs are working as expected!! Thanks again!!


Thanks again for all the help!

ip -6 a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 fd45:7a0a:5395:42f1:11b5:f444:ec4e:8451/64 scope global dynamic noprefixroute
       valid_lft 1776sec preferred_lft 1776sec
    inet6 fe80::dc7b:1455:3c4:1927/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::42:6fff:fec9:80de/64 scope link
       valid_lft forever preferred_lft forever
5: hassio: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::42:34ff:fe76:2149/64 scope link
       valid_lft forever preferred_lft forever
7: vethb9bee09@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 state UP
    inet6 fe80::ec5f:86ff:feb1:6641/64 scope link
       valid_lft forever preferred_lft forever
9: veth491ffd6@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 state UP
    inet6 fe80::f041:b7ff:fe49:cdea/64 scope link
       valid_lft forever preferred_lft forever
11: veth21820b3@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 state UP
    inet6 fe80::dc42:17ff:fe3a:8685/64 scope link
       valid_lft forever preferred_lft forever
13: veth9a365a7@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 state UP
    inet6 fe80::3892:41ff:fed4:47dd/64 scope link
       valid_lft forever preferred_lft forever
15: veth264e13e@if14: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 state UP
    inet6 fe80::448:a4ff:fe8f:a608/64 scope link
       valid_lft forever preferred_lft forever
17: veth8b5dfb0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 state UP
    inet6 fe80::a03d:19ff:fe95:6a49/64 scope link
       valid_lft forever preferred_lft forever
19: veth1078d11@if18: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 state UP
    inet6 fe80::5c7d:13ff:fe71:6150/64 scope link
       valid_lft forever preferred_lft forever
21: veth01d7185@if20: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 state UP
    inet6 fe80::a093:7dff:fec4:953f/64 scope link
       valid_lft forever preferred_lft forever
23: veth6c45250@if22: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 state UP
    inet6 fe80::84ed:4bff:feea:d488/64 scope link
       valid_lft forever preferred_lft forever
25: veth80b9f39@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 state UP
    inet6 fe80::14a0:5fff:fea6:6600/64 scope link
       valid_lft forever preferred_lft forever

ip -6 r

::1 dev lo  metric 256
fd0c:29f0:2531::/64  metric 100
fd45:7a0a:5395:42f1::/64 dev eth0  metric 100
fe80::/64 dev eth0  metric 100
fe80::/64 dev hassio  metric 256
fe80::/64 dev vethb9bee09  metric 256
fe80::/64 dev veth491ffd6  metric 256
fe80::/64 dev docker0  metric 256
fe80::/64 dev veth21820b3  metric 256
fe80::/64 dev veth9a365a7  metric 256
fe80::/64 dev veth264e13e  metric 256
fe80::/64 dev veth8b5dfb0  metric 256
fe80::/64 dev veth1078d11  metric 256
fe80::/64 dev veth01d7185  metric 256
fe80::/64 dev veth6c45250  metric 256
fe80::/64 dev veth80b9f39  metric 256
anycast fe80:: dev hassio  metric 0
anycast fe80:: dev vethb9bee09  metric 0
anycast fe80:: dev veth491ffd6  metric 0
anycast fe80:: dev docker0  metric 0
anycast fe80:: dev veth21820b3  metric 0
anycast fe80:: dev veth9a365a7  metric 0
anycast fe80:: dev veth264e13e  metric 0
anycast fe80:: dev veth8b5dfb0  metric 0
anycast fe80:: dev veth1078d11  metric 0
anycast fe80:: dev veth01d7185  metric 0
anycast fe80:: dev veth6c45250  metric 0
anycast fe80:: dev veth80b9f39  metric 0
multicast ff00::/8 dev eth0  metric 256
multicast ff00::/8 dev hassio  metric 256
multicast ff00::/8 dev vethb9bee09  metric 256
multicast ff00::/8 dev veth491ffd6  metric 256
multicast ff00::/8 dev docker0  metric 256
multicast ff00::/8 dev veth21820b3  metric 256
multicast ff00::/8 dev veth9a365a7  metric 256
multicast ff00::/8 dev veth264e13e  metric 256
multicast ff00::/8 dev veth8b5dfb0  metric 256
multicast ff00::/8 dev veth1078d11  metric 256
multicast ff00::/8 dev veth01d7185  metric 256
multicast ff00::/8 dev veth6c45250  metric 256
multicast ff00::/8 dev veth80b9f39  metric 256

ip -6 n


fe80::1044:a500:dfc:7acd dev eth0 lladdr 4e:ad:22:9d:e6:45 used 0/0/0 probes 1 STALE
fe80::8ce:e617:be06:f950 dev eth0 lladdr 94:ea:32:6a:83:54 router ref 1 used 0/0/0 probes 1 REACHABLE
fd45:7a0a:5395:42f1:1c11:543d:441b:cd83 dev eth0 lladdr f4:34:f0:03:9f:2a router ref 1 used 0/0/0 probes 1 REACHABLE
fe80::1ce8:8292:9bc6:332a dev eth0 lladdr f4:34:f0:03:9f:2a router ref 1 used 0/0/0 probes 1 REACHABLE
fe80::86:a87d:2ab2:24df dev eth0 lladdr f4:34:f0:7c:2f:7b router used 0/0/0 probes 1 STALE
fd45:7a0a:5395:42f1:1018:b1a4:75e9:a722 dev eth0 lladdr 94:ea:32:6a:83:54 router used 0/0/0 probes 1 STALE
fe80::1ce7:7e1:174c:1d28 dev eth0 lladdr 5a:41:cf:70:84:34 used 0/0/0 probes 1 STALE
fd45:7a0a:5395:42f1:103c:caf3:57d5:c5ab dev eth0 lladdr f4:34:f0:7c:2f:7b router ref 1 used 0/0/0 probes 1 REACHABLE

Same issues with mine. I just took them off Home Assistant and put them back onto HomeKit until this can be resolved.

Where does this issue stand? I just tried adding my first Thread bulb (Nanoleaf) to HA. On the second attempt I was able to add it and got very excited, but it did not take long for it to become unavailable.
I’m running the latest HassOS with HA 2022.11.2 and 3 HomePod Minis handling my Thread devices. My Apple stuff is running the 16.2 beta, so the new HomeKit architecture.

Another attempt: I rebooted all my HomePods and re-added the bulb to HomeKit, then removed it and added it into HA. Fingers crossed…

I have 2 bulbs active and working now (1 day so far). I do get these messages a few times:

  • CoAP POST returned unexpected code <aiocoap.Message at 0x7fc104aceda0: Type.ACK 4.04 Not Found (MID 6711, token 41b7) remote <UDP6EndpointAddress [fd6e:cabf:6c84:0:8007:d074:27d0:7d80] (locally fd27:5084:df49:44ed:d0e1:d94b:b8ed:795d%enp0s18)>>
  • Decryption failed, desynchronized? Counter=825/826
  • Failed flailing attempts to resynchronize, self-destructing in 3, 2, 1…

Logger: aiohomekit.controller.coap.pdu
Source: components/homekit_controller/connection.py:711
First occurred: 8:01:39 AM (1 occurrences)
Last logged: 8:01:39 AM

Transaction 0 failed with error 6 (Invalid request


Downloaded the latest Home Assistant hoping it would solve the issue with multiple HomePods, but the same issue has happened once again. I had everything set up on my Apple TV home hub, and as soon as I plugged in the HomePod Minis all hell broke loose.


New Update: Rebooted Home Assistant Rasp Pi System (Hardware) and so far everything paired up again. Will keep posted if anything changes.

That is what I have discovered as well. I have added a script to restart the RPi on my troubleshooting tab for ease of access, but restarting the Raspberry Pi definitely works more consistently than restarting HassOS. I have also added an automation that sends a notification to my phone/watch when any of my Nanoleaf bulbs changes state to unavailable. When that triggers, I restart my HomePod Minis and the Raspberry Pi. So far that has consistently reconnected the lights to Home Assistant.
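An unavailable-notification automation like that can be sketched as below (the entity IDs and notify service are placeholders; substitute your own):

```yaml
automation:
  - alias: "Notify when a Nanoleaf bulb drops off"
    trigger:
      - platform: state
        entity_id:
          - light.nanoleaf_bulb_1   # replace with your actual entities
          - light.nanoleaf_bulb_2
        to: "unavailable"
        for: "00:02:00"             # ignore brief blips
    action:
      - service: notify.mobile_app_my_phone   # replace with your notify service
        data:
          message: "{{ trigger.to_state.name }} became unavailable"
```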

Update 2: So the restart only worked because I had Bluetooth integration enabled. This was causing my bulbs to be connected to my Raspberry Pi’s Bluetooth. Once I disabled the integration only 3 bulbs were still disconnected and would not reconnect even on restarts. I am going to try and unplug both HomePod minis and then restart to see if the restart needs 1 hub to connect.

It seems like the mesh system is just broken. Hopefully we can get it working with this integration.

Hmm … I have one homepod mini and one nanoleaf strip - I set it up late September, and other than 1 occurrence of the strip being unavailable in mid-Oct, I’ve been rock solid.

I haven’t been keeping accurate score, but it seems that one of the differences between my “stable” system and all you folks who seem to have pretty unreliable systems is that I have an extremely limited Thread network (both in terms of devices and HomePods). Could it be related?

I think it is because you only have 1 border router (i.e. 1 HomePod Mini). I have an Apple TV and 2 HomePod Minis (i.e. multiple border routers). The multiple border routers may be causing the Thread devices to link separately rather than talking to each other. I am not an expert on this, so I may be wrong.

Update 3: Unplugged Homepod Minis so that my AppleTV is the only hub and have hardware restarted twice with no success.


Don’t know what else I should do to fix this besides trying to repair again. Quite a frustrating dilemma.

Update 4: It looks like none of these unpaired bulbs are showing up in _hap._udp, so somehow they got disconnected, I guess?

Does anyone have Apple’s Thread Border Routers pulling IPv6 prefix delegations and passing IPv6 GUAs onto Thread devices successfully? The standard says they “should”, and I’ve got DHCPv6 on my OPNsense firewall set up with a PD range, but the Apple devices (ATV 4k 3rd gen & HomePod Mini) don’t seem to be even requesting prefixes. Trying to figure out if that’s just expected or if I’ve configured something wrong.

Can you ping the “offline” bulbs, either at their last known IP or by using their .local hostnames? (Might need to try it right from your HASS server depending on your network setup.) That they’re not showing up in _hap._udp. makes me suspect they won’t respond, and if they’re not even reachable that way, the issue is outside of the scope/control of HASS and would appear to be an Apple problem. FWIW, I have two Apple devices acting as TBRs (ATV + HomePod Mini) and don’t have the kinds of issues you’ve described.
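A quick way to run those checks from a shell (the hostname below is a placeholder; substitute your bulb's actual mDNS name):

```shell
# Placeholder hostname -- substitute your bulb's real mDNS name.
BULB="Nanoleaf-A19-XXXX.local"

# Is anything still advertising HAP-over-UDP? (avahi-browse on Linux/HA OS;
# on macOS, `dns-sd -B _hap._udp` does the same job.)
avahi-browse -rt _hap._udp 2>/dev/null || echo "avahi-browse not available"

# Send a single IPv6 echo request straight at the bulb.
ping -6 -c 1 "$BULB" 2>/dev/null && echo "bulb reachable" || echo "bulb unreachable"
```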

Edit: (Also, State sucks! Go Blue!)

Fix HomeKit CoAP connection getting RST incorrectly by Jc2k · Pull Request #82553 · home-assistant/core · GitHub should help with “sleepy end devices” (reduces log noise and avoids RST’ing a working connection). However, we are still sending dupe packets. We are working with the CoAP library maintainer so that we can avoid this for battery-powered Thread devices, but that’s still in progress.

Regarding multiple thread border routers - this might not be our bug. E.g. https://www.reddit.com/r/HomeKit/comments/yvq64y/experience_to_date_going_all_thread/. Sounds like right now thread gets in a tizzy if it has to reorganise itself and even with native iOS this can be a big mess.

Oh and my environment continues to be fairly solid - like @putch I only have one border router.

Hi! What’s the state of provisioning HomeKit Thread devices on a Thread network? I’m trying to decide whether I should buy a HomePod Mini as a Thread border router, or wait till I can run an OpenThread Border Router via my Silabs-based Zigbee stick.

Does not operate reliably.

Is that 1st link you posted trying to resolve the issue with the log image below? x223 sometimes even gets to x2000!

Yes, that PR is part of the fix for the first one. There is still more work to do.

Every one of those errors means we sent a message while a device was asleep to conserve battery. Thread queues those messages for up to 5s, but the default CoAP timeout is 2s. So 3 in 5 messages “time out” and we send a duplicate. The device replies to both of those messages, and the CoAP library we use doesn’t know how to deal with the duplicate reply. The main fix is to not crap out if that happens, but there is a secondary fix in the pipeline to not send the duplicates in the first place.
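To make the timing concrete, here is a sketch of the CoAP retransmission schedule using the RFC 7252 defaults (ACK_TIMEOUT = 2s with exponential back-off; the ACK_RANDOM_FACTOR jitter is omitted for clarity):

```shell
# First transmission at t=0; each retransmit fires after the current
# timeout elapses, and the timeout doubles each round.
timeout=2
elapsed=0
for attempt in 1 2 3 4; do
  elapsed=$((elapsed + timeout))
  echo "retransmit $attempt fires at t=${elapsed}s"
  timeout=$((timeout * 2))
done
# The first retransmit at t=2s lands inside the ~5s window a sleepy
# device can spend with its radio off -- hence the duplicate.
```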


@lambdafunction has been able to provision devices from within HA using a service call over Bluetooth. You can track the work here: Add Thread provisioning to BLE by roysjosh · Pull Request #148 · Jc2k/aiohomekit · GitHub. It’s not yet ready to go into HA properly though; it involves a bunch of git checkouts, etc.

I use a HomePod, and provision everything with iOS, then unpair (which leaves it on the thread network but HA can then pair with it instead). Apart from the log messages above ^, it’s been pretty stable for Eve and Wemo accessories.


I have almost the exact same setup and issue as @rahil, running HA on unRaid with Eve products that get picked up but error when trying to pair. Been reading this thread and hopefully close to the same solution for @rahil. Wanting to check first though.

ip -6 a on my unRaid

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 fd7e:37b8:e809:465c:dabb:c1ff:fe9f:a4c5/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 1707sec preferred_lft 1707sec
    inet6 fe80::dabb:c1ff:fe9f:a4c5/64 scope link 
       valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP 
    inet6 fe80::42:5eff:fe05:c4d3/64 scope link 
       valid_lft forever preferred_lft forever
5: shim-eth0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UNKNOWN qlen 1000
    inet6 fd7e:37b8:e809:465c:d8bb:c100:19f:a4c5/64 scope global dynamic mngtmpaddr 
       valid_lft 1707sec preferred_lft 1707sec
    inet6 fe80::d8bb:c100:19f:a4c5/64 scope link 
       valid_lft forever preferred_lft forever
7: veth9465acf@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP 
    inet6 fe80::9443:aeff:fe70:13e1/64 scope link 
       valid_lft forever preferred_lft forever
11: veth5d89a13@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP 
    inet6 fe80::ac9a:61ff:fea3:c3b8/64 scope link 
       valid_lft forever preferred_lft forever
13: veth25d3a49@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP 
    inet6 fe80::984c:4aff:fe7b:9262/64 scope link 
       valid_lft forever preferred_lft forever
17: vetha3240f7@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP 
    inet6 fe80::a426:4eff:feb2:9cdc/64 scope link 
       valid_lft forever preferred_lft forever
19: veth3936fc8@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP 
    inet6 fe80::b4b1:33ff:fea6:cfd/64 scope link 
       valid_lft forever preferred_lft forever
21: veth8339b73@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP 
    inet6 fe80::48d9:5ff:fe81:c4d/64 scope link 
       valid_lft forever preferred_lft forever
28: veth95b1fdc@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP 
    inet6 fe80::3891:70ff:fe0e:830e/64 scope link 
       valid_lft forever preferred_lft forever

ifconfig on my Mac

lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
	options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
	inet 127.0.0.1 netmask 0xff000000 
	inet6 ::1 prefixlen 128 
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 
	nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
anpi0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether f2:f6:5e:3c:24:55 
	inet6 fe80::f0f6:5eff:fe3c:2455%anpi0 prefixlen 64 scopeid 0x4 
	nd6 options=201<PERFORMNUD,DAD>
	media: none
	status: inactive
anpi1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether f2:f6:5e:3c:24:56 
	inet6 fe80::f0f6:5eff:fe3c:2456%anpi1 prefixlen 64 scopeid 0x5 
	nd6 options=201<PERFORMNUD,DAD>
	media: none
	status: inactive
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=50b<RXCSUM,TXCSUM,VLAN_HWTAGGING,AV,CHANNEL_IO>
	ether 3c:a6:f6:8f:95:e1 
	inet6 fe80::5d:b6a1:7c9e:f1c9%en0 prefixlen 64 secured scopeid 0x6 
	inet6 fd7e:37b8:e809:465c:41c:4314:77d1:52a8 prefixlen 64 autoconf secured 
	inet 192.168.0.211 netmask 0xffffff00 broadcast 192.168.0.255
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect (1000baseT <full-duplex,flow-control>)
	status: active
en4: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether f2:f6:5e:3c:24:35 
	nd6 options=201<PERFORMNUD,DAD>
	media: none
	status: inactive
en5: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether f2:f6:5e:3c:24:36 
	nd6 options=201<PERFORMNUD,DAD>
	media: none
	status: inactive
en2: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	options=460<TSO4,TSO6,CHANNEL_IO>
	ether 36:45:96:0e:74:c0 
	media: autoselect <full-duplex>
	status: inactive
en3: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	options=460<TSO4,TSO6,CHANNEL_IO>
	ether 36:45:96:0e:74:c4 
	media: autoselect <full-duplex>
	status: inactive
ap1: flags=8802<BROADCAST,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether 3e:a6:f6:9e:ab:9c 
	media: autoselect
en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=6463<RXCSUM,TXCSUM,TSO4,TSO6,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
	ether 3c:a6:f6:9e:ab:9c 
	inet6 fe80::8e1:2ae5:ba32:fe2d%en1 prefixlen 64 secured scopeid 0xc 
	inet 192.168.0.189 netmask 0xffffff00 broadcast 192.168.0.255
	inet6 fd7e:37b8:e809:465c:4d8:9400:bd1:c05f prefixlen 64 autoconf secured 
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active
bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=63<RXCSUM,TXCSUM,TSO4,TSO6>
	ether 36:45:96:0e:74:c0 
	Configuration:
		id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
		maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
		root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
		ipfilter disabled flags 0x0
	member: en2 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 9 priority 0 path cost 0
	member: en3 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 10 priority 0 path cost 0
	nd6 options=201<PERFORMNUD,DAD>
	media: <unknown type>
	status: inactive
awdl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=6463<RXCSUM,TXCSUM,TSO4,TSO6,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
	ether a2:6f:6d:f6:5f:2c 
	inet6 fe80::a06f:6dff:fef6:5f2c%awdl0 prefixlen 64 scopeid 0xe 
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active
llw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether a2:6f:6d:f6:5f:2c 
	inet6 fe80::a06f:6dff:fef6:5f2c%llw0 prefixlen 64 scopeid 0xf 
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: inactive
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
	inet6 fe80::5b76:2c1d:6847:f308%utun0 prefixlen 64 scopeid 0x10 
	nd6 options=201<PERFORMNUD,DAD>
utun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
	inet6 fe80::46df:4960:f47b:7716%utun1 prefixlen 64 scopeid 0x11 
	nd6 options=201<PERFORMNUD,DAD>
utun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1000
	inet6 fe80::ce81:b1c:bd2c:69e%utun2 prefixlen 64 scopeid 0x12 
	nd6 options=201<PERFORMNUD,DAD>
utun3: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
	inet6 fe80::a12b:89:2706:8953%utun3 prefixlen 64 scopeid 0x13 
	nd6 options=201<PERFORMNUD,DAD>
utun4: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
	inet6 fe80::a121:e01f:77a3:e70%utun4 prefixlen 64 scopeid 0x14 
	nd6 options=201<PERFORMNUD,DAD>

I believe the key information is this on the unRaid:

3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 fd7e:37b8:e809:465c:dabb:c1ff:fe9f:a4c5/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 1707sec preferred_lft 1707sec
    inet6 fe80::dabb:c1ff:fe9f:a4c5/64 scope link 
       valid_lft forever preferred_lft forever

The difference with my setup is that I have my HA container set to Host networking and not br0. So do I need to set the following,
net.ipv6.conf.eth0.accept_ra=2
and
net.ipv6.conf.bond0.accept_ra=2
with the difference from the earlier advice being eth0 in the first command instead of br0?
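If those sysctls do turn out to be needed, a sketch of applying them (assuming host networking means they apply to the unRaid host itself, and that eth0/bond0 are your actual uplink interfaces — check with `ip link`):

```shell
# accept_ra=2 accepts router advertisements even when forwarding is enabled
# (Docker turns the host into a "router", which disables RA with the default
# accept_ra=1). Run as root.
sysctl -w net.ipv6.conf.eth0.accept_ra=2

# Only needed if your uplink is actually a bond interface:
# sysctl -w net.ipv6.conf.bond0.accept_ra=2

# These do not survive a reboot on their own; on unRaid you would typically
# reapply them from the boot "go" file, or via /etc/sysctl.conf on a
# standard Linux install.
```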