I’m really struggling to pair any additional sensors with zigbee2mqtt and my CC2531 stick. In theory there’s capacity on the network, but in practice I suspect z2m is clinging to historic and/or partially paired devices and silently refusing to accept any more clients.
I’ve had a look at my database.db and it looks odd.
There are fifteen devices beneath the {"$$indexCreated":{"fieldName":"id","unique":true,"sparse":false}} and fourteen above.
I’m wondering what the significance of the {"$$indexCreated":{"fieldName":"id","unique":true,"sparse":false}} line is, and what the difference between the two sets of devices is — whether one is a historic set of devices no longer in use, etc.
I guess I have some add-on questions:
Could I use a SECOND Home Assistant instance and a second CC2531 to successfully pair the sensors which won’t pair against my production Home Assistant instance, and then maybe manually transfer them over to the database?
Alternatively, could I somehow ‘clean up’ the database.db so it’s ready to accept more devices?
You definitely shouldn’t have duplicate “id:” entries or multiple entries per ieeeAddr in your database.db file. The {"$$indexCreated":{"fieldName":"id","unique":true,"sparse":false}} line shouldn’t be in there either. I assume that something crashed and zigbee2mqtt got confused. You can manually edit the file, remove that line as well as any duplicates, and see if it still works. (Usual caveat: stop zigbee2mqtt, make a backup, edit the file, restart zigbee2mqtt.)
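If you’d rather not dedupe by hand, the cleanup above can be scripted. This is a minimal sketch, not an official tool: it assumes database.db is a NeDB-style append-only file where each line is one JSON document keyed by "id", so the last occurrence of an id is the current record. Verify against your own file (and keep that backup) before trusting it.

```python
import json

def clean_database(lines):
    """Drop NeDB index markers and keep only the newest record per id.

    database.db is append-only: when a record is rewritten it is simply
    appended again, so the LAST line for a given id wins.
    """
    latest = {}   # id -> most recent record for that id
    order = []    # first-seen order of ids, so output order is stable
    for line in lines:
        line = line.strip()
        if not line:
            continue
        doc = json.loads(line)
        if "$$indexCreated" in doc:   # index marker, not a device record
            continue
        key = doc.get("id")
        if key not in latest:
            order.append(key)
        latest[key] = doc             # later lines overwrite earlier ones
    return [json.dumps(latest[k]) for k in order]

# Usage (after stopping zigbee2mqtt and backing the file up):
#   with open("database.db") as f:
#       cleaned = clean_database(f.readlines())
#   with open("database.db", "w") as f:
#       f.write("\n".join(cleaned) + "\n")
```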
Since you only have end devices, every connection is directly to the coordinator (the CC2531), and 15 end devices seems to be its limit. To grow the network you need to swap an end device for a router or two (e.g. a smart plug), which relays traffic and allows more end devices to join. You will have to remove an end device first to make room on the coordinator for the router.
You can run a separate instance of zigbee2mqtt on a separate computer (people even use Pi Zeros for that), but you would have to change the network_key for that instance and re-pair the devices to it. You can’t just transfer the database.db file; you will have to start with a new one.
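For reference, a minimal configuration sketch for that second instance might look like the following. The port, server address, and PAN ID here are placeholders, not values from this thread — adjust them to your own setup:

```yaml
# configuration.yaml for the SECOND zigbee2mqtt instance (sketch)
mqtt:
  base_topic: zigbee2mqtt2        # must differ from the first instance's base_topic
  server: mqtt://localhost:1883
serial:
  port: /dev/ttyACM1              # the second CC2531
advanced:
  network_key: GENERATE           # fresh key so the two networks don't clash
  pan_id: 0x1a63                  # pick a PAN ID different from the first network
```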
Looking at the database.db again and comparing it to devices.yaml, it looks like the top list/array is the correct one and the bottom one is the bloated one. I’m dividing the two lists at the {"$$indexCreated": point. The difference seems to be that the bottom list has the coordinator in it twice?