TTLock state issues with multiple locks


Hopefully this is the right place for this. I have two locks that use the TTLock interface. I have successfully created sensors that read the open/locked state of each lock via their API: 0 is locked, 1 is unlocked, 2 is unknown, and then there is unavailable. I have created two sensors, one for each lock.

Example of lock state sensor:

sensor:
  - platform: rest
    name: "Basement Lock State"
    scan_interval: 5
    resource_template: URL to query lock state
    value_template: '{{ value_json.state }}'
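Since the sensor only ever holds 0, 1, 2, or unavailable, the numeric state can be mapped to readable text with a template sensor. A sketch, building on the REST sensor above (the "Text" sensor name is illustrative):

```yaml
template:
  - sensor:
      - name: "Basement Lock State Text"
        state: >
          {% set s = states('sensor.basement_lock_state') %}
          {% if s == '0' %}Locked
          {% elif s == '1' %}Unlocked
          {% elif s == '2' %}Unknown
          {% else %}Unavailable
          {% endif %}
```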

They have different lockIds in the API query that gets the state of each lock. I then have a custom button that reads the state of the lock and displays a red lock icon if its state = 0 and a green one if its state = 1, with a tap action that performs the API request to unlock the lock.

type: custom:button-card
entity: sensor.basement_lock_state
state:
  - value: '0'
    icon: mdi:lock
    color: red
    name: Basement Door Locked
  - value: '1'
    icon: mdi:lock-open
    color: green
    name: Basement Door Unlocked
tap_action:
  action: call-service
  service: rest_command.unlock_door

And the two rest_command definitions, one per lock:

method: POST
content_type: "application/x-www-form-urlencoded"
payload: "clientId=XXXXXXX&accessToken=XXXXXXXX&lockId=XXXXX&date={{ (now().timestamp() | int * 1000 ) }}"

method: POST
content_type: "application/x-www-form-urlencoded"
payload: "clientId=XXXXXXXXXXXXXX&accessToken=XXXXXXXXXXXXX&lockId=XXXXXXXX&date={{ (now().timestamp() | int * 1000 ) }}"
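The `date` parameter in those payloads is just the current Unix time in milliseconds, which is what the `(now().timestamp() | int * 1000)` template evaluates to. The same arithmetic in plain Python, as a standalone sanity check (nothing TTLock-specific here):

```python
import time

# Millisecond timestamp, equivalent to the Jinja expression
# {{ (now().timestamp() | int * 1000) }} used in the payloads above:
# whole seconds since the epoch, multiplied by 1000.
date_ms = int(time.time()) * 1000

# A current timestamp in ms is a 13-digit integer.
print(date_ms)
```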

This all works great when only one of the sensors is looking for a door lock state. Once I add a second sensor to monitor the second lock's state, both sensors report the state as 2 (unknown) or unavailable. This wrecks my buttons, as the button is looking for 0 or 1 as a state. Anyone know what could be causing this?
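One way to keep the buttons usable while the state problem is being debugged is to give the card a catch-all entry for anything other than 0 or 1. A sketch against the card config above, assuming custom:button-card's `operator: default` fallback state (icon and name are illustrative):

```yaml
type: custom:button-card
entity: sensor.basement_lock_state
state:
  - value: '0'
    icon: mdi:lock
    color: red
    name: Basement Door Locked
  - value: '1'
    icon: mdi:lock-open
    color: green
    name: Basement Door Unlocked
  - operator: default   # catches 2, unknown, and unavailable
    icon: mdi:lock-question
    color: grey
    name: Basement Door Unknown
```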

Hi! I have the same problem, where lock status is generally coming up as a 2. Very unhelpful.

You might want to xxxx out your client ID and access token in the first rest_command in your post above, as those are sensitive information.

I see you’ve solved my problem of getting the timestamp in the POST payload to evaluate correctly. Thanks!

I’m getting pretty frustrated with the very slow speed of the gateway responses, which makes real-time locking and unlocking difficult. Are you finding this too?

Thank you Arrows. I edited one part but not the other. Mine respond fairly quickly, but I still haven’t figured out how to get them to report the correct status after adding a second lock.

I’m not sure what the cause of the lock state problem is, but I do occasionally get a non-2 response back from my REST sensor pulling the lock state:

My sensor, in sensors.yaml:

- platform: rest
  name: Front door lock state numeric
  scan_interval: 30
  resource_template: URL to query lock state, with date={{ (now().timestamp() | int * 1000 ) }}
  value_template: '{{ value_json.state }}'
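For what it's worth, that `value_template` just pulls the `state` key out of a JSON response body. The equivalent parsing in plain Python, assuming the response shape implied by the 0/1/2 values discussed above:

```python
import json

# Assumed shape of the lock-state endpoint's response body.
body = '{"state": 2}'

# What value_json.state extracts in the sensor's value_template.
state = json.loads(body)["state"]

labels = {0: "locked", 1: "unlocked", 2: "unknown"}
print(labels.get(state, "unavailable"))  # prints "unknown" for this body
```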

I have tried this using the APIs at Sciener and TTLOCK with no change in success rate. (I’m not sure which one of those companies is the parent, but their APIs are identical and their phone apps read from the same database. My local lock company, E-LOK, has obviously relabeled the Sciener/TTLOCK app.)

We are using different scan_interval values - is yours scanning every 5 seconds? My gateway takes at least 10 seconds to respond each time. Have you tried a longer scan_interval?

I have that one set to 5 seconds at the moment because I want it to report back quickly. I have tried longer scan intervals, but not out to 30 seconds. I’ll try setting it that long and see what happens.

I’ve set it to 30 seconds on both of my lock state sensors. They both report back a state of "Unavailable".