How to get rid of duplicate device_tracker sensor

Hi,
I’m tracking the devices in my home network using the snmp component to monitor the MAC addresses of devices connected to my Wi-Fi access point.
I’m also using owntracks to track the location of my phone.
According to the docs, you can combine those two methods by modifying the known_devices.yaml file so that one entry combines the owntracks-supplied name (e.g. “iphone_iphone:”) with the MAC address gathered by the snmp component.
In case of my phone, this works really well.

I now configured my laptop to regularly publish an MQTT message with “faked home location data” when it finds itself on the home network, so this gets picked up by the owntracks component (see the sketch at the end of this post).
That works as expected and I ended up with two entries in my known_devices.yaml - one from owntracks and one from the snmp tracker.
I merged the two entries like I did with the phone, but I’m still seeing a second sensor for the “old” device and I can’t seem to get rid of that.

The record in my known_devices.yaml looks like this:

probook_probook:
  name: 'Macbook Pro'
  mac: xx:xx:xx:xx:xx:xx
  picture:
  track: yes
  hide_if_away: no

It was named just “probook” before.
I have a group containing the device:

name: 'Tracked Devices'
entities:
  [...]
  - device_tracker.probook_probook

In the UI I still see a circle “probook” (the older name) at the top and I still have a device_tracker.probook entity lurking around in HA, as I can see in the Developer Tools/States section ("< >").
Any ideas how I can get rid of that?
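
For completeness, the “fake home location” publishing on the laptop is essentially something like the script below. This is only a sketch using paho-mqtt; the broker address and coordinates are placeholders, and the payload follows the OwnTracks location JSON format as far as I understand it.

# fake_home.py - run periodically (e.g. via cron) while on the home network
import json
import time

import paho.mqtt.publish as publish

TOPIC = "owntracks/probook/probook"   # owntracks/<user>/<device>

payload = {
    "_type": "location",      # OwnTracks location message
    "lat": 50.0,              # placeholder home coordinates
    "lon": 8.0,
    "acc": 10,                # accuracy in metres
    "tst": int(time.time()),  # unix timestamp
}

# placeholder broker address - adjust to your MQTT broker
publish.single(TOPIC, json.dumps(payload), hostname="192.168.1.2", port=1883)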

TIA,
Sebastian

To answer myself (again ;))…
The duplicate sensor was caused by appdaemon’s switch_reset app that reintroduced the old device_tracker sensor after each HA restart.

I need to have a closer look at appdaemon as it currently (for me) does not seem to keep up with state changes…

Sebastian

I have the same issue with the AppDaemon switch reset. I thought it was my config (AIO install). Still hoping Andrew can help us here, as this is quite essential functionality.

For some reason, changes to the controls’ values are not written to the database file.
The app receives all changes, as can be verified by uncommenting the self.log_notify() line in the state_change() function.
Although it shouldn’t be necessary, I tried adding a self.device_db.sync() after the new value is written to the db.
Still, the modification date of the db file does not change when I change one of the slider values and state_change() is triggered.

Restarting HA while appdaemon is running causes a write to the DB file.
But the values that are restored afterwards are totally ancient.
Strange.

@aimc: any idea?

Sebastian

At this point, no, it should just work, and it seems to work fine for me. I’ll try some more to reproduce it but for now I am stumped. Is it one control type in particular, or all of them?

Just ran a test - for all 4 types, my switches.db file is updated as soon as I change them …

What versions of Python are you both running? Maybe it is a bug in the Python Library.

It’s at least the case for sliders and booleans; I’m not 100% certain on select controls, but I think those show the same behavior.

What I find strange is, if I do this:

# python3
Python 3.4.3 (default, Aug  9 2016, 17:10:39)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import shelve
>>> db=shelve.open('/tmp/db', 'c')
>>> db['foo']='bar'
>>> db['bar']='baz'

my “db” test file is updated on every assignment operation (i.e. the timestamps reported by stat(1) change).
The state_change() function does exactly the same but the file is not updated when the self.device_db[entity] = new line is executed.

Sebastian

That’s a useful test - thanks. The only variable I can think of is where the switches.db file goes and the permissions of the file and containing directories. I am running my AppDaemon as root and have the file in /etc/appdaemon. How do you have yours set up?

Sounds identical to my issue. Also on the AIO installer?

@aimc after my last post I tried different locations/directories, also with different permissions.

Tested so far:
/home/hass/.homeassistant
/home/hass/.homeassistant/temp
/home/pi/appdaemon/Temp
/home/pi/appdaemon

It can’t be a permissions problem, because the db file is auto-created by appdaemon when it doesn’t exist, and it does get updated at certain points (e.g. restarting HA updates it).
I’m running appdaemon from a docker image created from the current github repo (no changes to the Dockerfile you supplied).
Maybe it’s a Docker-related issue?

Sebastian

Unlikely to be docker related as I have exactly the same issue on an RPI3 All in one install.

I’m a noob Home Assistant user and I am having the same issue.
It’s like it does not want to save any new info for devices.

Anything I can do about this?

@aimc: Ok, I need to correct myself - restarting HA also does not necessarily trigger an update.
I now have the effect that starting appdaemon creates the db file and - according to the logs - adds all discovered switches.
When restarting HA, it seems that the db file is completely empty because all the switches are discovered again:

2016-10-19 13:16:29.958153 INFO Switch Reset: State change: input_slider.heating_office_night to 15.0
2016-10-19 13:16:29.963358 INFO Switch Reset: Syncing.
2016-10-19 13:17:00.787704 WARNING Not connected to Home Assistant, retrying in 5 seconds
2016-10-19 13:17:05.868092 WARNING Not connected to Home Assistant, retrying in 5 seconds
2016-10-19 13:17:10.911663 WARNING Not connected to Home Assistant, retrying in 5 seconds
2016-10-19 13:17:15.952848 WARNING Not connected to Home Assistant, retrying in 5 seconds
2016-10-19 13:17:21.058789 INFO Got initial state
2016-10-19 13:17:23.060802 INFO Reloading Module: /conf/.apps/switch_reset.py
2016-10-19 13:17:23.062416 INFO Loading Object Switch Reset using class SwitchReset from module switch_reset
2016-10-19 13:17:23.062853 INFO Waiting for App initialization: 1 remaining
2016-10-19 13:17:24.064185 INFO App initialization complete
2016-10-19 13:17:24.068932 INFO Switch Reset: Home Assistant restart detected
2016-10-19 13:17:34.008258 INFO Switch Reset: Setting switches
2016-10-19 13:17:34.009166 INFO Switch Reset: Adding input_slider.heating_office_night, setting value to current state (16.0)
[... all the other observed controls are added again ...]

I changed the state_change() function so it calls a sync() every time a new value is written to the shelf (which should not be necessary since the file should be opened with writeback=False):

  def state_change(self, entity, attribute, old, new, kwargs):
    self.log_notify("State change: {} to {}".format(entity, new))
    try:
      self.device_db[entity] = new
    finally:
      self.log_notify("Syncing.")
      self.device_db.sync()

Anyway, to make sure, I also added an explicit writeback=False to the open() call.
Still, the only time anything is written to the db file is when it is created.
I then temporarily changed the path to the db file to ‘/tmp/switches.db’ so it’s created inside the Docker container.
Same behavior.
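
For reference, the open call with the explicit flag then looks roughly like this (using the temporary /tmp path); just a sketch of what I tried:

import shelve

# writeback=False (the default) means assignments should be written straight
# through to the underlying dbm file instead of being cached until sync()/close().
device_db = shelve.open("/tmp/switches.db", flag="c", writeback=False)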

I also changed the Dockerfile to build on python:3.5 instead of python:3.4 - nothing changed.

I even used strace -f -p <pid> to attach to the process that has the db file open and looked for attempted writes. There’s absolutely nothing.

Something is really strange here…

Sebastian

It has to be something in the shelve library - we have proven that the state changes are being received …

Only thing I can think of is to reimplement something like shelve in the app.
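
Even something as simple as a JSON file rewritten on every change would probably be enough. A rough, untested sketch (names are made up):

import json

class JsonStore:
    """Tiny shelve-like stand-in: one JSON file, rewritten on every update."""

    def __init__(self, path):
        self.path = path
        try:
            with open(path) as f:
                self.data = json.load(f)
        except FileNotFoundError:
            self.data = {}

    def __getitem__(self, entity):
        return self.data[entity]

    def __setitem__(self, entity, state):
        self.data[entity] = state
        with open(self.path, "w") as f:
            # write through immediately so a crash/restart can't lose state
            json.dump(self.data, f)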

I also think it’s either shelve or the underlying dbm module.
I just replaced shelve with dbm.dumb and at least the writes seem to work now.
I wanted to try dbm.gnu as this would be an ideal replacement, but trying to import it throws an error; probably something is missing in the Docker container:

2016-10-19 14:17:38.043684 WARNING ------------------------------------------------------------
2016-10-19 14:17:38.044004 WARNING Unexpected error during loading of switch_reset.py:
2016-10-19 14:17:38.044179 WARNING ------------------------------------------------------------
2016-10-19 14:17:38.066928 WARNING Traceback (most recent call last):
  File "/usr/local/lib/python3.5/site-packages/appdaemon/appdaemon.py", line 665, in readApp
    importlib.reload(conf.modules[module_name])
  File "/usr/local/lib/python3.5/importlib/__init__.py", line 166, in reload
    _bootstrap._exec(spec, module)
  File "<frozen importlib._bootstrap>", line 626, in _exec
  File "<frozen importlib._bootstrap_external>", line 665, in exec_module
  File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
  File "/conf/.apps/switch_reset.py", line 3, in <module>
    import dbm.gnu
  File "/usr/local/lib/python3.5/dbm/gnu.py", line 3, in <module>
    from _gdbm import *
ImportError: No module named '_gdbm'

I need to look into this.
Anyhow, there’s now an error, apparently because the data inside the db is not serialized objects anymore:

2016-10-19 14:27:22.066047 WARNING ------------------------------------------------------------
2016-10-19 14:27:22.069560 WARNING Unexpected error:
2016-10-19 14:27:22.072037 WARNING ------------------------------------------------------------
2016-10-19 14:27:22.137853 WARNING Traceback (most recent call last):
  File "/usr/local/lib/python3.5/site-packages/appdaemon/appdaemon.py", line 418, in worker
    function(ha.sanitize_timer_kwargs(args["kwargs"]))
  File "/conf/.apps/switch_reset.py", line 60, in set_switches
    new_state = self.set_state(entity, state = self.device_db[entity])
  File "/usr/local/lib/python3.5/site-packages/appdaemon/appapi.py", line 164, in set_state
    r = requests.post(apiurl, headers=headers, json=args, verify = conf.certpath)
  File "/usr/local/lib/python3.5/site-packages/requests/api.py", line 110, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/usr/local/lib/python3.5/site-packages/requests/api.py", line 56, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.5/site-packages/requests/sessions.py", line 461, in request
    prep = self.prepare_request(req)
  File "/usr/local/lib/python3.5/site-packages/requests/sessions.py", line 394, in prepare_request
    hooks=merge_hooks(request.hooks, self.hooks),
  File "/usr/local/lib/python3.5/site-packages/requests/models.py", line 297, in prepare
    self.prepare_body(data, files, json)
  File "/usr/local/lib/python3.5/site-packages/requests/models.py", line 428, in prepare_body
    body = complexjson.dumps(json)
  File "/usr/local/lib/python3.5/json/__init__.py", line 230, in dumps
    return _default_encoder.encode(obj)
  File "/usr/local/lib/python3.5/json/encoder.py", line 198, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/local/lib/python3.5/json/encoder.py", line 256, in iterencode
    return _iterencode(o, 0)
  File "/usr/local/lib/python3.5/json/encoder.py", line 179, in default
    raise TypeError(repr(o) + " is not JSON serializable")
TypeError: b'16.0' is not JSON serializable
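
The TypeError itself makes sense: the dbm modules store values as raw bytes, so without shelve’s pickling the app reads back b'16.0' instead of '16.0' and the JSON encoding in set_state() chokes on it. A small standalone illustration (path and entity name are just examples):

import dbm.dumb
import json

db = dbm.dumb.open("/tmp/switches", "c")
db["input_slider.heating_office_night"] = "16.0"

value = db["input_slider.heating_office_night"]   # comes back as b'16.0' (bytes)
# json.dumps(value) would fail with "is not JSON serializable",
# so decode before handing the state back to set_state():
print(json.dumps({"state": value.decode("utf-8")}))
db.close()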

But I think we’re on the right path…
Still wondering why shelve behaves this way, even when “upgrading” from a python 3.4 docker image to 3.5.

Sebastian

I’m not doing anything special with the underlying DBM so it is probably using whatever I have installed, which could explain differences across the various platforms/images. Also, my understanding is that shelve is meant to serialize before it writes to DBM … but definitely progress! I’ll see if I can figure out which DBM module I am using.

EDIT:

Found this on Stack Overflow:

I think there is no way to specify the underlying database yourself. shelve uses anydbm and anydbm uses the whichdb module which tries the following underlying implementations in the following order

dbhash
gdm
dbm
dumbdbm

And by elimination it seems I am using DBM:

hass@Pegasus:~$ python3
Python 3.5.2 (default, Jul  5 2016, 12:43:10)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dbhash
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named 'dbhash'
>>> import gdm
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named 'gdm'
>>> import dbm
>>>

To my understanding, shelve just wraps up two components: pickle to serialize objects and dbm.* to store them in a database.
Which database type is used depends on the available/installed dbm modules.
You should be able to find the files inside the python search path.
There should be a directory ‘dbm’ containing a file for each db type.
I just changed the Docker base image to python3.6, so the stuff (inside the container) is in:

# ls -l /usr/local/lib/python3.6/dbm/
total 28
-rw-r--r-- 1 root staff  5783 Oct 12 20:13 __init__.py
drwxr-sr-x 2 root staff   113 Oct 19 12:41 __pycache__
-rw-r--r-- 1 root staff 11841 Oct 12 20:13 dumb.py
-rw-r--r-- 1 root staff    72 Oct 12 20:13 gnu.py
-rw-r--r-- 1 root staff    70 Oct 12 20:13 ndbm.py

For each file, you should be able to do an import dbm.(gnu|dumb|ndbm) and use this specific db type.
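
If you want to pin a specific backend but keep the shelve/pickle behaviour, wrapping the dbm database in shelve.Shelf should also work. Untested sketch; path and entity name are just examples:

import dbm.dumb
import shelve

# Pin the backend explicitly instead of letting dbm guess, but keep
# shelve's pickling so values come back as the original Python types.
device_db = shelve.Shelf(dbm.dumb.open("/tmp/switches", "c"))

device_db["input_slider.heating_office_night"] = 16.0
print(device_db["input_slider.heating_office_night"])   # -> 16.0 (float, not bytes)
device_db.close()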

Since switch_reset.py only stores basic types (strings, ints, floats…) for each entity, it should not actually be necessary to store serialized objects - instead it should work to just put something like entity_name: 'state' into the db. At least that’s what I assumed.

So my thought was to just replace shelve with one of the dbm types, skipping the serialization part, but apparently something broke… I still have to look into that.

Have you been able to reproduce the issue with the appdaemon Docker image?
Using that, we should at least have the same base.

Sebastian

Haven’t tried yet, but I’ll take a look.

Ok, so I have now built the appdaemon Docker image based on the python:3.4-alpine image.
Apparently the only usable dbm module there is dbm.dumb, so shelve automatically uses it as the db backend.
With this, data storage finally works as expected.
dbm.dumb might not be the best choice of backend, though, so I only consider this a workaround.
But as a conclusion I’d say there’s something wrong with whichever dbm module was being chosen before; shelve and pickle don’t seem to be the culprits.

@aimc, since it is working for your setup, could you test which dbm module is used in your case?

# python3
Python 3.4.3 (default, Aug  9 2016, 17:10:39)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dbm
>>> from dbm import *
>>> dbm.whichdb('switches')
'dbm.dumb'

This should give you the type of the db file (“switches” in my case) that’s used on your system.

Sebastian