Life360 Device Tracker Platform

Per the post above I’ve added a service – device_tracker.life360_zones_from_places – that can be called to force an update of HA zones created from Life360 Places. The service, however, will only be registered if add_zones is not False (i.e., it’s explicitly set to True or include_home_place), or zone_interval has been defined.

If you’d like to try it, you can use the link to the beta version in the previous post. (The version is now 2.6.0b2.) As always, I’d appreciate any feedback.

With the addition of this service you have a choice. You can have HA periodically check for new (or changed or deleted) Life360 Places and update HA zones accordingly (by defining zone_interval), or you can manually cause this to happen after you’ve made changes to your Life360 Places (by using the new service), or you can do both. Note that if add_zones is not False the process will always happen at least once when HA starts. So another option is to not use zone_interval or the new service, but simply restart HA if/when you make changes to your Life360 Places.
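For illustration, a minimal sketch of the periodic-update option (assuming zone_interval accepts a time period the same way max_update_wait does, and with placeholder secret names):

device_tracker:
  - platform: life360
    username: !secret life360_username
    password: !secret life360_password
    add_zones: true
    # assumed to take a time period like max_update_wait; check the docs for the exact form
    zone_interval:
      minutes: 15

With something like this, HA would re-check the Life360 Places every 15 minutes; without zone_interval, the update could instead be triggered by calling the new service after editing Places, or simply by restarting HA.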

automations.yaml: I posted it before.

configuration.yaml

device_tracker:

  - platform: life360
    prefix: fernando
    username: [email]
    password: !secret life360_fernando_password
    interval_seconds: 10
    filename: life360_fernando.conf
    add_zones: false
    max_update_wait:
      minutes: 30
  - platform: life360
    prefix: raquel
    username: [email]
    password: !secret life360_raquel_password
    interval_seconds: 10
    filename: life360_raquel.conf
    add_zones: false
    max_update_wait:
      minutes: 30
  - platform: life360
    prefix: mamen
    username: [email]
    password: !secret life360_mamen_password
    interval_seconds: 10
    filename: life360_mamen.conf
    add_zones: false
    max_update_wait:
      minutes: 30
  - platform: life360
    prefix: carmen
    username: [email]
    password: !secret life360_carmen_password
    interval_seconds: 10
    filename: life360_carmen.conf
    add_zones: false
    max_update_wait:
      minutes: 30
  - platform: nmap_tracker
    hosts: 192.168.0.1-254  # your network and the IP range you want to scan
    home_interval: 15
    exclude:
      - 192.168.0.169
    interval_seconds: 30
    track_new_devices: yes

group.yaml

Familia:
  name: Familia
  view: yes
  entities:
    - group.fernando
    - group.mamen
    - group.raquel
    - device_tracker.fernando_fernando_castro_ruiz
    - device_tracker.fernandoc
    - device_tracker.mamen_mamen_zapata
    - device_tracker.raquel_raquel

zones.yaml

- name: Home
  latitude: !secret latitude_home
  longitude: !secret longitude_home
  radius: 250
  icon: mdi:home

Hi, it’s best to use one of the Hass.io add-on editors,
e.g. Community Hass.io Add-on: IDE, based on Cloud9.
This runs in the same browser session as the Hass.io interface, and I have not had any permission issues with files or folders I created through it.


My first question is, why do you have each person configured separately? Don’t you have a Life360 Circle with all these Life360 Members in it? If so you only need one life360 device_tracker configured (which would typically be for the “admin” of the Circle, but I guess it doesn’t matter.)

If all the Life360 Members are not in a common Life360 Circle, I guess it’s ok to do it this way. I just didn’t expect anyone would. :slight_smile:

Do you see any errors in the log associated with life360? Specifically, do you see any communication errors?

The bottom line is, both methods (this custom component, and the MQTT-based one you used before) query the same Life360 server. They are getting the same data. I can’t explain why this custom component would be any less responsive than the other, except for maybe if the system on which you’re running HA has a less reliable Internet connection (in which case you should see communication errors like I mentioned above.)

You might want to look in home-assistant.log. Make sure you have logger set to debug. You will see messages like these when the custom component gets new, updated location information for users:

pi@raspberrypi:/home/homeassistant/.homeassistant $ grep 'custom_components\.device_tracker\.life360' home-assistant.log
2019-02-06 12:37:50 INFO (MainThread) [homeassistant.loader] Loaded device_tracker.life360 from custom_components.device_tracker.life360
2019-02-06 12:37:59 DEBUG (Thread-6) [custom_components.device_tracker.life360] Life360 communication successful!
2019-02-06 12:37:59 DEBUG (Thread-6) [custom_components.device_tracker.life360] Configured members = None
2019-02-06 12:38:00 DEBUG (Thread-6) [custom_components.device_tracker.life360] Updating life360_xxx
2019-02-06 12:38:00 DEBUG (Thread-6) [custom_components.device_tracker.life360] Updating life360_yyy
2019-02-06 12:38:00 DEBUG (Thread-6) [custom_components.device_tracker.life360] Updating life360_zzz
2019-02-06 12:39:20 DEBUG (Thread-14) [custom_components.device_tracker.life360] Updating life360_yyy; Time since last update: 0:06:08
2019-02-06 12:39:31 DEBUG (Thread-4) [custom_components.device_tracker.life360] Updating life360_xxx; Time since last update: 0:09:05
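For reference, the debug messages above will only show up if the logger integration includes this component at debug level; a minimal sketch (the component path is taken from the log lines above):

logger:
  default: warning
  logs:
    custom_components.device_tracker.life360: debug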

Another idea would be to run both methods side-by-side – i.e., this custom component and the MQTT-based one you used before. Of course, you’d have to use different device_tracker object_id’s, but that should be easy using the prefix life360 configuration variable. At least do this for one person, and see how the two different device_trackers behave.
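As a rough illustration (not necessarily anyone’s exact setup), the side-by-side test could give this custom component’s entities a distinct prefix while the MQTT-based tracker keeps its existing names:

device_tracker:
  - platform: life360
    username: !secret life360_username
    password: !secret life360_password
    # entities become device_tracker.life360_<member name>, e.g. device_tracker.life360_xxx
    prefix: life360

The MQTT-based tracker’s configuration would stay as it is, so the two sets of device_tracker entities can be compared directly.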

Hi, I use different users in Life360 because I don’t want to share locations between family members; only Home Assistant knows all the locations, and the free Life360 account doesn’t let me do that.

HA is on the same machine in both configurations, same connection, etc.

I think the log is correct:

pi@hassbian:/home/homeassistant/.homeassistant $ grep 'custom_components\.device_tracker\.life360' home-assistant.log
2019-02-08 09:06:52 INFO (MainThread) [homeassistant.loader] Loaded device_tracker.life360 from custom_components.device_tracker.life360
2019-02-08 09:06:58 DEBUG (Thread-6) [custom_components.device_tracker.life360] Life360 communication successful!
2019-02-08 09:06:58 DEBUG (Thread-6) [custom_components.device_tracker.life360] Configured members = None
2019-02-08 09:06:58 DEBUG (Thread-11) [custom_components.device_tracker.life360] Life360 communication successful!
2019-02-08 09:06:58 DEBUG (Thread-22) [custom_components.device_tracker.life360] Life360 communication successful!
2019-02-08 09:06:58 DEBUG (Thread-11) [custom_components.device_tracker.life360] Configured members = None
2019-02-08 09:06:58 DEBUG (Thread-4) [custom_components.device_tracker.life360] Life360 communication successful!
2019-02-08 09:06:58 DEBUG (Thread-22) [custom_components.device_tracker.life360] Configured members = None
2019-02-08 09:06:58 DEBUG (Thread-4) [custom_components.device_tracker.life360] Configured members = None
2019-02-08 09:06:58 DEBUG (Thread-6) [custom_components.device_tracker.life360] Updating carmen_carmen
2019-02-08 09:06:58 DEBUG (Thread-4) [custom_components.device_tracker.life360] Updating raquel_raquel
2019-02-08 09:06:58 DEBUG (Thread-11) [custom_components.device_tracker.life360] Updating fernando_fernando_castro_ruiz
2019-02-08 09:06:58 DEBUG (Thread-22) [custom_components.device_tracker.life360] Updating mamen_mamen_zapata

I’ll try your idea and post back to you in a few days.

Ok, that makes sense. That should not be a problem.

Hmm, ok. Yes, please let me know how it goes.

Released 2.6.0

Adds new choices for the add_zones config variable. The choices are now:

  • false – Do not create HA zones from Life360 Places
  • only_home – Only update zone.home per Life360 “Home” Place
  • except_home – Create HA zones from all Life360 Places, but do not update zone.home. (Same as true for backward compatibility.)
  • all – Create HA zones from all Life360 Places, including updating zone.home

If any choice but false is selected, a new service – device_tracker.life360_zones_from_places – will also be registered, which can be called to force an update of HA zones per the selected choice. Note that regardless of whether this service is used or a zone_interval is set, if add_zones is not false, then the process will be executed at least once at startup.
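For example, the service can be called from the Services page of the developer tools, or from an automation; a rough sketch (the nightly time trigger is just an illustration, and the service is assumed to take no service data):

automation:
  - alias: Refresh Life360 zones nightly
    trigger:
      - platform: time
        at: '03:00:00'
    action:
      - service: device_tracker.life360_zones_from_places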

Also bumped the version of the lower level life360 package (which is automatically installed from PyPI) to 2.1.0, which now handles HTTP 403 errors from the Life360 server.

Hi @pnbruckner

What does it mean, “Also bump the lower level life360 package (from PyPI) to 2.1.0, which now handles HTTP 403 errors from the Life360 server”?

Do I need to do something else other than update life360.py?

Thanks

Sorry, that was a bit vague. There are still two pieces. One is custom_components/device_tracker/life360.py, which you need to install and update. The other piece is a package on PyPI. The first one specifies which version of the second should be installed automatically. Version 2.6.0 of the first piece now specifies version 2.1.0 of the second piece. Does that make sense, or did I just make it more confusing? (I probably should have said, “Also bumped the …”)

Thanks.

So, reading some posts in this thread, the PyPI package will be downloaded automatically, is that correct?

Yes, the package from pypi.org will be downloaded and installed automatically.

Not sure if this is Life360 related, but issues started shortly after the last update to 2.6. Will move out of this thread if required.
My HA Docker container keeps restarting every 6-8 hours.
I’m running HA on Docker; the issue started on 0.86.3 and remains on 0.87.0.
I can’t see anything in the HA logs, and the Docker logs (from Portainer) only really show HA logs.
This doesn’t appear to be RAM related (no spike at the time and only ~50% usage; there are times when RAM usage is higher, like upper 70%, but no container reboot then).
I’m not too sure how to troubleshoot this one.
Any pointers?

By all means, let me know if you can determine this code is somehow at the root of the problem. I’m still on 0.84.6, so there might be something in all the re-architecting they’re doing that has caused problems with this. I’m in the process of getting my system more ready to deal with all the recent changes, especially moving to Lovelace, so hopefully I’ll be able to test with a newer HA before too long.

Thanks Phil. Will try and disable the component to see if it makes a difference…
Interestingly, my uptime is the highest I’ve seen so far, so if it stays up it’s going to be a difficult one to troubleshoot :frowning:

Hi, please see Instant not_home status? for a short description of how my CC suddenly stopped working. I’ve updated and use this config:

device_tracker:
  - platform: life360
    username: !secret life360_username
    password: !secret life360_password
    prefix: life360
    show_as_state: places, driving, moving
    driving_speed: 20
    interval_seconds: 30
    max_gps_accuracy: 200
    max_update_wait:
      minutes: 15
    add_zones: false
    time_as: device_or_local
    warning_threshold: 1
    error_threshold: 2

An older version on my second system still works fine, so I know Life360 isn’t the culprit.

@lolouk44, @Mariusthvdb

I checked my recent changes again, and I can’t see anything obvious I changed that could cause these types of problems. There certainly could be – it’s just not obvious at this point.

Could you make sure DEBUG is turned on for custom_components.device_tracker.life360 and take a look at the messages it outputs to the log? Anything out of the ordinary or suspicious there? (If you’d like to share your log messages via PM, that’s fine.)

@Mariusthvdb, can you expand on how it’s failing? Are the device_tracker entities simply no longer updating? Are you seeing any errors in the log? Do you see any messages like these in home-assistant.log?

pi@raspberrypi:/home/homeassistant/.homeassistant $ grep 'custom_components\.device_tracker\.life360]' home-assistant.log
2019-02-12 10:08:40 DEBUG (Thread-10) [custom_components.device_tracker.life360] Life360 communication successful!
2019-02-12 10:08:40 DEBUG (Thread-10) [custom_components.device_tracker.life360] Configured members = None
2019-02-12 10:08:41 DEBUG (Thread-10) [custom_components.device_tracker.life360] Updating life360_xxx
2019-02-12 10:14:55 DEBUG (Thread-8) [custom_components.device_tracker.life360] Updating life360_xxx; Time since last update: 0:08:54
2019-02-12 10:18:24 DEBUG (Thread-19) [custom_components.device_tracker.life360] Updating life360_xxx; Time since last update: 0:03:25

Yes, that’s what was happening in one HA instance.
I’ve waited longer than the max_update_wait setting and nothing happened.

It seemed they were not only not updated, but not even initialized, because normally when one clicks more-info the full list of attributes is displayed, and now only the regular history graph with home/not_home was shown.

I’ve rebooted once again, and now they’re back! Which would at least suggest my settings are ok… Hope it was a short hiccup. Hadn’t experienced it before…

If it happens again, please search home-assistant.log for messages from the component (as I showed above). There will probably be useful information there. Unfortunately, you restarted without doing that, so now that information is gone.

One possibility I can think of that might explain what you experienced – at startup it will attempt to communicate with the server to see if everything is set up correctly. If that fails, for any reason, the platform setup will fail. You have to restart to get it to try again. So, for example, when HA starts, if you have a temporary Internet connection problem, or if the server just doesn’t respond for some reason, that will prevent the platform from starting. If this happens there would be messages in home-assistant.log that would provide the details of what went wrong.

I did recognize that this is a potential problem, and I even created an Issue so that I eventually get around to better handling this situation. I haven’t done anything about it yet, partly because I haven’t heard of anyone experiencing this type of problem (and I haven’t noticed it myself either.)

Thanks. I’ll first deactivate the custom package and keep monitoring. If it doesn’t crash then I’ll look into the logs.
I’ve installed custom_components at the same time and I’ve seen an update recently. Currently I only use custom_components for life360 so I’m also disabling this for now and will reactivate either/both depending on how the container behaves.
I tried to look at the docker logs (sudo journalctl -fu docker.service) but could not find anything relevant…
I did find some updates in some of the lovelace custom cards I was using so updated these just in case.
I also found I had 2 occurrences of the kiosk card in my lovelace resources so removed the extra one.
Now I just need to wait…