Logi Circle camera with HASS

I just had the same error after rebooting my raspberry pi. Restarting home assistant manually as suggested in the error message seems to have fixed it.

Hmm, maybe Logi’s services are degraded? It’s not happening to me but I’m in Australia, I might be hitting a different data centre.

Is it possible to download the last activity still & movie using HA? Or hasn’t that function made it into the integration?

Neither of those methods was ever in the integration, though the API wrapper does support it, so it could be added.

You can grab stills or recordings from the live stream though: Removed integration - Home Assistant

Oh my bad, I thought the legacy version did support that. I am grabbing video using the logi_circle.livestream record service but the action is usually over by the time it starts recording hence I was looking to download the events from Logitech as they seem more accurate.

It would be great if we could access the get_latest_activity function from HA.

Many thanks!

After several reboots and restarts today, it started working again this afternoon. Maybe the issue was on Logi’s side. Who knows.

I got this to work by calling the API wrapper from an external Python script invoked with shell_command from HA. Not as graceful, as you have to do the OAuth authentication again (from the HA Docker container CLI in my case), but it does the job. I can write up exactly how if anyone’s interested.

Thanks again!

Yepp, please go ahead :slight_smile:

Ok here goes,

This will download a still image and movie file of the last activity. I’m not sure how it would work with multiple cameras as I only have one.

The below assumes Home Assistant is running under Docker, but it can easily be modified for a non-Docker environment.

You may have to install the logi-circle API wrapper (the Home Assistant Docker container comes with it preinstalled).

pip install logi-circle

Create the following directory:

/homeassistant/scripts

Fill in your API credentials, point to your OAuth token cache file and snapshot path, then save the following code to:

/homeassistant/scripts/logi_circle_download_latest_activity.py

import asyncio
from logi_circle import LogiCircle
from datetime import datetime

date = datetime.now().strftime("%d-%m-%Y_%I-%M-%S_%p")

logi = LogiCircle(client_id='your-client-id',
                  client_secret='your-client-secret',
                  redirect_uri='https://your-redirect-uri',
                  api_key='your-api-key',
                  cache_file='/config/.logi_cache.pickle')

if not logi.authorized:
    print('Navigate to %s and enter the authorization code passed back to your redirect URI' % (logi.authorize_url))
    code = input('Code: ')

    async def authorize():
        await logi.authorize(code)
        await logi.close()

    asyncio.get_event_loop().run_until_complete(authorize())

async def get_latest_activity():
    for camera in await logi.cameras:
        last_activity = await camera.get_last_activity()
        if last_activity:
            # Get activity as image
            await last_activity.download_jpeg(filename='/config/your_snapshot_path/%s_%s.jpg' % (camera.name, date))
            # Get activity as video
            await last_activity.download_mp4(filename='/config/your_snapshot_path/%s_%s.mp4' % (camera.name, date))
    await logi.close()

asyncio.get_event_loop().run_until_complete(get_latest_activity())
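
Run the script once interactively to complete the OAuth flow (it prints an authorization URL and prompts for the code). If you’re running HA in Docker, you can do that from a shell inside the container, something like this (the container name here is just a placeholder for whatever yours is called):

docker exec -it homeassistant python3 /config/scripts/logi_circle_download_latest_activity.py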

You’ll need to whitelist the snapshot path in your configuration.yaml
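
For example, something along these lines (the option is called whitelist_external_dirs at the time of writing; adjust the directory to match the snapshot path you used in the script):

homeassistant:
  whitelist_external_dirs:
    - /config/your_snapshot_path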

To expose the python script as a service in home assistant add the following to your configuration.yaml:

shell_command:
  logi_circle_download_latest_activity: /usr/local/bin/python3.7 /config/scripts/logi_circle_download_latest_activity.py

Restart Home Assistant. You should now find shell_command.logi_circle_download_latest_activity under services, and it can be used in your automations.

My basic automation:

- id: '1569533320279'
  alias: logi download latest activity
  trigger:
  - entity_id: sensor.YOUR_CAM_last_activity
    platform: state
  condition: []
  action:
  - service: shell_command.logi_circle_download_latest_activity
  - delay: 00:03:00
  - data: {}
    service: shell_command.rclone_snapshots
  - delay: 00:01:00
  - data:
      message: Logi Circle Snapshots Updated
    service: notify.ios_iphone

Thank you! I will give it a try this week to check if this can be implemented in a Hass.io environment.

Are camera.turn_on and camera.turn_off implemented now? Is the “private” Logitech API public (yes, I know that it’s still private in the sense that one has to request credentials)? Is there documentation somewhere? Been watching this page for a while and progress is very slow.

FYI Evan, the stream latency is not as high for me as you are reporting.

Is the “private” Logitech API public (yes, I know that it’s still private in the sense that one has to request credentials)? Is there documentation somewhere?

As of a few months ago, yes: https://developers.logitech.com/circle
Most of the capabilities there are implemented in the Logi python wrapper I wrote for this integration: GitHub - evanjd/python-logi-circle: Python 3.7+ API for Logi Circle cameras

FYI Evan, the stream latency is not as high for me as you are reporting.

The stream latency I am referring to is how long it takes for HA to return the first frame of the *proxied* camera feed to the browser. If you’re measuring from the app, that is a different workload and a different stream API.

Also, battery powered cameras in a low power state can take up to 45s to wake up and return a stream. I need to support these as well.

On that, I tracked down where the 10s timeout was and got approval on the HA Discord from balloob and the stream component maintainer to make it configurable. I haven’t started implementing that change yet.

Been watching this page for a while and progress is very slow.

I’ve gotten suddenly very low on free time this year. I would gladly accept help.

Note that I have implemented everything (live stream support, push support, motion, binary sensor, etc) in this PR from Jan this year: Logi Circle public API refactor and config flow by evanjd · Pull Request #20179 · home-assistant/core · GitHub. It got knocked back for being too large (which is fair enough), and since then I’ve been slowly extracting discrete bits of functionality to submit in smaller PRs. So all the functionality is there; it just requires time and work to split it into separate PRs and update the relevant parts to match updated HA APIs and code standards.

To date, I’ve extracted and raised PRs for:

a) Public API support + config flow (merged)
Which put everyone through hell trying to get API keys, but I digress…

b) Live stream support (blocked)
Works, but couldn’t be merged in any form due to MJPEG streams being deprecated + the new stream component having a timeout that is almost always tripped when connecting to Logi streams. When the timeout is updated, I have a PR ready to go.

The bits that are left are:

a) Update HA core to resolve 10s timeout
I’ve identified what needs to be updated; I just need to do the work.

b) Push support

c) Binary sensor
Depends on push support as some of the sensors are only available on the websocket. This should be very trivial to copy from my other PR once push support is merged.

If someone wants to work with me on this, ping me, I’d love the help. Otherwise I will continue to chip away at this.

Are camera.turn_on and camera.turn_off implemented now?

The fix for this was merged, should be in the next release of HA.

Thanks! turn_on/turn_off now work correctly, but it still seems there are some state/sensor problems. HA reports my camera as “idle” (not “streaming”) regardless of whether the camera is on or off.

This is great to hear!

Questions:

  1. Where is your dev branch for the logi_circle component? (Where I might find the integration with the streaming component that doesn’t work b/c of the timeout.)

  2. Configurable timeout in the stream component: Cool, that seems right. I have not done any HA core PRs, so I suspect my overhead is really high on this, but I will try it out locally. (Have you tested it locally with the timeout increased?)

Thanks!

It’s possible I am misunderstanding the semantics of idle/streaming/recording/off for cameras. So perhaps a better question: Is there a way to ascertain whether the camera is on/off (where on = streaming video to the cloud, off = not streaming)?

[Edit: And never mind again, because I see that the streaming_mode sensor now functions correctly (in the HA dev branch). Thanks!]

Where is your dev branch for the logi_circle component? (Where I might find the integration with the streaming component that doesn’t work b/c of the timeout.)

For the camera stream, it’s here.

If you want to use the MJPEG stream locally (which works reliably, but can’t be merged due to MJPEG streams being deprecated), you can find that here.

(Have you tested it locally with the timeout increased?)

Yep, I lifted it to 60s and the stream connected successfully every time. For wired cams, it usually connected within 15s, and for battery, anywhere between 30-50s.

HA reports my camera as “idle” (not “streaming”) regardless of whether the camera is on or off.

“Idle” is just what HA reports if the entity’s is_streaming and is_recording props aren’t set.

It would be trivial to add that if it would be useful to you.
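
For reference, the camera base entity derives its state roughly like this (a paraphrase of the logic, not the exact HA source):

def camera_state(is_recording: bool, is_streaming: bool) -> str:
    # Rough paraphrase of how HA picks a camera entity's state from these two props
    if is_recording:
        return "recording"
    if is_streaming:
        return "streaming"
    return "idle"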

Clearly it makes sense for the stream timeout to be configurable, so it’s great if this component drives that change. But as one data point, I just wanted to mention that I was able to drop the logi-circle-stream branch in place without modifying the HA timeout. I have a wired camera and the latency to initiate the stream is probably geographically sensitive. I am actually getting 2-5s to initiate a stream (which is much faster than Google Assistant was doing it last time I checked), even with 4 streams going at once.

I wish there were something like this camera with a local RTSP server. The hardware/video quality/field of view is so good compared to anything else I know about. There are cameras with similar video quality (e.g., from Amcrest) but they are such eyesores compared to the Circle 2.

Hi again Evan,

Another question: The JPG snapshot caching is coming from Logitech, right? (Looking at the component code, that appears to be the case, as I don’t see any caching mechanism there.)

While I now have an almost-live camera view in Lovelace (5-10s latency @ 1080p), my image classifier is still pulling snapshots from the JPG interface, and that doesn’t refresh fast enough. So for better or worse, it looks like my best option is going to be to write a helper app that keeps a stream open all the time and pulls snapshots off the stream.
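
A rough sketch of what I have in mind for that helper (just the idea, assuming OpenCV can open the camera’s RTSPS URL; the stream URL and output path are placeholders):

import time

import cv2  # pip install opencv-python

STREAM_URL = "rtsps://example-host/stream"  # placeholder: your camera's stream URL
SNAPSHOT_PATH = "/config/your_snapshot_path/latest.jpg"

cap = cv2.VideoCapture(STREAM_URL)
frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        # Stream dropped: back off and reconnect rather than exiting
        cap.release()
        time.sleep(5)
        cap = cv2.VideoCapture(STREAM_URL)
        continue
    frames += 1
    if frames % 30 == 0:  # roughly one snapshot per second at 30 fps
        cv2.imwrite(SNAPSHOT_PATH, frame)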

Just read that newer Circle 2 cameras running firmware 5.8.x and later support real-time data over UDP/TCSP. Unfortunately I don’t have this firmware yet, and it doesn’t look like I can manually force an update (supposedly Logitech rolls out their firmware slowly by serial number). But this means that eventually there will be a way to get everything I want here.

Yes. The caching behaviour is described at a high level here: Removed integration - Home Assistant

The integration used to snip the first frame off the live stream to generate the snapshot (when the integration hit their private API), but the public API warns against doing this - so be warned that if you implement this yourself they may limit you or remove your access to the live stream.

Just read that newer Circle 2 cameras running firmware 5.8.x and later support real-time data over UDP/TCSP

Yep, Logi gave me a heads up about this. They mentioned it would only be available over their WebRTC interface, and currently the API wrapper and integration use the RTSP interface.

I did briefly experiment with decoding the WebRTC stream with FFMPEG but wasn’t able to work out the correct incantation to get it working. So the effort to implement may be non-trivial.

Sorry, learning the HA architecture on the fly here in between other things.

As far as I understand, the stream platform should open the stream and keep it open, and when it is open, snapshots should be pulled from there. Right now, every instance of the front end I open up initiates a new rtsps stream. Is that the correct behavior?

Not by default. Simply opening the front end should just serve the JPEG snapshot to the browser; the RTSP stream is only initiated when you click on the camera.

You can set up the Lovelace picture cards to serve the live stream instead of the cached thumbnail; maybe that’s what you’re referring to?
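
For example, a picture-entity card with camera_view set to live (the entity name is a placeholder):

type: picture-entity
entity: camera.your_camera
camera_view: live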

And if you’re asking whether a new RTSP stream is initiated any time the camera is opened - yes, that’s expected behaviour and how every other camera integration behaves as far as I know.