I use the Telegram notify service for notifications and want to send the snapshot/thumbnail as an attachment. This works using the snapshot approach, but the snapshot is low resolution and, of course, a bit delayed compared to the detection event. Is there a way to either get the snapshot at full resolution, or retrieve the UniFi thumbnail to a JPG file similar to the snapshot service?
My current automation:
alias: Person ved indgangen
description: ""
trigger:
  - platform: state
    entity_id:
      - binary_sensor.g4_indgang_person_detected
    to: "on"
condition: []
action:
  - service: camera.snapshot
    data:
      filename: /config/www/snapshots/snapshot_indgang.jpg
    target:
      entity_id: camera.g4_indgang_high
  - service: notify.jaybebot
    data:
      message: Person detected
      data:
        photo:
          - file: /config/www/snapshots/snapshot_indgang.jpg
            caption: "Person detected!"
mode: single
Yes, notification services are able to sign media URLs. What I'm looking for is how to pull the image bytes from the media URL in an automation.
@AngellusMortis stated: "There needs to be another generic way to expose media for an entity without access to the Python code for automations, add-ons (Node-RED), etc. The recent work and optimization on the recorder has made that really easy: expose them as attributes and then exclude them from the recorder. That allows us to combine the way the camera platform exposes entity_picture with the new async_sign_path to add media as an attribute."
I was hoping/expecting to be able to pull the signed path from the trigger entity, but I'm not seeing that implemented.
Got a very strange issue yesterday that maybe someone has an idea about?
I changed the DNS server on my phone to my Home Assistant NUC, where I have AdGuard running. Immediately when I did so, one of my five UniFi cameras disconnected with the message "Lost power". Changing the DNS back solved the issue. All the other cameras kept working fine.
The camera kept recording but was inaccessible from other devices too.
Per the thread and the installation process, the first configuration should be easy, but I can't even log in.
I met all the prerequisites; I have a UCKP with production OS 3.0.17.
In the local portal's Admins section (users are now created there), I created a non-remote user.
I can log in with the local user account, but configuring it in HA does not work; I get a "wrong authentication" response. Any thoughts?
I have a UNVR and used to have a UDMP. I seemed to have better luck catching the object using the snapshot with the UDMP than with the UNVR, so I am not sure if this is the case for everyone. However, I have recently come to realize that the snapshots are created only every 5 seconds or so. I tested this by repeatedly calling the snapshot service and noticed that the timestamp on the snapshot was not updated.
My conclusion is that when you call the snapshot service, the picture is not taken in real time; whatever snapshot was created most recently is sent to HA. If a detection happens after that snapshot was created but before the next one is taken, you end up with a picture without any object in it.
The proper way to get the actual event snapshot is documented under the Views section of the UniFi Protect integration page on the Home Assistant site. I automate using Node-RED, so I was able to see the event ID but not the NVR ID. Even though the documentation says it's the same as in the config entry, it did not work when I simply typed the name in (for me it was "unvr"); it turned out to be a 32-character-long ID. If you look at the post just above yours from z11, it gives you an idea of how to generate the link that gets the snapshot based on the event, not the snapshot created every 5 seconds.
As such, the photo path has to be in the form "/api/unifiprotect/thumbnail/{{ nvr_id }}/{{ event_id }}". It still does not solve the resolution issue, but I think at least it will be the actual event-detection snapshot.
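As a rough sketch of how that path could be wired into the Telegram notification from earlier in the thread (note this is hypothetical: the event_id attribute name, the NVR ID placeholder, and the auth handling are all assumptions; the endpoint may require a signed URL, and you should check your own entity's attributes in Developer Tools):

```yaml
# Hypothetical sketch only: attribute name, NVR ID, and host are placeholders.
- service: notify.jaybebot
  data:
    message: Person detected
    data:
      photo:
        - url: >-
            http://YOUR_HA_HOST:8123/api/unifiprotect/thumbnail/YOUR_NVR_ID/{{
            state_attr('binary_sensor.g4_indgang_person_detected', 'event_id') }}
          caption: "Person detected!"
```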
Thanks. I would like to get the thumbnail as an image file so I can attach it to telegram and not need to expose access to the unifi api to the internet. Does anyone have a working automation similar to the one I shared above using the snapshot service?
I see. In my case, when the snapshot does not contain the detected object, it was generally taken at an earlier time, which gives the impression that the snapshot was delayed. I think you should pass along a timestamp in the notification and figure out whether the snapshot was taken before or after it (comparing to the UniFi timestamp).
Within the current constraints of the system, you could perhaps take two snapshots 5 seconds apart, or add a slight delay before the snapshot is fired, to increase the chance that the snapshot occurs after the detection. For mine, I noticed it happens at the 1- and 6-second marks, so maybe you could add a time condition too.
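A minimal sketch of the two-snapshot idea, reusing the entity and path from the automation earlier in the thread (the delay values are assumptions; tune them to your NVR's snapshot cycle):

```yaml
action:
  - delay: "00:00:02"  # small delay so the NVR has a chance to refresh its cached snapshot
  - service: camera.snapshot
    data:
      filename: /config/www/snapshots/snapshot_indgang_1.jpg
    target:
      entity_id: camera.g4_indgang_high
  - delay: "00:00:05"  # wait roughly one snapshot cycle
  - service: camera.snapshot
    data:
      filename: /config/www/snapshots/snapshot_indgang_2.jpg
    target:
      entity_id: camera.g4_indgang_high
```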
I am wondering about passing an RTSP stream to create a camera entity in Home Assistant and taking a snapshot of that RTSP stream instead. That way you could bypass the UniFi API and just use Home Assistant?
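For that approach, assuming a Generic Camera entity has already been created from the camera's RTSPS URL (RTSP has to be enabled per-camera in the Protect UI; on recent HA versions Generic Camera is added via Settings, Devices & Services), the snapshot call would look like this. The entity ID below is a made-up example:

```yaml
# Sketch: camera.indgang_rtsp is a hypothetical Generic Camera entity
# backed by the Protect RTSPS stream.
- service: camera.snapshot
  data:
    filename: /config/www/snapshots/indgang_rtsp.jpg
  target:
    entity_id: camera.indgang_rtsp
```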
I am just curious whether this non-real-time snapshot is a constraint of UniFi or of Home Assistant.
My automation above uses the "normal" snapshot service from the stream and does not get the thumbnail from the UniFi event. I would like to get that, though, as it should be taken at the exact point in time when the UniFi event triggers. Anyway, it kind of works, so I haven't spent too much time on improving it…
I do not understand why my snapshots are low resolution though!
This plugin is awesome! I only seem to have one minor issue… for some reason, when I open HomeKit and view the UniFi Protect camera stream there, it doesn't render the livestream (it does render the preview, and it works in Home Assistant out of the box). Does anyone have a clue what I misconfigured here?
Is there a way to get the ID of the latest event for a camera, or to indicate that I want the most recent one?
Another question: when I do a camera snapshot, it doesn't seem to give me a snapshot taken at the moment I call for it, but something that was taken several seconds earlier. Is there a way to get a current snapshot?
When the camera recording mode is set to "Never", the motion sensor is "unavailable", but when it is set to either "Always" or "Detections", the sensors appear as available.
I use the built-in motion sensors for other automations and was wondering if there is a workaround (besides using another form of sensor)?
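One possible workaround sketch, assuming the integration exposes the camera's recording mode as a select entity (the entity ID and option name below are assumptions; check what your Protect integration actually exposes): keep the camera on "Detections" so the motion sensor stays available.

```yaml
# Hypothetical sketch: entity ID and option name are assumptions.
- service: select.select_option
  data:
    option: Detections
  target:
    entity_id: select.g4_indgang_recording_mode
```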
There appears to be an obvious lag (~14 seconds) between the real-time stream and what appears within HA. All my cameras are Ethernet-based. The "Pre-load stream" option is enabled.
Can someone advise on how to pull Smart Detection thumbnails from Media into automations? Not upon an event, but for, let's say, the last 24 hours? I can see them in Media but have not found any way to access the images from automations.
Hey guys, I am having a terrible time getting the G4 Doorbell Pro integrated into HomeKit. No matter what I do, I cannot get the video to show up there. I have it in HA, I have it exposed to HomeKit, I have tried the H.264 setting, and I have enabled RTSP in the UI settings. Nothing seems to work.
Big thank you. Very useful integration. I love the notification feature of Protect when I am away: it instantly shows me the cam, and I can view the clip when someone approaches my house. Is there a way to turn notifications off/on via an HA automation, e.g. off when at home and on when away?
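For HA-side notifications, one common pattern is to gate the notify action on presence rather than toggling anything in Protect itself (Protect's own app notifications are configured in the Protect app, not in HA). A minimal sketch, where the notify service name is a placeholder:

```yaml
# Sketch: only notify when nobody is home.
# zone.home's state is the number of people currently in the Home zone.
condition:
  - condition: state
    entity_id: zone.home
    state: "0"
action:
  - service: notify.jaybebot
    data:
      message: Person detected
```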
When I create a picture component for my cameras and click "Live" view, the cameras load up fine and work fine. However, after a random amount of time, the feed freezes and I get the error "Stream never started". What could be causing this, and is there any fix?
If I put the component in "Auto" mode and only view the cameras by clicking on them, I do not seem to have this issue (I have just enabled it, so maybe I will leave it for a bit and let you know).