Thanks, I modified the camera.py and now I have the entities show up!
Added a picture element card and can view the last motion. What card do I need to add to get live streaming? Also, I found a bunch of WARNINGs like the one below; I searched this thread and a few people report it, but I didn't find how to solve it.
WARNING (MainThread) [eufy_security.camera] Unable to process parameter "1056", value "1"
Please do me/us a favour and read this thread once from the beginning, especially the start, as back then it was more or less only this integration we are currently talking about. You have already seen that the answers are there, as with the example of the loop.
The warnings and some errors simply show up, and that's it.
You have to define scripts to start and stop the stream, and some of us (me included) have to refresh the view.
So if you want, you can have scripts like this:
kamera_garten_an:
  alias: Kamera Garten an
  sequence:
    - service: camera.turn_on
      data: {}
      entity_id: camera.garten
  mode: single
  icon: mdi:video
kamera_garten_aus:
  alias: Kamera Garten aus
  sequence:
    - service: camera.turn_off
      data: {}
      entity_id: camera.garten
  mode: single
  icon: mdi:video-off
If you click on this card, you will see the last motion picture enlarged. If you start the stream (and perhaps reload the view: pull to refresh, F5, …) and click again, you will have the stream. Most probably it deactivates after some time, or you use the stop script to save the battery afterwards.
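For the card itself, a standard picture-entity card pointed at the camera entity is usually enough; setting camera_view to live tells it to show the stream instead of the last still. This is only a sketch: the entity name below is taken from the example scripts above, and the rest of the options are assumptions about a typical setup.

```yaml
# Lovelace picture-entity card (sketch; camera.garten assumed from the scripts above)
type: picture-entity
entity: camera.garten
camera_view: live   # show the live stream instead of the last snapshot
show_state: false
show_name: false
```

Tapping the card then opens the more-info dialog with the stream, once the start script has been run.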
But again, please read this thread. Everything I have described here has been described more than once before.
ioBroker for REST commands (changing the state of Eufy cams)
ha-eufy-security for streams, but the add-on has become outdated from what I understood (it still works for me)
The latest one from bropat would, I think, let you do everything like in the Eufy application, but it is a work in progress…
That's just what I understood, because I'm quite a newbie with my Eufy cams.
Yeah, for the moment there are 3 possibilities:
the ioBroker like you mentioned
ha-eufy-security, which is old
and the add-on from MaxW
But I think there's a 4th one in the making, using a websocket… (that's why I asked for an explanation, to be sure I'm not misunderstanding)
The websocket method will be based on the same library which bropat uses…
Then an official add-on/integration from Home Assistant could come, I think, which will make the other 3 possibilities unnecessary (again, if I'm correct)
Wait for bropat on the "4th one"; it's experimental/in development. It should be the ultimate solution, but we should wait for him to release it on his own schedule, if he's going to.
Hi. I'm about to buy a new doorbell and am thinking of getting the Eufy 2K battery one. Before I do, can anyone comment on how well this integrates with HA? Any recommendations on which of the integration options is best (ioBroker or the MQTT bridge)? Is it responsive, etc.?
Thanks!
"Well" is relative. There's a good client library out there now for controlling the devices, but there's no good Python implementation. It's being worked on, so for now the options are, as you say, ioBroker or the MQTT bridge. Both are pretty responsive IMO, but they are limited by the devices being in the cloud.
is RTSP finally available on the Eufy doorbell then?
I like the resolution of the eufy, but no RTSP would be a deal breaker for me.
I want to replace my faulty Hikvision DS-KB6403-WIP Doorbell and looking for a camera that’s wired, with RTSP and a way to get the doorbell press event into HA, local storage and no subscription.
The other camera I’ve got my eyes on is potentially the Ezviz DB1.
Thoughts or advice?
I’m in the UK with a budget of up to £200
Thanks, I do like myself a bit of a unicorn for lunch
I managed pretty much everything with the Hikvision (wired, at least for power; RTSP; local storage; I capture the doorbell button-press event via Tasker on a spare tablet).
The issue with that doorbell is that its design renders it useless at night (the IR LEDs reflect on the outer plastic lens), and when it rains it's blurry as hell.
I’m also getting too many wifi disconnections.
The Ezviz DB1 would at least provide a proper PIR for motion to better trigger person detection via TensorFlow, but I do like the eufy’s built-in person detection.
How quickly does the last motion snapshot get generated? I guess I could live with this as a drawback if I can at least catch the button press event relatively quickly…
To HASS? Pretty quick. Couple of seconds. Haven’t really ever timed it.
Edit: actually, I take that back. I think what it does is wait until after the motion event is over and THEN generate the snapshot. I may be wrong.