Hi @Colecash122, images are certainly possible, but you would first need to trigger the camera to take an image and then, once taken, fetch it. There are a couple of other things to consider:
I haven’t tested this myself, but requesting an image is already possible with the proper Home Assistant component.
Requesting an image takes a while (maybe 10+ seconds). If your setup were similar to mine, this process would only start when the person pressed the doorbell (rather than as soon as motion is detected, as it does in my setup).
Another option that might be worth a try is to extract an image from the video that’s taken. Have a look at this thread.
Hi @Dullage, it’s been a while… I am glad you found a camera that doesn’t run out of juice whilst connected to Wi-Fi, after our discussion about your previously attempted wireless doorbell.
I was wondering, with the Blink camera and your Python script, whether I could use it to provide a live feed to my phone as soon as the bell is pressed (or, in my case, motion is detected), as well as an image or video recording?
Hey @bachoo786, unfortunately I don’t think we’ll be able to get the live stream in Home Assistant (or in the notification). You can use the Blink app to view a live feed at any time though.
I can imagine that if there was a way to get the live stream MattTW would have spotted it and put it in his repo. I might have a quick look though just out of curiosity.
So if you capture the HTTP requests from the app and then choose to view the live stream, you can see a request going out and then returning with an rtsps:// address for the live stream.
I’m not sure how to test viewing this stream outside of the app though (let alone how we could use it in Home Assistant). I very quickly tried loading it in VLC but it didn’t like it; perhaps that’s because it’s RTSP over SSL (RTSPS), or maybe there’s more to the stream URL that is added by the app?
It needs a bit of investigation; I might have a go, but I don’t really have the time any day soon.
Still working on image pulling, but on your Blinks, does the sensor showing whether an individual camera is armed work? On mine it shows “on” even when the camera has its motion detection toggled off.
I just opened a PR to overhaul the Blink component which may make this integration easier (ultimately, I’d like to have a live stream as well, but have failed to come up with a solution).
Essentially, I have a wrapper service that downloads the most recently recorded Blink clip to a local file. So the process here would be (I think):
Camera detects motion
When the clip is done recording, call the blink.blink_refresh service
The camera clip info has now been updated, so call the blink.save_video service with the camera name and file location as the service payload data
Do what you’re currently doing for notifications
I think this is a really cool application of the camera!
Hey @fronzbot, great to hear about the PR, I’ll be giving that a go!
I’d love to get the logic into HA (rather than running an external script). Your proposed steps sound great; the only thing I’m not sure about is knowing when the video has finished recording and is available for download. Is there a way to get this info with the component?
Right now there’s no way to know it’s finished recording, but I think your approach of assuming that motion was detected ~5s before the button was pressed still works fine. Here you would just call blink.blink_refresh, check if sensor.blink_<camera_name>_motion_detected is ‘True’, and if so call the blink.save_video service. You’ll still need either your existing python_script or a regular yaml-based script to do this, but all of the http request functionality should be built-in now.
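For anyone wanting to drive that refresh → check → save flow from outside HA, it could also be sketched against Home Assistant’s REST API. This is untested and the base URL, token, camera name and save path are all placeholders; pass in a `requests.Session` (or anything with matching `get`/`post` methods):

```python
# Untested sketch of the flow above via Home Assistant's REST API.
# base_url, token, camera and filename are placeholders for your setup.

def save_clip_if_motion(session, base_url, token, camera, filename):
    """Call blink.blink_refresh, then blink.save_video if motion was seen.

    `session` is anything with requests-style get()/post() methods.
    """
    headers = {"Authorization": f"Bearer {token}"}

    # 1. Ask the Blink component to re-poll the Blink servers.
    session.post(f"{base_url}/api/services/blink/blink_refresh",
                 headers=headers)

    # 2. Read the per-camera motion sensor.
    state = session.get(
        f"{base_url}/api/states/sensor.blink_{camera}_motion_detected",
        headers=headers,
    ).json()

    # 3. If motion was detected, save the latest clip to a local file.
    if state.get("state") == "True":
        session.post(
            f"{base_url}/api/services/blink/save_video",
            headers=headers,
            json={"name": camera, "filename": filename},
        )
        return True
    return False
```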
- `-y` = Overwrite any existing file.
- `-ss 00:00:04` = Start at the 4th second of the video (in my setup the visitor is normally well in frame here).
- `-i BlinkVideo.mp4` = The input video.
- `-vf 'crop=720:720:280:0'` = Crop the image to 720px × 720px, starting at 280 pixels from the left and 0 pixels from the top. This leaves me with a square image that I then display in the HA Dashboard.
- `-vframes 1` = Only grab 1 frame for saving.
- `-vcodec png` = Save it as a PNG.
- `BlinkImage.png` = The output image file.
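Put together (using the example file names from the flags above), the full command could be assembled with a small Python wrapper around ffmpeg, sketched here:

```python
# Sketch: the ffmpeg options above assembled into one command.
# File names are the examples from the flag list; adjust to your setup.
import subprocess

def build_grab_frame_cmd(video="BlinkVideo.mp4", image="BlinkImage.png"):
    return [
        "ffmpeg",
        "-y",                         # overwrite any existing output file
        "-ss", "00:00:04",            # seek to the 4th second of the clip
        "-i", video,                  # input video
        "-vf", "crop=720:720:280:0",  # 720x720 crop, 280px from the left
        "-vframes", "1",              # grab a single frame
        "-vcodec", "png",             # save it as a PNG
        image,                        # output image file
    ]

# To actually run it (requires ffmpeg on the PATH):
# subprocess.run(build_grab_frame_cmd(), check=True)
```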
Hope this helps.
@fronzbot - Not sure if something similar is possible in a future component improvement? It would save having to make the extra call to update the image.
Yeah, I definitely want the ability to show a frame from the most recently recorded clip so this is great. It’ll have to be in a future PR, but that’s ok. I’d like to get this rolled into the actual blinkpy API then it’s just a matter of bumping the version used in home-assistant.
Hi @Dullage, I really like the approach with the Python script and querying Blink for the latest list of videos. I tried to check how you call the script in your configuration.yaml file (or anywhere else), but I am not sure how you have made this all work. Do you call the Python script from the configuration.yaml file? Any hints you could give would be great. I am just trying to get the latest videos to show up in a picture-glance card.
Many thanks, @Dullage, this really helps. I tried to make it work last night, but I ran into a couple of issues. The new script you kindly provided has a couple of additional “f” letters in it that needed removing, e.g. on lines 63/64: `responseData = get(f"https://rest-prde.immedia-semi.com/api/v2/videos/changed?since={since}&page=1", headers={"Host": "rest-prde.immedia-semi.com", "TOKEN-AUTH": token}).json()`. That was easily resolved, but there was one issue I couldn’t fix:
When running the script I get the following error:
"
Traceback (most recent call last):
File “blink.py”, line 68, in
videos = responseData[“videos”]
KeyError: ‘videos’
"
Any ideas?
The “f” letters are a new thing in Python 3.6 (I forget they’re so recent); you are likely running the script on an older version of Python, and that is probably what is causing the other issue too.
They’re easy to change back to the old style though:
Python 3.6+ > `f"My first value is {value1}, my second value is {value2}"`
All Python versions (I think) > `"My first value is {0}, my second value is {1}".format(value1, value2)`
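To show the two styles are equivalent (with made-up example values):

```python
# Both styles produce the same string; f-strings just need Python 3.6+.
value1, value2 = "A", "B"

new_style = f"My first value is {value1}, my second value is {value2}"
old_style = "My first value is {0}, my second value is {1}".format(value1, value2)

assert new_style == old_style  # both: "My first value is A, my second value is B"
```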