Blink camera -> RTSP bridge (for Frigate, etc.)

Have a working PoC of this so I thought I’d share: GitHub - roger-/blinkbridge: Blink camera -> RTSP bridge

It’s a Docker-ized app to publish Blink motion clips to RTSP. There’s currently a ~30 s delay and the video will be static until motion is detected, but it’ll let you use Frigate, etc. with your Blink cameras.

Eventually I’ll try to integrate an ONVIF server with motion events so it can be used with Scrypted for HSV support.


Hi, thanks for sharing your work. I wondered if this only works with Blink cloud subscriptions? I seem to be getting a lot of errors and can only think that it is because I don’t have a cloud subscription.

Hi, I’ve only tested it with cloud storage but anything that BlinkPy supports should work.

BlinkPy does indeed support local storage. I still seem to be seeing issues. I can authenticate easily enough when running blinkbridge; my creds file generates successfully. My cameras seem to be identified, but after that there seems to be an issue with no clips found, followed by a whole lot of red exceptions. I wondered if the timezone could be an issue in the blink.py script — any ideas? Happy to load my full logs in a GitHub issue if you are open to that.

There might be an issue related to local storage after all since I hack around some BlinkPy issues. I’d be happy to investigate – please share logs (GitHub would be better).

Hello, I’d be interested in trying this, but I didn’t understand how to install everything on my HA configuration. I have the Supervised version on Debian.

Hi,

You need to have a working Docker installation first. Do you have that already?

No, I dedicated a mini PC to a Home Assistant OS installation.

Sorry, you’ll need a way to run Docker, or at least a Linux shell if you want to install the scripts manually.

Thanks, yes, I have access to a Linux shell.

You can manually install ffmpeg and use pip to install the packages listed in the Dockerfile. Then download the repo and run mediamtx and the main script with Python.

I’m trying but when I start the container it gives me this error:

Attaching to mediamtx, blinkbridge-1
mediamtx       | 2025/01/23 19:31:48 INF MediaMTX v1.11.1
mediamtx       | 2025/01/23 19:31:48 INF configuration loaded from /mediamtx.yml
mediamtx       | 2025/01/23 19:31:48 INF [RTSP] listener opened on :8554 (TCP), :8000 (UDP/RTP), :8001 (UDP/RTCP)
mediamtx       | 2025/01/23 19:31:48 INF [RTMP] listener opened on :1935
mediamtx       | 2025/01/23 19:31:48 INF [HLS] listener opened on :8888
mediamtx       | 2025/01/23 19:31:48 INF [WebRTC] listener opened on :8889 (HTTP), :8189 (ICE/UDP)
mediamtx       | 2025/01/23 19:31:48 INF [SRT] listener opened on :8890 (UDP)
blinkbridge-1  | Traceback (most recent call last):
blinkbridge-1  |   File "<frozen runpy>", line 198, in _run_module_as_main
blinkbridge-1  |   File "<frozen runpy>", line 88, in _run_code
blinkbridge-1  |   File "/app/blinkbridge/main.py", line 9, in <module>
blinkbridge-1  |     from blinkbridge.stream_server import StreamServer
blinkbridge-1  |   File "/app/blinkbridge/stream_server.py", line 8, in <module>
blinkbridge-1  |     from blinkbridge.config import *
blinkbridge-1  |   File "/app/blinkbridge/config.py", line 38, in <module>
blinkbridge-1  |     load_config_file(config_file)
blinkbridge-1  |     ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
blinkbridge-1  |   File "/app/blinkbridge/config.py", line 27, in load_config_file
blinkbridge-1  |     with open(file_name) as f:
blinkbridge-1  |          ~~~~^^^^^^^^^^^
blinkbridge-1  | FileNotFoundError: [Errno 2] No such file or directory: '/mnt/data/supervisor/share/blinkbridge/config/config.json'
blinkbridge-1 exited with code 1

It can’t find the .json file. How come? The file is located in that folder.

I assume you’re using Docker now? Make sure you copied the config json file to the right location.

I installed docker-compose and built the Dockerfile, then edited the compose.yaml file to change the file paths and started it with docker-compose. But I think HA OS blocks reading the file. What can I check?

Are you sure you had the config file in the same location specified in compose.yaml?

I’m also not sure if running external Docker containers is supported under HA OS.
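For reference, the host-side path in the compose volume mapping must actually contain config.json. A rough illustration — the service name and paths below are placeholders, not the repo’s real ones, so check the actual compose.yaml:

```yaml
# Illustrative fragment only; service name and paths are placeholders.
services:
  blinkbridge:
    volumes:
      # Left side: host folder that must contain config.json
      # Right side: where the container expects to find it
      - /share/blinkbridge/config:/app/blinkbridge/config
```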

Could this work with a folder hierarchy of video files rather than with Blink directly? I’m already using an automation to download my Blink camera motion-detection video clips to a "$/Media/Cameras/YYYY/MM/DD" folder structure.

Not as-is, but it wouldn’t be too hard to modify the code for that use case.
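For anyone exploring that modification, a minimal sketch of polling a date-structured folder tree for new clips might look like this (the function name, `.mp4` extension, and the `YYYY/MM/DD` glob depth are assumptions based on the layout described above):

```python
from pathlib import Path

def find_new_clips(root: str, seen: set[str]) -> list[Path]:
    """Scan a Cameras/YYYY/MM/DD tree and return clips not seen before.

    `seen` is updated in place, so repeated calls only return new files.
    """
    new = []
    # Three directory levels (YYYY/MM/DD) followed by the clip file.
    for clip in sorted(Path(root).glob("*/*/*/*.mp4")):
        if str(clip) not in seen:
            seen.add(str(clip))
            new.append(clip)
    return new
```

Calling this on a timer (or from a folder watcher) would give you the new clips to feed into the rest of the pipeline.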

I ended up just using the individual videos with the “LLM Vision” integration. It’s working great!

Can you share exactly what you did? I’m quite interested.

Currently, as I’m sure you know, the Blink integration is broken because they’ve changed their authentication system. Hopefully this will work again once the integration is fixed.

I’ll have to edit my automation for public consumption. I wrote it to take advantage of some quirks of the naming convention I have for my entities so that they match up with associated entities. There are some disabled testing steps I was trying out, etc. Basically though here’s how it goes.

  • The Blink integration has an automation that runs every 5 minutes (a time_pattern trigger with minutes: "/5") to check whether there are any new motion videos (there may be one or more), and downloads them into a folder. (Interestingly, the Blink integration seems to know when new motion events are available, so I don’t think polling in the automation should be strictly necessary, but for now, it is what I had to do.)
  • I have a folder watcher to monitor that folder and fire an event for each new file that is written into the folder.
  • The automation that uses LLMVision is triggered by these events in queued mode so that I don’t miss any. It takes the video and chops up the path, because the Blink integration writes clips with a predetermined file name like 20251010_174314_FrontDoor.mp4 (i.e., YYYYMMdd_HHmmss_BlinkCameraName.mp4).
    • I made sure to name my camera entities to match the names of the cameras I defined in the Blink app, so that I can map the camera names from the filenames to my entities.
    • I send that video file to LLMVision, which does some magic with ffmpeg under the covers and sends only a series of images to the AI for analysis. This step actually saves a bit of money on the AI compared to sending the video directly.
    • Most importantly, since Blink’s settings require either all cameras or none to be armed for motion notifications, I ask the AI to tell me whether it thinks I would want to be notified about this event. If it says ‘no’, the automation stops. I can still go and watch the video if I want, but this filters out the majority of videos, which contain nothing but random cars driving on the street or sunbeams directly hitting the camera.
    • LLMVision also gives me back an AI description of the frames that it was sent, and I package that up into a mobile app notification for my wife and me.
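The path-chopping step above can be sketched roughly like this — the helper name is mine, not part of any integration, and it just follows the YYYYMMdd_HHmmss_BlinkCameraName.mp4 pattern described earlier:

```python
from datetime import datetime
from pathlib import Path

def parse_blink_filename(path: str) -> tuple[datetime, str]:
    """Split a clip name like 20251010_174314_FrontDoor.mp4 into
    (timestamp, camera name)."""
    stem = Path(path).stem  # e.g. "20251010_174314_FrontDoor"
    # maxsplit=2 keeps any underscores inside the camera name intact
    date_part, time_part, camera = stem.split("_", 2)
    when = datetime.strptime(date_part + time_part, "%Y%m%d%H%M%S")
    return when, camera
```

With camera entities named to match the Blink app (as noted above), the returned camera string can then be mapped straight to an entity ID.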

The notifications I get from this automation are 1000x more useful than the ones Blink gives me (without a subscription, that is), and using Google Gemini as the backing LLM, it’s only costing me about 30 cents a month. A MONTH!!! I consider that a huge win.

When AI Tasks became available, I tried using them directly, but sending the video without having LLMVision extract the images first turns out to be more expensive — in my experience, about 10x more expensive. That’s still only $3 or $4 per month, but why bother when it doesn’t get you anything appreciably better?

I’ll try and get my automation YAML cleaned up a bit before I post it here.