Blink camera -> RTSP bridge (for Frigate, etc.)

Great idea!
I will be interested in this project, as will probably a lot of people. For now, and until the Blink connection is working again, I have just been storing the videos and did some tests with LLMVision, but I lack the time to play with it more.
I will keep an eye out for when you can share it.

OK. First, I have a shell command that creates date-based folders (YYYY/MM/DD) inside my media folder, and a group containing the person entities for whom notifications should be generated.

shell_command:
  create_camera_folder: mkdir -p "/media/cameras/videos/{{ now().strftime('%Y/%m/%d') }}"
group:
  camera_notifications:
    name: Camera Notifications
    entities:
      - person.me
      - person.somebody

Then I have an automation that periodically checks whether the Blink integration has found any new motion event videos:

alias: Check Blink Cameras
triggers:
  - hours: "*"
    minutes: /5
    trigger: time_pattern
conditions: []
actions:
  - action: shell_command.create_camera_folder
    metadata: {}
    data: {}
    response_variable: create_camera_folder_response
  - action: blink.save_recent_clips
    target:
      entity_id: |-
        {{
          states.camera |
          selectattr('attributes.brand', '==', 'Blink') |
          map(attribute='entity_id') |
          list
        }}
    data:
      file_path: "{{ '/media/cameras/videos/' + now().strftime('%Y/%m/%d') }}"
mode: single

To watch for changes in that folder, I use the Folder Watcher integration, which monitors it for all new *.mp4 files.
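For reference, a minimal Folder Watcher setup might look like the sketch below. The path here matches the shell command above; depending on your Home Assistant version, Folder Watcher may instead be configured through the UI, and the folder must be whitelisted either way.

```yaml
# configuration.yaml — a minimal sketch, assuming the same path as the
# shell command above. The folder must be in allowlist_external_dirs.
homeassistant:
  allowlist_external_dirs:
    - /media/cameras/videos

folder_watcher:
  - folder: /media/cameras/videos
    patterns:
      - "*.mp4"
```

This is what produces the event entity (here, event.security_camera_folder_watcher) that the big automation below triggers on.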

Now, here’s the complicated one. This is the automation that receives the video-file-created events, sends the videos to the AI, gets the descriptions back, and sends the notifications. There are still probably lots of areas that could be cleaned up here, but it’s working (when the integration works), so I’m not going to mess with it unless something breaks 🙂

alias: Motion Notification with AI Description
description: >-
  Sends a notification with the latest video and AI description when motion is
  detected from a camera.
triggers:
  - id: new_motion
    trigger: state
    entity_id:
      - event.security_camera_folder_watcher
    attribute: event_type
    to: closed
conditions: []
actions:
  - variables:
      response: ""
      video: ""
      video_file_path: |-
        {% if trigger is defined and
          trigger.to_state is defined -%}
          {{ trigger.to_state.attributes.path }}
        {% else -%}
          {{ state_attr('event.security_camera_folder_watcher', 'path') }}
        {% endif -%}
      video_file_name: "{{ video_file_path.split('/') | last }}"
      serial: "{{ video_file_name[16:-4] | upper }}"
      camera_entity_id: |-
        {% set cams = states.camera
          | selectattr('attributes.brand', '==', 'Blink')
          | list -%}
        {% set cam_names = cams
          | map(attribute='attributes.friendly_name')
          | map('upper')
          | map('replace', 'CAMERA', '')
          | map('replace', '_', '')
          | map('replace', ' ', '')
          | list -%}
        {% set i = cam_names.index(serial) if serial in cam_names else -1 -%}
        {{ cams[i].entity_id if i >= 0 else '' }}
      camera_name: |-
        {% set tmp = state_attr(camera_entity_id, 'friendly_name')
          | default('Unknown camera', true) -%}
        {{ iif((tmp | lower).endswith(' camera'),
          tmp[:-7], tmp) }}
      datetime: |-
        {{
          video_file_name[:4] + '-' + 
          video_file_name[4:6] + '-' + 
          video_file_name[6:8] + '-' +
          video_file_name[9:11] + '-' +
          video_file_name[11:13] + '-' +
          video_file_name[13:15]
        }}
      footage_time: "{{ strptime(datetime, '%Y-%m-%d-%H-%M-%S').strftime('%b %-d, %-H:%M') }}"
      title: "{{ camera_name + ': ' + footage_time }}"
      notifiable_people: |-
        {{
          expand('group.camera_notifications')
            | map(attribute='attributes.id')
            | map('lower')
            | list
        }}

  - alias: Use LLMVision
    sequence:
      - action: llmvision.video_analyzer
        metadata: {}
        data:
          provider: <Your AI provider ID>
          include_filename: true
          max_frames: 5
          max_tokens: 1000
          temperature: 0.05
          generate_title: true
          expose_images: true
          remember: false
          video_file: "{{ video_file_path }}"
          message: >-
            You are the chief of security overseeing a group of guards watching
            over a single-family home. The guards have several vantage points
            from which they can observe events in the front yard, driveway,
            doorstep, and back yard. Your job is to inform the family of events
            that might impact the security of their home, events that might
            require the attention of one of the residents, or events that might
            be otherwise interesting. 


            Ignore inconsequential things like the sun moving or reflecting off
            the clouds. Do not make assumptions about the gender of any people,
            but accurately describe their appearance. Structure the description
            of events as a single flowing narrative. Do not include introductory
            text. Do not tell me what frame you are analyzing, just talk about
            what is happening.


            Avoid speculation.


            Your response should be a JSON object containing the following
            fields:
              importance: the judged importance 
                of the event. If something happens that
                requires someone's attention, use 'high'. 
                If something otherwise interesting happens,
                use 'low'. Otherwise use 'not_important'.
              reason: the methodology used to determine
                the importance of an event. Why did you deem
                it important or not?
              description: the description of the scene.
                Try to keep this to two sentences, unless
                that is inadequate to describe the 
                situation.
              detail: a more detailed description of what
                has happened. Be as wordy as necessary to
                fully capture the moment.
        response_variable: llmvision
      - alias: Restructure response
        variables:
          ai_response:
            error: "{{ llmvision.error }}"
            data: "{{ llmvision.response_text[7:-3] }}" # strip the Markdown code fence around the JSON
            title: "{{ llmvision.title }}"
            thumbnail: "{{ llmvision.key_frame }}"
  - alias: LLM action error or success?
    choose:
      - conditions:
          - condition: template
            value_template: "{{ ai_response is not defined }}"
            alias: AI_response is not defined
        sequence:
          - alias: Define 'ai_response' variable if none returned
            variables:
              video: "{{ video_file_path }}"
              ai_response:
                conversation_id: ""
                data:
                  description: AI analysis did not provide a response
                  detail: ""
                  reason: n/a
                  importance: high
        alias: There was no AI Response
      - conditions:
          - alias: AI analysis encountered errors
            condition: template
            value_template: |-
              {{ ai_response is not defined or 
                (ai_response.error | length > 0) or 
                (ai_response.data | length == 0) }}
        sequence:
          - alias: Define 'ai_response' variable if error returned
            variables:
              video: "{{ video_file_path }}"
              ai_response:
                conversation_id: ""
                data:
                  description: AI analysis encountered errors
                  detail: ""
                  reason: n/a
                  importance: high
      - conditions:
          - condition: template
            value_template: "{{ ai_response is defined and (ai_response.error | length == 0) }}"
        sequence:
          - alias: Stop here if the AI thinks the event isn't important enough
            if:
              - alias: If the AI has deemed the event 'not_important'
                condition: template
                value_template: "{{ ai_response.data.importance == 'not_important' }}"
            then:
              - stop: Not important
        alias: AI analysis has completed successfully

  - action: llmvision.remember
    metadata: {}
    data:
      title: "{{ ai_response.title }}"
      image_path: "{{ ai_response.thumbnail.replace('/media/', '/local/') }}"
      camera_entity: "{{ camera_entity_id }}"
      start_time: >-
        {{ strptime(datetime, '%Y-%m-%d-%H-%M-%S').strftime('%Y-%m-%d %H:%M:%S') }}
      summary: "{{ ai_response.data.detail }}"
  - alias: Notify people
    repeat:
      for_each: "{{ notifiable_people }}"
      sequence:
        - alias: Send the regular motion notification
          action: notify.{{ repeat.item }}
          data:
            title: "{{ title }}"
            message: "{{ ai_response.data.description }}"
            data:
              importance: "{{ ai_response.data.importance }}"
              image: "{{ ai_response.thumbnail }}"
              video: "{{ video }}"
              priority: high
              ttl: 0
              group: motion-detected
              channel: Motion Detection
              actions:
                - action: URI
                  title: View timeline
                  uri: /lovelace-people/timeline
mode: queued
max: 10
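For anyone adapting this: the slicing in the variables block assumes Blink saves clips as <YYYYMMDD>_<HHMMSS>_<camera name>.mp4 (that is how mine are named; verify against your own files). For a hypothetical clip, the variables resolve like this:

```yaml
# Hypothetical clip name, for illustration only:
video_file_path: /media/cameras/videos/2025/01/31/20250131_142233_FRONTDOOR.mp4
video_file_name: 20250131_142233_FRONTDOOR.mp4
serial: FRONTDOOR               # video_file_name[16:-4] | upper
datetime: 2025-01-31-14-22-33   # rebuilt from the date and time slices
footage_time: Jan 31, 14:22     # strptime + strftime('%b %-d, %-H:%M')
```

The serial is then matched against the Blink camera friendly names (upper-cased, with “CAMERA”, underscores, and spaces stripped) to recover the camera entity.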