Camera with rtmp stream

Hi, can someone shed some light on this…

I’ve got a local stream server that is outputting video via the RTMP protocol. The video is being generated using ffmpeg with this parameter:

-vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:text='%{localtime\:%X}': fontcolor=white@1: x=20: y=20"
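For context, that parameter would sit inside a full ffmpeg invocation along these lines (a sketch only — the input URL and encoder settings here are assumptions; the output address is the one that appears in the config later in this thread):

```shell
# Hypothetical full command: read a source stream, burn in the local time,
# re-encode with x264, and push to the local RTMP server.
ffmpeg -i rtsp://camera.local/stream \
  -vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:text='%{localtime\:%X}':fontcolor=white@1:x=20:y=20" \
  -c:v libx264 -preset veryfast -an \
  -f flv rtmp://192.168.1.49:1935/live/test2
```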

I can see the time using my VLC player BUT not inside the HA lovelace interface:
https://monosnap.com/file/eNLOMlnW9WEIVhrLJnnGSBk23UQoFj

I’m so puzzled as to why that is.

P.S. I’m using Hassbian and not Hass.io.

Without seeing the relevant YAML config, we won’t really be able to tell you why.

Nothing out of the ordinary:

camera:
  - platform: generic
    name: wtf
    still_image_url: https://nexusapi-us1.camera.home.nest.com/get_image?uuid=........&width=1600&public=.......
    stream_source: "rtmp://192.168.1.49:1935/live/test2"
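One way to narrow this down would be to probe the stream from the machine running HA, independent of any HA config (a sketch; ffprobe ships with ffmpeg, and the URL is the one from the config above):

```shell
# Show the streams ffmpeg sees at this address. If this fails or shows an
# unexpected codec, the problem is between the server and this host, not HA.
ffprobe -v error -show_streams rtmp://192.168.1.49:1935/live/test2
```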

It’s a problem with the RTMP stream then. Where exactly did you add this -vf parameter you’re talking about?

The -vf is part of the ffmpeg command that’s running on a different server. BUT, like I said, VLC is reading the exact same stream and is showing the time. I opened the two side by side (you can see both of them in the image link): VLC has the time and the HA one does not.


I’m struggling to get an ffmpeg shell_command with a drawtext overlay working too. Did you ever figure this out?

In your instance, I wonder if HA can’t access the location of that font file? Although in my case I’ve tried moving the font files to accessible locations and still no cigar.

I also understand that drawtext is sensitive to characters like :, , and ", so you could try tinkering with that?
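On the escaping point: the filter string passes through two layers of parsing, the shell and ffmpeg’s filter parser, so a quick sanity check is to build the string in the shell and print exactly what ffmpeg would receive (a sketch using the font path from earlier in the thread):

```shell
# The colon inside %{localtime:...} must be escaped as \: so ffmpeg's
# filter parser does not treat it as an option separator. Printing the
# variable shows the string after the shell's own quoting is resolved.
filter="drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:text='%{localtime\:%X}':fontcolor=white@1:x=20:y=20"
printf '%s\n' "$filter"
```

If the printed string no longer contains the backslash before the colon, the shell quoting ate it before ffmpeg ever saw it.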

This is the closest thread I could find on this topic. I’m experiencing something similar, but not quite the same.

I’m overlaying the date and time, which works fine when viewing the camera in Lovelace (a picture glance card which updates every 10s). It also works when taking a snapshot. However, when viewing the live stream or recording a video, the overlaid text isn’t present. This seems odd, because I can tell the other filter is applied in all cases: this is an IR-sensitive camera, so the “true” colour has a purple hue to it, hence the hue=s=0 filter. That matters, because I thought maybe the record service was using the raw stream, but that doesn’t seem to be the case.

camera:
  - platform: ffmpeg
    name: security_camera
    input: -rtsp_transport tcp -i rtsp://securitypi.local:8554/unicast
    extra_arguments: >-
      -vf "hue=s=0, drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf: text='%{localtime\:%Y-%m-%d %T}': fontcolor=white@0.8: x=10: y=10"

script:
  security_camera_record_clip:
    mode: queued
    sequence:
      - service: camera.record
        data:
          entity_id: camera.security_camera
          filename: '/tmp/camera.security_camera_{{ now().strftime("%Y%m%d-%H%M%S") }}.mp4'
          duration: 10
  security_camera_create_snapshot:
    mode: queued
    sequence:
      - service: camera.snapshot
        data:
          entity_id: camera.security_camera
          filename: '/tmp/camera.security_camera_{{ now().strftime("%Y%m%d-%H%M%S") }}.jpg'
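One way to rule HA in or out would be to run the same filter chain directly with ffmpeg against the camera and inspect the result (a sketch; the fontcolor value here is an assumption, and the output path is arbitrary):

```shell
# Pull 10 seconds from the camera, apply the same filters the HA config
# passes, and write a clip. If the timestamp shows up in this file, the
# filter string itself is fine and the issue is in how HA applies it.
ffmpeg -rtsp_transport tcp -i rtsp://securitypi.local:8554/unicast \
  -vf "hue=s=0, drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf: text='%{localtime\:%Y-%m-%d %T}': fontcolor=white@0.8: x=10: y=10" \
  -t 10 /tmp/filter_test.mp4
```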

EDIT: OK, one mystery solved. I removed the hue filter from this config and saw it’s still a greyscale stream, and then I remembered that I’ve actually set this up to apply the filter on the device itself (a Raspberry Pi) in the v4l2rtspserver.service config via v4l2-ctl. That doesn’t change the fact that drawtext is only applied on snapshots, but not otherwise.
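For anyone reading along, applying the desaturation on the device rather than in ffmpeg looks roughly like this (a sketch; the device node is an assumption, and not every driver exposes a saturation control, so list the controls first):

```shell
# Show the controls the camera driver actually exposes,
# then zero out saturation so the stream is greyscale at the source.
v4l2-ctl -d /dev/video0 --list-ctrls
v4l2-ctl -d /dev/video0 --set-ctrl=saturation=0
```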