Local realtime person detection for RTSP cameras

frigate.app                    ERROR   : Error parsing config: extra keys not allowed @ data['cameras']['back']['ffmpeg']['input']

I'm testing the Frigate NVR beta using my non-beta configuration. Is there a new camera configuration format in the beta? This is my config for the non-beta version:

cameras:
  back:
    ffmpeg:
      input: 'rtsp://user:[email protected]:554/Streaming/Channels/102'
    objects:
      track:
        - person
        - cat
      filters:
        person:
          min_area: 1000
          max_area: 100000
          min_score: 0.4
          threshold: 0.7
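
For reference, the 0.8.0 beta moves camera inputs under an `inputs` list with per-stream roles. A sketch of what the equivalent config might look like in the new format (stream URL carried over from above; the role assignment is an assumption, so check the beta docs before using this):

```yaml
cameras:
  back:
    ffmpeg:
      inputs:
        # each input now gets a path plus one or more roles
        - path: 'rtsp://user:[email protected]:554/Streaming/Channels/102'
          roles:
            - detect
    objects:
      track:
        - person
        - cat
      filters:
        person:
          min_area: 1000
          max_area: 100000
          min_score: 0.4
          threshold: 0.7
```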

@kucau yes it is a very different config, I recommend carefully reading through the new documentation and going line by line. As a fairly new user to even the old config, there were a few things that tripped me up.

You can clear the global_args we added for that input earlier and those messages will go away. All ffmpeg output shows up at ERROR severity.


Thank you. I will redo the whole config and read it line by line.

Good Lord - I just read through the 0.8.0 readme. This is amazing. I can't wait to dive in.


Awesome, this is looking more and more beautiful. Now I can finally dump Blue Iris, which surprisingly has been creeping higher and higher in CPU/GPU usage over the years.

Have a few more questions if you don't mind:

Should I be seeing more mqtt topics? I see motion/available = online (I renamed 'frigate' to 'motion'). Nothing else at the moment. Maybe when there is motion?

Does bitrate affect anything? My substream for detect is currently at CBR 512kbps, but what about VBR or a higher bitrate? Will that cause more overhead?

Right now I have the date/time/camera name being displayed by my cameras. In other software, having it do a text overlay causes considerable overhead because it has to re-encode. Does Frigate not suffer from that problem? If not, can I put in a feature request for being able to add the camera name, set the position of that text, and also customize the date/time string? Right now my camera shows AM/PM but also the day of the week, which I like: '2020-12-06 09:39PM Sun'.
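
For what it's worth, if a customizable timestamp were ever added, a strftime-style pattern could express that exact string. A quick Python check of the pattern (the format string is my guess at the layout, not an existing Frigate option):

```python
from datetime import datetime

# '2020-12-06 09:39PM Sun' as a strftime pattern (illustrative only)
fmt = "%Y-%m-%d %I:%M%p %a"
print(datetime(2020, 12, 6, 21, 39).strftime(fmt))  # 2020-12-06 09:39PM Sun
```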

On a side note, right now with 6 cameras on an Intel i7, Frigate is using ~50% CPU on a single core for motion detection, clips and rtmp. I haven't even set up my masks yet. The best I was ever able to get with Blue Iris was 65% on 2 cores (130% CPU). And this allows me to free up my Intel 915 for Plex transcoding!

Thank you.
Container is running now.

Logs say it is running on http://0.0.0.0:5000/ but when I try to open localhost:5000 in the browser nothing loads.

But under the container list, nothing shows under published ports.
Frigate's mqtt reports online, though.

Frigate is efficient because it writes the recordings directly from the camera stream without modifying them. Any modification introduces significant overhead because the video has to be decoded and re-encoded. You could modify the ffmpeg parameters for the record stream to alter the video using ffmpeg features, but that will introduce the same overhead.
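
To illustrate the difference, the record output args rely on ffmpeg's stream copy (`-c copy`), which remuxes the camera's stream without ever decoding it. The flags below are an approximation of the defaults, not copied from the docs, so verify against your version before overriding anything:

```yaml
ffmpeg:
  output_args:
    # '-c copy' passes the camera stream through untouched: no decode, no re-encode.
    # Replacing it with an encoder (e.g. to burn in an overlay) forces a full transcode.
    record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an
```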

Frigate is running. Alive and healthy!

:slightly_smiling_face: :slightly_smiling_face:

Added the host-to-container port mapping 5000:5000 and redeployed.

…but when I open the camera in the browser to view it, I get the image split in the middle with two mirrored copies, and it rolls from top to bottom like a strip of film.

The new features look amazing, Blake!

Will see if I can test it out over the weekend.

Any chance you could use the multiple streams (in the future) to:

  1. analyze a low-res stream for motion detection and first round detection with a lower threshold
  2. process a frame from the higher resolution stream with a higher threshold to look for potential persons (and thereby reduce false positives)?

I can imagine mapping the area where the motion occurred from the lower resolution frame to the higher resolution frame could be a problem.
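
On that mapping concern: translating a box between resolutions is just a per-axis scale, at least when both streams share the same aspect ratio and field of view. A minimal sketch (function name and frame sizes are illustrative, not from Frigate):

```python
def scale_box(box, src_size, dst_size):
    """Map an (x1, y1, x2, y2) box from the src resolution to the dst resolution."""
    sx = dst_size[0] / src_size[0]  # horizontal scale factor
    sy = dst_size[1] / src_size[1]  # vertical scale factor
    x1, y1, x2, y2 = box
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)

# a motion box from a 640x360 substream mapped onto the 1920x1080 main stream
print(scale_box((100, 50, 200, 150), (640, 360), (1920, 1080)))  # (300.0, 150.0, 600.0, 450.0)
```

If the streams are cropped differently, a fixed offset would also be needed, which is where this gets harder.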

A humble question…

So no more binary_sensors in 0.8.0?

An easy fix is creating a template sensor based on the number of people in the new sensor.xxx. Has the binary_sensor been removed by design?

That would be possible, if necessary, after improvements to the model.

They were removed by design. I updated my automations to look for sensor values > 0.
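
For anyone doing the same, a numeric_state trigger works directly on the new sensor. A sketch (the entity name and action are examples, not from this thread):

```yaml
automation:
  - alias: "Person detected on back camera"
    trigger:
      # fires when the person count goes above 0
      - platform: numeric_state
        entity_id: sensor.back_person  # hypothetical entity name
        above: 0
    action:
      - service: notify.notify
        data:
          message: "Person detected on the back camera"
```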

Ok, thanks for the info.

I solved it this way, so that I do not have to change my automations:
I converted the old mqtt binary_sensor into a template binary_sensor.

binary_sensor:
  - platform: template
    sensors:
      cam1_framsidan_frigate_person:
        friendly_name: "Cam1 Framsidan Frigate person"
        device_class: motion
        value_template: >-
          {{ states('sensor.cam1_framsidan_person') | float > 0 }}
      cam2_baksidan_frigate_person:
        friendly_name: "Cam2 Baksidan Frigate person"
        device_class: motion
        value_template: >-
          {{ states('sensor.cam2_baksidan_person') | float > 0 }}

Been putting this off for a while, but the feedback looks good, so going to have a go.

Do you have any guidance on using a GPU? Is TensorFlow compiled at Docker build time?
I'm running on Unraid with a 1080 in the machine just for hardware acceleration and object detection
(and I've had a nightmare trying to compile FFmpeg for cuvid/CUDA, lol).

EDIT: I can surface the GPU in the Docker container with the following:
Extra Arguments: --runtime=nvidia
variable: NVIDIA_DRIVER_CAPABILITIES=all
variable: NVIDIA_VISIBLE_DEVICES={GPU_DEVICE_ID}
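
In docker-compose form, those same Unraid settings would roughly translate to the following (a sketch; the image tag is assumed and the GPU device id is left as the placeholder from above):

```yaml
services:
  frigate:
    image: blakeblackshear/frigate:stable
    runtime: nvidia   # equivalent of --runtime=nvidia
    environment:
      - NVIDIA_DRIVER_CAPABILITIES=all
      - NVIDIA_VISIBLE_DEVICES={GPU_DEVICE_ID}  # placeholder, as in the post above
```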

Finally got my feeds working.

What would the min/max size be for a large cat in the settings?
Does anyone have a good working example?

It depends on how far away the cat appears in your camera's view, so this will vary for each specific camera.
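
Since `min_area`/`max_area` are compared against the bounding box's pixel area, one way to pick values is to measure a typical detection box and compute width times height. The numbers below are made up for illustration, not recommended settings:

```python
def box_area(width: int, height: int) -> int:
    """Pixel area of a detection bounding box, the quantity min_area/max_area filter on."""
    return width * height

# e.g. a cat filling roughly 100x80 px of the detect stream
print(box_area(100, 80))  # 8000
```

So a cat that typically spans 100x80 px would need `min_area` below 8000 and `max_area` above it.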

Relatedly, I seem to be detecting cats as people. They get detected, but Frigate thinks they're people.


There is no support for using the GPU for TensorFlow. It can only be used for decoding with ffmpeg.

Nothing new here, my cats always think they are people too.


Bummer :frowning: Any plans for the future?

If I recompiled OpenCV with GPU support, would that work? I'm not sure how deep the GPU requirement rabbit hole goes.
It looks like TensorFlow now supports both GPU and CPU.
I'd probably also have to update the models.