Local realtime person detection for RTSP cameras

OK!

I installed Frigate on a Dell OptiPlex 9020M with a Coral A+E key M.2 card in the WiFi slot,
as @tinglis1 said he had tried. The Dell is running Linux Mint 20.1 Xfce and Frigate is running in Docker.
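
For reference, this is roughly the shape of the compose file I mean (a sketch; the image tag, volume paths and the /dev/apex_0 device name are my assumptions, and the iGPU device line is optional):

# docker-compose sketch for Frigate with an A+E key M.2 Coral.
# Assumes the gasket/apex driver is installed so /dev/apex_0 exists on the host.
version: "3.9"
services:
  frigate:
    image: blakeblackshear/frigate:stable-amd64    # image tag is a placeholder
    restart: unless-stopped
    shm_size: "128mb"
    devices:
      - /dev/apex_0:/dev/apex_0                    # PCIe Coral exposed by the apex driver
      - /dev/dri/renderD128:/dev/dri/renderD128    # optional: Intel iGPU for ffmpeg hwaccel
    volumes:
      - ./config/config.yml:/config/config.yml:ro  # Frigate configuration
      - ./media:/media/frigate                     # clips and recordings
    ports:
      - "5000:5000"                                # web UI / API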

I connected one RTSP camera (Foscam FI9804W) to Frigate. The camera is also still connected to MotionEye running on Home Assistant on a “Blue”. Frigate is connected to a 320x280 stream and MotionEye is connected to a 1280x720 stream. Disabling the MotionEye stream doesn’t seem to help.

On the Dell, I can open the Frigate web interface. Sometimes it will display the image from the camera, sometimes not. This could be because the camera is still feeding images to MotionEye (?)

Since the instructions on YouTube for adding the Home Assistant custom repository didn’t work (“Invalid Add-on repository!”), I installed HACS and installed Frigate from there.

From Configuration->Integrations, I added Frigate.
There is no “configure” button in the integrations window for Frigate.
In the “three dots” menu, there is a “System Options” selection. This gives me a popup to enable automatic adding of new entities. I enabled it.

media_source is enabled.

The Installation page in the documentation says to

Create the file frigate.yml in your config directory with your detailed Frigate configuration

Should this be the same configuration file on the Frigate install on Docker?
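
For context, this is roughly the kind of content in the config on the Docker side (a minimal sketch with placeholder addresses and names, not my exact file):

mqtt:
  host: 192.168.1.xxx        # MQTT broker that the HA integration also uses
detectors:
  coral:
    type: edgetpu
    device: pci              # A+E/M.2 Coral; 'usb' for the USB accelerator
cameras:
  front_door:                # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://user:password@camera-ip:554/stream   # placeholder URL
          roles:
            - detect
    width: 320               # match the detect stream resolution
    height: 280
    fps: 5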

There is no entry in Supervisor->Dashboard->Installed add-ons.

A Frigate entry appeared in the Media Browser. Frigate->Clips produces a Media Browsing Error “Unknown error”.

There is no entry in the sidebar for Frigate.

The log summary shows:

Error fetching frigate data:
1:28:05 PM – (ERROR) Frigate (custom integration) - message first occurred at 12:10:51 PM and shows up 62 times
Error fetching information from http://192.168.1.229:5000/api/stats - 502, message='Bad Gateway', url=URL('http://192.168.1.229:5000/api/stats')
1:28:05 PM – (ERROR) Frigate (custom integration) - message first occurred at 12:10:51 PM and shows up 62 times
Config entry 'Frigate' for frigate integration not ready yet: None; Retrying in background
12:10:51 PM – (WARNING) config_entries.py - message first occurred at 12:10:51 PM and shows up 2 times

I must have missed something…

Any idea what?

Thanks,
-Mike

try Debian

Is that just snark, or is there some actual reason Debian might work better?

Okay, I’ve spent time on this as well… so far I haven’t been able to get hardware acceleration working with the RPi4 Debian Buster builds. It seems those builds don’t find the device (/dev/video11… the decoder?) that ffmpeg needs for hwaccel (…kernel?).

[h264_v4l2m2m @ 0x5590337990] Could not find a valid device
[h264_v4l2m2m @ 0x5590337990] can’t configure decoder

So unfortunately an “official” RPi4 HA Supervised install is a no-go… that is, if you want hwaccel/Frigate on an RPi4.

I got 64-bit & hwaccel working with a 2nd RPi4 I had (running Ubuntu 20.04) with Frigate in a Docker container… in Ubuntu the required devices (/dev/video10 etc.) show up.
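
For anyone else trying this, the hardware acceleration bit of the Frigate config on the Pi is along these lines (a sketch; the Docker container also needs the /dev/video1x decoder devices passed through, e.g. with --device):

# ffmpeg hardware acceleration sketch for a Raspberry Pi 4 (64-bit OS);
# h264_v4l2m2m is what uses the /dev/video1x decoder devices mentioned above.
ffmpeg:
  hwaccel_args:
    - -c:v
    - h264_v4l2m2m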

Running Frigate on docker/Ubuntu, Coral USB, running HA on docker/Ubuntu plus the Frigate integration. Quad-core Celeron NUC. 2 x Reolink cameras, not very much activity, and areas masked off to reduce movement even more.

I noticed this strange(?) pattern in Grafana for the Frigate Docker container’s CPU use - is this normal? Every 2-3 minutes the CPU spikes to around 80% before coming down to around 20% again.

It’s not causing me any issues at the moment, but just wondered if it was a symptom of something I’d screwed up.

What are you using to get your metrics into Grafana? I’m running Elastic at home, with Frigate running as a container in a pod on a Raspberry Pi 4 8GB running Kubernetes. It also has a Google Coral USB TPU plugged in and detected. It may be the Metricbeat resolution or a difference in what we’re measuring, but with 3 cameras (480p capture), this is what I see over a 15m interval:

5m interval:

Based on the metrics, I’m getting data every 10s or so. I don’t recall setting any CPU or RAM limits for this pod either.
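
(If I did want to cap it, I believe it would just be the usual resources block on the Frigate container in the pod spec, something like this sketch with made-up numbers:)

# Hypothetical CPU/RAM limits for the Frigate container; the numbers are
# illustrative, not what I'm actually running.
spec:
  containers:
    - name: frigate
      resources:
        requests:
          cpu: "500m"
          memory: "512Mi"
        limits:
          cpu: "2"
          memory: "1Gi"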

I’m trying to rotate one of my camera images to reflect that the camera is mounted in “portrait” orientation…

Trying to follow the example in this post:

The values used are

      output_args:
        - -vf
        - mpdecimate,rotate=PI/2
        - -f
        - rawvideo
        - -pix_fmt
        - rgb24

From my config file I see these values against the camera I wish to rotate:

-r 5 -f rawvideo -pix_fmt yuv420p

I have added these lines to my camera config:

output_args:
        - -r
        - 5
        - -f
        - mpdecimate,rotate=PI/2
        - rawvideo
        - -pix_fmt
        - yuv420p

However, I get the following error message:

frigate | Error parsing config: expected a dictionary for dictionary value @ data['cameras']['fri_side_door']['ffmpeg']['output_args']

Any help/ideas?

For reference here is my complete config file showing info for this camera:

cameras:
  fri_side_door:
    ffmpeg:
      inputs:
        - path: rtsp://viewer:[email protected]:554/user=admin_password=redacted_channel=1_stream=0
          roles:
            - detect
            - rtmp
            - clips
      output_args:
        - -r
        - 5
        - -f
        - mpdecimate,rotate=PI/2
        - rawvideo
        - -pix_fmt
        - yuv420p
    width: 1920
    height: 1080
    fps: 5

Many thanks to anyone who can spot what I am doing wrong!

I’m running standalone Telegraf, InfluxDB & Grafana containers, and monitor all my Docker containers with that.

I’ve only just added the second camera to Frigate, so I guess it needs a little tuning. Got the same spikes with just 1 camera:


but half the CPU of 2 cameras:

Pulling Frigate’s stats into Grafana will help tell you what Frigate is doing. Look at the stats endpoint.

I have used the values from the original thread:

        - -vf
        - mpdecimate,rotate=PI/2
        - -f
        - rawvideo
        - -pix_fmt
        - rgb24

But my image isn’t showing as flipped…

Has the method to flip now changed?

Many thanks!

Presumably they’re just ffmpeg options?

Thanks for the pointers… I am getting the syntax wrong somewhere, as I’m getting this error:

Error parsing config: expected a dictionary for dictionary value @ data['cameras']['fri_side_door']['ffmpeg']['output_args']

Thanks

Look at the updated docs for how to specify output args. That example is based on an old version of frigate.

Ahhhh, this is the new format…

output_args:
  rtmp: -c:v libx264 -an -f flv

So I add my values to this line and don’t need the other values? Or do I replace what is there with the transpose option only?

Thanks for pointing me at this.

You can’t just replace the output args for any of the roles; they need to be modified or added to. Rotating like this for anything other than detect will likely result in a substantial increase in CPU load. I don’t think you can simply rotate a video feed without reencoding.
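
For the detect role, for example, “modified” means keeping the rawvideo args it already uses and adding the filter in front of them, roughly like this sketch (and remember to swap width/height in the camera config if you rotate 90°):

cameras:
  fri_side_door:
    ffmpeg:
      output_args:
        # keep the existing rawvideo/pix_fmt args and prepend the filter
        detect: -vf transpose=1 -r 5 -f rawvideo -pix_fmt yuv420p
    width: 1080    # swapped from 1920x1080 after a 90° rotation
    height: 1920
    fps: 5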

Thanks, I will take a look at this. I guess the ideal solution is to rotate within the camera software; unfortunately the cams I have don’t offer that as an option…

I’m not running Frigate on a Pi, so I have plenty of CPU to play with, and I am looking to replace ZoneMinder, which is currently running on the same box… Even if things reencode I am hoping for a more efficient net result.

I will start with detect and see how I get on; I’ll post results back to hopefully help anyone else and give you feedback on my progress.

Thanks again.

I can offer broad feedback on the ZoneMinder-to-Frigate migration.
I was running 6 cameras in ZoneMinder with NVIDIA/CUDA OpenCV. All working well, CPU hovering at around 35%, until there was object detection and it would jump to 70%.

This is a Ryzen 3900X with 64GB RAM, bla, bla.

Whilst the ZoneMinder Event Notification Server says it can use the Coral, I could never get it to work.

Migrated everything to Frigate. The Coral of course works out of the box.
CPU always stays at around 15% and inference time is just a fraction of ZoneMinder’s.

For face recognition I tried the DeepStack integration and got it working acceptably, but then came across Double Take.

It works really well with Frigate and makes managing/training DeepStack very easy. Additionally, the author just added self-hosted notifications via Gotify.

That’s encouraging. Once I can rotate 2 of my cams, I will be able to switch off ZoneMinder and enjoy the tight integration between Frigate and Home Assistant. As you say, there are then further options to hook into other clever integrations - Double Take etc…

I should get time tonight to play with the output_args and hopefully get this thing nailed!

Thanks again.

Hey everybody, I’ve been using Frigate for a few weeks now with three Reolink cameras on Home Assistant OS (HaOS). I purchased a Google Coral USB accelerator but am not able to get it to work.

I’ve seen in the HaOS 6 release notes that PCIe Coral devices are now supported:

I upgraded to the most recent dev branch but still cannot get this to work :frowning:.
Can someone verify that Coral USB is also supported?

Thank you very much in advance!

Edit:

I’m using this in my frigate.yml:

detectors:
  coral:
    type: edgetpu
    device: usb

Once Frigate starts, it throws this exception:

frigate.edgetpu                INFO    : No EdgeTPU detected.
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 152, in load_delegate
    delegate = Delegate(library, options)
  File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 111, in __init__
    raise ValueError(capture.message)
ValueError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/frigate/frigate/edgetpu.py", line 124, in run_detector
    object_detector = LocalObjectDetector(tf_device=tf_device, num_threads=num_threads)
  File "/opt/frigate/frigate/edgetpu.py", line 63, in __init__
    edge_tpu_delegate = load_delegate('libedgetpu.so.1.0', device_config)
  File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 154, in load_delegate
    raise ValueError('Failed to load delegate from {}\n{}'.format(
ValueError: Failed to load delegate from libedgetpu.so.1.0

I added the following to the config for one of my cameras:

output_args:
        record: -vf transpose=1 -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v libx264 -an
        clips: -vf transpose=1 -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v libx264 -an
        rtmp: -vf transpose=1 -c:v libx264 -an -f flv 

My system details:

H/W path       Device      Class       Description
==================================================
                           system      Computer
/0                         bus         Motherboard
/0/0                       memory      23GiB System memory
/0/1                       processor   Intel(R) Xeon(R) CPU           X5650  @ 2.67GHz

And my CPU usage for ffmpeg did this:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
58257 root      20   0 2528204 1.162g  16660 S 163.1  4.9   1:25.18 ffmpeg

My processor goes to over 150%!!

I will try dropping the transpose from some of the roles; perhaps just adding it to clips may reduce things…
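
Something like this sketch is what I’ll try first (record and rtmp get left at their default copy args so only clips is reencoded, and detect is raw frames anyway):

output_args:
  # detect is decoded to raw frames regardless, so the rotation here is relatively cheap
  detect: -vf transpose=1 -f rawvideo -pix_fmt yuv420p
  # clips keeps the reencode so the saved clips come out rotated
  clips: -vf transpose=1 -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v libx264 -an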