Local realtime person detection for RTSP cameras

Thank you! Will try upping this a bit (currently set to .5)

No problem, yeah, I changed mine also. I wasn't getting many false alarms though. In my front yard I have a cement statue in my landscaping, and the cam over there always alerted on it. I actually leaned a piece of wood against it (behind it, so no one sees it) and that solved the issue. Other than that I've got them on .7.
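For anyone else hunting for the setting: the threshold lives on each region in config.yml. Mine looks roughly like the sketch below, going from the Readme example (offsets and areas are placeholders, and key names may differ between frigate versions):

cameras:
  front_yard:
    regions:
      - size: 350
        x_offset: 0
        y_offset: 300
        min_person_area: 5000
        threshold: 0.7 # raise toward 1.0 to cut false positives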

I also sometimes use the mask to prevent certain static objects from being detected.

cameras:
  driveway:
    rtsp:
      user: <camera/nvr user name>
      host: <camera/nvr ip address>
      port: 554
      password: $RTSP_PASSWORD
      path: /h264Preview_07_sub # Reolink substream for channel 07

Apologies if I missed this, but is there an HA config example of how to use the best person image via MQTT? I am currently using a curl command to grab the latest image from the best person endpoint, but would like to switch to the latest MQTT image setup. Thanks!
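Something like this is what I imagine, with the topic name being a pure guess on my part (check the frigate docs/code for the real one):

camera:
  - platform: mqtt
    name: Driveway Last Person
    topic: frigate/driveway/person/snapshot # guessed topic name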

For notifications, I still pull from the best person endpoint directly when the person sensor triggers; that is guaranteed to be the latest/best image. See the Readme example. I am sure there is an endpoint to pull the latest image from a camera entity, but I don't know it offhand, and it could be out of date.
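As a sketch, roughly what mine looks like; the sensor name, notify service, and host are placeholders for your own:

automation:
  - alias: Notify on person detected
    trigger:
      - platform: state
        entity_id: binary_sensor.driveway_person # placeholder person sensor
        to: 'on'
    action:
      - service: notify.mobile_app_my_phone # placeholder notify service
        data:
          message: Person detected in the driveway
          data:
            image: http://192.168.1.50:5000/best_person.jpg # frigate host:port placeholder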

Could someone please share their frigate config.yml file for Reolink cameras? The feed I use is visible in Home Assistant and VLC, but I can't for the life of me get the Frigate-generated camera to work.

Hi calypso, I use a Reolink camera and apply the following ‘path’ format to get the smaller substream:

path: /h264Preview_01_sub

Full context can be found here.


I need to play with that. I knew there had to be a way.

I do the same. I've been thinking it would be nice to save the last 5 or 10 images. A couple of times I've been in a meeting or away from my phone and come back to a few notifications. Do you think this is something that could be added, or even worth the time? I was thinking of creating a small script that just moves the file and appends a date/time, but I haven't messed with it just yet; a rough sketch of the idea is below.
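An untested sketch of what I had in mind, using a shell_command in Home Assistant (host, port, and paths are placeholders):

shell_command:
  # A plain command (no templates) runs through a shell, so $(date ...) works here
  archive_best_person: curl -s http://192.168.1.50:5000/best_person.jpg -o /config/www/frigate/person_$(date +%Y%m%d_%H%M%S).jpg

Calling shell_command.archive_best_person from the notification automation would keep a timestamped copy of every alert image.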


I want that too! I'd like to have something like the swiper card in my UI with the last 5-10 images I can browse through to see what was seen. If we could have multiple cameras for “best_person”, “best_person_-1”, “best_person_-2”, etc., that would make it possible. An option for how many previous images to serve would be good too.

EDIT: I opened an issue before I forgot.


Has anybody had any luck using Nvidia decoding/encoding acceleration? I've tried a bunch of different ffmpeg options for -hwaccel cuvid, but all combinations failed with different errors. It looks like the process tries to use some software-only filter, which obviously is not compatible with hardware acceleration. I'm not very familiar with ffmpeg, so I might easily be missing something very simple.

I suspect you may have to build the Docker image yourself rather than just pulling it. I also can't get it to work, because I don't want to install everything needed to build it on my Synology, but I suspect that's why.

Nope, the container itself is fine; I use nvidia-toolkit to allow my containers access to the video card. From my very limited understanding of ffmpeg, I believe the problem here is cuvid not supporting the rgb24 pixel format. I can make it work with different pixel formats, and in that case I see two processes (decode/encode) using my video card, but I get a black screen in my browser. It also looks like the CV algorithm expects the rgb24 format to work properly, so it complains about not seeing frames coming in, even though the ffmpeg process reports 150 frames decoded/encoded (10-second watchdog * fps) each watchdog cycle.

I would focus on tweaking the params with ffmpeg directly in the container before trying with frigate; something like the command below would be my starting point. I have also had issues on ARM because it doesn't support hardware-accelerated conversion from yuv420p to rgb24. One thing I have considered is doing that conversion in Python rather than ffmpeg, but it seems fairly complex. I think it would be slower for Intel-based hwaccel, though.
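An untested sketch of the kind of invocation I would try first inside the container: decode on the GPU with cuvid, download the frames, and let swscale do the nv12 to rgb24 conversion in software (the RTSP URL is a placeholder):

ffmpeg -hwaccel cuvid -c:v h264_cuvid \
  -i 'rtsp://user:password@192.168.1.20:554/h264Preview_01_sub' \
  -vf 'hwdownload,format=nv12' \
  -pix_fmt rgb24 -f rawvideo pipe:1 > /dev/null

The format=nv12 after hwdownload is needed because hwdownload only transfers the frame off the GPU; the rgb24 conversion still happens on the CPU, which matches the rgb24 limitation described above.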

Thanks for chiming in! The question is, does it have to be rgb24? Does tensorflow/opencv support bgr0, for instance? I also saw a bunch of utility functions in the opencv code converting between different formats; can those be utilized here, or would it defeat the purpose of hardware acceleration?

Currently, all the models I have seen are trained on rgb24. I guess you could train a model on other pixel formats, but no guarantee that they would work the same. Having a model trained on the native output format would be the best from a performance standpoint.


Just a tip for other unRAID users, building on what @cjackson234 has already said: when adding to unRAID, don't use a docker pull from the terminal. Instead, start a fresh container and then add the fields as per post 152.
All working perfectly, and easy to integrate into Home Assistant from there.


I’ve been trying to crack this project for the past couple of days, and I’m starting to wonder if I’m just a little slow. Could someone explain how the docker-compose.yml file is created, and why I see it referenced in the /opt/frigate folder in other people’s posts?

I get that there is an example of what the docker-compose.yml file is supposed to contain, although I haven’t seen anything about how it should be created or where it needs to be saved. My best guess so far is below.
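Pieced together from the docker run command in the Readme, this is roughly what I think the file should look like (my paths; treat it as a guess):

version: '3'
services:
  frigate:
    image: frigate:latest
    privileged: true # for the Coral USB device
    ports:
      - '5000:5000'
    volumes:
      - /dev/bus/usb:/dev/bus/usb
      - /home/andy/Downloads/frigate-master/config:/config:ro
    environment:
      - RTSP_PASSWORD=password

As far as I can tell, it can be saved anywhere (people seem to use /opt/frigate by convention) and started with docker-compose up -d from that folder.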

I’m currently waiting for the docker build -t frigate . command to finish doing its thing. This is the second attempt; I’m guessing the first one failed because I didn’t have Docker installed yet.

Is there a basic guide for putting all of the pieces into place, or do most people just know how to work with Linux and Docker?

Edit:
So rerunning the docker build command doesn’t seem to have helped. I’m running Ubuntu 19.04 on an HP Stream 11 laptop (Intel Celeron N2840). Here is the output I’m getting:
Successfully built dfc07e44874e
Successfully tagged frigate:latest
root@BlueLaptop:~/Downloads/frigate-master# sudo docker run --rm --privileged -v /dev/bus/usb:/dev/bus/usb -v /home/andy/Downloads/frigate-master/config:/config:ro -p 5000:5000 -e RTSP_PASSWORD='password' frigate:latest
Traceback (most recent call last):
  File "detect_objects.py", line 99, in <module>
    main()
  File "detect_objects.py", line 44, in main
    client.connect(MQTT_HOST, MQTT_PORT, 60)
  File "/usr/local/lib/python3.6/dist-packages/paho/mqtt/client.py", line 839, in connect
    return self.reconnect()
  File "/usr/local/lib/python3.6/dist-packages/paho/mqtt/client.py", line 962, in reconnect
    sock = socket.create_connection((self._host, self._port), source_address=(self._bind_address, 0))
  File "/usr/lib/python3.6/socket.py", line 704, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/usr/lib/python3.6/socket.py", line 745, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known

I’m still having trouble getting my frigate container to run. I was, however, able to run the benchmark, which returned an average inference time of 12.224.

I thought the “Errno -2” issue might be due to not having Hass installed and running, but I was able to get it installed and running without too much trouble.

I also reinstalled the Docker engine and Docker Compose, but it doesn’t seem to have made a difference.

Any ideas on what I’m missing or what I can do to figure it out?

Seems like your MQTT hostname is incorrect, or the DNS lookup is failing from the container. Do you have an MQTT server configured?
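For comparison, the mqtt section of my config.yml looks roughly like this; using your broker's IP address instead of a hostname sidesteps DNS problems from inside the container (the topic_prefix key is from my version and may differ in yours):

mqtt:
  host: 192.168.1.10 # IP of your MQTT broker (placeholder)
  topic_prefix: frigate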