Local realtime person detection for RTSP cameras

Apologies if I missed this, but is there an HA config example of how to use the best person image via MQTT? I am currently using a curl command to grab the latest image from the best person endpoint, but would like to switch to the MQTT image setup. Thanks!

For notifications, I still pull from the best person endpoint directly when the person sensor triggers. That is guaranteed to be the latest/best image; see the readme example. I am sure there is an endpoint to pull the latest image from a camera entity, but I don’t know it offhand and it could be out of date.
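If you would rather consume it over MQTT, an MQTT camera pointed at the snapshot topic should work, along these lines (substitute your own camera name):

camera:
  - name: Camera Last Person
    platform: mqtt
    topic: frigate/<camera_name>/snapshot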

Could someone please share their frigate config.yml file for Reolink cameras? The feed I use is visible in Home Assistant and VLC, but I can’t for the life of me get the Frigate-generated camera to work.

Hi calypso, I use a Reolink camera, and the following ‘path’ gets the smaller substream:

path: /h264Preview_01_sub

Full context can be found here.
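Putting it together, the camera entry in my config.yml looks roughly like this (user, host, and password are placeholders, and I’m assuming the rtsp block format from the example config):

cameras:
  back:
    rtsp:
      user: admin
      host: 192.168.xx.xx
      port: 554
      password: $RTSP_PASSWORD
      path: /h264Preview_01_sub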

I need to play with that. I knew there had to be a way.

I do the same. I’ve been thinking it would be nice to save the last 5 or 10 images. A couple of times I’ve been in a meeting or didn’t see my phone and had a few notifications stack up. Do you think this is something that could be added? Or even worth the time? I was thinking of creating a small script that just moves the file and adds a date/time, but I haven’t messed with it just yet.
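Roughly what I had in mind, untested (the endpoint, host, and paths are guesses to adapt):

# save a timestamped copy of the current best person image
curl -s http://192.168.xx.xx:5000/best_person.jpg -o ~/frigate_snapshots/best_person_$(date +%Y%m%d_%H%M%S).jpg
# prune to the 10 most recent copies
ls -t ~/frigate_snapshots/best_person_*.jpg | tail -n +11 | xargs -r rm --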

I want that too! I’d like to have something like the swiper card in my UI with the last 5-10 images I can browse through to see what was seen. If we could have multiple cameras for “best_person”, “best_person-1”, “best_person-2”, etc., that would make it possible… An option for how many previous images to serve would be good too.

EDIT: I opened an issue before I forgot.

Has anybody had any luck using Nvidia decoding/encoding acceleration? I’ve tried a bunch of different ffmpeg options for -hwaccel cuvid, but all combinations failed with different errors. It looks like the process tries to use a software-only filter, which obviously is not compatible with hardware acceleration. I’m not very familiar with ffmpeg, so I might easily be missing something very simple.

I suspect you may have to build the docker container rather than just cloning it. I also can’t get it to work because I don’t want to install everything needed to build it on my synology, but I suspect that’s why.

Nope, the container itself is fine; I use nvidia-toolkit to give my containers access to the video card. From my very limited understanding of ffmpeg, I believe the problem here is cuvid not supporting the rgb24 pixel format. I can make it work with different pixel formats, and I then see two processes (decode/encode) using my video card, but I get a black screen in my browser. It also looks like the CV algorithm expects the rgb24 format to work properly, so it complains about not seeing frames coming in, even though the ffmpeg process reports 150 frames decoded/encoded (10-second watchdog × fps) each watchdog cycle.

I would focus on tweaking the params with ffmpeg directly in the container before trying with frigate. I have also had issues with ARM because it doesn’t support hardware-accelerated conversion from yuv420p to rgb24. One thing I have considered is doing that conversion in python rather than ffmpeg, but it seems fairly complex, and I think it would be slower for Intel-based hwaccel anyway.
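For example, something along these lines run inside the container would isolate ffmpeg from the rest of frigate (the camera URL is a placeholder, and I’m assuming the h264_cuvid decoder here):

ffmpeg -hwaccel cuvid -c:v h264_cuvid -i rtsp://user:pass@192.168.xx.xx:554/stream -f rawvideo -pix_fmt rgb24 pipe: > /dev/null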

Thanks for chiming in! The question is, does it have to be rgb24? Does tensorflow/opencv support bgr0, for instance? I also saw a bunch of utility functions in the opencv code for converting between different formats; could those be utilized here, or would it defeat the purpose of hardware acceleration?
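To illustrate, the conversion itself looks simple enough with OpenCV (just a sketch of the idea, not frigate’s actual code):

import cv2
import numpy as np

# convert one raw planar YUV420p frame (as produced by ffmpeg's rawvideo output) to rgb24
def yuv420p_to_rgb24(raw, width, height):
    yuv = np.frombuffer(raw, dtype=np.uint8).reshape((height * 3 // 2, width))
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB_I420)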

Currently, all the models I have seen are trained on rgb24. I guess you could train a model on other pixel formats, but no guarantee that they would work the same. Having a model trained on the native output format would be the best from a performance standpoint.

Just a tip for other unRAID users, building on what @cjackson234 has already said: when adding to unRAID, don’t use a docker pull in the terminal. Instead, start a fresh container and then add the fields as per post 152.
All working perfectly, and it’s easy to integrate into Home Assistant from there.

I’ve been trying to crack this project for the past couple of days, and I’m starting to wonder if I’m just a little slow. Could someone explain how the docker-compose.yml file is created and why I see it referenced in the /opt/frigate folder in other people’s posts?

I get that there is an example of what the docker-compose.yml file is supposed to contain, although I haven’t seen anything about how it should be created or where it needs to be saved.

I’m currently waiting for the docker build -t frigate . command to finish doing its thing. This is the second attempt; I’m guessing the first one didn’t work because I didn’t have Docker installed.

Is there a basic guide for putting all of the pieces into place, or do most people just know how to work with Linux and Docker?

Edit:
So rerunning the docker build command doesn’t seem to have helped. I’m running Ubuntu 19.04 on an HP Stream 11 laptop (Intel Celeron N2840). Here is the output I’m getting:
Successfully built dfc07e44874e
Successfully tagged frigate:latest
root@BlueLaptop:~/Downloads/frigate-master# sudo docker run --rm --privileged -v /dev/bus/usb:/dev/bus/usb -v /home/andy/Downloads/frigate-master/config:/config:ro -p 5000:5000 -e RTSP_PASSWORD='password' frigate:latest
Traceback (most recent call last):
  File "detect_objects.py", line 99, in <module>
    main()
  File "detect_objects.py", line 44, in main
    client.connect(MQTT_HOST, MQTT_PORT, 60)
  File "/usr/local/lib/python3.6/dist-packages/paho/mqtt/client.py", line 839, in connect
    return self.reconnect()
  File "/usr/local/lib/python3.6/dist-packages/paho/mqtt/client.py", line 962, in reconnect
    sock = socket.create_connection((self._host, self._port), source_address=(self._bind_address, 0))
  File "/usr/lib/python3.6/socket.py", line 704, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/usr/lib/python3.6/socket.py", line 745, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known

I’m still having trouble getting my frigate container to run. I was, however, able to run the benchmark, which returned an average inference time of 12.224.

I thought the “Errno -2” issue might be due to not having Hass installed and running, but I was able to get it installed and running without too much trouble.

I reinstalled the Docker engine and Docker Compose, although it doesn’t seem to have made a difference.

Any ideas on what I’m missing or what I can do to figure it out?

Seems like your MQTT hostname is incorrect, or the DNS lookup is failing from the container. Do you have a server configured?

Thank you freshcoast. You were right. I needed to update the MQTT address in my frigate config file.
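For anyone else hitting this: the broker address lives in frigate’s config file, something like the following (the host value is yours to fill in; it must be an IP or a hostname the container can resolve):

mqtt:
  host: 192.168.xx.xx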

Thank you @blakeblackshear for putting this together. Here is my best shot at a noob’s guide to using a Raspberry Pi 4 to do person detection. I’m still working on Home Assistant integration, but it doesn’t seem very difficult. (Edit: It was harder than I expected, although now that I have it set up, it’s pretty awesome.)

Step 1: Flash Hypriot image onto SD card using balenaEtcher
Step 2: Update Password:

passwd

Step 3: Get root privileges

sudo -i

Step 4: Update your install

apt-get update && apt-get upgrade -y
reboot

Step 5: Install extra software for Hass.io

apt-get install -y jq curl avahi-daemon dbus apparmor-utils network-manager apt-transport-https ca-certificates socat software-properties-common gnupg-agent
reboot

Step 6: Install Hass.io for the RPi4

curl -sL https://raw.githubusercontent.com/home-assistant/hassio-installer/master/hassio_install.sh | bash -s -- -m raspberrypi4

Step 7: Check the Hass.io install at 192.168.xx.xx:8123
Step 8: Set up Home Assistant
Step 9: Install Mosquitto from add-ons and add a username and password
Step 10: Install Configurator from add-ons
Step 11: Download the frigate files to /home/pirate

git clone https://github.com/blakeblackshear/frigate.git

Step 12: Update the config file
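Mine ended up looking roughly like the example config in the repo. A trimmed sketch (the host, password, path, and region numbers are placeholders for your own setup):

web_port: 5000

mqtt:
  host: 192.168.xx.xx
  topic_prefix: frigate

cameras:
  back:
    rtsp:
      user: admin
      host: 192.168.xx.xx
      port: 554
      password: $RTSP_PASSWORD
      path: /h264Preview_01_sub
    regions:
      - size: 350
        x_offset: 0
        y_offset: 300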
Step 13: Comment out the Intel hardware acceleration lines in the Dockerfile so it looks like this:

    libgcc1
    # VAAPI drivers for Intel hardware accel
    # libva-drm2 libva2 i965-va-driver vainfo \
    # && rm -rf /var/lib/apt/lists/*

Step 14: Build the frigate container

docker build -t frigate .

Step 15: Create a docker-compose.yml file and save it in the main frigate directory, like so:

nano docker-compose.yml

Copy and paste the following:

version: "3"
services:

  frigate:
    container_name: frigate
    restart: unless-stopped
    privileged: true
    image: frigate:latest
    volumes:
      - /dev/bus/usb:/dev/bus/usb
      - /home/pirate/frigate/config:/config
    ports:
      - "5000:5000"
    environment:
      RTSP_PASSWORD: "password"

Then Ctrl+O, Enter, Ctrl+X to save and exit.
Step 16: Run the frigate container

docker-compose up

Step 17: Integrate with Home Assistant by copying the camera and binary_sensor code from @blakeblackshear’s readme example into the Home Assistant config file. As of this writing, the automation code is intended to live in the automations.yaml file, which is where it was generated when I created my first automation via the Configuration -> Automation tab.
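As a reference point, my notification automation in automations.yaml ended up roughly like this (a sketch; the entity_id and notify service names depend on your setup):

- alias: Notify when a person is detected
  trigger:
    - platform: state
      entity_id: binary_sensor.camera_person
      to: 'on'
  action:
    - service: notify.html5
      data:
        message: 'A person was detected.'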
For those coming from the ipcamtalk world like me, here are a few tips on Home Assistant:

  1. The configuration.yaml file created by default in the most recent version of HA won’t look the same as most of the examples you’ll find. It doesn’t have, or need, the extra lines of code to work properly, so you’ll end up editing the file a lot less than most of the examples and videos suggest.
  2. If you want to add multiple cameras, just copy the “camera” and “binary_sensor” sections and add a space and an identifier, like so:
camera 2:
  - name: Camera Last Person 2
    platform: mqtt
    topic: frigate/<camera2_name>/snapshot

binary_sensor 2:
  - name: Camera2 Person
    platform: mqtt
    state_topic: "frigate/<camera2_name>/objects"
    value_template: '{{ value_json.person }}'
    device_class: motion
    availability_topic: "frigate/available"
  3. For Android users: you won’t need a “native” app from the Play store. It’s possible to set up your configuration with duckdns.org via a Hass.io add-on so that you can view your HA install remotely. Once set up, you can “add to home screen” the https page you have created and it’ll function just like a normal app would. The Lovelace UI is very good from what I’ve seen and is working better than my attempt at setting up “HA Client”. For notifications I’m using html5, although telegram and pushbullet are apparently good options as well.

One thing I’d really like to do is have the ability to create an automation that is triggered by an individual bounding box. For instance if you have two bounding boxes within a single camera view, one covering the front door and another box covering the driveway, it would be nice to be able to trigger off of the front door box (deliveries during the day) separately from the driveway box (prowler at night). I might be able to figure this out but if someone knows how already please let me know.

The other thing I’m trying to figure out is how to create an automation that is triggered only if a person is detected in a box on one camera shortly before a person is detected in a box on another camera. For instance if camera1 covers the outside of a fence and camera2 covers the inside of the fence, trigger an action if a person is detected on camera1 shortly before a person is detected on camera2 (someone hops over the fence). Any ideas?
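The closest I’ve come up with for the second one is to trigger on the inner camera’s sensor and check how recently the outer camera’s sensor changed, something like this (untested sketch; the 30 seconds and the sensor names are placeholders):

- alias: Fence hop alert
  trigger:
    - platform: state
      entity_id: binary_sensor.camera2_person
      to: 'on'
  condition:
    # naive: only checks how recently camera1's sensor changed state at all
    - condition: template
      value_template: >
        {{ (as_timestamp(now()) - as_timestamp(states.binary_sensor.camera1_person.last_changed)) < 30 }}
  action:
    - service: notify.html5
      data:
        message: 'Person detected outside the fence, then inside.'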

Thanks for that, this has been very useful. I managed to get the object detection working :)

And thanks to @blakeblackshear for developing and sharing such a great piece of software! :)