Local realtime person detection for RTSP cameras

Thank you freshcoast. You were right. I needed to update the MQTT address in my frigate config file.

Thank you @blakeblackshear for putting this together. Here is my best shot at a noob's guide to using a Raspberry Pi 4 to do person detection. I'm still working on Home Assistant integration but this doesn't seem very difficult. (Edit: It was harder than I expected, although now that I have it set up, it's pretty awesome.)

Step 1: Flash Hypriot image onto SD card using balenaEtcher
Step 2: Update Password:

passwd

Step 3: root privilege

sudo -i

Step 4: Update your install

apt-get update && apt-get upgrade -y

Step 5: Reboot

reboot

Step 6: Install extra software for Hass.io

apt-get install -y jq curl avahi-daemon dbus apparmor-utils network-manager apt-transport-https ca-certificates socat software-properties-common gnupg-agent
reboot

Step 7: Install Hass.io for the RPi 4

curl -sL https://raw.githubusercontent.com/home-assistant/hassio-installer/master/hassio_install.sh | bash -s -- -m raspberrypi4

Step 8: Check the Hass.io install at http://192.168.xx.xx:8123
Step 9: Setup Home Assistant
Step 10: Install Mosquitto from the add-on store and add a username and password
Step 11: Install Configurator from the add-on store
Step 12: Download frigate files to /home/pirate

git clone https://github.com/blakeblackshear/frigate.git

Step 13: Update config file
Step 14: Comment out the Intel hardware acceleration lines in the Dockerfile so it looks like this:

 libgcc1
 # VAAPI drivers for Intel hardware accel
 # libva-drm2 libva2 i965-va-driver vainfo \
 # && rm -rf /var/lib/apt/lists/*

Step 15: Build Frigate container

docker build -t frigate .

Step 16: Create a docker-compose.yml file and save it in the main frigate directory, like so:

nano docker-compose.yml

Copy and paste the following:

version: "3"
services:

  frigate:
    container_name: frigate
    restart: unless-stopped
    privileged: true
    image: frigate:latest
    volumes:
      - /dev/bus/usb:/dev/bus/usb
      - /home/pirate/frigate/config:/config
    ports:
      - "5000:5000"
    environment:
      RTSP_PASSWORD: "password"

Then Ctrl+O, Enter, Ctrl+X to save and exit.
Step 17: Run frigate container

docker-compose up

Step 18: Integrate with Home Assistant by copying the camera and binary_sensor code from @blakeblackshear's readme example into the Home Assistant config file. As of this writing, the automation code appears to be intended to live inside the automations.yaml file, which is where it was generated when I created my first automation via the Configuration -> Automation tab.
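For reference, a minimal entry in automations.yaml might look something like this (a sketch only; the sensor entity and notify service names are placeholders you would adapt to your own setup):

```yaml
- alias: Notify when a person is detected
  trigger:
    platform: state
    entity_id: binary_sensor.camera_person   # placeholder sensor from the readme example
    to: 'on'
  action:
    service: notify.html5   # or whatever notify service you have configured
    data:
      message: "Person detected on camera"
```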
For those coming from the ipcamtalk world like me, here are a few tips on Home Assistant:

  1. The configuration.yaml file that is created by default on the most recent version of HA won't look the same as most of the examples you'll find. It doesn't have or need the extra lines of code to work properly. You'll end up needing to edit the file a lot less than most of the examples and videos you'll find.
  2. If you want to add multiple cameras you'll just copy the "camera" and "binary_sensor" sections and add a space and an identifier like so:
camera 2:
  - name: Camera Last Person 2
    platform: mqtt
    topic: frigate/<camera2_name>/snapshot

binary_sensor 2:
  - name: Camera2 Person
    platform: mqtt
    state_topic: "frigate/<camera2_name>/objects"
    value_template: '{{ value_json.person }}'
    device_class: motion
    availability_topic: "frigate/available"
  3. For Android users, you won't need a "native" app from the Play store. It's possible to set up your configuration with duckdns.org via a Hass.io add-on so that you can view your HA install remotely. Once set up, you can "add to home screen" the https page you have created and it'll function just like a normal app would. The Lovelace UI is very good from what I've seen and is working better than my attempt at setting up "HA Client". For notifications I'm using html5, although Telegram and Pushbullet are apparently good options as well.

One thing I'd really like to do is have the ability to create an automation that is triggered by an individual bounding box. For instance, if you have two bounding boxes within a single camera view, one covering the front door and another box covering the driveway, it would be nice to be able to trigger off of the front door box (deliveries during the day) separately from the driveway box (prowler at night). I might be able to figure this out, but if someone knows how already please let me know.

The other thing I'm trying to figure out is how to create an automation that is triggered only if a person is detected in a box on one camera shortly before a person is detected in a box on another camera. For instance, if camera1 covers the outside of a fence and camera2 covers the inside of the fence, trigger an action if a person is detected on camera1 shortly before a person is detected on camera2 (someone hops over the fence). Any ideas?
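One possible approach (an untested sketch; binary_sensor.camera1_person and binary_sensor.camera2_person are placeholder entity names): trigger on the second camera's sensor and use a template condition to check how recently the first camera's sensor last changed:

```yaml
- alias: Possible fence hop
  trigger:
    platform: state
    entity_id: binary_sensor.camera2_person   # inside-the-fence camera (placeholder)
    to: 'on'
  condition:
    condition: template
    # only fire if camera1 saw a person within the last 30 seconds
    value_template: >
      {{ (as_timestamp(now()) -
          as_timestamp(states.binary_sensor.camera1_person.last_changed)) < 30 }}
  action:
    service: notify.html5   # placeholder notify service
    data:
      message: "Person on camera1 then camera2 - possible fence hop"
```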


Thanks for that, this has been very useful. I managed to get the object detection working 🙂

And thanks to @blakeblackshear for developing and sharing such a great piece of software! 🙂

Hey guys, I set this up a few weeks back and it ran solid the whole time. Then I rebooted everything and completely forgot how I did it, lol. I've literally been trying for days to get it back. I finally have it so my pictures are getting to HA and alerts are working again. But the strange part is I can't get to the URL to set up the regions. I don't see it mentioned in the docs that it changed, but at this point I gotta ask… Thanks

I realize the pictures are coming over MQTT now too, but I thought the URL would still work to adjust the regions? I'm using http://ipaddress:5000/camname

The regions are adjusted in your frigate config file, which is usually located in the /home/"username"/frigate/config folder.
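For reference, a region in the frigate config looks roughly like this in early versions (a sketch based on the readme example; key names may differ in your version, and the camera name is a placeholder):

```yaml
cameras:
  back:   # placeholder camera name
    regions:
      - size: 350             # square region size in pixels
        x_offset: 0           # region position from the left edge
        y_offset: 300         # region position from the top edge
        min_person_area: 5000 # ignore detections smaller than this
```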


Thanks. I am looking for the URL that shows the video with the region boxes, so I can make sure my regions are where they need to be.

My fault. Just noticed I forgot to expose the port in my compose file. Added the port and back in business. Thanks!

Hello,
Thank you @blakeblackshear for a really interesting project and a useful addition to my home automation.
I followed @polarbearshirt's instructions and got things up and running… of sorts.
The image at port 5000 is very jerky and delayed by about 30 seconds. Cars going past can jump from one side of the screen to the other, and people are missed as they 'jump' over the detection areas.
I'm using a Raspberry Pi 4 and a new security camera I just bought for this project, which outputs H.265 video at 1920x1080. I tried changing take_frame to 5 but this didn't make an improvement.
Any ideas where I could be going wrong? Maybe H.265 is not supported.
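For anyone else tuning this: take_frame is set per camera in the frigate config (a sketch; the camera name is a placeholder, and a value of 5 means only every 5th frame is processed):

```yaml
cameras:
  front:            # placeholder camera name
    take_frame: 5   # process every 5th frame to reduce CPU load
```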
Regards
Mark

What FPS are you running the camera at? Honestly, I haven't tried H.265 because I had issues capturing clips from my cameras and then sending them over Telegram.

Thanks for the reply. To be honest I haven't been able to work out what the FPS is from the camera. I looked in VLC but this is not one of the fields shown in 'Media Information'. But it does look very smooth to me, so I assume quite high, which is why I changed the 'take_frame' value.

Curious, are the camera and Pi 4 on wifi or wired ethernet? I'd try, just for kicks, bumping down to H.264 if you can.

It's all on wired. I can't see how to change the output to H.264 (the camera is a Jennov A73WG20), but your suggestion gave me an idea of using some of the other output streams. Stream 2 is still H.265 apparently, but the resolution is 640x480, which seems to not suffer the same stuttering. I'll leave it running to see if it identifies people as before.

And the lag is now minimal too. A big step forward, thank you!


Hey, I was looking through the GitHub and noticed someone mentioned you can put in a different model to be able to detect cars. Are there some steps for what I would need to do to make that happen? Thank you!

Models trained on the ImageNet database could potentially recognize more categories of vehicles, like trucks, as they are differentiated in the DB… The best fit would probably be something like MobileNet V2 with ImageNet from here:

That said, you should be able to recognize a generic "car" category with the included model. You will just have to set the Frigate configuration to report detections of the additional category.
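For reference, in more recent Frigate versions the tracked object types are listed in the config, roughly like this (a sketch; check the readme for your version's exact syntax):

```yaml
objects:
  track:
    - person
    - car
```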

Thanks man! That last sentence sounds easy lol. Where do you make the change from person to car?

I haven't done it myself, but I recall seeing other people doing it either in this thread or in the GitHub chat. IIRC, it was done in the parameters that can be sent to the detection engine in recent versions. You can find the details if you look for them.

Ok, thanks for the direction. I've been through the thread and GitHub. I don't have chat or anything, but I'll see if I can find something similar and copy it. I'll post back if I do. Thanks again.