Local realtime person detection for RTSP cameras

I built the Frigate Docker container on a Raspberry Pi 4 with the Google Coral USB Accelerator, and when I start it like this:

docker run --privileged -v /dev/bus/usb:/dev/bus/usb -v /home/pi/frigate_config:/config:ro -p 5000:5000 frigate:latest

I get the following error message:

On connect called
[rtsp @ 0x1bc6330] method SETUP failed: 461 Client error
Traceback (most recent call last):
  File "detect_objects.py", line 99, in <module>
    main()
  File "detect_objects.py", line 53, in main
    cameras[name] = Camera(name, config, prepped_frame_queue, client, MQTT_TOPIC_PREFIX)
  File "/opt/frigate/frigate/video.py", line 126, in __init__
    self.frame_shape = get_frame_shape(self.rtsp_url)
  File "/opt/frigate/frigate/video.py", line 54, in get_frame_shape
    frame_shape = frame.shape
AttributeError: 'NoneType' object has no attribute 'shape'
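
From the traceback, Frigate's startup probe never got a frame back from the stream, so frame ends up as None. A minimal sketch of that probe, assuming the OpenCV capture path (illustrative, not the exact Frigate source):

import cv2

def get_frame_shape(rtsp_url):
    # Open the stream and grab a single frame to read its dimensions,
    # which is what Frigate does at startup for each camera.
    cap = cv2.VideoCapture(rtsp_url)
    ret, frame = cap.read()
    cap.release()
    # When the RTSP handshake fails (e.g. "method SETUP failed: 461"),
    # read() returns (False, None), and calling frame.shape on None
    # raises exactly the AttributeError shown in the traceback above.
    if not ret or frame is None:
        raise RuntimeError('could not read a frame from ' + rtsp_url)
    return frame.shape  # (height, width, channels)

print(get_frame_shape('rtsp://demo:[email protected]:8554/stream'))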

The RTSP camera is a Raspberry Pi Camera Module streamed with VLC:

raspivid -vf -o - -t 0 -w 640 -h 480 -fps 25 -b 250000 | cvlc -vvv stream:///dev/stdin --sout-rtsp-user=demo --sout-rtsp-pwd=demo --sout '#rtp{access=udp,sdp=rtsp://:8554/stream}' :demux=h264

And I can see the camera image with a VLC client on my laptop:

vlc -v rtsp://demo:[email protected]:8554/stream

My config.yml:

web_port: 5000

mqtt:
  host: 192.168.0.63 
  topic_prefix: frigate

cameras:
  back:
    rtsp:
      user: demo 
      host: 192.168.0.181
      port: 8554
      password: demo 
      path: /stream

    regions:
      - size: 300
        x_offset: 0
        y_offset: 0
        min_person_area: 5000
        threshold: 0.5

The communication with the MQTT broker is also working:

$ mosquitto_sub -h 192.168.0.63 -v -t 'frigate/#'
frigate/available online
frigate/available offline

This all looks OK to me, and I’m able to see the RTSP camera stream in VLC, so I’m a bit puzzled as to what Frigate is complaining about. Did I miss something?

"method SETUP failed: 461 Client error": I haven’t seen that error. Try getting the stream to play in ffplay.

ffplay rtsp://demo:[email protected]:8554/stream just plays it without complaining.

Where in the Docker container can I find some logs?

If this still shows when you run lsusb, it won’t work regardless of your Docker issues. I wanted to run Frigate on a higher-powered Windows CPU to see if I got any performance improvements over the lower-powered Linux CPU I use for everything else, so I had to figure out the VM setup. The Coral showing up as Global Unichip was the biggest challenge.

The key is getting the Global Unichip IDs changed to:
Vendor ID: 18d1
Product ID: 9302

I used this link as a guide. Really, starting at #4 is how you do it: Coral EdgeTPU USB Accelerator with VirtualBox - DEV Community

All the logs just go to stdout, so you are already seeing them. By default, ffmpeg is configured to minimize logging. You can remove the ffmpeg global params in video.py to get more logs from ffmpeg.
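
For anyone looking for those params: in video.py they are a list of arguments along the lines of the sketch below (illustrative; check your copy of video.py for the exact flags). Dropping or relaxing the loglevel makes ffmpeg print its connection errors to stdout.

# Illustrative sketch of how global params of this sort end up on the
# ffmpeg command line; the exact flags in your video.py may differ.
FFMPEG_GLOBAL_ARGS = ['-hide_banner', '-loglevel', 'panic']

def build_ffmpeg_cmd(rtsp_url):
    # Swap 'panic' for 'info' (or remove the loglevel args entirely)
    # to see why ffmpeg fails to open the stream.
    return ['ffmpeg', *FFMPEG_GLOBAL_ARGS,
            '-i', rtsp_url,
            '-f', 'rawvideo', '-pix_fmt', 'rgb24', 'pipe:']

print(' '.join(build_ffmpeg_cmd('rtsp://demo:[email protected]:8554/stream')))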

OK, I tried another camera, and this time I got no error messages; I got an image with a region box on http://192.168.0.181:5000/back, but no person detection.

And now after a restart of the Docker container I see the person detected on the image and:

$ mosquitto_sub -h 192.168.0.63 -F '%I : %t : %p' -t 'frigate/#'
2019-08-15T15:16:17+0200 : frigate/available : online
2019-08-15T15:16:24+0200 : frigate/back/objects : {"person": "OFF"}
2019-08-15T15:16:24+0200 : frigate/back/objects : {"person": "ON"}

But when the person leaves the view, the MQTT message doesn’t go back to OFF. Moreover, the video on http://192.168.0.181:5000/back is frozen. It seems like the object detection crashed after detecting the person? Meanwhile, there are no error messages on stdout.

I’ll try some debugging, but I have another question that will probably save me some time: could you give some examples of cameras that are known to work with Frigate? Which models do you use to test Frigate?

Post your regions from your config and the resolution of your video stream. If your region is outside the bounds of your image, one of the threads will crash.
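
To make "outside the bounds" concrete, here is a quick check you can run yourself (just an illustrative helper, not part of Frigate):

def region_in_bounds(frame_width, frame_height, size, x_offset, y_offset):
    # A size x size region must fit entirely inside the frame.
    return x_offset + size <= frame_width and y_offset + size <= frame_height

# Example with the first config from this thread: a 300px region at
# (0, 0) fits easily in a 640x480 stream.
print(region_in_bounds(640, 480, 300, 0, 0))  # True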

That doesn’t seem to be the problem: the video stream is 480x640 and the regions are:

    regions:
      - size: 200
        x_offset: 140
        y_offset: 340
        min_person_area: 5000
        threshold: 0.5

Thanks. I went and bought another computer. I’m installing Linux now; this better work now, lol. :)

IT’S WORKING!!! LOL, pretty sweet actually. I’m running it on one camera right now and the CPU stays around 8%. I’ll be able to add a couple more.

Now I need to set up my regions. I just read back through a lot of the thread (not quite all 500 posts), and I understand that 0,0 is the top left and you want to make the box as large as possible to make it easier to detect a whole person. I have two questions I couldn’t find an answer to (the first one, I’m sure I’m just missing something)… and I lied, a third question:

  1. Is there a way to see what your regions are set to? If I make a change, how can I verify the box is where I really want it?

  2. The mask file: it sounds like I need to make an image that blacks out everything but my box area?

  3. min_person_area says it’s the smallest box it will make. Is 5000 standard, and should I just leave that be?

Thank you very much for creating this, sharing this, and taking the time to help so many people!!!

Is it possible to detect cars? That would be huge, both calling out cars in the driveway and also knowing if my car is in the garage or not… reminders at night to park it, or an alert if it’s outside and about to rain…

Congrats!

  1. One way you can do it is to take a copy of the image from the live stream, paste it into a graphics program (e.g. Gimp), and use the selection tool or equivalent to draw out the optimal box size; it should display the coordinates etc.

  2. The mask is optional, just in case you didn’t know. I’ve not used it.

  3. The easiest way, I believe, is to open the live stream on something portable and walk into the camera view. The detected object size is printed on the bounding box in real time. You might want to lower it initially.

Live view, just in case you had not seen it: http://host:5000/<camera_name>
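
If you’d rather script it than use a graphics program, a small OpenCV sketch can draw a candidate region on a captured frame so you can check its placement (the URL and region values below are just the examples from this thread):

import cv2

# Draw a size x size region from config.yml onto one frame of the
# stream so you can verify the box is where you want it.
size, x_offset, y_offset = 200, 140, 340
cap = cv2.VideoCapture('rtsp://demo:[email protected]:8554/stream')
ret, frame = cap.read()
cap.release()
if ret:
    cv2.rectangle(frame, (x_offset, y_offset),
                  (x_offset + size, y_offset + size), (0, 255, 0), 2)
    cv2.imwrite('region_check.jpg', frame)  # open this image to inspect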

Thanks man, I couldn’t have gotten through this without the help! Purely due to my lack of skills. :) But I did learn quite a bit. I’ve now got 3 cameras set up, regions and all! It seems very stable, very fast, and very accurate… by far the best recognition I’ve ever played with! Going to start setting some automations up now. Thanks again, I wish I could buy you guys a beer or coffee or steak, lol.

Hey, quick question. I apologize if this isn’t the correct spot, but I want to make sure I don’t break this. :)

There is another Docker project that detects cars using the Coral. Is it possible to have 2 Docker containers both using the same Coral? I’m going to check if you can use two Corals on the same machine, just in case.

Thanks again, guys!

It is not possible to share one Coral. There is already a feature request to add cars to Frigate. I may be able to add it in the next release.

Oh OK, thank you for the quick response. I’ll just hang tight… I’d offer my help, but I know I’d just get in your way… if there’s anything I can do though, please let me know.

Gotta say thank you again. This is solid. I’ve had it running this whole time with 3 cameras, and I’ve turned off all my motion alerts from sensors, doorbells, and the NVR, and just have Home Assistant notify my cell with a person alert and the best picture. Works flawlessly and damn near instantly; I get the alert within 2 seconds every time, even at night, on any camera. Thanks again for sharing, dude, you’re my hero.
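
For anyone who wants the same alerts without Home Assistant, a minimal paho-mqtt listener for the frigate/<camera>/objects messages shown earlier would look roughly like this (the notification itself is left as a placeholder):

import json
import paho.mqtt.client as mqtt

# React to Frigate's person ON/OFF payloads, e.g.
# frigate/back/objects -> {"person": "ON"}
def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    if payload.get('person') == 'ON':
        print('person detected on ' + msg.topic)  # placeholder: notify here

client = mqtt.Client()
client.on_message = on_message
client.connect('192.168.0.63')  # the MQTT broker from the config above
client.subscribe('frigate/+/objects')
client.loop_forever()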

+1 for face detection. It’d be awesome to use Frigate to detect a person and a face, and then pass that image on to another Docker container for face recognition.

I’m dreaming of automations that could send a notification saying “Bill and Sally are at your front door” in just a couple of seconds!

Thanks for all your hard work @blakeblackshear.

Any chance this works with the Jetson Nano also?

I saw this a few days ago re the Jetson Nano -