Local realtime person detection for RTSP cameras

That’s the sanitized version, of course… The actual URL is the one that works fine on the command line, but I can’t send that to you because it has a password in it, among other things I don’t want to share.

Yes, I am running the latest versions of everything on a completely fresh install of Ubuntu, downloaded direct from Canonical. I will go through and redo all the indents just to be sure, but I suspect the first example is bad syntax, and in the second example, Frigate has a problem parsing the option it’s given.

It does look like it is getting an extra quotation mark. Did you try not using quotes in your second example?


Thank you for sharing the code. I was trying this and am stuck with the following error:

 Traceback (most recent call last):
frigate        |   File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
frigate        |     self.run()
frigate        |   File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
frigate        |     self._target(*self._args, **self._kwargs)
frigate        |   File "/opt/frigate/frigate/motion.py", line 34, in detect_motion
frigate        |     gray[mask] = [255]
frigate        | IndexError: index 400 is out of bounds for axis 1 with size 400

This is the relevant line from my docker-compose:
REGIONS: "400,350,250,2000,200,camera1_mask.bmp"
I am not getting anything on the MQTT channel, but I can access the webpage.
Can you please have a look?

Is your camera1_mask.bmp exactly 400px by 400px?
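
A quick way to check, assuming you have Pillow installed (the filename is the one from your compose entry):

# Print the mask dimensions; for the region above this should be (400, 400).
from PIL import Image
print(Image.open("camera1_mask.bmp").size)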


@blakeblackshear

This looks very useful, especially the upcoming features (output movie clips of people for notifications).
I’ve pulled the image and am using your default docker-compose file with the regions you used in your example above.

I’m getting the following error (running Ubuntu 18.04). Is there something additional I need to install for my machine?

frigate | 2019-03-16 13:00:48.274560: F tensorflow/core/platform/cpu_feature_guard.cc:37] The TensorFlow library was compiled to use AVX instructions, but these aren't available on your machine.

How old is your CPU, @jacob_gibbs?

See post 47


Doh, thought I’d read every post in here…

Thank you, container is now running!

Glad to help.

Need to get round to giving this a go myself; looking forward to seeing how the Google Coral support turns out!

This is a cool project. I have 3 cameras already configured in ZoneMinder; only two of them allow multiple RTSP connections. Here is what I had to do to get it to run:

I changed the Dockerfile line that pip installs tensorflow from
tensorflow
to
tensorflow==1.5.0
because my processor does not have AVX.
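
A quick way to check whether your CPU has AVX on Linux is to read /proc/cpuinfo:

# Look for the "avx" flag in /proc/cpuinfo (Linux only).
with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()
print("AVX available" if "avx" in cpuinfo else "no AVX; pin an older tensorflow")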
I am getting “Invalid UE golomb code” messages in Docker, but it’s running and I can see the stream at the URL.

When I walk outside, it picks up the person quickly, I see it at URL/best_person.jpg, and it sends the MQTT messages for motion and person detection.

My stream size is 640 x 480, so your mask image fits well, but not perfectly. I don’t know how to create my own BMP mask; I will have to figure that out, if you want to give me some hints. :slight_smile:
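
One approach I might try, sketched with Pillow; I’m guessing at the fill polarity (which value means “ignore”), so that part needs checking against the README:

# Build a 640x480 grayscale BMP mask with Pillow.
# Assumption: painting an area black marks it as ignored; verify which
# polarity this version actually expects before relying on it.
from PIL import Image, ImageDraw

mask = Image.new("L", (640, 480), 255)    # start fully white
draw = ImageDraw.Draw(mask)
draw.rectangle((0, 0, 639, 120), fill=0)  # black out the top strip
mask.save("my_mask.bmp")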

No it wasn’t; after fixing the size, everything is working very well. Thank you.

I wonder if this could be used with Blue Iris?


Made good progress on support for Google Coral today. I have inference offloaded to the device, but it only supports a single region at the moment because it can only be accessed by a single process at a time. I need to change the architecture to have a single detection process for all regions.
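
Roughly the shape I have in mind, sketched with Python’s multiprocessing; the names are illustrative stand-ins, not the actual code:

# One process owns the Coral and serves all regions through a queue.
# load_model() and detect() are hypothetical stand-ins.
import multiprocessing as mp

def detection_process(request_queue, result_queues):
    model = load_model()  # only this process ever opens the TPU
    while True:
        region_id, frame = request_queue.get()
        result_queues[region_id].put(detect(model, frame))

# Per-region processes put (region_id, frame) on request_queue and read
# detections back from their own result queue.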


Good news on the progress.

How noticeable are the speed improvements, or is it too early to judge since you still have to change the architecture around?

Do you think you will be able to get it to work on multiple camera feeds, or would you effectively require a Coral per camera? I guess the dev tools are still quite early at the moment as well, so maybe it will be able to multi-thread in the future?

Not sure what type of speed improvements you are referring to. The current version detects objects within 1 second on my machine. The short delay is only partially due to object detection time; it also takes time to decode the video feed, resize the region, etc. That is all still true. The main difference is the amount of CPU it takes to do object detection. With the Coral, all of that is offloaded, which results in a substantial reduction in CPU usage.

The specs say it can process 100+ FPS, and I am certain I can get it to support multiple cameras/regions; it will just take some time. I really only get to work on this a little here and there between chasing the kids around and my day job.


I doubt you could get it to do much useful on a Raspberry Pi; it takes quite a lot of resources since it’s processing the whole stream. You’d be better off using the standard TensorFlow add-on, which only processes individual frames from the stream and which you can tell to only run once every 10 seconds or so… But you do have the option of dedicating a more powerful machine to this and leaving all of your Home Assistant stuff on the Pi… Or use a Coral, which seems like it would be a very nice solution.
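
The sampled-frame approach looks roughly like this with OpenCV; the URL is a placeholder, and reopening the capture each time is just the simplest way to get a fresh frame rather than a stale buffered one:

# Analyze one frame every 10 seconds instead of decoding the whole stream.
import time
import cv2

def grab_frame(url):
    cap = cv2.VideoCapture(url)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

while True:
    frame = grab_frame("rtsp://user:pass@camera-ip:554/stream")  # placeholder URL
    if frame is not None:
        pass  # run object detection on this single frame here
    time.sleep(10)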

Coral support is very interesting. Tell me more about how you are using this: do you have the USB accelerator attached to a PC, or do you have a dev board and run everything on that? When using the Coral, does it have to do all the processing, or can you split the processing between the CPU and the Coral? Can you use multiple USB accelerators if you max one out?

Okay, docker-compose is now working. Thanks for the suggestion to remove the quotes; that did it. You may want to update the sample docker-compose in the documentation to something like the below. This is the exact syntax that worked for me, if anyone else wants to try it.

version: "3"
services:

  frigate1:
    container_name: tensorflow_right
    restart: unless-stopped
    image: frigate:latest
    volumes:
      - /root/frigate/label_map.pbtext:/label_map.pbtext:ro
      - /root/frigate/config:/config:ro
      - /lab/debug:/lab/debug:rw
      - /root/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb:/frozen_inference_graph.pb:ro
    ports:
      - 5000:5000
    environment:
      - RTSP_URL=rtsp://username:[email protected]:554/Streaming/channels/1
      - REGIONS=720,0,0,4000,1000,720RightTopMask.bmp:720,0,559,4000,1000,720RightBottomMask.bmp
      - MQTT_HOST=192.168.1.10
      - MQTT_TOPIC_PREFIX=cameras/1
      - DEBUG=0

  frigate2:
    container_name: tensorflow_left
    restart: unless-stopped
    image: frigate:latest
    volumes:
      - /root/frigate/label_map.pbtext:/label_map.pbtext:ro
      - /root/frigate/config:/config:ro
      - /lab/debug:/lab/debug:rw
      - /root/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb:/frozen_inference_graph.pb:ro
    ports:
      - 5001:5000
    environment:
      - RTSP_URL=rtsp://username:[email protected]:554/Streaming/channels/1
      - REGIONS=720,0,0,4000,1000,720back_left.bmp:720,0,559,4000,1000,720mask.bmp
      - MQTT_HOST=192.168.1.10
      - MQTT_TOPIC_PREFIX=cameras/2
      - DEBUG=0

  frigate3:
    container_name: tensorflow_front
    restart: unless-stopped
    image: frigate:latest
    volumes:
      - /root/frigate/label_map.pbtext:/label_map.pbtext:ro
      - /root/frigate/config:/config:ro
      - /lab/debug:/lab/debug:rw
      - /root/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb:/frozen_inference_graph.pb:ro
    ports:
      - 5002:5000
    environment:
      - RTSP_URL=rtsp://username:[email protected]:554/Streaming/channels/1
      - REGIONS=720,0,0,4000,1000,720FrontLeft.bmp:720,559,0,4000,1000,720mask.bmp
      - MQTT_HOST=192.168.1.10
      - MQTT_TOPIC_PREFIX=cameras/3
      - DEBUG=0

  frigate4:
    container_name: tensorflow_back
    restart: unless-stopped
    image: frigate:latest
    volumes:
      - /root/frigate/label_map.pbtext:/label_map.pbtext:ro
      - /root/frigate/config:/config:ro
      - /lab/debug:/lab/debug:rw
      - /root/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb:/frozen_inference_graph.pb:ro
    ports:
      - 5003:5000
    environment:
      - RTSP_URL=rtsp://username:[email protected]:554/Streaming/channels/1
      - REGIONS=720,0,0,4000,1000,720back_left.bmp:720,559,0,4000,1000,720mask.bmp
      - MQTT_HOST=192.168.1.10
      - MQTT_TOPIC_PREFIX=cameras/4
      - DEBUG=0

Coral USB connected to a PC. It is a specialized processor just for running TensorFlow inference on models specifically optimized for it. It is insanely efficient in comparison to a CPU, but it only does the object detection; everything else is still done on the CPU: decoding the stream, motion detection, resizing the image for processing, etc. I could split the object detection across the TPU and CPU, but it would be better to add a second Coral if that is possible. I will probably buy a second one eventually to try it out.


Yeah, the price/performance of those things is great; it beats running up all the cores on an i7 by a long shot. I will pick one up for sure once you get it going on multiple regions.