Local realtime person detection for RTSP cameras

This is off to a great start, thank you for creating and sharing. I had two quick questions.

  1. About the person score sensor/MQTT values versus the bounding boxes on the images: I usually see scores in the 90s for the bounding box, but the person score is usually lower. Is this from the /5 calculation that occurs? Is it the sum of the scores of all detected objects, or just person objects?

  2. NUM_CLASSES is set to 90, and the mapping files I was using have 90 classes, but frigate only reports on person. I built a new container with NUM_CLASSES set to 1 and trimmed the mapping file down to just person, but person detection stopped working. The goal was to see if this reduced CPU load. (I'm very new to tensorflow/ML, so I'm fumbling around a bit.) Should this work? I saw a comment about making this dynamic, so perhaps you have ideas in the works. FWIW, I think the tensorflow component lets us select object types.

thanks again for your time

  1. The sum of only the person scores in the past second, divided by 5.
  2. That is the number of classes in the model. In order to reduce the number of objects, you would have to retrain or use a different model. However, from my understanding, the additional objects improve the accuracy of person detection because the model has learned what isn't a person. It is in fact looking for all 90 object types; I am just reporting only on person. Adding an objects parameter would be fairly easy to do.
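For what it's worth, that rolling score could be sketched roughly like this (illustrative only, not frigate's actual code; the window size and divisor are assumptions taken from the description above):

```python
import time

def person_score(detections, now=None, window=1.0, divisor=5):
    """Sum the 'person' scores seen in the last `window` seconds
    and divide by `divisor`, per the description above.
    `detections` is a list of (timestamp, label, score) tuples."""
    now = time.time() if now is None else now
    recent = [score for ts, label, score in detections
              if label == 'person' and now - ts <= window]
    return sum(recent) / divisor

# Two person detections in the last second, one car (ignored):
detections = [(100.0, 'person', 0.95), (100.5, 'person', 0.92),
              (100.2, 'car', 0.88)]
print(round(person_score(detections, now=100.6), 3))  # 0.374
```

This would also explain why the reported person score sits well below the per-box confidence: with only one or two detections in the window, the sum divided by 5 lands in the 0.2-0.4 range even when each bounding box scores in the 90s.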

Hi, I was just trying to get docker compose going and started getting the error

Unable to find image 'frigate:latest' locally

docker: Error response from daemon: pull access denied for frigate, repository does not exist or may require 'docker login'.

when I try to rebuild it with

docker build -t frigate .

I get this now (it didn't happen the first time):

Step 5/17 : RUN GIT_SSL_NO_VERIFY=true git clone -q https://github.com/tensorflow/models /usr/local/lib/python3.5/dist-packages/tensorflow/models
 ---> Running in 773b58481072
error: RPC failed; curl 56 GnuTLS recv error (-54): Error in the pull function.
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
The command '/bin/sh -c GIT_SSL_NO_VERIFY=true git clone -q https://github.com/tensorflow/models /usr/local/lib/python3.5/dist-packages/tensorflow/models' returned a non-zero code: 128

Did anything change that could have caused this, or is it something on my side? Now it doesn't work with compose or the command line.

I'm running Ubuntu with docker-ce and struggling with the volume setting.

docker-compose.yml is run from /home/user/frigate

yet no matter what format I use for the path, this error shows:

tensorflow.python.framework.errors_impl.FailedPreconditionError: /label_map.pbtext; Is a directory

I figured it would be:
- /home/user/frigate:/label_map.pbtext:ro

any ideas?

Hi cooloo,

It should actually be:

  • /home/user/frigate/label_map.pbtext:/label_map.pbtext:ro

You should have the label map inside your frigate folder.

Regards

I just want to share something for those of you who, like me, might try to run the container on an older CPU.

It appears that TensorFlow, starting from version 1.6.0, requires a CPU that supports the AVX instruction set.
However, I was trying to run it on an Intel Xeon E5520, which apparently dates back to 2010 and does not support AVX.

The solution was to pin the version of tensorflow inside the Dockerfile; specifically, version 1.5.0 is the last one that does not require AVX (or SSE, for that matter).
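For anyone wanting to do the same, the change amounts to pinning the package version in the Dockerfile, along these lines (the exact install line in the project's Dockerfile may differ):

```dockerfile
# Pin TensorFlow to 1.5.0, the last release that does not require
# AVX, so the image runs on older CPUs such as the Xeon E5520.
RUN pip install tensorflow==1.5.0
```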

@blakeblackshear Thanks for the amazing work. I did manage to implement the MQTT authentication as well. Running on the aforementioned CPU, though, it does use quite a lot of resources. Currently running with 4 cores, it utilizes all of them at 30% when idle, and when scanning it jumps up to about 85-90%. If there is any way to decrease the load, it would be great.

Regards


Legend @Memphisdj, thank you sir, that was the path.

Not sure how our older CPUs will fare with multiple cams,

but the functionality of @blakeblackshear's work is worth a new system. Awesome.

There is also the option to offload the work to an Nvidia GPU or the new Google Coral. You can get the USB version on Mouser, and the PCIe version is due out soon. Although I don't know how much work is involved in adapting this setup to use it.

Would it be possible to recognise actual people in the future, so I can tell if the Post Lady is walking down the drive compared to a stranger?

I will always be looking for ways to reduce the CPU load, but I ran out of ideas. Reducing the FPS on the camera to 5 or less helps a lot. I am going to buy the new USB Coral TPU and try that out before I build a new server with a GPU.


Yes, it should be possible to recognize specific objects with transfer learning. I am hoping to train it on my trash can so I know if I put the trash by the street on trash day. Specific people and facial recognition are possible as well. One of the features on my list is to have it save labeled images to a directory for training a new model. I have been thinking about chaining these different models together. Motion detection -> Person detection -> Bodypix -> Face detection.
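That chain could be sketched as a simple cascade where each stage runs only if the previous one found something, which is also what keeps the CPU cost down (all of the stage functions here are hypothetical placeholders, not frigate code):

```python
def run_pipeline(frame, stages):
    """Run detection stages in order; bail out as soon as one
    returns None, so expensive stages only run when needed."""
    result = None
    for stage in stages:
        result = stage(frame, result)
        if result is None:
            return None  # nothing found; skip the remaining stages
    return result

# Hypothetical stages standing in for motion/person/face detectors:
def detect_motion(frame, _prev):
    return {'region': 'full'} if frame.get('changed') else None

def detect_person(frame, motion):
    return {'person': True} if motion else None

stages = [detect_motion, detect_person]
print(run_pipeline({'changed': True}, stages))   # {'person': True}
print(run_pipeline({'changed': False}, stages))  # None
```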


Just noticed your post on GitHub about buying a Coral. Will be interesting to see how that turns out!

Would love to be able to trigger recordings for people instead of shadows or car headlights at night. It would also be good to do triggers on people it knows, like your Mum is at the front door, a stranger is in your garden, etc.

I should be able to buy a Coral at the end of this month if it helps with testing.

Read my feature request on overlapping regions and motion detection. It will give probably a 20-30% improvement on CPU. Basically, any time someone is on the border between non-overlapping regions, it will help.

How do we set up more containers for multiple cams?

Do I simply git clone it again as frigate2

and adjust the Dockerfile and docker-compose.yml to suit?

I've tried a few variations but no joy.

You can run several instances of the docker image. No need to check out the repo again once you have the image locally. Just use a compose file like the following and adjust the environment variables for each camera (note that the port you expose from each container will need to be different):

  frigate_camera_1:
    container_name: frigate_camera_1
    restart: unless-stopped
    image: frigate:latest
    volumes:
      - <path_to_frozen_detection_graph.pb>:/frozen_inference_graph.pb:ro
      - <path_to_labelmap.pbtext>:/label_map.pbtext:ro
      - <path_to_config>:/config
    ports:
      - "127.0.0.1:5000:5000"
    environment:
      RTSP_URL: "<rtsp_url>"
      REGIONS: "<box_size_1>,<x_offset_1>,<y_offset_1>,<min_person_size_1>,<min_motion_size_1>,<mask_file_1>:<box_size_2>,<x_offset_2>,<y_offset_2>,<min_person_size_2>,<min_motion_size_2>,<mask_file_2>"
      MQTT_HOST: "your.mqtthost.com"
      MQTT_USER: "username" #optional
      MQTT_PASS: "password" #optional
      MQTT_TOPIC_PREFIX: "cameras/1"
      DEBUG: "0"
  frigate_camera_2:
    container_name: frigate_camera_2
    restart: unless-stopped
    image: frigate:latest
    volumes:
      - <path_to_frozen_detection_graph.pb>:/frozen_inference_graph.pb:ro
      - <path_to_labelmap.pbtext>:/label_map.pbtext:ro
      - <path_to_config>:/config
    ports:
      - "127.0.0.1:5001:5000"
    environment:
      RTSP_URL: "<rtsp_url>"
      REGIONS: "<box_size_1>,<x_offset_1>,<y_offset_1>,<min_person_size_1>,<min_motion_size_1>,<mask_file_1>:<box_size_2>,<x_offset_2>,<y_offset_2>,<min_person_size_2>,<min_motion_size_2>,<mask_file_2>"
      MQTT_HOST: "your.mqtthost.com"
      MQTT_USER: "username" #optional
      MQTT_PASS: "password" #optional
      MQTT_TOPIC_PREFIX: "cameras/2"
      DEBUG: "0"

Ok, thanks. I'm getting person values in the sub-40s despite upper-90s bounding boxes. I'm not sure if it's camera resolution/fps/bitrate related; maybe I'm driving my CPUs too hard.
I'll do some more testing.

re: point 2, that makes sense.

Well, I'm at 5fps/2mbps/960x540 with a single 540-pixel-square area of interest, and I'm still only seeing MQTT values of mostly 19/39. MQTT values seem to come in pairs: 19/39, 19/0.

I'm running on server hardware, so I should have enough horsepower.

I'm using your dockerhub image; the one I built from GitHub wasn't generating best_person.jpgs. I need to try that again and open an issue if I can recreate it.

I hope you don't mind, but I have added it as an issue on your GitHub to document the improvement idea, as you have suggested on some other posts.

I have also added an idea to use some of the camera triggers to turn frigate on/off, to save having it process every frame.

I have been unsuccessful in getting it to run with docker-compose using the sample. I get the following whether I use docker-compose version 2 or 3 (it's complaining about the environment section).

ERROR: yaml.parser.ParserError: while parsing a block mapping
  in "./docker-compose.yaml", line 15, column 7
expected <block end>, but found '<scalar>'
  in "./docker-compose.yaml", line 19, column 15

this is the docker compose:


version: "2"
services:

  frigate_1:
    container_name: tensorflow_right
    restart: unless-stopped
    image: frigate:latest
    volumes:
      - /root/frigate/label_map.pbtext:/label_map.pbtext:ro
      - /root/frigate/config:/config:ro
      - /lab/debug:/lab/debug:rw
    ports:
      - "127.0.0.1:5000:5000"
    environment:
      RTSP_URL: "<RTSP URL>"
      REGIONS: "720,0,0,5000,1000,720RightTopMask.bmp:720,0,559,5000,1000,720RightBottomMask.bmp"
      MQTT_HOST: "<IP ADDRESS OF MY HOST>"
      MQTT_TOPIC_PREFIX: "cameras/1”
      DEBUG: "0"

If I try to use the standard formatting for the environment section that works in my other docker-compose files, I get this:

Creating tensorflow_right    ... done
tensorflow_right | Traceback (most recent call last):
tensorflow_right |   File "detect_objects.py", line 247, in <module>
tensorflow_right |     main()
tensorflow_right |   File "detect_objects.py", line 47, in main
tensorflow_right |     'size': int(region_parts[0]),
tensorflow_right | ValueError: invalid literal for int() with base 10: '"720'
tensorflow_right exited with code 1

This is the config I used to get that:

  frigate1:
    container_name: tensorflow_right
    restart: unless-stopped
    image: frigate:latest
    volumes:
      - /root/frigate/label_map.pbtext:/label_map.pbtext:ro
      - /root/frigate/config:/config:ro
      - /lab/debug:/lab/debug:rw
    ports:
      - "127.0.0.1:5000:5000"
    environment:
      - RTSP_URL="<MY RTSP URL>"
      - REGIONS="720,0,0,5000,1000,720RightTopMask.bmp:720,0,559,5000,1000,720RightBottomMask.bmp"
      - MQTT_HOST="<IP OF MY HOST>"
      - MQTT_TOPIC_PREFIX="cameras/1”
      - DEBUG="0"

All of this works fine from the command line but not under docker-compose…

Any ideas?

Did you just quote like that, or is it actually in the file?
The original error shows lines 15/19, which include the <>.

I don't see anything obvious. Are you running the latest versions of docker and compose? It seems like a YAML formatting issue, maybe tabs/spaces or something. What OS are you using this on?
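Two things worth checking, for what it's worth. The `invalid literal for int() with base 10: '"720'` error suggests the quotes in the list-style entries are being passed through as part of the values: in compose's list syntax, everything after the `=` is taken literally, so the values should not be wrapped in quotes. And in the block-style version, the `MQTT_TOPIC_PREFIX` line appears to end with a curly closing quote rather than a straight `"`, which could produce exactly the kind of parser error reported at line 19. A list-style environment section without the quotes would look like:

```yaml
    environment:
      # list syntax: no quotes around the values after '='
      - RTSP_URL=<MY RTSP URL>
      - REGIONS=720,0,0,5000,1000,720RightTopMask.bmp:720,0,559,5000,1000,720RightBottomMask.bmp
      - MQTT_HOST=<IP OF MY HOST>
      - MQTT_TOPIC_PREFIX=cameras/1
      - DEBUG=0
```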
