Neural Network Human Presence Detection

Thanks so much for the quick response, I’ll give that a try and let you know how it turns out :smiley:

Make sure that when you run it, the working directory is the app's folder. For systemd services, you can specify the working directory in the unit file.
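
For example, a unit file along these lines (the paths and binary name here are placeholders for wherever you installed the app; WorkingDirectory is the important directive):

[Unit]
Description=Neural security system
After=network.target

[Service]
# Relative paths (models, config files) resolve against this directory
WorkingDirectory=/home/nicholas/neural_security_system
ExecStart=/home/nicholas/neural_security_system/neural_security_system <your usual arguments>
Restart=on-failure

[Install]
WantedBy=multi-user.target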

Gave that a whirl, but no luck. I also tried it on some UniFi cams, but it failed to log in. What cams have you previously used this on? Just want to make sure I'm not forcing a square peg into a round hole with this thing.

Hmmm, gotcha. RTSP streams are tricky to get working with OpenCV from what I’ve seen.

The app uses OpenCV to grab the frames from the camera, and it literally just passes whatever string you give it on the command line straight into the video capture object.
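
In other words, conceptually something like this (a minimal sketch of the idea, not the app's exact code):

#include <iostream>
#include <string>
#include <opencv2/opencv.hpp>

int main(int argc, char* argv[]) {
    // Whatever string is passed on the command line goes straight into
    // VideoCapture, so any source OpenCV can open (an MJPEG URL, an RTSP URL,
    // a local device path, etc.) should work in principle.
    std::string input = argc > 1 ? argv[1] : "";
    cv::VideoCapture cap(input);
    if (!cap.isOpened()) {
        std::cerr << "Failed to open video source: " << input << std::endl;
        return 1;
    }
    cv::Mat frame;
    while (cap.read(frame)) {
        // ... each frame would be handed off to the neural model here ...
    }
    return 0;
}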

With my setup, I have an RTSP stream, but it is getting sent to MotionEye for security footage. So I simply have MotionEye create an MJPEG stream from it and use that in the neural app.

It adds a bit of latency to the detection (there is already some latency in the RTSP stream, plus some more from MotionEye converting it to MJPEG), but it is minimal and everything still works just fine.

Bumping this to mention that I’m working on revamping this app to work with multiple video sources soon.

For my purposes, this has been working great for my single front door camera. However, seeing how well it's been working so far, I'm now looking to expand my security system with more cameras. Simply running multiple instances of this app, one per camera, wouldn't work.

The reason is that I use the Neural Compute Stick for inference (it puts far less load on the CPU), and the stick can only be used by a single program at a time. So, if I wanted to add the same human detection to my other cameras, I'd have to modify the app so that a single running program shares the Neural Compute Stick among several cameras.

There have been recent developments in OpenVINO that add a lot of automatic parallelism to the framework, including automatically distributing model inferences across multiple compute sticks. This should enable processing multiple video sources through the same neural model with much better performance than before.

What I will likely do is switch the app from taking everything as command line parameters to using a YAML file for the camera setup. This will let a user define all the options for each camera in a YAML file, and the app will just automatically fan out the inferences using OpenVINO.

You’d be able to define the MQTT topic for each camera separately, along with all the other configuration options like cropping, etc.

I should be receiving all the parts for the second camera soon, so be on the lookout for the next release of the app. It'll also bump the version of OpenVINO used to the latest.

Hey everyone, the multi-camera update that was promised is here. Currently it compiles with OpenVINO 2019 R3. I haven't confirmed it, but it should also compile with the latest 2019 R3.1. The instructions to compile everything are in the README of the repo here: https://github.com/AndBobsYourUncle/neural_security_system

You can see that there is now an extra command line argument: "-cameras PATH". It accepts the path to a YAML config file that gives the details for all the cameras you'd like to monitor.
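
For example (only the -cameras flag is new here; the placeholder stands in for the model, device, and MQTT flags documented in the README):

./neural_security_system -cameras /path/to/cameras.yaml <model, device, and MQTT flags per the README>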

In the repo, there is also a sample YAML file that looks like this:

cameras:
  - name: Front Door
    input: http://192.168.1.52:8081
    mqtt_topic: cameras/front_door/humans
    crop_top: 80
    crop_right: 150
    crop_bottom: 0
    crop_left: 0
  - name: Driveway
    input: http://192.168.1.52:8082
    mqtt_topic: cameras/driveway/humans
    crop_top: 0
    crop_right: 0
    crop_bottom: 0
    crop_left: 0

You can give each camera its own MQTT topic, as well as cropping values (to exclude unwanted areas from detection).

If you run the app without the -no_show argument, you will get a separate OpenCV window showing the output for each camera in the config.

I've also made a point of keeping this backwards compatible: if you do not provide a YAML file, the app falls back to the usual command line arguments for the camera source and MQTT topic.

Behind the scenes, this update essentially creates an array of async inference requests and starts inference on each one while moving on to the next camera in the queue. As a result, running this on even an Intel Neural Compute Stick 2 gives great framerates and performance.
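
A rough sketch of that pattern using the Inference Engine API (the names here are illustrative, not the app's actual code):

#include <string>
#include <vector>
#include <inference_engine.hpp>
#include <opencv2/opencv.hpp>

struct Camera {
    cv::VideoCapture capture;
    std::string mqtt_topic;
};

void run_inference_loop(InferenceEngine::ExecutableNetwork& network,
                        std::vector<Camera>& cameras) {
    // One async infer request per camera, created up front
    std::vector<InferenceEngine::InferRequest> requests;
    for (size_t i = 0; i < cameras.size(); ++i)
        requests.push_back(network.CreateInferRequest());

    while (true) {
        // Kick off inference for every camera without blocking in between...
        for (size_t i = 0; i < cameras.size(); ++i) {
            cv::Mat frame;
            if (!cameras[i].capture.read(frame)) continue;
            // (preprocessing omitted: resize the frame and fill the input blob)
            requests[i].StartAsync();
        }
        // ...then collect results once everything is in flight
        for (size_t i = 0; i < cameras.size(); ++i) {
            requests[i].Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
            // (postprocessing omitted: parse the YOLO output and publish
            //  the human-detected state to cameras[i].mqtt_topic)
        }
    }
}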

Merry Christmas! Hopefully this is as useful to some of you out there as it is to me. :slight_smile: :+1: :beers:

And here’s a demo showing the full multi-camera integration with triggering MotionEye recording, and playing notifications on a Google Home:

Great news!

I've had preliminary success building this entire thing into a Docker image. If you are curious what the Dockerfile looks like, you can view the dockerized branch on the repo:

This also conveniently serves as a “how-to” for building it without Docker.

You can run it like this to use just the CPU:

docker run \
  -v /home/nicholas/cameras.yaml:/usr/neural_security_system/cameras.yaml \
  -e CAMERAS="cameras.yaml" \
  -e MODEL="./models/tiny_yolov3/FP16/frozen_tiny_yolov3_model.xml" \
  -e DEVICE="CPU" \
  -e MQTT_USER="MQTT_USER" -e MQTT_PASSWORD="MQTT_PASSWORD" \
  -e MQTT_HOST="tcp://MQTT_HOST_IP:1883" andbobsyouruncle/neural_security_system

And then like this if you happen to have an Intel Neural Compute Stick 2 plugged into the host machine running Docker:

docker run --privileged -v /dev/bus/usb:/dev/bus/usb \
  -v /home/nicholas/cameras.yaml:/usr/neural_security_system/cameras.yaml \
  -e CAMERAS="cameras.yaml" \
  -e MODEL="./models/tiny_yolov3/FP16/frozen_tiny_yolov3_model.xml" \
  -e DEVICE="MYRIAD" \
  -e MQTT_USER="MQTT_USER" -e MQTT_PASSWORD="MQTT_PASSWORD" \
  -e MQTT_HOST="tcp://MQTT_HOST_IP:1883" andbobsyouruncle/neural_security_system

You must have a cameras.yaml file on the host machine, and it should look something like this:

cameras:
  - name: Front Door
    input: http://192.168.1.52:8081
    mqtt_topic: cameras/front_door/humans
    crop_top: 80
    crop_right: 150
    crop_bottom: 0
    crop_left: 0
  - name: Driveway
    input: http://192.168.1.52:8082
    mqtt_topic: cameras/driveway/humans
    crop_top: 0
    crop_right: 0
    crop_bottom: 0
    crop_left: 0

Save this file on the host; it is what gets mounted into the container by the -v flag in the commands above.

Also, if you’d like to mess with some of the other neural models provided in the image, you have all these available:

./models/tiny_yolov3/FP16/frozen_tiny_yolov3_model.xml
./models/tiny_yolov3/FP32/frozen_tiny_yolov3_model.xml
./models/yolov3/FP16/frozen_yolov3_model.xml
./models/yolov3/FP32/frozen_yolov3_model.xml

The "non-tiny" version is more accurate but takes more resources, and FP16 is faster than FP32 at the cost of some precision. Running on the Neural Compute Stick, I just use the tiny FP16 version and cap my camera streams at 10 FPS. That seems to work fine with two cameras.

Let me know if you have any questions. :beers:

Honestly, now that the Docker image is working well, the only real reasons to build it yourself are to see the visual output of the app in the Ubuntu Desktop environment, or to get the extra control afforded by cutting out Docker.

However, if you would still like to build the app yourself in a desktop environment, I have updated the master branch and added an “easy installer” that should help someone get set up and running with a fresh Ubuntu 18.04 LTS Desktop installation.

You can find it here: https://github.com/AndBobsYourUncle/neural_security_system#instructions-for-building

Essentially, you run the easy installer script on a fresh Ubuntu installation, passing it the OpenVINO download link and the non-root user you'd like to own the app, and it should set everything up with no issues.
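
Hypothetically, the invocation would look something like this (the script name and argument order are illustrative, not verbatim from the repo; the README linked above has the exact usage):

# Script name and arguments are illustrative -- check the README for the real usage
sudo ./easy_install.sh "<openvino_download_link>" nicholas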

The script was derived directly from the Dockerfile, and I just confirmed it worked on my own setup with the latest OpenVINO 2019 R3.1 and Ubuntu Desktop 18.04 LTS.

Upgraded the repo to use OpenVINO 2020.1, and I can confirm that everything now works with that version and the master branch of my repo.

Let me know if you experience any issues. :beers:

First off, this is awesome! Thank you so much for all the hard work. Would you mind sharing your automation to trigger MotionEye?

Hi, and congrats on your great work. I have one question. My Home Assistant is running in Docker on an RPi 4 with 2 GB RAM, on Raspbian Buster. I also have an NCS2. I want to run OpenVINO and all your good stuff on the same system as HA. Is that possible?

You can probably get away with using the Docker image I created for the repo; you'd just have to mount the USB device for the NCS2 into the Docker container.

When running on the NCS2, I typically don't see much CPU or memory being consumed, so it would probably work.

Hi,

I am using an Intel NCS2.

If I want to use the Docker version, what do I need to put instead of "MYRIAD" for the image to recognize the USB stick?

Thanks,

You would still use MYRIAD as the device, since that tells the OpenVINO SDK to use the NCS2 USB stick as the processing device. You just have to pass the USB device through to the Docker container (the --privileged -v /dev/bus/usb:/dev/bus/usb flags in the earlier example) so it can be recognized.

I’ve moved development of this over to a new repository, as well as an actual Home Assistant addon: