Neural Network Human Presence Detection

Hey everyone!

I’d like to share a cool project I’ve been working on with all of you. It’s a C++ program that uses the Intel OpenVINO toolkit to run a webcam feed through a neural network. To use it, you’d have to install the toolkit in a VM, and most likely create a systemd service to run the program.
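For the systemd part, a minimal sketch of a unit file could look like the following. The path, user, and unit name here are placeholders for illustration, not my actual setup:

```ini
# /etc/systemd/system/neural_security_system.service (hypothetical path)
[Unit]
Description=Neural network human presence detection
After=network-online.target

[Service]
# Placeholder: point ExecStart at wherever you built the binary,
# with whatever model/MQTT arguments you use
ExecStart=/home/user/neural_security_system/start.sh
Restart=on-failure
User=user

[Install]
WantedBy=multi-user.target
```

Then enable it with `sudo systemctl enable --now neural_security_system`.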

I now have it successfully scanning a security camera for people and publishing to an MQTT topic when people are detected. An automation then triggers MotionEye to start or stop recording based on that binary sensor.
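For reference, the Home Assistant side can be sketched roughly like this. The topic matches the one I publish to; the payload values are assumptions, so match them to whatever your build of the program actually publishes:

```yaml
# configuration.yaml (sketch; payload values are placeholders)
binary_sensor:
  - platform: mqtt
    name: "Inside camera humans"
    state_topic: "cameras/inside/humans"
    payload_on: "ON"
    payload_off: "OFF"
    device_class: motion
```

An automation can then watch that binary sensor and tell MotionEye to start or stop recording.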

The best part of it all is that OpenVINO can use the Intel Neural Compute Stick to run video through a neural network a lot more efficiently than a CPU.


Here’s a video demo:


Great tool, I wish to try it; unfortunately I can’t download the software from the Intel website.

EDIT: if you have Pi-hole you need to disable it, then you can register and download.

I got this error:

ubuntu:~/paho.mqtt.c/paho.mqtt.cpp$ cmake -Bbuild -H. -DPAHO_WITH_SSL=ON -DPAHO_BUILD_SHARED=ON
-- The CXX compiler identification is unknown
CMake Error at CMakeLists.txt:31 (project):
  No CMAKE_CXX_COMPILER could be found.

  Tell CMake where to find the compiler by setting either the environment
  variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
  to the compiler, or to the compiler name if it is in the PATH.

-- Configuring incomplete, errors occurred!
See also "/home/nuc/paho.mqtt.c/paho.mqtt.cpp/build/CMakeFiles/CMakeOutput.log".
See also "/home/nuc/paho.mqtt.c/paho.mqtt.cpp/build/CMakeFiles/CMakeError.log".

Great job! I won’t be able to use it (no cameras), but I want to thank you for the in-depth video showing how it works, and also why you’re using it rather than just motion detection (shadows).

Have you tried compiling the Paho client only after installing OpenVINO? I believe you’re going to need all the packages that get installed as part of that process, up to the point where you’ve built the samples and confirmed OpenVINO to be working. That process installs CMake, among others.
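That said, the specific error above just means CMake couldn’t find any C++ compiler at all. If you want to rule that out on its own (assuming you’re on Ubuntu), the build-essential package provides g++ and make:

```shell
# Install the GNU C++ toolchain, then retry the configure step
sudo apt-get update
sudo apt-get install -y build-essential
cmake -Bbuild -H. -DPAHO_WITH_SSL=ON -DPAHO_BUILD_SHARED=ON
```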

Are you also using Ubuntu 16.04 as recommended by Intel?

Per my repo’s instructions, if you end up running “./” as a regular user (not sudo), it will all end up in “~/intel” and not “/opt/intel”.

Make sure you’ve added the “setupvars” line to your .bashrc and logged back in to your user to initialize the environment variables.
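For reference, the line I mean is something like this. The path below is for the 2018-era releases that install under computer_vision_sdk; adjust it if yours went to ~/intel or a different version directory:

```shell
# Append the OpenVINO environment setup to ~/.bashrc so every new
# login shell gets the toolkit's environment variables
echo 'source /opt/intel/computer_vision_sdk/bin/setupvars.sh' >> ~/.bashrc
```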

Hi, that error came up because I hadn’t installed everything beforehand, sorry about that. Now I’m stuck here. I ran

sudo ./install.sh

as a regular user (I think), and it exited without any error.

But there is no /opt/intel/openvino directory, so the command

source /opt/intel/openvino/bin/

does not work (and there is no openvino directory under /home/myusername/intel either).
I can’t check further right now, since my internet is down.

Are you using the desktop version of Ubuntu 16.04?

When you run the installer for OpenVINO, use the command “./” without sudo.

It should bring up the installation window.

I have Ubuntu 18.04.1 LTS, but I connect to it mainly via SSH (it’s headless at the moment), which is why I was using the non-GUI version. If a GUI is mandatory I could attach it to a monitor (I failed to install vncserver some time ago, I don’t remember why).

Ah yes, that might be it then. OpenVINO isn’t confirmed to be compatible with anything except Ubuntu Desktop 16.04 LTS.

For anyone interested, there is this; I will try it later when my internet comes back.

After a bit of persuasion, I’ve managed to install and build everything.

My command is:

./neural_security_system -i -m ./models/yolov3/FP32/frozen_yolov3_model.xml -d CPU -t 0.2 -u [REDACTED] -p [REDACTED] -tp cameras/inside/humans -no_image -mh tcp://

I’m getting the following output:

[ INFO ] Parsing input parameters
MQTT Username: CPU
Connecting to server 'tcp://'...OK

	API version ............ 1.4
	Build .................. 19154
[ INFO ] Reading input
[ INFO ] Loading plugin

	API version ............ 1.5
	Build .................. lnx_20181004
	Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Batch size is forced to  1.
[ INFO ] Checking that the inputs are as the demo expects
[ INFO ] Checking that the outputs are as the demo expects
[ INFO ] Loading model to the plugin
Illegal instruction (core dumped)

If I try FP16 (with either Yolo or Tiny Yolo) I get this:

[ INFO ] Parsing input parameters
MQTT Username: CPU
Connecting to server 'tcp://'...OK

	API version ............ 1.4
	Build .................. 19154
[ INFO ] Reading input
[ INFO ] Loading plugin

	API version ............ 1.5
	Build .................. lnx_20181004
	Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Batch size is forced to  1.
[ INFO ] Checking that the inputs are as the demo expects
[ INFO ] Checking that the outputs are as the demo expects
[ INFO ] Loading model to the plugin
[ ERROR ] The plugin does not support FP16

Do you know what the cause may be, or how I can enable any additional logging?
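One thing I wondered about the “Illegal instruction”: I understand the CPU plugin’s extension library can be built assuming SIMD instructions (e.g. AVX2) that the processor doesn’t actually have, so checking what my CPU advertises seemed worth a try:

```shell
# List the SIMD instruction sets this CPU supports; a library built
# for avx2 will crash with SIGILL on a CPU that only has sse4_2
grep -o -w 'sse4_2\|avx\|avx2' /proc/cpuinfo | sort -u
```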

Also, just to note (it may only be me): in the README for building the models, there seem to be some extra spaces after the end-of-line “\”s, which breaks multi-line copy & paste.

What do you suggest as hardware for 4 cameras (maybe 4 more later, for a total of 8)? Do you use a stick with your NUC?

Currently I have an i3 NUC: I’m trying Facebox (on 1 camera) and DeepStack on 4 cameras. But they aren’t totally accurate and are slow to trigger presence.

The cameras’ built-in motion detection is not effective because it has a lot of false positives (shadows, birds, and so on).

P.S. In your video you can see a person walking in the top corner: was that person detected?


Hi, how did you manage to build the YOLOv3 FP32 version?

If I run the command:

python3 ~/intel/computer_vision_sdk/deployment_tools/model_optimizer/ --input_model frozen_yolov3_model.pb --tensorflow_use_custom_operations_config yolo_v3_changed.json --input_shape [1,416,416,3]

I get the following error:

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/chuchete/Downloads/tensorflow-yolo-v3/frozen_yolov3_model.pb
- Path for generated IR: /home/chuchete/Downloads/tensorflow-yolo-v3/.
- IR output name: frozen_yolov3_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,416,416,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: /home/chuchete/Downloads/tensorflow-yolo-v3/yolo_v3_changed.json
Model Optimizer version:
[ ERROR ] List of operations that cannot be converted to IE IR:
[ ERROR ] LeakyRelu (72)
[ ERROR ] detector/darknet-53/Conv/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_1/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_2/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_3/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_4/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_5/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_6/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_7/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_8/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_9/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_10/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_11/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_12/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_13/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_14/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_15/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_16/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_17/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_18/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_19/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_20/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_21/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_22/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_23/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_24/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_25/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_26/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_27/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_28/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_29/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_30/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_31/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_32/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_33/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_34/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_35/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_36/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_37/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_38/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_39/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_40/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_41/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_42/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_43/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_44/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_45/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_46/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_47/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_48/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_49/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_50/LeakyRelu
[ ERROR ] detector/darknet-53/Conv_51/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_1/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_2/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_3/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_4/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_7/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_8/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_9/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_10/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_11/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_12/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_13/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_15/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_16/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_17/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_18/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_19/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_20/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_21/LeakyRelu
[ ERROR ] detector/yolo-v3/Conv_5/LeakyRelu
[ ERROR ] Part of the nodes was not translated to IE. Stopped.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #24.

EDIT1: I finally got it running after downgrading TensorFlow:
pip3 install tensorflow==1.12.0 --upgrade

This is a known issue with TensorFlow 1.13.1. Downgrade to 1.12.0 with the following command, and then start the model build again from the convert_weights_pb step:

pip3 install tensorflow==1.12.0 --upgrade

If I use the model ./models/tiny_yolov3/FP32/frozen_tiny_yolov3_model.xml, I get a segmentation fault. Using the model ./models/yolov3/FP32/frozen_yolov3_model.xml, it works OK.

Have you successfully installed all prerequisites and run the samples from the OpenVINO installation instructions? If so, you must have a copy of built somewhere. Have you copied that into my repo’s cloned lib folder?

I do use a Neural Compute Stick 2. I have gotten great accuracy with the Tiny YOLOv3 model so far. For instance, I had to crop the very top of the camera feed off because it was detecting people walking on the other side of the street (which looked like blurry blobs).

As far as FPS goes, I’m getting upwards of almost 30 FPS with the compute stick. You’d likely get about the same with the CPU. The main reason I use the compute stick is that it doesn’t make my Intel NUC ramp its CPU fan up constantly (100% CPU utilization of the VM’s CPU).

If you’re going to go the route of many cameras, I’d go for the compute sticks. You might need a few of them. My next step for this project is to add my backyard camera and see about parallelization within my C++ application.

Have you copied your own file after running OpenVINO’s samples?

If so, could you show your output?

This looks very interesting. Is all the processing off-loaded to the compute stick? I have a fairly low-powered Celeron server with 4 GB of RAM. For HA, Node-RED etc. it’s fine, but I’d love to do something like this and it wouldn’t have the grunt.