I’d like to share this cool project I’ve been working on with all of you. It’s a C++ program that uses the Intel OpenVINO toolkit to run a webcam feed through a neural network. To use it, you’d have to install the toolkit in a VM, and most likely create a systemd service to run the program.
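For reference, a minimal systemd unit could look something like this (the paths and names are placeholders, not from the actual project, so adapt them to wherever you install the binary):

    [Unit]
    Description=OpenVINO person detection (example)
    After=network-online.target

    [Service]
    # placeholder path; point this at wherever you installed the binary
    ExecStart=/opt/person-detect/person-detect
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target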
I now have it successfully scanning a security camera for people and publishing to an MQTT topic when people are detected. An automation then triggers MotionEye to start or stop recording based on that binary sensor.
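If you want to watch the detections as they come in, you can subscribe to the topic with something like this (the topic name here is only a placeholder, since it depends on how you configure the program; the broker address matches my logs further down):

    mosquitto_sub -h 192.168.1.71 -t 'cameras/front/person_detected' -v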
The best part of it all is that OpenVINO can use the Intel Neural Compute Stick to process video through a neural network a lot more efficiently than a CPU.
EDIT: if you have Pi-hole you need to disable it, then you can register and download.
Got this error:
ubuntu:~/paho.mqtt.c/paho.mqtt.cpp$ cmake -Bbuild -H. -DPAHO_WITH_SSL=ON -DPAHO_BUILD_SHARED=ON
-- The CXX compiler identification is unknown
CMake Error at CMakeLists.txt:31 (project):
No CMAKE_CXX_COMPILER could be found.
Tell CMake where to find the compiler by setting either the environment
variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
to the compiler, or to the compiler name if it is in the PATH.
-- Configuring incomplete, errors occurred!
See also "/home/nuc/paho.mqtt.c/paho.mqtt.cpp/build/CMakeFiles/CMakeOutput.log".
See also "/home/nuc/paho.mqtt.c/paho.mqtt.cpp/build/CMakeFiles/CMakeError.log".
Great job! I won’t be able to use it (no cameras), but I want to thank you for the in-depth video showing not only how it works but also why you’re using it rather than simple motion detection (which gets fooled by shadows).
Have you tried compiling the paho client only after installing OpenVINO? I believe you’re going to need all the packages that get installed as part of that process up until the point that you’ve built the samples and confirmed OpenVINO to be working. That process installs CMake, among others.
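That said, the immediate cause of that error is simply a missing C++ compiler. On Ubuntu, installing the toolchain (plus CMake and the OpenSSL headers, since you’re configuring with PAHO_WITH_SSL=ON) should get past it:

    sudo apt update
    sudo apt install -y build-essential cmake libssl-dev
    cmake -Bbuild -H. -DPAHO_WITH_SSL=ON -DPAHO_BUILD_SHARED=ON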
Are you also using Ubuntu 16.04 as recommended by Intel?
I have Ubuntu 18.04.1 LTS. But I connect to it mainly via SSH (it’s headless at the moment), which is why I was using the non-GUI version. If mandatory, I can attach it to a monitor (I failed to install vncserver some time ago, I don’t remember why).
[ INFO ] Parsing input parameters
MQTT Username: CPU
Connecting to server 'tcp://192.168.1.71:1883'...OK
InferenceEngine:
API version ............ 1.4
Build .................. 19154
[ INFO ] Reading input
[ INFO ] Loading plugin
API version ............ 1.5
Build .................. lnx_20181004
Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Batch size is forced to 1.
[ INFO ] Checking that the inputs are as the demo expects
[ INFO ] Checking that the outputs are as the demo expects
[ INFO ] Loading model to the plugin
Illegal instruction (core dumped)
If I try FP16 (with either Yolo or Tiny Yolo) I get this:
[ INFO ] Parsing input parameters
MQTT Username: CPU
Connecting to server 'tcp://192.168.1.71:1883'...OK
InferenceEngine:
API version ............ 1.4
Build .................. 19154
[ INFO ] Reading input
[ INFO ] Loading plugin
API version ............ 1.5
Build .................. lnx_20181004
Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Batch size is forced to 1.
[ INFO ] Checking that the inputs are as the demo expects
[ INFO ] Checking that the outputs are as the demo expects
[ INFO ] Loading model to the plugin
[ ERROR ] The plugin does not support FP16
/teamcity/work/scoring_engine_build/releases_2018_R5/src/mkldnn_plugin/mkldnn_graph.cpp:310
~/intel/computer_vision_sdk/deployment_tools/inference_engine/include/details/ie_exception_conversion.hpp:71
Do you know what the cause may be, or how I can output any additional logging?
Also, just to note (it may only be me): in the README for building the models, there seem to be some additional spaces after the end-of-line backslashes, which breaks multi-line copy & paste.
This is a known issue with TensorFlow 1.13.1. Downgrade to 1.12.0 with the following command, and then start the model build again from the convert_weights_pb line
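Assuming TensorFlow was installed with pip, the downgrade would look something like this (adjust if you installed it another way, e.g. in a virtualenv):

    pip3 install tensorflow==1.12.0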
If I use the model ./models/tiny_yolov3/FP32/frozen_tiny_yolov3_model.xml I get a segmentation fault. With the model ./models/yolov3/FP32/frozen_yolov3_model.xml it works OK.
Have you successfully installed all prerequisites and run the samples from the OpenVINO installation instructions? If so, you must have a copy of libcpu_extension.so built somewhere. Have you copied that into the lib folder of my cloned repo?
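If you have, something like this should locate the library and drop it into place (the sample build path is the default used by the 2018 releases and the repo path is a placeholder, so adjust both to your setup):

    find ~ -name 'libcpu_extension*' 2>/dev/null
    cp ~/inference_engine_samples_build/intel64/Release/lib/libcpu_extension.so <cloned-repo>/lib/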
I do use a Neural Compute Stick 2. I have gotten great accuracy with the Tiny YOLOv3 model so far. For instance, I had to crop the very top of the camera feed off because it was detecting people walking on the other side of the street (which looked like blurry blobs).
As far as FPS goes, I’m getting close to 30 FPS with the compute stick. You’d likely get about the same with the CPU. The main reason I use the compute stick is that it doesn’t make my Intel NUC ramp its CPU fan up constantly (the VM’s CPU sits at 100% utilization otherwise).
If you’re going to go the route of many cameras, I’d go for the compute sticks. You might need a few of them. My next step for this project is to add my backyard camera and see about parallelization within my C++ application.
This looks very interesting. Is all the processing off-loaded to the compute stick? I have a fairly low-powered Celeron server with 4 GB RAM. For HA, Node-RED, etc. it’s fine, but I’d love to do something like this and it wouldn’t have the grunt.