Object detection using OpenCV and Darknet/YOLO

@skalavala This is not working. So here is an example that we can all play with in the template editor. I manually set `value_json` to `state_attr('image_processing.tensorflow_driveway', 'matches')`:

{% set value_json = {'person': [{'score': 99.69417452812195, 'box': [0.2749924957752228, 0.3772248923778534, 0.6910392045974731, 0.4704430401325226]}], 'car': [{'score': 99.01034832000732, 'box': [0.34889374375343323, 0.21685060858726501, 0.23301419615745544, 0.3547678291797638]}, {'score': 99.01034832000732, 'box': [0.54889374375343323, 0.21685060858726501, 0.23301419615745544, 0.3547678291797638]}, {'score': 98.71577620506287, 'box': [0.14932020008563995, 0.3567427694797516, 0.22214098274707794, 0.4808700978755951]}]} %}

{% set camera = 'driveway' %}
{%- set tags = value_json.keys()|list -%}
{%- for object in tags -%}
{%- for x in value_json[object]|list if x.box[0] > 0.25 -%}
{%- if loop.first %}{% elif loop.last %}, {% else %},{% endif -%}{{ object }}
{%- endfor -%}
{% endfor -%}
{{- ' detected in ' ~ camera if tags|count |int > 0 }}

This returns `personcar, car detected in driveway`. It should instead return `person, car detected in driveway`. Further, if we try the same code with

{% set value_json = {'car': [{'score': 99.01034832000732, 'box': [0.14889374375343323, 0.21685060858726501, 0.23301419615745544, 0.3547678291797638]}, {'score': 98.71577620506287, 'box': [0.14932020008563995, 0.3567427694797516, 0.22214098274707794, 0.4808700978755951]}]} %}

we get ` detected in driveway` (the suffix still prints because `tags` is non-empty, even though no box passes the threshold). The second case should produce no output at all.

Yep - Fixed all of them here :slight_smile:

https://github.com/skalavala/smarthome/tree/master/jinja_helpers#23-parsing-imageprocessing-json-data-and-making-sense-for-your-automations

This code is much cleaner now with the macros. It removes any duplicates, honors the box-size criteria, and produces no output if nothing matches. The previous code didn't handle the case where more than one item in the array matches the criteria.
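For anyone who wants to see the intended behavior without opening the template editor, here is a rough Python sketch of the same rules (not skalavala's actual macros): list each object class once, honor the box threshold, and print nothing when no detection qualifies.

```python
def summarize(value_json, camera, x_min=0.25):
    """Summarize detections, e.g. 'person, car detected in driveway'.

    A class is listed once if at least one of its boxes passes the
    x_min threshold; an empty string is returned when nothing matches.
    """
    matched = []
    for obj, detections in value_json.items():
        if obj not in matched and any(d["box"][0] > x_min for d in detections):
            matched.append(obj)
    return f"{', '.join(matched)} detected in {camera}" if matched else ""

# Trimmed-down version of the first sample above: person passes, one car passes.
sample = {
    "person": [{"score": 99.7, "box": [0.27, 0.38, 0.69, 0.47]}],
    "car": [
        {"score": 99.0, "box": [0.35, 0.22, 0.23, 0.35]},
        {"score": 98.7, "box": [0.15, 0.36, 0.22, 0.48]},
    ],
}
print(summarize(sample, "driveway"))  # -> person, car detected in driveway

# Second sample: no box passes the threshold, so nothing is printed.
print(summarize({"car": [{"score": 99.0, "box": [0.15, 0.2, 0.2, 0.4]}]}, "driveway"))
```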

@flashoftheblades this is a great plugin. I have a couple of questions.

  1. Is there an easy way to get the coordinates for cropping the image? For instance, I have a camera in my front yard. I want to watch for a car in the driveway, not a car driving down the street.

  2. Is there a way to limit what it searches for? For instance, I am only interested in cars and people, but I have noticed it finds all sorts of things, like chairs. I am thinking that removing classes like chairs would make it take less time to process, and that to accomplish this I might have to create a custom model.

Thanks again for all your help

I’m glad you’re enjoying the plugin.

  1. If you import a frame from the camera into something like MS Paint, you should be able to read off the pixel coordinates.

  2. The only way to limit what is searched for by the model is to retrain the model. That’s beyond the scope of what I’m going to cover here, but there’s tutorials out there for training your own YOLOv3 model.
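To connect the pixel coordinates from step 1 back to the detector's output: the box values in the templates earlier in the thread are normalized to the 0–1 range, so a small helper can map them onto the image. This is a sketch only; it assumes a `[y_min, x_min, y_max, x_max]` box order, which is what the sample data appears to use, so verify against your component's output.

```python
def box_to_pixels(box, image_width, image_height):
    """Convert a normalized [y_min, x_min, y_max, x_max] box to pixel
    coordinates (left, top, right, bottom) for cropping or drawing.

    The box order is an assumption based on the sample data in this
    thread -- double-check it against your image_processing component.
    """
    y_min, x_min, y_max, x_max = box
    return (
        round(x_min * image_width),
        round(y_min * image_height),
        round(x_max * image_width),
        round(y_max * image_height),
    )

print(box_to_pixels([0.25, 0.50, 0.75, 1.00], 1920, 1080))  # -> (960, 270, 1920, 810)
```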

I am trying to think of a way to reduce the load on my server. I have a lot of cameras that I would like this to process (family farm). It is great, but it runs constantly on images that, for the most part, are not changing. It would be nice to have a motion sensor trigger it, so it only grabs and processes an image when motion is detected. I have been fooling around with a Python script that calls Darknet, gets the results, and saves them off to a variable. I was thinking I could make a call to the HA API to update the state attributes on an entity, but this seems clumsy. Any suggestions? On the other hand, it also generates the image with the boxes around the items, which is what everyone wants.
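For what it's worth, the "call the HA API" idea is workable: Home Assistant's REST API accepts a POST to `/api/states/<entity_id>` with a bearer token and a JSON body. Here is a minimal sketch of preparing such a request; the host, token, and entity name are placeholders, and actually sending it (via `urllib.request` or `requests`) is left out.

```python
import json

def build_state_update(base_url, token, entity_id, state, attributes):
    """Prepare a Home Assistant REST API state update.

    POST {base_url}/api/states/{entity_id} sets the entity's state and
    attributes. Returns (url, headers, body) ready to hand to an HTTP
    client; nothing is sent here.
    """
    url = f"{base_url}/api/states/{entity_id}"
    headers = {
        "Authorization": f"Bearer {token}",  # a long-lived access token
        "Content-Type": "application/json",
    }
    body = json.dumps({"state": state, "attributes": attributes})
    return url, headers, body

url, headers, body = build_state_update(
    "http://homeassistant.local:8123",   # placeholder host
    "LONG_LIVED_TOKEN",                  # placeholder token
    "sensor.darknet_driveway",           # hypothetical entity name
    "person",
    {"matches": {"person": [{"score": 99.7}]}},
)
```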

Not sure what you’re running for your server, but a Raspberry Pi is never going to be able to effectively process more than one or two cameras at a decent frame rate. Anything more and you really need something with a GPU. You can install OpenCV with CUDA/OpenCL support to accelerate the image processing. It sounds like you’re trying to make some kind of security system using Home Assistant and all the cameras. I ended up using a different method for my home setup (a combination of Zoneminder and some custom scripts to do the image recognition and triggering of the system).

If you do find a way to generate some kind of alert whenever the cameras sense motion, you can have Home Assistant kick off the `scan` service for the image processor to get an immediate result instead of waiting for the scan interval.
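As a sketch of that idea (the entity names are made up for illustration), an automation can call the `image_processing.scan` service when a motion sensor trips; combined with a very long `scan_interval` on the platform, the periodic scanning effectively stops:

```yaml
automation:
  - alias: "Scan driveway camera on motion"
    trigger:
      - platform: state
        entity_id: binary_sensor.driveway_motion  # hypothetical motion sensor
        to: "on"
    action:
      - service: image_processing.scan
        entity_id: image_processing.tensorflow_driveway
```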

If it helps: I have motionEye set up on my Raspberry Pi with motion detection enabled. Notifications are enabled as well, and I have set up webhook notifications to trigger scripts on my HA instance. So I created a couple of scripts, and HA takes snapshots from the camera and saves the files.
I have this set up for a completely different reason than this topic, but I think you can adapt it for your purpose.

That is how I am using it: an automation triggers image processing on motion, and only for the camera where the motion was detected. Check my repo for the relevant automations.

This is what I am doing with my MQTT approach. When a motion sensor triggers, my hacked Wyze camera takes a snapshot, which is then processed via my Movidius Neural Compute Stick. That has a VPU that can handle the object detection (granted, not nearly as well as an NVIDIA CUDA GPU, but then again it’s a fraction of the cost). My Python scripts process the image and report the detected class via MQTT to a sensor in HASS. I’m not interested in the image or bounding boxes, so I removed that portion. The image processing takes about 2 seconds, give or take.
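That pipeline can be sketched roughly like this; the topic layout and payload fields are illustrative, not the poster's actual scheme. The formatted payload would then go out through an MQTT client such as paho-mqtt, and an MQTT sensor in Home Assistant would pick the class out of the JSON with a value_template.

```python
import json
import time

def detection_message(camera, classes, topic_base="cameras"):
    """Format an object-detection result as an (MQTT topic, JSON payload)
    pair. Topic layout and field names are illustrative only."""
    topic = f"{topic_base}/{camera}/detection"
    payload = json.dumps({
        "classes": sorted(classes),     # e.g. ["car", "person"]
        "detected": len(classes) > 0,
        "timestamp": int(time.time()),
    })
    return topic, payload

topic, payload = detection_message("driveway", {"person", "car"})
print(topic)  # -> cameras/driveway/detection
```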

Seems the git repo was taken down… has anyone worked on this further? I would be interested in playing with it and contributing.

Thanks

@flashoftheblades can you fix the manifest file to trigger opencv installation?

I think the reason I didn’t do it originally is that unless you’re running a Raspberry Pi, the binaries available through pip install are not going to be optimized for your system (and you really want them to be for good performance). This is especially true if you want to enable some of the more advanced optimizations like GPU or Movidius compute stick support.

I think we are not talking about the same thing. I just want you to add the opencv installation to the requirements wheels, so we don’t need to add another `- platform: opencv` entry just to get a working opencv library for opencv_darknet. Without `- platform: opencv`, every update deletes the opencv library. I am on an Intel NUC.

So, I think I have it updated to what you’re requesting, in the dev branch of the repo. I set the dependencies in the manifest to be the same as the main OpenCV component. Part of the issue is that I already upgraded the Python install on my HA setup (installed on Raspberry Pi), and it doesn’t look like piwheels has a 3.7 build of OpenCV 4.1.0.25 yet, so I can’t entirely test it. Let me know if it works for you, and then I can promote it to the master branch.
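For reference, a manifest along these lines is what's being described. The `requirements` entry here is an assumption (the exact package name and pin should match whatever the core opencv component requires for your HA version; 4.1.0.25 is the version mentioned above):

```json
{
  "domain": "opencv_darknet",
  "name": "OpenCV Darknet",
  "dependencies": [],
  "codeowners": [],
  "requirements": ["opencv-python-headless==4.1.0.25"]
}
```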


Uff, that was fast! Thank you very much! I’ll have to wait for the next update and then test it :slight_smile: I think 0.96.2 will be out soon, and then I will report back!

@flashoftheblades Update: the fix is working!!! Thank you very much! I updated to 0.96.2 (which deleted the opencv library), added your fix, and all is good.

You can push to master

I’ve been working on integrating the dev branch into my Docker install of HA, v0.103.6.
I have /custom_components/opencv_darknet/ with the 3 files, but my config checker keeps telling me this:
Platform error image_processing.opencv_darknet - Integration ‘opencv_darknet’ not found.

Am I missing something about how to use custom_components? Using Portainer, I have OpenCV 4.1.0 and Darknet working. Everything is within /config.

Do I need to restart HA in order for it to recognize the component?

Maybe some people on this thread would have tips or an interest in my OpenCV timelapse project? Timelapse Video with Computer Vision: make timelapses with bounding boxes of detected objects using your own trained OpenCV model!

Hi

Are there any updates on YOLO object detection in HASS? For example, newer code using a newer YOLO version?

Hi Everyone,

Does anyone have this currently working? I have installed it as described on the GitHub page and set it up as per the example (see photo).

It is set up with a custom model (a spaghetti detector for a 3D printer), and I’m seeing no errors, but I am also not seeing anything in Home Assistant. Is there an entity or anything that the information gets attached to?

Any help would be really appreciated.