I created an image processing component that uses Darkflow to do object detection on camera feeds. I took inspiration from the existing OpenCV component and modified it with a few more bells and whistles. I would like to get this added to the main application, but right now Darkflow isn't on PyPI.
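To give a rough idea of what the component does under the hood, it's essentially calling Darkflow like this (the paths and threshold below are just placeholders, not the exact values in my code):

```python
# Minimal sketch of running a Darkflow detection on a single frame.
# cfg/weights paths and the threshold are placeholders.
import cv2
from darkflow.net.build import TFNet

options = {
    "model": "cfg/yolo.cfg",       # network definition
    "load": "bin/yolo.weights",    # pre-trained weights
    "threshold": 0.4,              # minimum confidence to report a match
}
tfnet = TFNet(options)

frame = cv2.imread("camera_snapshot.jpg")   # BGR numpy array
detections = tfnet.return_predict(frame)
# Each detection is a dict like:
# {"label": "person", "confidence": 0.87, "topleft": {...}, "bottomright": {...}}
for det in detections:
    print(det["label"], det["confidence"])
```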
I have it running on a Raspberry Pi 3 with a scan interval of 30 seconds and haven't noticed any performance issues. It could probably scan a bit more frequently, but I've been using it to detect when my dog is waiting at the back door to come in, and it seems to work fairly well.
Please try it out and let me know what you think. This is my first attempt at writing a component for Home Assistant.
Actually, earlier versions of my code did use OpenCV for the image capture. I ended up swapping it out for Pillow to reduce the number of difficult dependencies. I think when I installed it on the Raspberry Pi I ended up compiling from source, but on my test machines I used the package you linked. What sort of issues are you having?
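For reference, the Pillow side of it looks roughly like this (the function and variable names here are just illustrative, not the actual code in the component):

```python
# Illustrative only: decoding the camera image bytes with Pillow instead of
# OpenCV, then handing a numpy array to the detector.
import io
import numpy as np
from PIL import Image

def bytes_to_bgr(image_bytes):
    """Decode raw camera bytes to a BGR numpy array without needing cv2.imdecode."""
    img = Image.open(io.BytesIO(image_bytes)).convert("RGB")
    rgb = np.array(img)
    return rgb[:, :, ::-1]  # Darkflow/OpenCV-style models expect BGR channel order
```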
Thank you for your help. I downloaded all the files: I got the weight file (the full one) from Google Drive, and then I downloaded coco.names and the tiny-yolo-voc.cfg (Python 3 version) from thtrieu.
I then followed your instructions in the new link. I am running Ubuntu, so I installed OpenCV by running sudo apt-get install python-opencv. I then created the custom component and configured it to watch one of my cameras. Home Assistant is at least now starting and I can see the image-processing component. However, it is not matching anything. Is there something else that I need to do?
I was expecting to see something like "matches {person} total matches: 1".
What are you running this on? I have Tiny YOLO running on a Movidius compute stick and it's recognizing all objects, but I'm having trouble getting the results from the stick into Home Assistant.
Are you seeing any kind of error in the log about the component? I'm assuming you have HA running in a Python venv. If you installed OpenCV through apt, it probably didn't create a symbolic link in the HA venv, so the venv wouldn't know where to find the OpenCV library.
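One quick way to check is to run something like this with the venv's own Python interpreter (the interpreter path below is just an example, yours will differ):

```python
# Run with the venv's interpreter, e.g. /srv/homeassistant/bin/python3 check_cv2.py
try:
    import cv2
    print("OpenCV is visible in this environment:", cv2.__version__)
except ImportError:
    print("cv2 is not importable here - install it into the venv "
          "(e.g. pip install opencv-python) or symlink the apt package in.")
```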
The other thing I should mention is that if you're using the new opencv_darknet component, I wouldn't use the tiny-yolo-voc models. You can now use the YOLOv3-tiny model, which works a lot better. I added a link to where to find it in the other post. You can still use the same coco.names file.
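For context, that approach loads the model through OpenCV's dnn module; the general pattern looks something like this (this isn't the component's exact code, the file names are placeholders, and it needs a reasonably recent OpenCV build):

```python
# Sketch of loading a Darknet model with OpenCV's dnn module, to show which
# files (cfg, weights, names) go together. File names are placeholders.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
with open("coco.names") as f:
    class_names = [line.strip() for line in f if line.strip()]

frame = cv2.imread("camera_snapshot.jpg")
blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(416, 416),
                             swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())
# Each output row holds box coordinates followed by per-class scores;
# class_names[best_score_index] gives the label (e.g. "person").
```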
As an aside, it might be better to comment on issues with the new opencv_darknet component in the new thread I made, so that we don't mix up issues between the two different components.