Following on from the release of Facebox, I’ve just published a custom component for performing image classification using Classificationbox. You can follow the guide below to create a classifier, and then use it to create a sensor in HA which displays the most likely classification of images from a camera feed.
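For reference, the HA configuration ends up looking roughly like this (the host, port and camera entity are just examples, and the exact option names may differ, so check the README in the repo):

```yaml
# Example image_processing config (option names are illustrative,
# see the component README for the authoritative ones)
image_processing:
  - platform: classificationbox
    ip_address: localhost   # host running the Classificationbox container
    port: 8080              # default Machinebox port
    source:
      - entity_id: camera.local_file
```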
I created a classifier to determine whether or not I had a bird in an image, trained with 1000 images (manually sorted by placing them in 2 folders). Using the training script I determined that my classifier has an accuracy of around 90%, which I find very impressive since some of the images I trained on were very hard to classify. Other use cases for this component could include:
Determining if a garage door is open/closed
Determining if a pet is present/not-present
Determining what object is present in a camera feed
This component is a WIP, so any feedback is appreciated.
Cheers
This is great. I have been wanting to install a Pi Zero with a camera in the fridge, and this will tie in nicely to determine if we need milk etc. Thanks for this.
@Maaniac Hopefully Machinebox will publish a blog post on that topic (tagbox vs classificationbox). But in summary, my crude understanding is that tagbox is for ‘one-shot learning’, where you have very little data. Classificationbox requires more data but can achieve higher accuracy. Therefore you should try both for your particular application.
I’ve added instructions on how I’m using this component in my hassio config repo. Now I get Pushbullet photo notifications when the probability of a successful classification is above 80%. Capturing some funky images!
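Roughly, the automation looks like this; the event type, data keys and notify service name here are illustrative rather than the component’s documented names, so check the repo for the exact ones:

```yaml
# Illustrative automation: the event type, data keys and file paths
# below are assumptions, not the component's documented names.
automation:
  - alias: Pushbullet photo on confident classification
    trigger:
      - platform: event
        event_type: image_processing.image_classification  # assumed event name
    condition:
      - condition: template
        value_template: "{{ trigger.event.data.score | float > 0.8 }}"  # assumed data key
    action:
      # Save the current camera frame so it can be attached to the notification
      - service: camera.snapshot
        data:
          entity_id: camera.local_file
          filename: /config/www/classificationbox_latest.jpg
      - service: notify.pushbullet
        data:
          title: "Classification above 80%"
          message: "{{ trigger.event.data.classification }} ({{ trigger.event.data.score }})"
          data:
            file: /config/www/classificationbox_latest.jpg
```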
I’m looking for something to recognize my cats on my cameras. Ideally I would like to know if they are moving in or out of the door, but to begin with I can just take pictures when they are in frame. But the camera is quite high up from the floor. Do you think this project would work for that?
Hi, yes it’s the same basic problem as my bird/not_bird classifier. You might want to experiment with pre-processing (in this case cropping) the image if the cat is only in a small part of the frame.
Alright, I will try this out then! Btw, I wish I’d read through this before I tried installing Motion manually before my vacation some weeks ago. Didn’t know there was an addon!
Hi @masterkenobi I believe you can only expose a single model per classificationbox container, so you should spin up a separate container instance for each model you want to use and just assign each one a unique port (for example with docker-compose, as sketched below). Cheers
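Something like this, where the service names are just examples and MB_KEY is your Machinebox key:

```yaml
# Two independent Classificationbox instances, one model each,
# published on different host ports (8080 and 8081).
version: "3"
services:
  classificationbox_birds:
    image: machinebox/classificationbox
    ports:
      - "8080:8080"
    environment:
      - MB_KEY=${MB_KEY}
  classificationbox_cats:
    image: machinebox/classificationbox
    ports:
      - "8081:8080"
    environment:
      - MB_KEY=${MB_KEY}
```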
I tried using tagbox to determine if my two backyard gates are open or closed. I was unsuccessful.
I tried:
Images up close (cell phone) and from the security camera
Training images using one gate or both gates in a single instance
The results were best when I trained a single gate per instance, using images from the security camera combined with some pictures of the gate from my phone in the training sets. In the end the gate was classified correctly in only some positions, and the margins were always very close, as you can see in the picture.
It would be interesting to see the two images side by side. You could try making the differences more obvious, e.g. by putting a marker on one side of the gate so the model is trained to recognise that.
Sorry then.
I can’t find anything about that in the docs.
The problem I’m facing is that every time I restart the Docker container, all my models are erased.