Image Classification with Docker/Machinebox

Following on from the release of Facebox, I’ve just published a custom component for performing image classification using Classificationbox. You can follow the guide below to create a classifier, and then use this to create a sensor in HA which displays the most likely classification of images from a camera feed.

I created a classifier to determine whether or not I had a bird in an image, trained with 1000 images (manually sorted into 2 folders). Using the training script (a rough sketch of the teaching step follows the list below) I determined that my classifier has an accuracy of around 90%, which I find very impressive since some of the images I trained on were very hard to classify. Other use cases for this component could include:

  1. Determining if a garage door is open/closed
  2. Determining if a pet is present/not-present
  3. Determining what object is present in a camera feed
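
In case it helps, here is a rough Python sketch of the teaching step from two folders of sorted images. The endpoint path and payload fields are from my reading of the Machinebox docs, and the model id and folder names are just placeholders, so check the current docs before relying on it:

```python
import base64
import os

import requests

CB_URL = "http://localhost:8080/classificationbox"  # assumes Classificationbox on the default port
MODEL_ID = "birds_model"  # placeholder model id

# Folder name -> class name, matching the "two folders" layout described above
FOLDERS = {"birds": "birds", "not_birds": "not_birds"}

def teach_image(path, class_name):
    """POST a single base64-encoded image to the teach endpoint."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode()
    payload = {
        "class": class_name,
        "inputs": [{"key": "image", "type": "image_base64", "value": encoded}],
    }
    resp = requests.post(f"{CB_URL}/models/{MODEL_ID}/teach", json=payload)
    resp.raise_for_status()

for folder, class_name in FOLDERS.items():
    for filename in os.listdir(folder):
        if filename.lower().endswith((".jpg", ".jpeg", ".png")):
            teach_image(os.path.join(folder, filename), class_name)
```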

This component is a WIP, so any feedback is appreciated.
Cheers


My presentation at PyLondinium on the bird classification project is here: https://github.com/home-assistant/home-assistant-assets/blob/master/english/2018-pyLondinium/pylondinium%20Robin%20final.pdf


This is great. I have been wanting to install a Pi Zero with a camera in the fridge, and this will tie in nicely to determine if we need milk etc. Thanks for this.


How does this compare with Tagbox?

@Maaniac Hopefully Machinebox will publish a blog post on that topic (tagbox vs classificationbox). But in summary, my crude understanding is that tagbox is for ‘one-shot learning’, where you have very little data. Classificationbox requires more data but can achieve higher accuracy. Therefore you should try both for your particular application.

I’ve added instructions on how I’m using this component in my hassio config repo. Now I get Pushbullet photo notifications when the probability of a successful classification is above 80%. Capturing some funky images!
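
For context, the 80% check is just a threshold on the score Classificationbox returns for the top class. A minimal sketch of a predict call with that threshold (again assuming the endpoint and payload shape from the Machinebox docs, with placeholder names):

```python
import base64

import requests

CB_URL = "http://localhost:8080/classificationbox"  # assumed default port
MODEL_ID = "birds_model"  # placeholder model id
THRESHOLD = 0.8

def classify(path):
    """Return (class id, score) for the top class from the predict endpoint."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode()
    payload = {"inputs": [{"key": "image", "type": "image_base64", "value": encoded}]}
    resp = requests.post(f"{CB_URL}/models/{MODEL_ID}/predict", json=payload)
    resp.raise_for_status()
    top = resp.json()["classes"][0]  # highest-scoring class first, as I understand the response
    return top["id"], top["score"]

class_id, score = classify("latest_snapshot.jpg")
if class_id == "birds" and score > THRESHOLD:
    print(f"Bird detected with score {score:.2f} - fire the notification")
```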

On Classificationbox vs. Tagbox:

Classificationbox = can give you more accuracy and lower recall (fewer false positives), but you have to have more examples (starting at around 100 per class)

Tagbox = ideal when you have only a few examples (1 to tens) or you want to find visual similarity

I’ve written up my Home Assistant project with Classificationbox here; any comments, let me know!


Article now on Hackster:


I’m looking for something to recognize my cats on my cameras. Ideally I would like to know if they are moving in or out of the door, but to begin with I can just take pictures when they are in frame. But the camera is quite high up from the floor. Do you think this project would work for that?

Hi, yes, it’s the same basic problem as my bird/not_bird classifier. You might want to experiment with pre-processing (in this case cropping) the image if the cat only occupies a small part of the frame.
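
For example, a minimal cropping sketch with Pillow; the crop box values are placeholders you would tune to where the cat appears in your frame:

```python
from PIL import Image

# Crop the snapshot down to the region of interest before classifying.
# The box is (left, upper, right, lower) in pixels - placeholder values, tune to your camera.
CROP_BOX = (400, 600, 900, 1080)

img = Image.open("snapshot.jpg")
cropped = img.crop(CROP_BOX)
cropped.save("snapshot_cropped.jpg")  # send this file to the classifier instead of the full frame
```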

Alright, I will try this out then! Btw, I wish I’d read through this before I tried installing Motion manually before my vacation some weeks ago. Didn’t know there was an add-on!


@robmarkcole,

I have successfully applied Classificationbox to my kitchen camera to determine whether the kitchen is occupied or not.

I would like to do the same for my other rooms such as dining and living room.

My question is: do I need to install a separate Classificationbox container for each camera? Or can one container be used for multiple cameras?

Hi @masterkenobi, I believe you can only expose a single model per Classificationbox container, so you should spin up a separate container instance for each model you want to use and just assign each a unique port. Cheers
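
To illustrate, a small sketch of what polling several single-model containers could look like, each on its own port. Room names, ports, model ids and snapshot paths are all hypothetical, and the predict payload follows the same assumed shape as above:

```python
import base64

import requests

# One Classificationbox container per model, each published on its own host port.
ROOMS = {
    "kitchen": (8080, "kitchen_occupancy"),
    "dining": (8081, "dining_occupancy"),
    "living_room": (8082, "living_occupancy"),
}

for room, (port, model_id) in ROOMS.items():
    with open(f"{room}_snapshot.jpg", "rb") as f:
        encoded = base64.b64encode(f.read()).decode()
    payload = {"inputs": [{"key": "image", "type": "image_base64", "value": encoded}]}
    url = f"http://localhost:{port}/classificationbox/models/{model_id}/predict"
    top = requests.post(url, json=payload).json()["classes"][0]
    print(f"{room}: {top['id']} ({top['score']:.2f})")
```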


I tried using tagbox to determine if my two backyard gates are open or closed. I was unsuccessful.

I tried:

  • Images up close (cell phone) and from the security camera
  • Training images using one gate or both gates in a single instance

The results were best when I tried a single gate in one instance, using images from the security camera in the training sets combined with some pictures of the gate from my phone. In the end the gate state was determined correctly in only some positions, and the margins were always very close, as you can see in the picture.

Interesting experiment. Anyone else?

It would be interesting to see the two images side by side. You could try making the differences more obvious, e.g. by putting a marker on one side of the gate so the model is trained to recognise that.

Hi. If I understand this correctly, with a free license, the model is erased every time the container is restarted.

Is there any way around this? It is quite annoying having to teach my model every time I restart my server.

That’s incorrect; please read the Classificationbox docs.

Sorry then.
I can’t find anything about that in the docs.
The problem I’m facing is that every time I restart the Docker container, all my models are erased.

They updated their docs since they were bought by Veritone, but I think it should describe downloading the model file.
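
For what it’s worth, here is a rough sketch of that save/restore round trip as I understand it; the state endpoint paths are from memory of the Machinebox docs, so double-check them there:

```python
import requests

CB_URL = "http://localhost:8080/classificationbox"  # assumed default port
MODEL_ID = "birds_model"  # placeholder model id
STATE_FILE = "birds_model.classificationbox"

# Download the trained model state so it survives a container restart
# (endpoint path as I recall it from the docs - verify before relying on it).
resp = requests.get(f"{CB_URL}/state/{MODEL_ID}")
resp.raise_for_status()
with open(STATE_FILE, "wb") as f:
    f.write(resp.content)

# After restarting the container, upload the saved state instead of re-teaching.
with open(STATE_FILE, "rb") as f:
    resp = requests.post(f"{CB_URL}/state", files={"file": f})
resp.raise_for_status()
```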

Yes, I got that part. But I have to upload it every time I restart the container, right?