Image processing with USB acceleration - all pi! ARCHIVED

UPDATE JUNE 2020: I HAVE ARCHIVED THIS PROJECT TO FOCUS ON TFLITE-SERVER SINCE I CONCLUDED THAT USB ACCELERATION IS NOT REQUIRED IN LIGHT OF THE GREAT PERFORMANCE OF THE RPI4

Hi all
I just published code for doing image processing using the Google Coral USB accelerator stick. This brings the power of Tensorflow object detection to the Raspberry Pi without any compromise on detection speed - detections are almost instant! For a speed comparison with regular Tensorflow on the pi see this article; in short, detections are approximately 10x faster with the accelerator stick. You don’t even need to install Tensorflow on the pi to use the stick. I have structured the project so that the USB stick doesn’t even need to be on the pi hosting Home Assistant; it just needs to be on a computer on your network. FYI sticks cost $75 USD.
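To give an idea of the split setup, here is a rough sketch of posting a single camera frame to the machine with the stick; the address, port and field name below are placeholders for illustration only, not the component’s actual API:

import requests

# Placeholder address of the machine hosting the Coral stick
MODEL_SERVER = "http://192.168.1.50:5000/detect"

# Post a single camera frame and print whatever the server returns
with open("snapshot.jpg", "rb") as image_file:
    response = requests.post(MODEL_SERVER, files={"image": image_file})
print(response.json())
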
Any feedback let me know.
Cheers!

You can use it with my Deepstack object integration in Home Assistant

Wow great!
I just moved to a mini-pc to start using Tensorflow and now…:joy:

By the way, can this also be used on a standard x86 mini PC?
I am currently running Proxmox + Debian and Hass.io, hence maybe not super easy.

I have a spare Pi 3 though: can it run on a plain Raspbian install, and can I then connect my mini PC to it via HA as you pointed out?

It’s not clear to me though how the camera stream will be sent to the Pi.

I’m not entirely sure what to do with this yet, but I love what you’re doing!

@mspinolo yes I am running on plain old Raspbian (below) with the raspi camera, but it would also work on Hassbian (write-up to do) and in the future hopefully as a Hassio addon. The USB stick is supported on Linux only, and my plan is to have a dedicated pi in a cupboard as my ‘model server’. My actual HA instance is in Docker on a Synology btw.
Cheers

ok, more or less all clear.

Just one point is not clear: can this run on an RPi (on plain Raspbian) while HA is running on another PC?
If it can, how is the stream from the cameras handled?

Not sure if what I asked is clear.

Could this be used with a camera that is live streaming?

Does the Synology also support VMs? If so, wouldn’t it be easier just to run it off a Linux VM, seeing as the NAS is always running?

Had a closer look and it seems very easy to use.

I couldn’t find the list of objects it can recognize, like “car” or “person”.
Is there any link to refer to?

I really think I am going to buy a stick!

@mspinolo this component can be used with any HA camera by calling the scan service; it processes individual frames on each service call. This is how all image processing components in HA work, and in practice it enables processing at around 1 FPS, which should be adequate. The image files are posted over the network if HA and the USB stick are on different computers. If you want to do true streaming processing (e.g. at 30 FPS) there are examples for that online, but that is not what this component is designed for.
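Within HA you would normally just call the service from an automation, but for reference the same scan can be triggered from any script via the HA REST API. A minimal sketch, where the host, token and entity id are placeholders for your own setup:

import requests

HA_URL = "http://homeassistant.local:8123"  # placeholder HA address
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # placeholder long-lived access token

# Call the image_processing.scan service for one image processing entity
requests.post(
    HA_URL + "/api/services/image_processing/scan",
    headers={"Authorization": "Bearer " + TOKEN},
    json={"entity_id": "image_processing.my_coral_detector"},  # placeholder entity id
)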

RE which objects can be detected, see the labels files on https://coral.withgoogle.com/models/ or you can train your own model (which is my plan)
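If you want to check programmatically what a given model can detect, the labels file is just one id/name pair per line; a small sketch that parses it (the filename is whichever labels file you downloaded, and the exact format can vary slightly between models):

# Each line of a Coral labels file looks like "0  person"
def read_labels(path):
    labels = {}
    with open(path) as label_file:
        for line in label_file:
            parts = line.strip().split(maxsplit=1)
            if len(parts) == 2:
                labels[int(parts[0])] = parts[1]
    return labels

labels = read_labels("coco_labels.txt")  # example filename from the Coral site
print(sorted(labels.values()))           # includes 'person', 'car', etc. for the COCO model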

@calypso I guess I could run it in a VM/Docker, will give that a try at some point, but I have a spare pi anyways

Can you use one USB stick for multiple cameras?

BTW, you are the best. I’m using your older Raspberry Pi Tensorflow setup now, but this looks much better.

Yes any HA camera. I haven’t stress tested it with multiple concurrent requests yet

Just ordered my stick! :slight_smile:

Plan is to put it on my RPi 3 and do some tests.
Is there a how-to describing what I should install on the RPi to make this work?

On the HA side it should be simple!

There are links in the repo to the Google docs for installing the stick on the pi; it’s pretty straightforward.

OK so I just installed the stick on Hassbian and it works fine alongside HA. This means I can create a Hassbian script to get everything installed and up and running with minimal user config :slight_smile:

pi@hassbian:/usr/local/lib/python3.5/dist-packages/edgetpu/demo $ sudo python3 classify_image.py --model ~/Downloads/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --label ~/Downloads/inat_bird_labels.txt --image ~/Downloads/parrot.jpg
---------------------------
Ara macao (Scarlet Macaw)
Score :  0.761719
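
For object detection rather than classification the call is similar. A rough, untested sketch based on the edgetpu Python API, using an example detection model and image filename:

from edgetpu.detection.engine import DetectionEngine
from PIL import Image

# Example model filename from the Coral models page
engine = DetectionEngine("mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite")
image = Image.open("snapshot.jpg")  # example image

# Returns candidate objects, each with a label id, score and bounding box
for obj in engine.DetectWithImage(image, threshold=0.5, top_k=10):
    print(obj.label_id, obj.score, obj.bounding_box)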

great @robmarkcole !

I should receive my stick within days; how can I best help you?
Should I install it on plain Raspbian or DietPi on an RPi 3 with HA on a separate PC, or directly on the mini PC with Hass.io?

For a first evaluation I would be tempted to go with the RPi 3, to avoid screwing up my main setup.

Raspbian or Hassbian is a straightforward install. Really I’m just looking for steering on which features people would like implemented. However, if your Docker skills are strong then a Hassio addon would be good! I am going away on holiday for 10 days.

my Docker is very very weak! :smiley:

I will go Raspbian then!

So while I wait for you to come back I will set it up on Raspbian and run some examples, just to make sure everything is set up correctly.

Love hearing that Hassio is a possibility!

Do you have any plans to add facial recognition?

Yes, recognising specific faces is on the roadmap. To get an idea of what’s involved, see this article -> https://thedatamage.com/face-recognition-tensorflow-tutorial/
