My integration of Coral AI and Home Assistant

This is my attempt at integrating Coral AI's local, on-device inferencing capabilities with Home Assistant. My goal was to detect certain objects and present the results in the Lovelace UI in a totally automated fashion.

This is my “Home” view:

Clicking on the “Object Detections” button gives you this view:

Clicking on the “Person” button gives you this view:

This is where you see the “person” detections from different sources. An indoor Amcrest camera and outdoor Nest cameras provide the “motion” images that are inferenced, and green boxes are drawn around the detected objects. The confidence threshold is selectable, and the confidence of each object is displayed along with its green box. Of course, if you click on a thumbnail, a full-sized image is displayed.
Here is an example of the full-sized “Dog” object with confidence 58.2%.

Hardware for this project includes two Raspberry Pi 4s: one running Home Assistant and the other running the Google Coral AI USB Accelerator.

Instructions

Here is how I accomplished this. My instructions are in chronological order so that features work in the order they are needed. Some instructions are detailed and others less so; if someone asks, I will provide more detail.

1. Set up the Raspberry Pi 4 Coral machine with the USB Accelerator.

1.1. First install Raspbian Buster from here: https://www.raspberrypi.org/downloads/raspbian/.
Change the network configuration to use a static IP address.
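On Raspbian the usual place for this is /etc/dhcpcd.conf; a minimal sketch (the addresses are placeholders for your own network):

   # /etc/dhcpcd.conf -- example static address for the wired interface
   interface eth0
   static ip_address=192.168.1.50/24
   static routers=192.168.1.1
   static domain_name_servers=192.168.1.1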

1.2 Install the Google Coral USB Accelerator and packages. Excellent instructions for this are
provided here: https://coral.ai/docs/accelerator/get-started

1.3. Install the code to “expose” the Coral AI inferencing engine, as per the following:
https://github.com/robmarkcole/coral-pi-rest-server. I suggest you install this in your
/home/pi directory and install it as a service. You should now have /home/pi/google-rest-server.
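To run it as a service, a minimal systemd unit does the job. This is just a sketch: the unit name is my own choice, and the paths assume you cloned into /home/pi/google-rest-server with coral-app.py as the entry point (adjust to wherever you actually cloned):

   # /etc/systemd/system/coral-rest.service  (example name)
   [Unit]
   Description=Coral AI REST inference server
   After=network.target

   [Service]
   User=pi
   WorkingDirectory=/home/pi/google-rest-server
   ExecStart=/usr/bin/python3 /home/pi/google-rest-server/coral-app.py
   Restart=on-failure

   [Install]
   WantedBy=multi-user.target

Then enable it with sudo systemctl enable coral-rest followed by sudo systemctl start coral-rest.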

1.4. Make a folder called “python_scripts” (or whatever you like) in your home directory, so
/home/pi/python_scripts.

1.5. Download the following repositories and install them in the python_scripts location.

1.5.1. Install the python script that periodically scans a folder to detect objects and draws
boxes around discovered objects, driven by your selected parameters:
https://github.com/ijustlikeit/coral_object_detection

1.5.2. Install the python script that creates HTML (using python yattag) for display in your
Home Assistant Lovelace UI: https://github.com/ijustlikeit/python-scripts

1.5.3 Install the python script that saves off .jpg images from Gmail that contain your Nest
Cam notification emails. More on this later.
https://github.com/ijustlikeit/extract-gmail-attachments

1.5.4 All of my scripts can be found here: https://github.com/ijustlikeit?tab=repositories

1.5.5 Install Nginx (or Apache) to serve up your HTML. I suggest you use Duck DNS
to create yourself a domain that points to your static IP (the one you set in step
1.1). Also create an SSL certificate for this domain using Let's Encrypt. So
you should have something like https://yourduck.duckdns.org/yourhtml.html
serving up HTML from the www/duckdns/ directory.
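Roughly, the web-server pieces look like this on Debian Buster (the domain is a placeholder; the Duck DNS updater itself is set up per their site):

   sudo apt-get install nginx
   sudo apt-get install certbot python3-certbot-nginx
   sudo certbot --nginx -d yourduck.duckdns.org   # obtains the Lets Encrypt cert and wires it into nginx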

1.6 Install Samba. You will need this to view the /config folder from your Home Assistant
instance.
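Note that the cifs mount in step 1.11 also needs the cifs-utils package, so install both:

   sudo apt-get install samba cifs-utils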

1.7 Now perform step 2.

1.8 Install crontab.

1.9 Install your favorite email package (so you can view results of the crontab jobs).

1.10 Create a gmail account that only has the Nest camera notifications being forwarded to it.
This is easy to do with a filter.

1.11 Mount the Home Assistant config share at /mnt/Hassio:
sudo mount -t cifs //192.168.1.xx/config /mnt/Hassio -o user=u,pass=p,dom=yourworkgroup
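The mount point has to exist first, and if you want the mount to survive reboots you can put the equivalent line in /etc/fstab (credentials and workgroup are the same placeholders as above):

   sudo mkdir -p /mnt/Hassio
   # /etc/fstab -- same cifs mount, made permanent
   //192.168.1.xx/config  /mnt/Hassio  cifs  user=u,pass=p,dom=yourworkgroup,_netdev  0  0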

1.12 Create a folder called gmail-attachmnts in your Pi home directory (the name must match the --folder and --dir paths in the crontab below).

1.13 Install the scripts (from 1.5.1, 1.5.2, 1.5.3) into crontab using crontab -e; change to your desired parameters and frequency of execution.

*/15 * * * * sudo python /home/pi/python_scripts/object_html_generator.py --src /media/object_images --dst /var/www/duckdns.org/ --size 0480 270 --object 'person,car,dog'  # Object image file generator script
*/8 * * * *  sudo python /home/pi/python_scripts/coral_image_processing.py --host 192.168.2.xx --port x000 --folder '/home/pi/gmail-attachmnts' --targets 'person,car,dog,tree' --confidence 55 --save_folder /mnt/Hassio/coralsnap --timestamp true
*/6 * * * *  sudo python /home/pi/python_scripts/dlAttachments.py --dir /home/pi/gmail-attachmnts --thrash True
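To confirm the three jobs actually fire on schedule, you can watch cron's log (cron also mails each job's output to the local user, which is what step 1.9 is for):

   grep CRON /var/log/syslog | tail   # recent cron activity, including the three jobs above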

1.14 That should be about it. Start your testing. If you installed Dovecot, check your email for the
script results.

2. On your Home Assistant instance, install and configure the following.

2.1 Install Samba.

2.2 Install the following custom component into Home Assistant, following the instructions within:
https://github.com/ijustlikeit/HASS-Deepstack-object
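Custom components live under custom_components/ inside the Home Assistant config folder. Assuming the repository keeps the usual layout (check its readme; the deepstack_object folder name here is my guess from the upstream project), copying it over the Samba share looks something like:

   # folder name is an assumption from the upstream repo layout; adjust to match yours
   cp -r HASS-Deepstack-object/custom_components/deepstack_object /mnt/Hassio/custom_components/

Restart Home Assistant afterwards so it picks up the component.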

2.3 Before you try to create the sensors sensor.person and sensor.car, make the subfolders
coralsnap/person/ and coralsnap/car/. Normally the script will make these folders for
you, but not until after you have ‘detected’ your first objects.
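Once the mount from step 1.11 is in place, and matching the --save_folder from the crontab in step 1.13, that is just:

   mkdir -p /mnt/Hassio/coralsnap/person /mnt/Hassio/coralsnap/car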

2.4 For viewing your results, create a Lovelace view:

- name: Person
  cards:
    - type: iframe
      url: https://yourduckdns.duckdns.org/person.html
      aspect_ratio: 60%
  panel: true
  path: person
  title: Person
  visible: false

2.5 Go back and continue on from step 1.8.


Very interesting work Don! I want to follow your progress!

Added the instructions.

What amazing timing. I have exactly the same setup, and just got my 2nd Pi running Coral, and it recognized the bird yesterday.

My project: I have a “rotating chicken cam” inside my chicken coop that can see all eggs and want to detect when there are eggs and how many. They lay 4 a day, so when the count reaches 4, I’ll know when to go foraging.

Yeah, I know, it just drove up the cost of my own egg production, but this is fun as hell!

Jeff


Interesting fun project! A suggestion for you: set up two sensors in Home Assistant, say,
sensor.egg_count and sensor.last_egg:

   - platform: command_line
     name: egg_count
     command: 'ls coralsnap/egg | wc -l'
   - platform: command_line
     name: last_egg
     command: "echo 'coralsnap/egg/'`ls -Art coralsnap/egg/ | tail -n -1 | grep .jpg`"

where ‘egg’ is one of the object targets being detected.
Then create an automation, something like this:

- id: '1586799126242'
  alias: Egg notification
  description: ''
  trigger:
  - entity_id: sensor.egg_count
    platform: state
    to: '4'
  condition: []
  action:
  - data_template:
      data:
        images: 
        - '{{states("sensor.last_egg")}}'
      message: There are now {{states('sensor.egg_count') }} eggs in the hen house!
    service: notify.email_jeff

The automation will notify you by email that 4 eggs are there, along with the latest egg object image.

I'm still struggling to get this step done:

1.3. Install the code to “expose” the Coral AI inferencing engine, as per the following:
https://github.com/robmarkcole/coral-pi-rest-server. I suggest you install this in your
/home/pi directory and install it as a service. You should now have /home/pi/google-rest-server.

It's not very straightforward. I think I have the right edgetpu .img file downloaded… more soon.

Jeff

Correct that - I'M REALLY STUCK. I have a Pi 3 and I can run the Coral demo bird app, no problem, so I got that far. Installing the AI inferencing engine as a service is convoluted; perhaps you can give me some pointers. Everything I see is Pi images that boot, etc. I don't need any of that, I just want to install the service, so that this command will work on this new dedicated Pi 3:

curl -X POST -F image=@images/test-image3.jpg 'http://localhost:5000/v1/vision/detection'

I just feel like I’m going in circles on this step…
Once I get past that I think I’ll be ok.

Jeff

Ok, let's try this. Files from my setup that are working; they should be the same for a Pi 3.

  1. Go to this repository https://github.com/ijustlikeit/pi_coral_ai_rest_server and follow the instructions in the readme.md.
    Hopefully this works for you.

Ok, that repository is riddled with errors (file names are not spelled correctly), but I got the gist of it.
I literally deleted everything and started over.

From the repository (which someone should fix):

sudo pip3 install -r rqmnts.txt. <-- SHOULD BE requirements.txt

Now try and run the app from /home/pi/google-coral/coral-pi-rest-server
$ cd /home/pi/google-coral/coral-pi-rest-server
$ sudo python3 coral-ap.py <--- SHOULD BE coral-app.py

Getting to the last step, I get these errors:

pi@Pihole:~/pi_coral_ai_rest_server/google-coral/coral-pi-rest-server $ sudo python3 coral-app.py
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/edgetpu/swig/edgetpu_cpp_wrapper.py", line 18, in swig_import_helper
    return importlib.import_module(mname)
  File "/usr/lib/python3.5/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 986, in _gcd_import
  File "<frozen importlib._bootstrap>", line 969, in _find_and_load
  File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 666, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 577, in module_from_spec
  File "<frozen importlib._bootstrap_external>", line 914, in create_module
  File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
ImportError: libedgetpu.so: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "coral-app.py", line 6, in <module>
    from edgetpu.detection.engine import DetectionEngine
  File "/usr/local/lib/python3.5/dist-packages/edgetpu/detection/engine.py", line 17, in <module>
    from edgetpu.basic.basic_engine import BasicEngine
  File "/usr/local/lib/python3.5/dist-packages/edgetpu/basic/basic_engine.py", line 15, in <module>
    from edgetpu.swig.edgetpu_cpp_wrapper import BasicEngine
  File "/usr/local/lib/python3.5/dist-packages/edgetpu/swig/edgetpu_cpp_wrapper.py", line 21, in <module>
    _edgetpu_cpp_wrapper = swig_import_helper()
  File "/usr/local/lib/python3.5/dist-packages/edgetpu/swig/edgetpu_cpp_wrapper.py", line 20, in swig_import_helper
    return importlib.import_module('_edgetpu_cpp_wrapper')
  File "/usr/lib/python3.5/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ImportError: No module named '_edgetpu_cpp_wrapper'
pi@Pihole:~/pi_coral_ai_rest_server/google-coral/coral-pi-rest-server $ 

So 2 problems with this:

  1. ImportError: libedgetpu.so: cannot open shared object file: No such file or directory
  2. ImportError: No module named ‘_edgetpu_cpp_wrapper’

So, I’m still STUCK! But I’ll try anything.

I do know the Coral device works…

pi@Pihole:~/coral/tflite/python/examples/classification $ python3 classify_image.py --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels models/inat_bird_labels.txt --input images/parrot.jpg
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
124.3ms
10.4ms
10.3ms
10.4ms
10.3ms
-------RESULTS--------
Ara macao (Scarlet Macaw): 0.77734

Jeff

What I supplied was my working Pi 4 setup and libraries. It looks like the Coral AI Pi 3 libraries are in different folders than the Pi 4 ones…
So that means the repository I provided, https://github.com/ijustlikeit/pi4_coral_ai_rest_server, will not work for you… sorry. Not sure what to tell you about setup on a Pi 3.

Ok, good progress today. Too many hours, but I'm learning a LOT along the way.

pi@raspberrypi:~/coral-pi-rest-server $ python3 coral-app.py
 Loaded engine with model : /home/pi/all_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite
 * Serving Flask app "coral-app" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off

But curl fails:

pi@raspberrypi:~ $ curl -X POST -F image=@images/test-image3.jpg 'http://localhost:5000/v1/vision/detection'
Warning: setting file images/test-image3.jpg failed!
curl: (26) read function returned funny value

pi@raspberrypi:~ $ curl -X POST -F image=@face.jpg 'http://localhost:5000/v1/vision/detection'
Warning: setting file face.jpg failed!
curl: (26) read function returned funny value
pi@raspberrypi:~ $

It can't find the images. I tried various paths in the command:

image=@face.jpg
image=@./images/face.jpg
image=@images/face.jpg
They all fail the same way. Ideas?

I can feel I’m getting closer by the day. Pretty soon eggs are going to be hot commodities, so this is timely!

Jeff

A few things to check and try:

  1. Make sure you are located in the coral-pi-rest-server directory. So you should see
pi@raspberrypi:~/coral-pi-rest-server $

If not, then cd into coral-pi-rest-server, then do

ls

You should see coral-app.py etc., but most importantly you should see the images directory in the list.
Now do

ls images

you should see face.jpg in the list. If it isn't there, that might be your problem; if that's the case, use one of the other .jpg files in the list. If face.jpg is in the list, great, do this:

sudo curl -X POST -F image=@images/face.jpg 'http://localhost:5000/v1/vision/detection'

(If face.jpg WASN'T in the list, use any other .jpg, OR put a .jpg file of your choice in the images directory to test with.)

Note the use of sudo with the curl; I don't really know if that is necessary, but just in case.

Ok, I'm finally making progress. I've got a 2nd Pi 3 installed and detecting things, although not so well. This image

gives this:

{"predictions":[{"confidence":0.2109375,"label":"clock","x_max":269,"x_min":61,"y_max":1039,"y_min":859},
{"confidence":0.16015625,"label":"potted plant","x_max":1712,"x_min":825,"y_max":600,"y_min":32},
{"confidence":0.16015625,"label":"sports ball","x_max":1266,"x_min":873,"y_max":634,"y_min":306},
{"confidence":0.12109375,"label":"clock","x_max":1916,"x_min":33,"y_max":1080,"y_min":4},
{"confidence":0.12109375,"label":"sports ball","x_max":1173,"x_min":814,"y_max":600,"y_min":271},
{"confidence":0.12109375,"label":"potted plant","x_max":1471,"x_min":708,"y_max":651,"y_min":69},
{"confidence":0.12109375,"label":"potted plant","x_max":1850,"x_min":935,"y_max":766,"y_min":27},
{"confidence":0.08984375,"label":"scissors","x_max":955,"x_min":40,"y_max":973,"y_min":70},
{"confidence":0.08984375,"label":"potted plant","x_max":1696,"x_min":747,"y_max":882,"y_min":73}
{"confidence":0.08984375,"label":"scissors","x_max":687,"x_min":209,"y_max":997,"y_min":58}],
"success":true}

This image:


gives this:

{"predictions":[{"confidence":0.99609375,"label":"orange","x_max":2070,"x_min":1010,"y_max":1909,"y_min":540}]
,"success":true}

So somehow I'm going to need to train this thing using pics of my eggs in my coop. All the model-training guides I've looked up are WAAAY over my head. I'm going to need some serious help from someone on that.

Assuming I can get it trained some day, in preparation for getting all this to actually recognize an egg, multiple eggs, and no eggs, I've automated the “chicken coop cam”: it turns on the light, zooms in, takes 6 snapshots of all 6 bays, and deposits them all into a folder. I'm about to try to figure out the right syntax for the object detection script that scans a folder for pics.

If you have any examples of how to use that script, I'm all ears. I get errors, which I'm sure are from my malformed input params.

I can then set up a counter to increment for each “found egg”. I'll have to run that through some other logic to eliminate false positives (I read some posts a while back on false-presence-detection algorithms that look promising). Heck, I'll even have it tell me which bay to look in.

It only took me a week and a half of work to get here. I'm loving my new Pi 4 - it's replaced my Windows machine in my shop - finally no more “Waiting for Windows Updates”. All my Home Assistant stuff is on Pi 3s, which seem to work great.

Ok, back to sleep.

Jeff

You're making great progress! Too bad none of the models detect an “egg” object… I'm surprised that this common object isn't in any of the pre-built models. (A few models I checked have 1000 pre-built objects, none of which are “egg”; one even has “eggnog”, lol.)
I'm trying to train my own model but no success yet… complicated process, I must say.

Running the coral_image_processing python script is as follows; hope this helps you:

sudo python /home/pi/python_scripts/coral_image_processing.py  --host 192.168.2.xx --port 5000  --folder '/home/pi/gmail-attachmnts'  --targets 'orange,clock' --confidence 20 --save_folder /mnt/Hassio/coralsnap --timestamp true

The script itself must be the first entry after the python command and must be its fully qualified path (wherever you are keeping your scripts).
The host is the IP address where your Coral USB Accelerator resides.
The port is 5000 in your case.
The folder is the fully qualified path of your input .jpg images.
The targets are the objects you wish to detect. (Unfortunately “egg” won't work; no model for it.) As a workaround I suggest using orange,clock temporarily until “egg” can be trained; at least it will pick up something. Also, you'll have to set the confidence at 20.
The confidence is the percent at which objects will be detected, with boxes drawn around them.
The save folder is the fully qualified path where your detected .jpg images will be stored.
Timestamp: just set it to true (puts a timestamp in your file name).

Hi guys,

Any success with dual TPU, ESXi, and Home Assistant on a VM?

Thanks!
Didi

Don - I wanted to chime in and say thanks for taking the time to post your project (with included steps). While I have a bunch of backlogged stuff to do before I can attempt something like this, I sincerely appreciate you posting your project and having the patience to help other members out. You are a prime example of what makes this community so great. Keep up the good work.
