Google Coral - Any use other than for Frigate

I am intrigued by the Google Coral USB and was wondering whether anyone has used it, or believes it could be used, for anything in Home Assistant other than Frigate?
I have a number of Reolink cameras and an array of alarm sensors. The cameras have some AI, which appears to be quite good, but the alarm system is pretty dumb.

I would like to use AI to analyse all the inputs received from the sensors to determine if it thinks I should be informed that there is potentially an intruder about.

It would be great if we could run TensorFlow code in Home Assistant, similar to the way AppDaemon runs Python code.

Any ideas or suggestions?
Cheers


For sensor-based event detection, a CPU is more suitable than a TPU. On the other hand, the Coral would be well suited to analysing sound — screams, or the activation of sirens — but unfortunately there is no open-source project for sound comparable to Frigate. A TPU would also suit voice assistants or recognising meter readings. I don’t think this will appear in HA itself, though; HA usually builds or supports friendly projects, and the connection is then made via MQTT. In a way, that approach makes sense.

Unfortunately, Google seems to have put the Coral USB and PCIe devices out as a trial balloon. IMHO, they now seem to be developing this AI silicon only for their mobile phones (for edge work) and for data-center mega boxes for their Gemini (read: GPT and Grok) systems.

Using these USB devices with TensorFlow to create useful insights from pretty much any sensor you might have in Home Assistant was a goal of mine when I first saw the Coral; unfortunately, due (mostly) to my lack of TensorFlow skills, I never got very far. It would be great if someone with strong TensorFlow skills could create an HA add-on able to do general analysis of sensor time-series data at the Coral’s relatively low power cost. Examples would include: ‘Is the washer done, judging only from a power-usage and/or vibration-sensor time-series stream?’ Many other time-series-based sensors could be created the same way.
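As a rough illustration of that washer example, here is a plain-CPU sketch (no Coral or TensorFlow involved yet) of the underlying idea: a cycle counts as finished once power has been high and then stays below an idle threshold for a set number of samples. All thresholds are illustrative guesses, not measured values.

```python
from collections import deque

class WasherDoneDetector:
    """Heuristic 'washer finished' detector over a power time series.

    high_w, idle_w and idle_samples are assumed values for illustration;
    tune them against a real machine's power trace.
    """

    def __init__(self, high_w=50.0, idle_w=3.0, idle_samples=5):
        self.high_w = high_w              # watts meaning "a cycle is running"
        self.idle_w = idle_w              # watts meaning "machine is idle"
        self.recent = deque(maxlen=idle_samples)
        self.cycle_seen = False

    def update(self, watts):
        """Feed one power reading; return True when a cycle has just ended."""
        if watts >= self.high_w:
            self.cycle_seen = True        # a cycle is (still) running
            self.recent.clear()
            return False
        self.recent.append(watts)
        done = (
            self.cycle_seen
            and len(self.recent) == self.recent.maxlen
            and all(w <= self.idle_w for w in self.recent)
        )
        if done:
            self.cycle_seen = False       # fire only once per cycle
        return done
```

Feeding it readings like `0, 200, 180, 2, 1, 2, 1, 2` reports "done" on the fifth consecutive idle sample after the high-power phase. A trained model on the Coral would replace this hand-written heuristic, but the plumbing around it (a stream of sensor states in, a binary event out) would look much the same.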

You might want to follow the work the YOLO folks are doing with their amazing video processing; they seemed to be pretty up on the Coral stuff for a while. Unfortunately, these engines are not getting much love. Read my rant on why Apple M-series silicon is the direction to go.

Good hunting!

I have ordered a Coral USB to have a play with. Thinking a bit more about it, what I would like to do is use facial recognition to implement a kind of ‘friend or foe’ check, something like this:

  1. One of my reolink cameras spots a person and runs an automation to determine if that person is friend or foe.
  2. That automation then runs some kind of AI process on the video from that camera to determine whether it thinks the person is a friend. This would use not only face recognition but, as the face might not be visible, also size, shape, and walking gait to make the decision
  3. If the AI determines it is not likely a friend, then a notification is issued to me over Telegram
  4. If the AI determines it is very likely a friend, then it does not issue a notification

A friend would be identified by training the AI on a number of pictures of each person who should be considered a friend.

All comments and suggestions are most welcome.
Cheers

If I understand it correctly, TensorFlow can be run from Python, so it should be possible to run it under AppDaemon, provided the TensorFlow libraries are available.
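A rough sketch of what that could look like: a pure-Python scoring helper (a stand-in statistic where the TFLite/Coral interpreter call would go) wrapped in an AppDaemon app. The entity ID, window size, and threshold are hypothetical:

```python
def anomaly_score(readings):
    """Stand-in for the model call: mean absolute deviation of the window.

    With a real Coral you would instead hand the window to a TensorFlow
    Lite interpreter compiled for the Edge TPU.
    """
    mean = sum(readings) / len(readings)
    return sum(abs(r - mean) for r in readings) / len(readings)

try:
    # Only importable inside an AppDaemon install.
    import appdaemon.plugins.hass.hassapi as hass

    class SensorAnalyzer(hass.Hass):
        def initialize(self):
            self.window = []
            # Hypothetical entity name.
            self.listen_state(self.on_reading, "sensor.washer_power")

        def on_reading(self, entity, attribute, old, new, kwargs):
            self.window.append(float(new))
            self.window = self.window[-60:]   # keep the last 60 readings
            if len(self.window) == 60 and anomaly_score(self.window) > 25.0:
                self.log("Unusual power pattern on %s", entity)
except ImportError:
    pass  # Outside AppDaemon the helper above still works standalone.
```

The split is deliberate: keeping the scoring function free of AppDaemon lets you unit-test it (or swap in a real model) without a running Home Assistant instance.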

@bigbigblue If you haven’t seen the thread, some people are using a fork of the Double Take add-on to do this sort of facial detection. It’s not clear how well it works:

This stack doesn’t use the Coral, though.