TensorFlow step-by-step guide

Agreed. Considering it takes no code changes to support tensorflow-gpu, it seems silly to not take advantage of a supported GPU if it’s available. This also (IMO) eliminates the need for motion detection; at least, in my use-case. Of course, installation is considerably more complex but the payoff is significant.

I get the impression that tflite would require more work but that would make acceleration even more accessible with the Coral.

While I have a development background, I have little experience with Python and none coding in the HA environment. While it would seem sensible to make tensorflow and tensorflow-gpu interchangeable based on which back-end you have installed, it doesn’t appear that HA really supports doing that, from a cursory look at how manifest.json is implemented. At least not without pulling out the core requirements that differ, which would pretty much make the manifest useless.
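For context, an integration’s manifest.json pins its Python requirements statically, which is why swapping tensorflow for tensorflow-gpu isn’t a runtime choice. A minimal sketch of the relevant part (the version pin here is an assumption, not the actual pinned version):

```json
{
  "domain": "tensorflow",
  "name": "TensorFlow",
  "documentation": "https://www.home-assistant.io/integrations/tensorflow",
  "requirements": ["tensorflow==1.13.2"]
}
```

Because the `requirements` list is fixed at the integration level, choosing a different back-end package would mean a separate integration (or a manifest with no pinned requirement at all).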

Does it make sense to create tensorflow-gpu and tensorflow-lite components separate from tensorflow or is there a better approach you’re aware of?

I wonder, too, if Tensorflow Lite might be a way to bring object recognition to HASS.IO?

Personally I think tflite should be the default option, owing to it performing well on most hardware. I think we should review once TensorFlow 2.0 is in wide use.

This seems reasonable, especially in light of this article that talks about how much better TensorFlow lite runs on a Pi 3+ compared to full TensorFlow, even without a Coral. Considering how many HA users are running on Pi’s, it honestly seems counterintuitive to have built the component based on full TensorFlow.

Digging a bit more, it does appear that the tensorflow-gpu installation can still support CPU inferencing. I’d like to test this further and if so, I think there really should just be two options:

  • tensorflow - which becomes tensorflow-lite and possibly could be included in hass.io (whether that would be practical is outside my experience). This would be the preferred option on Pis or lower-horsepower machines, along with mobile GPUs.
  • tensorflow-full - (or some other meaningful name) based on tensorflow-gpu, supporting higher-end CPUs, GPUs and models not optimized for Lite. This would be, of course, predicated on whether tensorflow-gpu continues to support CPU inferencing (I can test this easily enough).

The more I think about it, though, the second use-case is more of an edge case. There are really only a handful of use-cases I can think of where it would make sense:

  • HA is running on a low-end workstation or laptop that has an NVIDIA GPU where TensorFlow Lite simply won’t do the job
  • There is some reason to use a model that TensorFlow Lite doesn’t support (I don’t know enough to know if this is even likely)
  • The user (like me) has enough workload that CPU-based inferencing simply won’t do the job and a low-end GPU supported by Lite (like a Coral) isn’t sufficient.

I can see the latter being a bit more common if this is pushed as a replacement for motion detection but how many people would be willing to invest in an NVIDIA GPU (supporting CUDA) to achieve that goal? Never mind the fact that there are workarounds for this use-case (such as lower-resolution images).

Of course, if TensorFlow Lite is updated to support GPUs beyond mobile (especially ATI), the argument for full TensorFlow becomes pretty thin.

I think it’s a no-brainer to switch to TensorFlow Lite; I’m curious as to what others think about continuing to support full TensorFlow in its tensorflow-gpu form to be able to support NVIDIA GPUs.

Hi, Easwaran here. I’m a newbie.
I have successfully installed TensorFlow in my Ubuntu Docker container
and also configured the Home Assistant TensorFlow component.
I use an i5 machine with 4 GB RAM.
When I run the image_processing.scan service, I don’t see any data saying it has scanned an image. Matches/Summary etc. are always empty.
Can you help me debug what’s going wrong?

My HA TensorFlow entity linked to the balcony IP camera gives no values.

image_processing:
  - platform: tensorflow
    source:
      - entity_id: camera.balcony_camera
    model:
      graph: /config/tensorflow/frozen_inference_graph.pb
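A quick sanity check for this kind of setup is to confirm that the graph path from the config actually resolves inside the container before pointing HA at it. A minimal sketch (the path is taken from the config above; adjust for your install):

```python
import os

def model_file_ok(path):
    """Return True if the model file exists and is non-empty."""
    return os.path.isfile(path) and os.path.getsize(path) > 0

# Path from the configuration above; run this inside the same
# container/environment that Home Assistant sees.
print(model_file_ok("/config/tensorflow/frozen_inference_graph.pb"))
```

If this prints False from inside the HA environment, the component will silently have nothing to run inference with.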

The .pb graph file is located under my Home Assistant config at /config/tensorflow/frozen_inference_graph.pb (screenshot omitted).

Thanks in advance

Also, I tried to download the Inception_V4 model from https://www.tensorflow.org/lite/guide/hosted_models
I see many files; which folder (or folders) in Home Assistant should I copy these to? Please enlighten me.


@easwaran83 it appears you have the config correct. A couple of things to investigate:

  1. Check that tensorflow can be imported; you would need to activate the venv used by Home Assistant.
  2. Maybe RAM is too low, considering you are running Docker. Check the performance metrics of the machine and see what happens to them when you call the scan service.
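The first check can be done without blindly importing; a minimal sketch to run with the same interpreter/venv that Home Assistant uses (the module list is just an example):

```python
import importlib.util

def can_import(name):
    """Return True if the named module is importable in this interpreter."""
    return importlib.util.find_spec(name) is not None

# If tensorflow shows MISSING here, the component cannot load it either.
for mod in ("tensorflow", "numpy"):
    print(mod, "ok" if can_import(mod) else "MISSING")
```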

Unfortunately, debugging issues involving Docker can be a real rat’s arse.


Got TensorFlow running on a Hass.io NUC, but how do I train it to ALSO detect specific things like FedEx trucks and family faces?

Face recognition requires a specific model and differs from general object detection. For adding new classes of object detection you will need a training data set etc., and there are a few tutorials online, e.g. this one.

Thanks - I’ll focus on object detection for now but will need to find a better guide (like one that walks me step by step through how to extend an ssd_mobilenet_v2_coco model with my own categories). Any additional references are appreciated. Thanks again.

Well, there are literally dozens of tutorials. If you have a Coral stick, check out the docs.

Just a stupid question: I can’t find the image TensorFlow saved (I put the line “//config/www/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg” or “/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg” with no luck)… What am I doing wrong? Does it save only when something is recognized?

Check the path from the terminal and paste the full path.

No way… I put the full path (/usr/share/hassio/homeassistant/www/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg) but nothing, the directory is still empty…
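For reference, the {{ camera_entity.split('.')[1] }} part of that template just takes the portion of the entity ID after the dot. An illustrative Python equivalent, using the balcony camera entity from earlier in the thread:

```python
# Illustrative Python equivalent of the Jinja template in file_out
camera_entity = "camera.balcony_camera"  # example entity from this thread
filename = "/config/www/tmp/{}_latest.jpg".format(camera_entity.split(".")[1])
print(filename)  # /config/www/tmp/balcony_camera_latest.jpg
```

So for each camera the component writes one file per entity, overwriting it on each scan that produces output.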

Solved: it seems that it saves images only when TensorFlow found something…
New question: how do I choose the right model from the model zoo? A cat usually comes to my porch and was never detected; currently I’m using faster_rcnn_inception_v2_coco.

You just have to try the different models; no shortcut, I’m afraid.

I wish to identify a person, but I can’t do it with face recognition (my camera is mounted too high to see the face from a front view). Is there a way to do that?

Did you find any solution? I’m still having the same issue.

Hi @robmarkcole, thanks for your great contribution; I moved from Deepstack to TensorFlow Lite recently. I have a NUC i5 running Windows, using your TensorFlow Lite REST API server natively on Python 3.7
and the Hass TensorFlow custom component.
I’ve realized there’s an issue: my dogs, and sometimes a plant, were recognized as Person.
Then I raised the confidence level to 90: same problem. Then 95: same. And sometimes I had boxes on the pictures and sometimes not; actually, when I wanted them, the box was absent.
So I edited tflite_server.py in the REST server and on line 28 changed
CONFIDENCE_THRESHOLD = 0.8 (from 0.3)
and the problem was fixed (a patch fix).
I don’t know much about programming, but I think it doesn’t matter what confidence is set in the Hass.io TensorFlow component: if the score is >= 0.3 (the TensorFlow REST API server threshold) it will show a Person detection, and only if the score >= the confidence set will it show the box.
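The behaviour described, a server-side cutoff that decides which detections are reported at all and a separate component-side confidence that only controls which boxes get drawn, can be sketched as follows (the names, values, and detections here are illustrative, not the actual code):

```python
SERVER_THRESHOLD = 0.3       # mirrors CONFIDENCE_THRESHOLD in the REST server (assumed)
COMPONENT_CONFIDENCE = 0.9   # mirrors the confidence set in the HA component (assumed)

detections = [("person", 0.35), ("dog", 0.85), ("person", 0.95)]

# The server reports every detection above its own threshold...
reported = [d for d in detections if d[1] >= SERVER_THRESHOLD]
# ...while only detections above the component confidence get a box drawn.
boxed = [d for d in reported if d[1] >= COMPONENT_CONFIDENCE]

print(reported)  # all three detections are reported
print(boxed)     # only ('person', 0.95) gets a box
```

Under this model, raising the component confidence alone cannot suppress the 0.35 “person”; only raising the server-side threshold does, which matches the patch fix described above.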

I hope you can fix it; again, thanks for your work…

Basically, all the AI services will make mistakes in some cases. I need to give the tflite integration some love.

I have been running dlib and have found it slow and not all that reliable. I have come to the realization that running it within a single-thread Skylake VM alongside many other processes would be slow. Now I am wondering: with either a Compute Stick or a Google Coral, which of Deepstack or TensorFlow is the best option for both face and object detection?