also, for what it’s worth, that installation guide is for non-Docker installs. You’ll probably need to install TensorFlow into the Docker image that HA runs from.
I had debug turned on - but can’t spot anything unusual in the logs.
Also tried installing it as per: Tensorflow and official docker image - CPU instruction issue
But - ended up with the same problem.
Sorry, I can’t help here. I used a special installer to add TensorFlow to a Hass.io image, and I didn’t set that up myself. But what you need to do is somehow get TensorFlow’s libraries into your Home Assistant Docker image.
@hunterjm Would you know where to start with installing these libraries into his docker image?
Trying to install TensorFlow 2.0 to see what it does.
Weird that there’s nothing in the debug log.
Yeah, but are you installing it into your image or into the Docker container? If you do the latter, it will be wiped out on the next container restart.
In the actual container - my understanding is it’ll only be wiped out with an upgrade… It doesn’t work anyway - same boot loop.
(confirmed - when I restart home-assistant, tensorflow 2.0 is still in the container)
Run the following command in your container and look for ANY output… If there is no output, your container/CPU isn’t seeing the AVX instruction set (a TF hardware requirement).
grep -i avx /proc/cpuinfo
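If grep isn’t handy in the container, the same check can be done from Python by reading /proc/cpuinfo directly — a minimal sketch, Linux-only, and the helper name is my own:

```python
# Minimal sketch (helper name my own): check /proc/cpuinfo for the AVX flag.
# TensorFlow's prebuilt pip wheels are compiled with AVX, so no "avx" flag
# here means the stock wheel will crash the process at import time.
def cpu_has_avx(cpuinfo_path="/proc/cpuinfo"):
    """Return True if any 'flags' line in cpuinfo lists the avx flag."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                # flags lines look like: "flags\t\t: fpu vme ... avx ..."
                if line.startswith("flags") and "avx" in line.split():
                    return True
    except OSError:
        pass  # not Linux, or /proc unavailable
    return False

if __name__ == "__main__":
    print("AVX:", "yes" if cpu_has_avx() else "no")
```

Note that `"avx" in line.split()` matches the exact `avx` flag, not `avx2`, which is listed separately.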
Nothing returned. However, I recompiled TensorFlow as per the instructions I found in this thread
Still end up with a boot loop?
Would there not be some sort of error though?
It actually starts up now with the compiled version of the above.
However, I get the following error:
> Error while setting up platform tensorflow
> Traceback (most recent call last):
> File "/usr/src/app/homeassistant/helpers/entity_platform.py", line 126, in _async_setup_platform
> SLOW_SETUP_MAX_WAIT, loop=hass.loop)
> File "/usr/local/lib/python3.7/asyncio/tasks.py", line 416, in wait_for
> return fut.result()
> File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
> result = self.fn(*self.args, **self.kwargs)
> File "/usr/src/app/homeassistant/components/tensorflow/image_processing.py", line 116, in setup_platform
> detection_graph = tf.Graph()
> AttributeError: module 'tensorflow' has no attribute 'Graph'
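One way to narrow that down: the AttributeError means whatever module `import tensorflow` resolves to doesn’t expose the `tf.Graph` that HA’s `image_processing.py` calls at setup. A quick probe — the helper name is my own, and it should be run with the same interpreter the container uses (python3.7, per the traceback):

```python
# Quick probe (helper name my own): does the "tensorflow" this interpreter
# imports actually expose the Graph attribute that HA 0.91's
# image_processing.py calls (detection_graph = tf.Graph())?
import importlib

def exposes_graph_api(module_name="tensorflow"):
    """True if module_name imports cleanly and has a Graph attribute."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False  # not installed for this interpreter at all
    return hasattr(mod, "Graph")

if __name__ == "__main__":
    print("tf.Graph available:", exposes_graph_api())
```

A False here, with TF supposedly installed, points at a broken or mismatched wheel (e.g. built against a different Python) rather than at the HA component itself.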
Wait - it compiled with Python 3.7… let me try AGAIN… long wait ahead of me
I had a silent error where my HA instance wouldn’t start after attempting to load TensorFlow (I’m running as a VM using KVM/libvirt). After some troubleshooting/googling I found the bit about TF requiring a CPU that has AVX capabilities. After I enabled full passthrough of my host CPU instructions, HA booted with no issues and TF happily works.
My understanding of that is that precompiled TensorFlow requires the AVX instruction set - if you compile it yourself, you can do without it.
(and I’m still struggling with compiling it myself - going to ask if someone can upload what they compiled in the other thread)
For what little it’s worth, I’m seeing exactly the same thing.
I’m running the Docker container for HomeAssistant 0.91.3, and I also followed the previously linked script.
If I have my image_processing config block in, the container just constantly boot loops. The Docker logs don’t show anything that jumps out as obviously related.
I’ve tried using different cameras (switching between ffmpeg and mjpeg) and different TensorFlow models, and I’ve recompiled the models with protoc twice just to be sure. Without fail, I get a boot loop until I remove the image_processing block.
My host machine has an AMD Turion II, which I thought supported AVX, but I’ll try compiling TensorFlow myself too.
I know there is not much new info in this post but just wanted @xrapidx to know you’re not alone!
@mkono87 Sadly not yet- I tried a few times but kept having different issues with missing libraries and multiple compilation failures. I’ve shelved it for now as it was just taking so much time and I already had doubts that my CPU would be able to handle it. I’ve spent so long on it that I’m contemplating moving to a NUC just to get around this issue!
Pretty much the same status as @Insanitum - I’ve spent quite a bit of time trying different methods, but each fails for a different reason.
@xrapidx @Insanitum I figured it out late last night. As I’m using Proxmox, I had to set my VM’s CPU type to host from its default kvm64. That allowed the VM to see all of the host’s CPU instructions. What CPUs are you guys using?
Running on an HP Gen8 MicroServer: Intel® Celeron® CPU G1610T @ 2.30GHz
@mkono87 Congrats, glad to hear you have it working!
I’m using Docker on top of Ubuntu, running on an AMD Turion II. The Turion doesn’t have AVX, so I need to compile TensorFlow without the AVX instructions, but that’s proven to be easier said than done.
I’ve seen some discussion about how big the Home Assistant Docker image is getting, and TensorFlow was mentioned as a candidate to break out into a separate container; I’d hoped to be able to contribute that, but even getting TF compiling is proving tough. That way, we’d all be able to precompile different flavours of TensorFlow and make it easier to just pull and link to one that fits your hardware.
As I’ve struggled with it so much, and I was already skeptical that my CPU would have enough power to run TF anyway, I think I may personally end up saving up for a NUC with an i5 CPU to give me both AVX and a bit more compute power.
I bought a Dell OptiPlex 9100 with an i5-3570 in it. It was supposed to be just for Blue Iris, but I threw Home Assistant on there and it works great. I could use more RAM, but so far so good. Having multiple Docker options would be fantastic.