Wow! This is cool!
I have attached the Coral to the 'blue' (USB 3.0) port of the RPi4 and set "hwAccel: true".
Are there any other settings I should change? How can I check whether the Coral is actually being used by DOODS?
Here are my detection times:
2019-11-02T11:35:51.665+0200 INFO tflite/detector.go:273 Detection Complete {"package": "detector.tflite", "id": "", "duration": 0.199302939, "detections": 0}
Can you tell if this speed is typical for the Coral? Or is it just the RPi4?
Since upgrading HA to v0.101.x, I’m now seeing the following error:
2019-11-05 14:33:35 ERROR (MainThread) [homeassistant.helpers.entity] Update for image_processing.doods_frontdoor fails
Traceback (most recent call last):
File "/opt/homeassistant/lib/python3.6/site-packages/homeassistant/helpers/entity.py", line 270, in async_update_ha_state
await self.async_device_update()
File "/opt/homeassistant/lib/python3.6/site-packages/homeassistant/helpers/entity.py", line 448, in async_device_update
await self.async_update()
File "/opt/homeassistant/lib/python3.6/site-packages/homeassistant/components/image_processing/__init__.py", line 174, in async_update
await self.async_process_image(image.content)
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/homeassistant/lib/python3.6/site-packages/homeassistant/components/doods/image_processing.py", line 342, in process_image
if self._label_covers[label]:
KeyError: 'person'
OK, looks like those detection times were for the RPi4 alone. To get the Coral working, one needs to specify a Coral-compatible model file, e.g. mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite.
With such a model specified and with "hwAccel: true", detection times are very different, e.g. "duration": 0.014071279.
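For reference, the corresponding entry in the detectors section of the DOODS config ends up looking roughly like this (a sketch only; the detector name and label file below are examples, and the paths depend on where you put the model):

    detectors:
      - name: edgetpu                  # example name, referenced from the HA doods config
        type: tflite
        modelFile: models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite
        labelFile: models/coco_labels0.txt   # example label file, use the one matching the model
        hwAccel: true                  # tells DOODS to use the Coral / Edge TPU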
Question: what would be the best model for detecting cars with a high degree of precision, even at night? Is anything available off the shelf?
So it's a bit of a mixed bag. The EdgeTPU models are really fast but not as accurate. I use http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz for accuracy. It is, however, very heavy and may not even be able to run on a Raspberry Pi. You'll have to experiment.
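If you want to try it, a detector entry for that model would look roughly like this (a sketch, assuming you have extracted the frozen graph from the tarball and pointed modelFile at it; the label file and paths are examples):

    detectors:
      - name: tensorflow
        type: tensorflow
        modelFile: models/faster_rcnn_inception_v2_coco_2018_01_28.pb   # the extracted frozen graph, renamed
        labelFile: models/coco_labels1.txt                              # example label file
        numThreads: 4
        numConcurrent: 1
        hwAccel: false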
Would love some suggestions on this. I'm running DOODS in an ESXi VM on a Dell R710 (2x Intel Xeon E5630 @ 2.53GHz). I've given the VM 4 vCPUs and 16GB of memory…
When I use the default detector, I get the following:
Is there any way to reduce the detection time with TensorFlow? I've seen some posts about using the Google Coral; however, that appears to require a different detector and isn't as accurate as TensorFlow. I've tried bumping up the number of vCPUs, but it doesn't appear to change anything.
Have you noticed the same times for all detections? There are settings for the number of threads and for concurrency. You should have threads set to 4 and concurrent set to 1-2; this basically creates 1-2 instances of TensorFlow, each with 4 threads. The first detection typically takes quite a while, but then the model gets cached and it speeds up.
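In the DOODS config those map to two per-detector settings (key names as in the example config that ships with DOODS; treat this as a sketch):

        numThreads: 4      # threads per TensorFlow instance
        numConcurrent: 2   # how many instances can run detections in parallel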
Run a detection a few times and see if it speeds up. There are also two images: latest points to the noavx image, which is the most compatible. If you pick the amd64 tag it should be faster, provided your processor supports AVX and SSE4.2. I am not sure the E5630 supports AVX though.
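Switching is just a matter of pulling and running the other tag, e.g. (assuming the stock image name and default port):

    docker pull snowzach/doods:amd64
    docker run -it -p 8080:8080 snowzach/doods:amd64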
The Edge TPU only supports models that are compiled for it, which seem to be simpler ones.
The other thing you can try is resizing your image somehow before sending it to DOODS. The larger the image, the longer it will obviously take.
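One way to do that in Home Assistant (just one option, not something DOODS requires) is the proxy camera platform, which can downscale snapshots before they reach image_processing; a rough sketch, with hypothetical entity names and sizes:

    camera:
      - platform: proxy
        entity_id: camera.front_door   # hypothetical source camera
        max_image_width: 720           # downscale before it goes to DOODS
        image_quality: 75

You would then point the doods image_processing source at the resulting proxy camera entity instead of the original one.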
The width and height are for the model. Some models have a fixed input width and height; if you don't resize the image yourself, it will automatically be resized for you. Most of the MobileNet ones have a 300x300 or 224x224 input size.
DOODS does not maintain aspect ratio; it just resizes at will. The idea is that any sort of image manipulation should be done before you pass the image to DOODS, which is why you can see the width/height in the detectors call. It's not ideal, but at the same time, in my experience, messing up the aspect ratio still produces okay results. I tend to prefer models like Inception, which take full-size images, at the cost of massive CPU use. It's a trade-off that you need to play with a little.
The other option would be to keep the aspect ratio, but then you're effectively losing even more fidelity, as the result is an image with black bars at the top and bottom and even less detail. Perhaps I or someone else can work on an enhancement to the component so that, if you provide a global detection area, it crops the image before sending it to DOODS; you could specify a square area, for example, and the aspect ratio would be maintained. It could also have the benefit of being faster.
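That detectors call is just an HTTP GET against the DOODS API, so you can use it to see what each detector expects. Something like the following should return the configured detectors along with their model width/height (assuming the default port 8080 and a DOODS host reachable at localhost):

    curl http://localhost:8080/detectors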
Some background first. I'm using DOODS as follows:
1. Camera detects motion and notifies HA via MQTT.
2. HA, after verifying the automation hasn't run in the last 60s, calls image_processing.scan.
3. If DOODS has returned a detection, then:
3.1. Send me a notification with the analyzed picture.
3.2. Call camera.record with a duration of 20s and a lookback of 10s (a sketch of this call follows the list).
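For reference, the camera.record call in step 3.2 looks roughly like this inside the automation's action block (the entity and filename are just examples):

    - service: camera.record
      data:
        entity_id: camera.front_door           # hypothetical camera entity
        filename: /config/www/front_door.mp4   # example output path
        duration: 20
        lookback: 10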
I don't believe that lookback is working. I have added the stream: component and enabled "preload stream" in Lovelace. It doesn't seem to matter what value I put for lookback; I still get the same delay between the initial picture saved during image_processing.scan and the video.
I'm looking for a better way to do this. My initial thought is that I should start recording a video when the camera detects motion; if DOODS returns 0 detections, I can then delete the video. Seems easy in principle, but I think I would need to write a script that HA would call to accomplish this.
Is anyone else doing something similar? Have you got lookback working for you? Any other suggestions?
For completeness, here is my .yaml to do the above:
I've been using the default detector, and it is super quick on my Docker instance (the host is an i5 machine with 12GB of RAM) but not so accurate (config-wise I'm not playing with confidence scores at this point, just trying to see what the system is detecting). I have the faster_rcnn_inception_v2_coco_2018_01_28.pb model in the models folder. How can I try using this?
Do I just switch the detector in my config to "tensorflow", or something else? I'm pretty new to this and a bit lost trying to get past the basic/default config.
This is what I currently see in my DOODS Docker log file for a detection: