Thanks, I installed HA manually on the Pi instead of Docker and it’s working now. Does it work with the Google Coral to make detection faster? My goal is to use snapshots from multiple cameras to detect people for security notifications.
Just got the basics set up and got some detections.
How/where do I access the file_out images from the tmp folder? And is there information elsewhere someone can point me to for reference? My HA instance is also in a Docker container. Do I need to map a local volume into my Docker instance so I can access them remotely?
This is my first time attempting to work with snapshots
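To be concrete about what I mean by mapping a volume, something like this in a docker-compose file (the image name and host path are just examples of my setup):

services:
  homeassistant:
    image: homeassistant/home-assistant
    network_mode: host
    volumes:
      # host folder on the left, /config inside the container on the right
      - /home/me/homeassistant:/config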
Figured it out. I needed to add my folder to whitelist_external_dirs in my configuration.yaml.
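For anyone else hitting this, the relevant bit of configuration.yaml looks like this (the path is just my example; use whatever folder your file_out templates write into):

homeassistant:
  whitelist_external_dirs:
    # any folder HA writes snapshots to outside its config dir must be listed here
    - /tmp/doods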
I have used DOODS with the Coral stick. It’s very fast at detection but not as accurate as some of the other models. I usually use the TensorFlow (not Lite) faster_rcnn_inception_v2_coco model as it’s very accurate. It struggles a little on a Pi though and takes up to 10-12 seconds per detection.
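For reference, the detector entry in the DOODS config.yaml looks roughly like this for that model (file names follow the example config that ships with DOODS; adjust paths to your install):

doods:
  detectors:
    - name: tensorflow
      type: tensorflow
      # full TensorFlow (not Lite) frozen graph plus its labels file
      modelFile: models/faster_rcnn_inception_v2_coco_2018_01_28.pb
      labelFile: models/coco_labels1.txt
      numThreads: 4
      numConcurrent: 4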
10-12 seconds per detection with the Coral stick, or without it on the Pi using TensorFlow?
On a Pi without the Coral, faster_rcnn_inception_v2_coco takes 10-12 seconds.
And now it has support for aarch64. I compiled it using settings for an ODroid C2… Hopefully that will work on most aarch64 devices.
If you want to try your use case again, the covers option is now available. If you set covers: false, it will trigger when the detected object is anywhere in your defined area, versus requiring it to be completely inside the area.
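In the HA config it sits under a label’s area section, roughly like this (the values here are just illustrative):

labels:
  - name: person
    confidence: 40
    area:
      # only look at this region of the frame
      top: 0.1
      bottom: 0.9
      # false = trigger if the detection merely overlaps the area
      covers: false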
And how long does it take to detect with Coral?
I have not been able to find an Inception model that works with the Coral. I think it’s too complicated or something. I would love to be able to use an Inception/COCO model with the Coral, so if anyone finds one, let me know. For me it’s been, by far, the most accurate. I think Inception will take any size image though, so it may be too large to work with the Coral.
The labels are determined by the model. There’s usually a labels file that goes along with the model, listing what it knows how to detect.
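For example, a COCO labels file just lists one class per line, something like:

person
bicycle
car
motorcycle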
Having a hard time after switching to the TensorFlow model; I’m getting this back in the HA logs:
error converting image channels attribute 3 does not match bits per pixel from file 31613507
[[{{node DecodeBmp}}]]
I’m using the custom component version and a TensorFlow model, sourcing the image from a Dafang camera. Looks like it might be getting cranky about the encoding of the image I’m feeding it?
Can it be used with the Coral connected to an RPi4?
Yes, it works.
Wow! This is cool!
I have attached the Coral to the ‘blue’ (USB 3.0) port of the RPi4. I have set “hwAccel: true”.
Any other settings I should change? How can I check that the Coral is working with DOODS?
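For reference, here’s roughly what my detector section looks like (the EdgeTPU-compiled model file name is the one from the Coral examples; yours may differ):

doods:
  detectors:
    - name: default
      type: tflite
      # must be an EdgeTPU-compiled model for the Coral to be used
      modelFile: models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite
      labelFile: models/coco_labels0.txt
      hwAccel: true
      numThreads: 1
      numConcurrent: 1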
Here are my detection times:
2019-11-02T11:35:51.665+0200 INFO tflite/detector.go:273 Detection Complete {“package”: “detector.tflite”, “id”: “”, “duration”: 0.199302939, “detections”: 0}
Can you tell if this speed is typical for the Coral? Or is it just the RPi4?
Zach, it would be great if you could clarify the purpose of “numThreads”: 1 and “numConcurrent”: 1.
Thanks!
Since upgrading HA to v0.101.x, I’m now seeing the following error:
2019-11-05 14:33:35 ERROR (MainThread) [homeassistant.helpers.entity] Update for image_processing.doods_frontdoor fails
Traceback (most recent call last):
  File "/opt/homeassistant/lib/python3.6/site-packages/homeassistant/helpers/entity.py", line 270, in async_update_ha_state
    await self.async_device_update()
  File "/opt/homeassistant/lib/python3.6/site-packages/homeassistant/helpers/entity.py", line 448, in async_device_update
    await self.async_update()
  File "/opt/homeassistant/lib/python3.6/site-packages/homeassistant/components/image_processing/__init__.py", line 174, in async_update
    await self.async_process_image(image.content)
  File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/opt/homeassistant/lib/python3.6/site-packages/homeassistant/components/doods/image_processing.py", line 342, in process_image
    if self._label_covers[label]:
KeyError: 'person'
My image_processing.yaml looks like:
- platform: doods
  scan_interval: 10000
  url: "http://192.168.2.9:8080"
  detector: default
  file_out:
    - "/opt/homeassistant/config/www/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
    - "/mountpoint/Homeassistant/{{ camera_entity.split('.')[1] }}_{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"
  source:
    - entity_id: camera.driveway
    - entity_id: camera.frontdoor
    - entity_id: camera.garage
    - entity_id: camera.playset
    - entity_id: camera.pool
  confidence: 20
  labels:
    - person
    - car
    - truck
Do I need to bite the bullet and upgrade Python? Or is something else happening here?
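Purely a guess from the traceback (it dies looking up _label_covers for 'person'): maybe the newer per-label syntax is expected now, something like:

labels:
  - name: person
  - name: car
  - name: truck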