Love the component, but I had ZERO success with the default/TensorFlow Lite detector. It might be the resolution of my feeds: a ~40% box inside a 2048x1536 feed. Detection time was 0.4s on a Pi4, but the result was completely useless.
Only just got it going, but accuracy appears perfect so far: people are now 90%+ every time, with no false positives. Unfortunately, detection time on a 4GB Pi4 is ~4 seconds.
I'm not sure if the label file matched the model file, but it works OK for me!
It's only early days, but testing has been very impressive. I might have to offload DOODS to a server CPU to speed it up.
OK, I've spent a LOT of time testing this and really want a result: to rid myself of false positives on a handful of Hikvisions. But I'm not there yet.
My findings:
Default/TensorFlow Lite with coco_ssd_mobilenet_v1_1.0_quant.tflite is hopelessly inaccurate with a complicated scene or higher-res feed. It gets a bit better (but is still hopeless) if you feed it an appropriately sized image. It might be viable on a very, very simple image (like a person on a green screen) at a Goldilocks 300x300 resolution, but that's just not a real-world application. It is fast, though: about 0.14s on a Pi4.
TensorFlow with the faster_rcnn_inception_v2_coco_2018_01_28.pb model: amazing, near-perfect accuracy for people in a complex shadow/wind outdoor image (I haven't had any false positives yet). The problem is speed: 6-25s on a Pi4. I tried the DOODS Docker on a QNAP NAS with a Celeron J3455 and it would time out at over 60s. I tried a DOODS Docker on a five-year-old quad-core Xeon running Ubuntu, and it was 2x faster than the Pi4 at 3-12 seconds, which still isn't fast enough to be viable; worse, the Ubuntu Docker crashes after a couple of detections. I can't be bothered troubleshooting the reliability, because I don't think the speed will get there with the current model files.
So: no viable solution yet.
I'm considering purchasing a Coral USB for the Hass.io Pi4, but it's quite expensive, with slow delivery to Australia.
So I'm seeking opinions before committing: am I going to be disappointed with the accuracy of a Coral USB and its respective model files?
Or has anyone found a more efficient and accurate model file? I only care about PEOPLE and maybe CARS; I don't need a model file with animals/bananas etc. Is there a security-camera-focused (people and cars only) model file?
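In the meantime, if I'm reading the HA component docs right, I can at least filter what gets reported down to the labels I care about; a minimal sketch, where the URL, detector name, and camera entity are placeholders for my setup:

```yaml
# Sketch only - url, detector and entity_id are placeholders
image_processing:
  - platform: doods
    url: "http://localhost:8080"
    detector: tensorflow
    source:
      - entity_id: camera.driveway
    confidence: 60
    labels:
      - name: person
        confidence: 70
      - name: car
```

Though as I understand it the model still evaluates every class, so that doesn't buy any speed.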
Thanks for the tips. Sounds like it should be easy to follow.
As far as the Coral goes, I could be misunderstanding this, but I was under the impression that it used the less accurate models. I'm on a phone, or I'd try to find where I read that.
There are some security cams that have people and vehicle detection. Haven’t tried one of them yet. Would love to get this to work so I don’t have to buy more cameras.
Could somebody offer any advice on how to install additional models when using the Hass.io add-on?
Looking at the notes, I followed the below:
This add-on maps the /share directory by default. The default configuration uses the built in model.
Place your custom models in something like /share/doods and you can access them from within the container when you configure it.
However, when I’ve put them in /usr/share, they’re still not being found.
I've also tried manually copying them into the Docker container's file system; however, they're wiped each time the container is restarted.
Is there an upper limit to scan_interval, or a special 0 value which would make it never automatically rescan? If possible, I would like to do image processing only based on the motion triggers in my Node-RED flows; see the sketch below for what I'm hoping works.
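As far as I can tell there's no documented "never" value, so my working assumption is to just set scan_interval very high and trigger scans via the image_processing.scan service instead; a sketch, with placeholder URL, detector, and camera names:

```yaml
# Sketch only - url, detector and entity_id are placeholders
image_processing:
  - platform: doods
    url: "http://localhost:8080"
    detector: default
    scan_interval: 10000  # very long interval, to effectively stop automatic polling
    source:
      - entity_id: camera.driveway
```

Then Node-RED (or an automation) would call the image_processing.scan service on the resulting image_processing entity whenever a motion trigger fires.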
The share directory is the one that sits next to the config directory in Hass.io.
So if you use the Samba add-on, when you browse to your Hass.io box over the network, it is the share folder in the root of that. Then just make a doods folder inside it and put everything there; a sketch of the matching add-on config is below.
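Going by the DOODS README, a detector entry in the add-on config then just points at those paths; roughly like this (the file names are whatever you dropped in the folder, and the field names are my reading of the README, so double-check them):

```yaml
# Sketch only - model/label file names are examples, field names per the DOODS README
doods:
  detectors:
    - name: inception
      type: tensorflow
      modelFile: /share/doods/faster_rcnn_inception_v2_coco_2018_01_28.pb
      labelFile: /share/doods/coco_labels.txt
      numThreads: 4
      numConcurrent: 1
```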
The one silly mistake I made (and I'll blame the fact that I don't get to mess with this stuff very often) is that I forgot to adjust the config with the new name.
After I went back into configuration.yaml and changed the name from default to inception, everything started working.
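In case it helps anyone else, the relevant bit of configuration.yaml ends up looking roughly like this (the URL and camera entity here are placeholders for your own setup):

```yaml
image_processing:
  - platform: doods
    url: "http://localhost:8080"
    detector: inception  # must match the detector name defined on the DOODS side
    source:
      - entity_id: camera.front_yard
```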
I actually haven't been outside to try it yet, but none of my bushes, trees, or random patches of mulch have been identified as people! This is after triggering a manual scan. Time will tell.
If there is no folder there, you could always make one.
Use Samba or the Configurator add-on to make it match the path, or modify the path in your config to match your system.
Using the Inception detector takes much longer: about 30 seconds on the ~13-year-old machine I'm running it on right now. It isn't as bad with false detections, but…
I haven't taken down my Christmas lights yet. This is a floodlight on a stake in the ground. Well, now that I know how to try different models, I'll try others to see what I get.
Found the correct share folder now and it's working a treat.
I've moved over to the faster_rcnn_inception_v2_coco_2018_01_28.pb model now and I'm seeing much quicker results. Previously, with the default model, the process was taking far too long, which meant that much of the time the subject that caused the motion had already moved on.
I was scanning through this thread and was just wondering if anyone had compared DOODS to the TensorFlow component in HA? I've been using the latter for a few months, but was hoping to find something with better performance.
My system is running an AMD Ryzen 5 2400G with Radeon Vega graphics. I have Ubuntu Server as my host, and am running HA and everything else in containers. The TensorFlow component typically takes 1-3 seconds to complete. I just installed DOODS and set the component up almost identically, and using the TensorFlow model it appears to have similar performance.
So, has anyone compared them in more depth? Also, has anyone been able to harness a Radeon video card (not Nvidia) in a Docker container? I expect that if I could get the GPU involved, performance would skyrocket.
One final question: can the HA component do detection on a stream, or only on a single image? I didn't find any information on it…
I'm two days into testing DOODS.
At first I went with the default coco_ssd_mobilenet_v1_1.0_quant.tflite. It gave me person detection results at about 60-70%, but it kept detecting a refrigerator in the corridor.
After switching to faster_rcnn_inception_v2_coco_2018_01_28.pb, I can see it takes more time to detect, but it also gives over 90% results. BUT… my dogs are now detected as horses.
Sorry for such a simple question, but I have the Hass.io add-on installed and it seems to be working, as it is detecting people and cars and such; however, I can't find where the images are going. My configuration is below:
@DeadEnd you can't use Radeon GPUs yet. The default Docker tag uses basically no CPU features to speed up processing. If you are running it in Docker, you can use the amd64 tag, which uses a few more CPU features that might speed up processing; a minimal compose sketch is below.
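For example, something like this, assuming the snowzach/doods image name from the DOODS README (the volume mount for custom models is optional, and the container path is my assumption, so verify it):

```yaml
# docker-compose.yml sketch - image tag per the DOODS README, paths are assumptions
version: "3"
services:
  doods:
    image: snowzach/doods:amd64  # amd64 tag enables more CPU optimizations than latest
    ports:
      - "8080:8080"
    volumes:
      - ./models:/opt/doods/models  # optional: custom model files
```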
@MrUkleja it is what it is. You could get a different dog. A different model is not likely to help you. That’s one of the most accurate models IMO.