I’m using this Python program that logs Dahua camera events to MQTT:
Thanks for the lead. I followed that link and a couple of minutes later ended up at the amcrest component, which works for Dahua cams. Well, the two types of mine that I tried, anyway.
For anyone else interested it is here:
Okay, I’m no programmer, but I’m trying to learn here. I just started today and got it working using the Hass.io (0.103.2) add-on with all default settings. Yay!
Thing is, my wife’s beloved hydrangea shows up as a person on one cam and as a car on the other. Same plant, too! Just different angles from two different cameras. Overall, not very accurate.
So how do I replace a model? Looking at the install docs I see:
{
"name": "inception",
"type": "tensorflow",
"modelFile": "/share/doods/faster_rcnn_inception_v2_coco_2018_01_28.pb",
"labelFile": "/share/doods/coco_labels1.txt",
"numThreads": 1,
"numConcurrent": 1,
"hwAccel": false
}
yet when I download one from here (I’ve tried a few… maybe I’m getting the wrong kind):
https://github.com/tensorflow/models/blob/3635527dc66cdfe7270e5b3086858db7307df8a3/research/object_detection/g3doc/detection_model_zoo.md
none of them contain a file with that name after extracting. There is a .pb file, but the name is different, and there’s no labels file.
I’ve even tried the one linked by the OP in post 104: New Custom Component - Image Processing - Object Detection - DOODS
Any suggestions on what I’m doing wrong?
Thanks
Hello, I already have it working with a Google Coral. In my case the directory was:
/var/lib/docker/overlay2/ce6cdb5a533fd7b056e98cd6e2b004ffae2d249e7bc92c7aef17c18a91bdacdb/diff/opt/doods/models/
I had to search for the .tflite file to find that strange directory.
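If anyone else needs to hunt for it, searching for the model file should turn up the right overlay directory (assuming Docker’s default overlay2 storage driver):

find /var/lib/docker/overlay2 -name "*.tflite" 2>/dev/null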
THANK YOU!!! This is so much better than using a separate Python “service”. I love having everything working through/in HA.
Confirmed this works with my Dahua Starlight cameras. Still using motion detection, but now relying on the binary_sensor from the amcrest component instead of MQTT.
Love the component, but I had ZERO success with the default TensorFlow Lite model. It might be the resolution of my feeds: a ~40% box inside a 2048x1536 feed. Detection time was 0.4 s on a Pi 4, but the result was completely useless.
I switched the model to this, as per the docs:
{
"name": "inception",
"type": "tensorflow",
"modelFile": "/share/doods/faster_rcnn_inception_v2_coco_2018_01_28.pb",
"labelFile": "/share/doods/coco_labels1.txt",
"numThreads": 1,
"numConcurrent": 1,
"hwAccel": false
}
I only just got it going, but accuracy appears perfect so far: people are now 90%+ every time, with no false positives. Unfortunately, detection time on a 4GB Pi 4 is ~4 seconds.
To get this model I googled around and downloaded:
http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
Extracted it to find frozen_inference_graph.pb (~57 MB).
Renamed frozen_inference_graph.pb to faster_rcnn_inception_v2_coco_2018_01_28.pb.
Copied it to /share/doods/.
Downloaded https://raw.githubusercontent.com/amikelive/coco-labels/master/coco-labels-2014_2017.txt, renamed it to coco_labels1.txt, and copied it to /share/doods/ as well. (A rough shell version follows.)
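In case it saves anyone some typing, the whole thing boils down to something like this (paths assume the Hass.io add-on’s /share mapping; adjust to suit):

# download and unpack the model from the TF model zoo
wget http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
tar -xzf faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
# rename the frozen graph to the filename the detector config expects
cp faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb /share/doods/faster_rcnn_inception_v2_coco_2018_01_28.pb
# fetch a COCO labels file under the name the config expects
wget -O /share/doods/coco_labels1.txt https://raw.githubusercontent.com/amikelive/coco-labels/master/coco-labels-2014_2017.txt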
I’m not sure the label file matches the model file, but it works OK for me!
It’s only early days, but testing is very impressive. I might have to offload DOODS to a server CPU to speed it up.
OK, I’ve spent a LOT of time testing this and really want a result: to rid myself of false positives on a handful of Hikvisions. But I’m not there yet.
My findings:
- Default TensorFlow Lite with coco_ssd_mobilenet_v1_1.0_quant.tflite is hopelessly inaccurate with a complicated scene or a higher-res feed. It gets a bit better (but still hopeless) if you feed it an appropriately sized image. It might be viable on a very, very simple image (like a person on a green screen) at a Goldilocks 300x300 resolution, but that’s just not a real-world application. It is fast, though: about 0.14 s on a Pi 4.
- TensorFlow with the faster_rcnn_inception_v2_coco_2018_01_28.pb model is amazing: near-perfect accuracy for people on a complex shadow/wind outdoor image (I haven’t had any false positives yet). The problem is speed: 6-25 s on a Pi 4. I tried the DOODS Docker container on a QNAP NAS with a Celeron J3455 and it would time out at over 60 s. I also tried it on a five-year-old quad-core Xeon Ubuntu box, and it was twice as fast as the Pi 4 (3-12 seconds), which still isn’t fast enough to be viable; worse, that container crashes after a couple of detections. I can’t be bothered troubleshooting the reliability, because I don’t think the speed will get there with the current model files.
So: no viable solution yet.
I’m considering purchasing a Coral USB accelerator for the Hass.io Pi 4, but it’s quite expensive, and delivery to Australia is slow.
So I’m seeking opinions before committing: am I going to be disappointed with the accuracy of a Coral USB and its model files?
Or has anyone found a more efficient and accurate model file? I only care about PEOPLE and maybe CARS; I don’t need a model with animals/bananas etc. Is there a security-camera-focused (people and cars only) model file?
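For what it’s worth, I know I can at least filter to the labels I care about on the HA side, with something like the below (entity name is hypothetical), but that doesn’t make the model itself run any faster:

image_processing:
  - platform: doods
    url: "http://localhost:8080"
    detector: inception
    source:
      - entity_id: camera.front_yard
    confidence: 80
    labels:
      - person
      - car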
Thanks for the tips. Sounds like it should be easy to follow.
As far as the Coral goes, I could be misunderstanding this, but I was under the impression that it uses the less accurate models. I’m on a phone, or I’d try to find where I read that.
There are some security cams that have people and vehicle detection. Haven’t tried one of them yet. Would love to get this to work so I don’t have to buy more cameras.
That’s right. I’m using a Coral and the models are not very precise, and at night it’s horrible: it doesn’t recognize a person.
This is an example:
Could somebody offer any advice on how to install additional models when using the Hass.io add-on?
Looking at the notes, I followed this:
This add-on maps the /share directory by default. The default configuration uses the built in model.
Place your custom models in something like /share/doods and you can access them from within the container when you configure it.
However, when I’ve put them in /usr/share, they’re still not being found.
I’ve also tried manually copying them into the Docker container file system; however, they’re wiped each time the container is restarted.
Is there an upper limit on scan_interval, or a special 0 value that would make it never automatically rescan? If possible, I would like to run image processing only from my motion triggers, for my Node-RED flows.
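What I have in mind is something like this; I’m assuming a very large scan_interval effectively disables the polling, and the entity names here are just examples:

image_processing:
  - platform: doods
    url: "http://localhost:8080"
    detector: default
    scan_interval: 10000
    source:
      - entity_id: camera.driveway

# then trigger scans manually from a Node-RED call-service node or an automation:
service: image_processing.scan
entity_id: image_processing.doods_driveway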
I followed the post just a few up from yours:
The share directory is the one that sits next to the config directory in Hass.io.
So if you use the Samba add-on, it’s the share folder in the root of what you see when you browse to your Hass.io box over the network. Then just make a doods folder inside it and put everything there.
The one silly mistake I made (and I’ll blame the fact that I don’t get to mess with this stuff very often) is that I forgot to adjust the config with the new name.
After I went back into configuration.yaml and changed the name from default to inception, everything started working.
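For reference, the relevant bit of configuration.yaml ends up looking something like this (camera entity is just an example):

image_processing:
  - platform: doods
    url: "http://localhost:8080"
    detector: inception   # must match the "name" in the detector config
    source:
      - entity_id: camera.front_door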
I actually haven’t been outside to try it yet, but none of my bushes, trees, or random patches of mulch have been identified as people yet! This is after triggering a manual scan. Time will tell.
Hi,
I’m trying to set this up. I think it might be working, judging by the output from the DOODS logs:
2020-01-04T23:42:21.737+0100  INFO  server/server.go:138  HTTP Request  {"status": 200, "took": 0.410011189, "request": "/detect", "method": "POST", "package": "server.request", "request-id": "d5f40609-doods/5fFMPrnTOZ-000047", "remote": "172.30.32.1:53538"}
2020-01-04T23:43:23.757+0100  INFO  tflite/detector.go:431  Detection Complete  {"package": "detector.tflite", "name": "default", "id": "", "duration": 0.20931495, "detections": 3, "device": null}
However, I would like to look at the output file to see what it actually sees, but I just can’t seem to find it…
This is the file_out in my YAML:
file_out:
- "config/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
There is no file in config/tmp in the hassio/homeassistant folder (i.e. in the same /config folder as configuration.yaml), so where could it be?
I’m new to Linux, so any help would be appreciated…
If there is no folder there, you could always make one.
Use Samba or the Configurator add-on to create the folder so it matches the path, or modify the path in your config to match your system.
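Note that a relative path like config/tmp/… resolves against wherever Home Assistant happens to be running from, so an absolute path is safer. A sketch of what I mean (and, from memory, the target directory may need whitelisting, so double-check against the docs):

image_processing:
  - platform: doods
    url: "http://localhost:8080"
    detector: default
    source:
      - entity_id: camera.front_door
    file_out:
      - "/config/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"

homeassistant:
  whitelist_external_dirs:
    - /config/tmp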
Using the inception detector takes much longer: about 30 seconds on the ~13-year-old machine I’m running it on right now. It isn’t as bad with false detections, but…
I haven’t taken down my Christmas lights yet, and this latest false detection is a floodlight on a stake in the ground. Well, now that I know how to try different models, I’ll try others to see what I get.
Ah yeah that was obvious!
I found the correct share folder now and it’s working a treat.
I’ve moved over to the faster_rcnn_inception_v2_coco_2018_01_28.pb model now and I’m seeing much quicker results, whereas previously, with the default model, the process was taking far too long, which meant that much of the time the subject that caused the motion had already moved on.
Been playing with this today, awesome work!
I was scanning through this thread and was just wondering if anyone had compared DOODS to the TensorFlow component in HA? I’ve been using that for a few months, but was hoping to find something with better performance.
My system is running an AMD Ryzen 5 2400G with Radeon Vega graphics. I have Ubuntu Server as my host, and am running HA and everything else in containers. The TensorFlow component typically takes 1-3 seconds to complete. I just installed DOODS and set the component up almost identically, and using the tensorflow model it appears to have similar performance.
So, has anyone compared them in more depth? Also, has anyone been able to harness a Radeon video card (not Nvidia) in a Docker container? I expect that if I could get the GPUs involved, performance would skyrocket.
One final question: can the HA component do stream detection, or only still images? I didn’t find any information on it…
Cheers and Thanks!
DeadEnd
Hi,
I’m two days into testing DOODS.
At first I went with the default coco_ssd_mobilenet_v1_1.0_quant.tflite. It gave me person detection results at about 60-70%, but it kept detecting a refrigerator in the corridor.
After switching to faster_rcnn_inception_v2_coco_2018_01_28.pb, I can see it takes more time to detect, but it also gives over 90% results. BUT… my dogs are now detected as horses.
Any suggestions? A different model?
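Would per-label confidence thresholds help while I hunt for a better model? As far as I understand, labels that aren’t listed are ignored, so restricting the list should also hide the horse hits. I believe the component accepts something like this (the numbers are guesses):

image_processing:
  - platform: doods
    url: "http://localhost:8080"
    detector: inception
    source:
      - entity_id: camera.corridor
    labels:
      - name: person
        confidence: 70
      - name: dog
        confidence: 60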
Sorry for such a simple question, but I have the Hass.io add-on installed and it seems to be working, since it is detecting people and cars and stuff. However, I can’t find where the images are going. My configuration is below:
file_out:
- "/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
Is this supposed to be a “tmp” folder inside the “homeassistant” folder where configuration.yaml is located, or somewhere else?
Edit:
I just did a search and found it here? This seems like an odd location; is this correct? How do I get it into a more convenient location?