You can see the processing time in the logs. You do not want to send images faster than DOODS can receive and process them and return the processed images back.
I send images to DOODS every second for three cams and every 3-5 seconds for five cams. Full processing and return for one cam is about 0.05 seconds or less (I haven't checked in a while). In the end I have no issues, but on the occasions it exceeds the time, an error occurs and it stops processing the image (you miss the object).
Previously I had a slower server and a slower Coral, and many times DOODS would time out and not return the image. When that occurred I increased the interval between images to resolve it. Basically, the timing you can use depends on your hardware, but you cannot exceed the time DOODS needs to complete processing.
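For reference, in Home Assistant the send rate is set with scan_interval on the image_processing platform. A minimal sketch (the URL, detector name, and camera entity are placeholders, not anyone's actual setup):

```yaml
# scan_interval controls how often frames are sent to DOODS;
# keep it longer than the time DOODS needs to process one frame.
image_processing:
  - platform: doods
    scan_interval: 1          # one frame per second for this camera
    url: "http://doods-host:8080"
    detector: default
    source:
      - entity_id: camera.driveway
```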
That’s fair enough, I was just making an assumption there and clearly I was wrong! You know what they say… to assume makes an ass out of u and me (or in this case, just me).
Okay my mistake…ignore what I said.
I went back and looked: because I had named the detector inception, I needed to call it that in my configuration.yaml.
Let us know if you get it going with the GPU.
Not sure I can until Zach releases an add-on version of it though. I’m using supervised and his add-on.
Don’t really want to add a bunch of containers on the host.
After fighting for a couple of days to figure this out enough to get things up and running, I thought I had things perfect with the TensorFlow model, but all through the early AM it was picking up the side of my car as a person, unfortunately with higher confidence than it picks up an actual person at night. Has anyone come up with a clever solution to this? (Confidence in the config is set to 50 and it is only set to detect person.)
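One common approach is to raise the confidence for the person label specifically and restrict detection to the part of the frame where the car never sits. A sketch, assuming the per-label options of the DOODS Home Assistant integration (the camera entity and area fractions here are made-up examples):

```yaml
image_processing:
  - platform: doods
    url: "http://doods-host:8080"
    detector: default
    source:
      - entity_id: camera.driveway
    confidence: 50
    labels:
      - name: person
        confidence: 60        # stricter threshold just for person
        area:
          # fractions of the frame; this example ignores the left third,
          # where the parked car sits
          top: 0.0
          left: 0.33
          bottom: 1.0
          right: 1.0
```

The idea is that false positives from a static object usually come from a fixed region, so masking that region costs little while a global confidence bump would also suppress real detections elsewhere.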
Hi all, thank you for the DOODS integration and for the discussion on this thread. I was trying to implement DOODS with this model: https://tfhub.dev/google/aiy/vision/classifier/birds_V1/1
I have a .pb model file and a .csv labels file (each line formatted as id, description). I changed the extension of this file to .txt.
The config.yaml file is:
The models directory contains:
models/birds_classifier_v1_1.pb
models/birds_classifier_v1_1.txt
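A DOODS detector entry for these files would look something like the following. This is a sketch following DOODS' standard detector layout, not the poster's actual file, and the thread counts are arbitrary:

```yaml
doods:
  detectors:
    - name: bird_classifier_v1_1
      type: tflite                                  # or "tensorflow"
      modelFile: models/birds_classifier_v1_1.pb
      labelFile: models/birds_classifier_v1_1.txt
      numThreads: 4
      numConcurrent: 1
```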
I started the Docker image with: docker run -it -v /Projects/AI/bird-feeder-classifier/models:/opt/doods/models -v /Projects/AI/bird-feeder-classifier/config.yaml:/opt/doods/config.yaml -p 8080:8080 snowzach/doods:latest
But I get the following error:
ERROR detector/detector.go:73 Could not initialize detector bird_classifier_v1_1: could not load model models/birds_classifier_v1_1.pb {"package": "detector"}
FATAL detector/detector.go:83 No detectors configured {"package": "detector"}
If I use tensorflow instead of tflite as the type in config.yaml, I get:
ERROR detector/detector.go:73 Could not initialize detector bird_classifier_v1_1: Could not import model: Invalid GraphDef {"package": "detector"}
FATAL detector/detector.go:83 No detectors configured {"package": "detector"}
What could be the problem? Is it possible for DOODS to use this kind of pre-trained model?
I also seem to suffer from the detector timeouts with the Coral USB TPU.
I run it on USB 3.2, so power should not be an issue. I do forward the USB device as USB 3 to a VM via Proxmox.
Works fine for a few pictures, but then I end up with:
This isn’t exactly true, as I have run across this exact issue. With Windows Server 2019 you cannot utilize a GPU through Docker, as Docker runs through a Hyper-V VM which cannot access the GPU hardware directly. The only way I can see to utilize the GPU for TensorFlow of any kind on a Windows Server 2019 system is to run the program on bare metal. I would love to know how to do this with DOODS, though at first glance it appears to rely heavily on Docker and this probably isn’t going to be easy.