Yeah, I am not sure what you can do there. I suspect the door frame is confusing the TensorFlow model. The only other thing might be to train a custom model. I have never done it myself, but in theory you could train the model to recognize a person with a door frame in the way. I hope to get time someday to actually learn how to do this myself.
As to your suggestion I moved to Node-RED to see if I can make something awesome happen in it.
So far I’ve created a sequence, and it seems to me the idea works, but I’m stuck on checking whether an attribute contains a value (attribute summary contains person) before continuing with the sequence:
This is what I got from the debug log:
So far I’ve tried to fetch the attribute person with this line:
msg.data.attributes.summary person
Though it keeps giving me this error:
“Error: Invalid property expression: unexpected ’ ’ at position 23”
Now, before I try to debug something I’ve never done before, can anyone point out what kind of node or parameters/properties I am missing or have defined wrong?
My tests during the day are also failing, which fits well with your theory that the door frame is confusing the model.
I’m already proud that I managed to integrate DOODS and get it working (even if you seem to have managed to hide most of the complexity, thanks again!). I guess training the model will be out of my league.
Perhaps in a year or two it will become easy enough that even I can do it.
Hi all, love the project!
Two questions:
1. Is it possible to put a Coral Mini PCIe/M.2 into a USB converter?
2. Has anyone done it?
I would use the debug option a bunch to see what’s coming out of the previous block. Sometimes I’ve found it easier to use the code node to just process each message by hand, since the Home Assistant nodes are sometimes goofy. What you’re looking for in msg might be a sub-object of that message. It takes a little fiddling to get used to Node-RED, but once you do, it’s magic.
I know that people have used the mini PCIe EdgeTPU with DOODS but I have not heard of anyone doing it with a USB converter. I am pretty skeptical that the EdgeTPU would work in one of those adapters. I think they are mostly designed for network cards that support some sort of USB mode. I could be wrong though.
Thanks for the reply
@snowzach, forgive me if I’m asking in the wrong place, as I’m not using the HA component right now, but rather another instance of DOODS in a Docker container on a different rPi. I’m using motion detection in motionEye to detect and save image to disk, and Node-Red to watch the folder and send found images to DOODS with the following JSON:
msg.payload = {
    "detector_name": "default",
    "data": msg.payload.toString('base64'),
    "detect": {
        "car": 60,
        "truck": 60
    }
}
msg.headers = {
    "Content-Type": "application/json"
}
return msg;
That’s working great, and if there are detections returned the file is saved off for review, sensors are triggered, automations run, etc.
What I’m missing is seeing DOODS ‘work’: the bounding boxes and labels and confidence.
In the HA component, there is the file_out: "/config/www/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg" directive that gives me what I want.
From snooping at the msg that DOODS is returning in Node-RED, it looks like there is an image in there. Is it the equivalent of file_out:, and if so, how can I extract it?
Thanks for amazing work with DOODS… it’s my first foray into ML and AI stuff, and for an old guy, it’s truly incredible stuff.
I’m really starting to get the hang of it; the debug block shows me the information in the second picture from my previous post.
I do not know the appropriate syntax or current state node configuration to check whether “person” exists in msg.data.attributes.summary.
According to DOODS and the debug log, it should tell me whether “car”, “person”, or “truck” exists.
msg.data.attributes.summary.person is not a reliable check, because if person = 0 the attribute simply does not exist.
DOODS doesn’t return any image data. It just returns the coordinates and labels detected. The Home Assistant component is where it draws the boxes and annotations. This is outside the scope of what DOODS does (at least for now).
That’s why the code node can be easier. You can test to see if it exists first.
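A minimal sketch of what such a code (function) node check could look like, assuming the msg.data.attributes.summary path from the debug output above (adjust it to what your debug node actually shows):

```javascript
// Sketch of a Node-RED function-node helper: returns true only when
// the DOODS summary reports at least one person. The msg paths are
// assumed from the debug output above, not guaranteed for every flow.
function personDetected(msg) {
  const summary = msg.data && msg.data.attributes && msg.data.attributes.summary;
  return Boolean(summary && summary.person > 0);
}
```

Inside a function node you would then end with `return personDetected(msg) ? msg : null;`, so the flow only continues when a person was detected.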
I’m kinda missing that kind of code and not sure what to put there (I’ve tried everything in a current state node): all kinds of “is” and “is not”. I even tried a switch node, which has a “contains” option for checking person in msg.data.attributes.summary, but it’s not accepting it. It’s probably due to my lack of understanding of code…
Is it possible to turn off the camera feeds into DOODS, or to have DOODS run as an automation?
It is: first you define DOODS and set the scan_interval really high. Then, in an automation, you call DOODS to scan.
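As a rough sketch of that setup (camera and entity names here are placeholders, not from this thread):

```yaml
# Hypothetical example: a very high scan_interval effectively disables
# periodic scanning, and an automation triggers the scan on demand.
image_processing:
  - platform: doods
    url: "http://localhost:8080"
    detector: default
    scan_interval: 10000
    source:
      - entity_id: camera.front_door

automation:
  - alias: "Scan on motion"
    trigger:
      - platform: state
        entity_id: binary_sensor.front_door_motion
        to: "on"
    action:
      - service: image_processing.scan
        entity_id: image_processing.doods_front_door
```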
So I managed to succeed in building the sequence in Node-RED and it actually works. I feel amazing.
So, to help newcomers like me:
- Basic: define the DOODS configuration according (and credits) to @snowzach.
- First, define an events: state node.
- Second, create a call service node to scan (using DOODS).
- Third, create a current state node to save the complete state of the scan into the data (of the payload).
- Almost last but not least, create a switch node to check whether the key “person” or “car” (or whatever label you want) exists at the property msg.data.attributes.summary.
- Create yourself a notify node or whatever finishes your sequence!
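For the last step, a function node can also build the notification text straight from the summary. A sketch, assuming the same msg.data.attributes.summary shape as above (the message format is my own invention):

```javascript
// Illustrative helper for the notify step: turns a DOODS summary
// object like { person: 1, car: 2 } into one line of text, or null
// when nothing was detected (returning null stops the flow).
function buildNotification(summary) {
  const labels = Object.keys(summary || {}).filter((l) => summary[l] > 0);
  if (labels.length === 0) return null; // nothing detected
  return "DOODS detected: " + labels.map((l) => l + " x" + summary[l]).join(", ");
}
```

In a function node you would set `msg.payload = buildNotification(msg.data.attributes.summary)` and pass the message on to the notify node.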
Result of the complete sequence:
Thanks everyone.
Thanks I will give it a go
Got it working! I’m using the motion sensor in the camera to trigger DOODS.
Can someone tell me how to force the state of the image_processing entity to 0, please? I can do it in the Developer Tools but not in a script.
image_processing.doods_landing 1 matches:
person:
- score: 68.75
box:
- 0.43539935
- 0.09401226
- 0.9943637
- 0.8369306
summary:
person: 1
total_matches: 1
process_time: 0.6271175090223551
friendly_name: Doods landing
State to 0 — you mean you want the result to be 0 matches? That requires you to raise the confidence threshold above the person score in your example, e.g. to 70%.
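In the Home Assistant DOODS component that threshold is the confidence option. A sketch, with the other required options abbreviated and entity names made up:

```yaml
# Sketch: with confidence set to 70, a 68.75% person match from the
# example above would no longer count as a detection.
image_processing:
  - platform: doods
    url: "http://localhost:8080"
    detector: default
    confidence: 70
    source:
      - entity_id: camera.landing
```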
Hi Step,
I use motion detection to trigger an automation which sends the video to DOODS 5 times. If DOODS detects a person on the last pass, the state stays at 1 until the automation is triggered again; I would like to be able to set it to 0 after each run.