Normally 8 secs, between 8 and 10 secs.
Does anyone have an automation example of using object detection with multiple cameras? I want to use the image_processing.object_detected
event but be able to tell which camera it was.
I'm also not sure if I need to specify a different save_file_folder
for each camera/detector.
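Something like this untested sketch is what I am imagining (notify.phone and the alias are placeholders):
```
# Untested sketch: one automation for all cameras, branching on the
# entity_id carried in the event data. notify.phone is a placeholder.
automation:
  - alias: Notify on person detection (any camera)
    trigger:
      - platform: event
        event_type: image_processing.object_detected
        event_data:
          object: person
    action:
      - service: notify.phone
        data_template:
          message: "Person seen by {{ trigger.event.data.entity_id }}"
```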
Hey Dusing,
This is what I am doing:
```
image_processing:
  - platform: deepstack_object
    ip_address: 10.0.0.5
    port: 5000
    api_key: !secret deepstackapi
    scan_interval: 10
    confidence: 70
    save_file_folder: /config/www/deepstack_person_images/frontyard
    source:
      - entity_id: camera.front_yard
        name: person_detector_front_yard
  - platform: deepstack_object
    ip_address: 10.0.0.5
    port: 5000
    api_key: !secret deepstackapi
    scan_interval: 10
    confidence: 70
    save_file_folder: /config/www/deepstack_person_images/frontdoor
    source:
      - entity_id: camera.front_door
        name: person_detector_front_door
  - platform: deepstack_object
    ip_address: 10.0.0.5
    port: 5000
    api_key: !secret deepstackapi
    scan_interval: 10
    confidence: 70
    save_file_folder: /config/www/deepstack_person_images/basement
    source:
      - entity_id: camera.basement
        name: person_detector_basement
  - platform: deepstack_object
    ip_address: 10.0.0.5
    port: 5000
    api_key: !secret deepstackapi
    scan_interval: 10
    confidence: 70
    save_file_folder: /config/www/deepstack_person_images/backyard
    source:
      - entity_id: camera.back_yard
        name: person_detector_back_yard
```
Thank you, do you have automations for when objects are detected?
I did it in Node-RED.
Hi
Can you share your Node-RED flow?
Thx
I have one of these for each of my cameras.
[{"id":"74434c3.ab3d1b4","type":"server-state-changed","z":"71c38478.f9382c","name":"","server":"5573076a.27abd8","version":1,"entityidfilter":"image_processing.person_detector_front_door","entityidfiltertype":"exact","outputinitially":false,"state_type":"str","haltifstate":"0","halt_if_type":"str","halt_if_compare":"is","outputs":2,"output_only_on_state_change":true,"x":225,"y":762.7500219345093,"wires":[[],["fdeb88ef.88ee78"]]},{"id":"fdeb88ef.88ee78","type":"api-current-state","z":"71c38478.f9382c","name":"unknown","server":"5573076a.27abd8","version":1,"outputs":2,"halt_if":"unknown","halt_if_type":"str","halt_if_compare":"is","override_topic":false,"entity_id":"image_processing.person_detector_front_door","state_type":"str","state_location":"payload","override_payload":"msg","entity_location":"data","override_data":"msg","blockInputOverrides":false,"x":528.5,"y":762.7500219345093,"wires":[[],["7f16dc93.fe7de4"]]},{"id":"7f16dc93.fe7de4","type":"api-current-state","z":"71c38478.f9382c","name":"","server":"5573076a.27abd8","version":1,"outputs":2,"halt_if":"true","halt_if_type":"str","halt_if_compare":"is","override_topic":false,"entity_id":"input_text.frontdoortimer","state_type":"str","state_location":"payload","override_payload":"msg","entity_location":"data","override_data":"msg","blockInputOverrides":false,"x":779.7500228881836,"y":765.000020980835,"wires":[[],["5e10fad7.98f824"]]},{"id":"5e10fad7.98f824","type":"function","z":"71c38478.f9382c","name":"Add Timestamp","func":"var topic;\nmsg.topic = \"Person Detected at Front Door at: \" + new Date().toLocaleString();\nreturn msg;","outputs":1,"noerr":0,"x":1043.5000343322754,"y":766.750020980835,"wires":[["bcc3933b.ce884"]]},{"id":"bcc3933b.ce884","type":"file in","z":"71c38478.f9382c","name":"","filename":"/config/www/deepstack_person_images/frontdoor/deepstack_latest_person.jpg","format":"","chunk":false,"sendError":false,"x":291.5477294921875,"y":820.0833559036255,"wires":[["344fb993.491ac6","41b2e147.8240f"]]},{"id":"41b2e147.8240f","type":"api-call-service","z":"71c38478.f9382c","name":"Set false","server":"5573076a.27abd8","version":1,"debugenabled":false,"service_domain":"input_text","service":"set_value","entityId":" input_text.frontdoortimer","data":"{ \"value\": \"true\"}","dataType":"json","mergecontext":"","output_location":"","output_location_type":"none","mustacheAltTags":false,"x":586.2500534057617,"y":874.2500228881836,"wires":[[]]},{"id":"344fb993.491ac6","type":"change","z":"71c38478.f9382c","name":"","rules":[{"t":"set","p":"attachments","pt":"msg","to":"payload","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":714.3570175170898,"y":822.0358333587646,"wires":[["49b61079.4d2ea"]]},{"id":"49b61079.4d2ea","type":"e-mail","z":"71c38478.f9382c","server":"smtp.gmail.com","port":"465","secure":true,"tls":true,"name":"","dname":"","x":908.0001106262207,"y":818.7501864433289,"wires":[]},{"id":"5573076a.27abd8","type":"server","z":"","name":"Home Assistant"}]```
Hi all
I have just released v2.3, which makes a couple of small changes described in the release notes, including adding the config variable save_timestamped_file,
which makes the saving of timestamped images opt-in. As usual, any feedback is appreciated.
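For example, opting in is one extra line per detector; adapting the config shared earlier in the thread:
```
# save_timestamped_file is opt-in from v2.3; folder and camera names
# are taken from the config shared earlier in the thread.
image_processing:
  - platform: deepstack_object
    ip_address: 10.0.0.5
    port: 5000
    save_file_folder: /config/www/deepstack_person_images/frontyard
    save_timestamped_file: True
    source:
      - entity_id: camera.front_yard
        name: person_detector_front_yard
```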
Cheers
Hi Rob - so your latest release doesn't address this issue yet?
Would I be able to use save_timestamped_file instead and then somehow grab the latest ("youngest") file?
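For example, could I expose the newest file with something like this? Just an idea, untested; the path is from my setup:
```
# Untested idea: a command_line sensor whose state is the path of the
# newest timestamped image in the folder.
sensor:
  - platform: command_line
    name: newest_frontdoor_person_image
    command: "ls -t /config/www/deepstack_person_images/frontdoor/*.jpg | head -n 1"
    scan_interval: 30
```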
Nice. But I'm afraid that 1000 calls wouldn't be enough.
I like the google_vision_latest_{target}.jpg idea though.
I'll probably just wait until you have the time to get the other one fixed, as I am a huge fan of your deepstack component and the way it works.
Hi,
Can anyone help with my automation config? The box trigger config causes the automation not to run, but I cannot find out why. I have tried with no parentheses and with brackets, but no success…
```
- id: '11200928246229'
  alias: Object detection automation
  trigger:
    - platform: event
      event_type: image_processing.object_detected
      event_data:
        object: person
        entity_id: image_processing.person_detector_vardagsrum
        box: (0.0, 0.0, 1.0, 1.0)
  condition: []
  action:
    - data_template:
        title: "New object detection"
        message: "{{ trigger.event.data.entity_id }} with confidence {{ trigger.event.data.confidence }}"
        data:
          image: 'https://....../local/deepstack_person_images/vardagsrum/person_detector_vardagsrum_latest_person.jpg'
      service: notify.phone
```
@Yoinkz I think you misunderstood; I was not suggesting you use the other integration, but rather that you wrap your automation in the way shown in the link (temporarily turning off the automation). UPDATE: please replace your image_processing.py file with this one, restart, and see if this fixes the issue. Note that I had to increase the timeout interval.
@alexando your automation will only run if the detected person has exactly that box, which will never happen.
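Dropping box from event_data should let it fire, e.g.:
```
# Same trigger without the box filter; match on object and entity only.
trigger:
  - platform: event
    event_type: image_processing.object_detected
    event_data:
      object: person
      entity_id: image_processing.person_detector_vardagsrum
```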
@robmarkcole OK, thanks. I am trying to filter out detections for a part of the camera picture (e.g. the top 10%). Is this possible?
Hi @alexando
Yes, that is going to be possible. Do you want to:
- Exclude objects that have a bounding box that overlaps with the excluded region?
- Exclude objects which have their center in the excluded region?
Hi @robmarkcole
I want to exclude objects whose bounding box fully overlaps the excluded area. For example, if an object is only within a defined area of the picture it is disregarded, but as soon as the object moves outside that area, the automation is activated.
Alternatively, a defined area where an automation is activated as soon as an object enters (partly or fully).
OK, that is the simplest case - you just need to check that the bottom of the bounding box is within the excluded region. I suggest you can do this simply with a python_script that consumes the bounding box data.
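A minimal sketch of such a python_script (untested; it assumes the calling automation passes the event's box, in (y_min, x_min, y_max, x_max) relative coordinates, plus an exclude_top fraction as service data):
```
# python_scripts/check_excluded_region.py - untested sketch.
# Assumes service data: box = (y_min, x_min, y_max, x_max) in relative
# coordinates, and exclude_top = fraction of the frame top to ignore.
box = data.get("box")
exclude_top = float(data.get("exclude_top", 0.1))
if box is not None:
    y_min, x_min, y_max, x_max = box
    # Disregard the object while its bottom edge (y_max) is still
    # inside the excluded strip at the top of the frame.
    if y_max > exclude_top:
        hass.bus.fire("person_left_excluded_region", {"box": box})
```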
Oh, sorry about that. Then I did misunderstand it. But even though it fires with the same picture, I would rather get something than nothing.
I replaced the file and restarted HA. I will do some testing and let you know.
Thanks!
Hi everybody,
I'm a little bit lost with all the information.
I want to have a camera in front of my door to detect which people are present at home.
I can't use the deepstack_teach_face
service… when I call the service it tells me it can't!
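For reference, this is how I am calling it, if I understand the docs correctly (the name and file path are mine):
```
# The service call I am making; it fails with the "can't" message.
service: image_processing.deepstack_teach_face
data:
  name: me
  file_path: /config/www/my_face.jpg
```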
Thank you!!
Hi Rob,
I just noticed that version 2.3 is available. Should I overwrite the file again after this update?