Does the hass face component support Google Coral? If not, are you familiar with a component that does support it (detection / recognition)?
Thx
Not yet, possible in time
Is it possible to use the face component with a local file camera?
Yes you can use any camera supported by HA
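For example, a local file camera can be wired in like any other source. A minimal sketch, assuming a snapshot at `/config/www/snapshot.jpg` and a Deepstack server at `10.0.0.20:5000` (both placeholders, not from this thread):

```yaml
# Sketch: feed a local_file camera into the face component.
# File path, host, and entity names are example values.
camera:
  - platform: local_file
    name: snapshot_cam
    file_path: /config/www/snapshot.jpg

image_processing:
  - platform: deepstack_face
    ip_address: 10.0.0.20   # assumed Deepstack host
    port: 5000
    source:
      - entity_id: camera.snapshot_cam
```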
I'm having trouble with the notification counter. For example, if 3 people are detected in the image, I receive the same notification 3 times. I'm using this:
any help with this?
thanks
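The `image_processing.object_detected` event fires once per detected object, so three people in one frame produce three events and three automation runs. One common workaround is a template condition that suppresses re-runs within a cooldown window. A sketch, assuming your automation's entity_id is `automation.person_alert` and a `notify.mobile_app` service (both placeholders):

```yaml
# Sketch: fire at most once per 60 seconds.
# 'automation.person_alert' and 'notify.mobile_app' are placeholders.
- id: person_alert
  alias: person_alert
  trigger:
    - platform: event
      event_type: image_processing.object_detected
      event_data:
        object: person
  condition:
    - condition: template
      value_template: >
        {{ state_attr('automation.person_alert', 'last_triggered') is none
           or (now() - state_attr('automation.person_alert', 'last_triggered')).total_seconds() > 60 }}
  action:
    - service: notify.mobile_app
      data:
        message: Person detected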
Just found that the deepstack URLs for the Raspberry Pi installation are broken. Does anyone have a copy of the deepstack.deb file (the one for Raspberry Pi and NCS)?
Thanks Rob, for an awesome integration… I've been using it for a while now, and it's time to clean up my config and get everything working as it is supposed to…
That said, I've got the following two lines in my config:
save_file_folder: /tmp/detector/deepstack/
save_timestamped_file: True
and I am getting filenames like this:
F0DYQO~W.JPG
FC1DYT~S.JPG
FPVETY~G.JPG
FPVETY~L.JPG
Any idea where I may have gone wrong?
Hmm, the component couldn't have produced those filenames
I am getting those too, and it is because of:
save_file_folder: /config/snapshots/
save_timestamped_file: True
In Windows the names look a little crazy, but if you look inside Home Assistant they look normal:
Am I doing something wrong here?
How do I choose which items trigger notifications?
It's finding other items but only creating a person-detected file.
Also, when I walk into the room, it's not triggering the automation which should send me the image.
I don't have the portion of the log below, but when I leave the room, the summary changes from person=1 to person=0.
When I enter the room, it changes back to person=1, but the automation isn't emailing me anything.
My current detection shows a change from
summary=
dining table=1
person=1
laptop=1
chair=2
to
summary=
dining table=2
person=1
laptop=1
chair=3
and the only file that gets created is
deepstack_detector_latest_person.jpg
Here is a line from my debug log:
2020-02-11 22:03:03 DEBUG (MainThread) [homeassistant.components.websocket_api.http.connection.12334115914] Sending {'id': 2, 'type': 'event', 'event': <Event state_changed[L]: entity_id=image_processing.deepstack_detector, old_state=<state image_processing.deepstack_detector=0; last_person_detection=2020-02-11_22:02:20, summary=dining table=1, person=1, laptop=1, chair=2, unit_of_measurement=person, friendly_name=deepstack_detector @ 2020-02-11T22:02:41.917945-05:00>, new_state=<state image_processing.deepstack_detector=1; last_person_detection=2020-02-11_22:03:02, summary=dining table=2, person=1, laptop=1, chair=3, unit_of_measurement=person, friendly_name=deepstack_detector @ 2020-02-11T22:03:03.131442-05:00>>}
configuration.yaml
image_processing:
  - platform: deepstack_object
    ip_address: 10.0.0.20
    port: 5555
    api_key: < my key >
    scan_interval: 20 # Optional, in seconds
    save_file_folder: /config/www/
    save_timestamped_file: True
    source:
      - entity_id: camera.frontdoor
        name: deepstack_detector
automations.yaml
- id: '1159403824611'
  alias: New detection alert
  trigger:
    - platform: event
      event_type: image_processing.object_detected
  action:
    - service: script.send_detected_image
scripts.yaml
send_detected_image:
  sequence:
    - service: notify.gmail
      data_template:
        title: 'Something/Someone was detected'
        message: "What/Who is it?"
        data:
          file: "{{ trigger.event.data.file }}"
I think the issue with my automation not firing is the last line
file: "{{ trigger.event.data.file }}"
How do I get the filename of the last file created by the automation?
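One likely cause: the `trigger` variable is only defined inside the automation itself, not inside a script it calls, so `{{ trigger.event.data.file }}` evaluates to nothing in scripts.yaml. A sketch of a workaround, passing the value into the script as a variable (the `file` field on the event is an assumption here; check the actual event payload in Developer Tools → Events). Alternatively, you can hardcode the fixed "latest" path the component writes, e.g. `/config/www/deepstack_detector_latest_person.jpg` in this config:

```yaml
# Sketch: pass the file path from the automation into the script,
# because 'trigger' is undefined inside a standalone script.
# automations.yaml
- id: '1159403824611'
  alias: New detection alert
  trigger:
    - platform: event
      event_type: image_processing.object_detected
  action:
    - service: script.send_detected_image
      data_template:
        detected_file: "{{ trigger.event.data.file }}"  # assumes event carries a 'file' field

# scripts.yaml
send_detected_image:
  sequence:
    - service: notify.gmail
      data_template:
        title: 'Something/Someone was detected'
        message: "What/Who is it?"
        data:
          file: "{{ detected_file }}"
```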
I'm getting the same over Samba in Windows. It's fine in Hass/Linux though
It appears that the files with funny names are all being produced on Windows platforms only? @cooloo @Ekstrom @xeryax ?
@Dayve67 as per docs:
The box coordinates and the box center (centroid) can be used to determine whether an object falls within a defined region-of-interest (ROI). This can be useful to include/exclude objects by their location in the image.
Basically use the box in a condition on your automation.
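As a sketch, assuming the event data exposes `centroid` with normalized `x`/`y` keys as the docs describe (verify the exact field names in Developer Tools → Events), a condition restricting alerts to the left half of the frame could look like:

```yaml
# Sketch: only act when the detected object's centroid is in the
# left half of the image. Field names assumed from the docs.
condition:
  - condition: template
    value_template: "{{ trigger.event.data.centroid.x < 0.5 }}"
```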
@surge919 it's probably a YAML formatting error somewhere
I have not had any false notifications in 2 months, but today I got one! I think I just have to raise my confidence level to 85% from the default. I do wonder, however, if there are any updates to the DeepStack Docker image that can improve detection? Is it trained and final, or are there regular updates? Thanks for a very good add-on to my smart home!
Deepstack is using Yolo3, which is a standard model. If you want to use your own model, check out the TensorFlow integration
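For reference, a minimal sketch of Home Assistant's TensorFlow image-processing platform with a custom frozen graph (the model path and camera entity are placeholders; check the integration docs for the current options):

```yaml
# Sketch: HA tensorflow platform with a user-supplied model.
# Paths and entity names are example values.
image_processing:
  - platform: tensorflow
    source:
      - entity_id: camera.frontdoor
    model:
      graph: /config/tensorflow/frozen_inference_graph.pb
```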
Hi there, I'm sorry if this has already been answered somewhere, but I can't find anything about it. Currently I'm running 2 Docker containers built from the same image, one for face recognition on port 5000 and the other for object recognition on custom port 5005.
So far it's working well with 2 cams, but I was wondering what's the better approach, if any, regarding computing power: the one you suggest (activating both object and face detection on one instance) or mine.
Oh, I'm on an i5 NUC 7th gen with 16GB RAM and an M.2 SSD.
Thanks
Hi, I'm having trouble getting this component to trigger an automation when a person is detected. The component itself appears to be working fine, because I can see the picture created with the person enclosed in red. Here is my automation:
- id: '1234'
  alias: person detection pushbullet notify
  description: ''
  trigger:
    - event_data:
        object: person
      event_type: image_processing.object_detected
      platform: event
  condition: []
  action:
    - delay: '5'
    - data:
        data:
          file: images/deepstack_person_detector_front_latest_person.jpg
        message: Person detected on the front door
        target:
          - channel/mychannel
      service: notify.pushbullet
If I trigger the automation manually, I receive the Pushbullet message with the picture, but I can't get the automation to fire on its own.
Has anyone else seen this issue?
Hey Guys,
I'm thinking about using Deepstack, but I need to clarify a few things first.
I have a dell 3060 (win 10 machine) running Blue Iris.
Can I also install Deepstack on this machine and then, after the image is analyzed by Deepstack, send it to Home Assistant for notifications, etc.?
Is there a guide for this? My HA is a Raspberry Pi 3 B+ running hass.io.
Thanks Robin.
If I remember correctly, this was the original topic that got me interested in Deepstack. I will give it a deeper look. But my question is more specific, just because I'm a complete noob on Linux.
Should I make the effort to install Linux in a virtual machine on my Dell PC and install Deepstack on it, or will Windows also work fine?
This doesn't mention Blue Iris, but it shouldn't affect performance, right? The camera feed is handled by BI and then by Deepstack.
Thanks again