I was using this to separate people from cars, etc., and give me different automation options. I have not been following for a while, and it looks like I have some reading to do.
person_in_driveway:
  friendly_name: Person in Driveway
  value_template: >
    {% set m = state_attr('image_processing.front_door', 'ROI person count') %}
    {{ m | float >= 1 }}
Obviously this does not work anymore; I realized some of my automations were failing. Does anyone have a quick way to separate counts based on target?
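A hedged sketch of one way to do per-target counts: recent releases of the deepstack_object integration expose a `summary` attribute mapping each detected target to its count (e.g. `{'person': 1, 'car': 2}`). The attribute and entity names here are assumptions, so check your own entity's attributes in Developer Tools first:

```yaml
# Sketch, assuming a `summary` attribute such as {'person': 1, 'car': 2}.
# Verify the attribute name on your entity before relying on this.
person_in_driveway:
  friendly_name: Person in Driveway
  value_template: >
    {% set s = state_attr('image_processing.front_door', 'summary') or {} %}
    {{ s.get('person', 0) | int >= 1 }}
car_in_driveway:
  friendly_name: Car in Driveway
  value_template: >
    {% set s = state_attr('image_processing.front_door', 'summary') or {} %}
    {{ s.get('car', 0) | int >= 1 }}
```

The `or {}` guard keeps the template from erroring while the entity is still `unknown`.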
More than 1200 comments… What is it?!
Please answer the simplest question for me: where is the management UI? Just show me how to see the pictures of taught faces, and how to change one of them if you have made a mistake in teaching.
I’ve been using Deepstack for a while now, but I keep getting low-res pictures for my snapshots.
I’m using 4K cameras and getting horrible quality for the snapshots.
I’m using BI and running the image processing on that (so no direct RTSP to HA); could that be the problem? (Very few pictures are full resolution.)
I am running Deepstack in a Docker container on Proxmox and HA in a VM. Deepstack works fine when run manually (using curl and returning the faces), but I am getting a 403 error when calling the image_processing.scan service. See below the Deepstack log, with the manual command first and the service call second.
What am I missing? The state of the image_processing entity stays unknown.
And when I try to register a face (with detect_only=false) using:
service: image_processing.deepstack_teach_face
data:
  name: Noor
  file_path: /config/www/learn/noor/noor2.jpg
I get an unknown error response while deepstack shows a register:
[GIN] 2021/07/18 - 23:12:47 | 403 | 32.008µs | 192.168.2.200 | POST /v1/vision/face/register
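For what it’s worth, a 403 from Deepstack usually points at an API key mismatch: if the container was started with an API-KEY environment variable, every client, including the HA integration, has to send that same key. A sketch, assuming the custom component accepts an api_key option (the host address is a placeholder; check the component’s README for the exact option name in your version):

```yaml
image_processing:
  - platform: deepstack_face
    ip_address: 192.168.2.x   # placeholder: your Deepstack host
    port: 5000
    api_key: !secret deepstack_api_key  # must match the container's API-KEY env var
    source:
      - entity_id: camera.front_door
```

If no API-KEY was set on the container, omitting api_key entirely is the matching client-side configuration.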
While I’m here, is anyone actually getting decent face recognition from Deepstack? I have been sending images from my doorbell to it and, basically, it’s a waste of time: most people get detected as me (even women who look absolutely nothing like me), so it’s a bit of a pity!
As far as I understand, activation of Deepstack is no longer needed if you are running it on CPU or GPU. But I’m using it on an NCS2, and now I can’t activate it. I think the activation process generates a file which is stored locally, and I want to find out where that file is, or where the code responsible for generating it lives. Does anyone know anything about this?
Hi, thanks robmarkcole for such a great custom component!
I have my install of Deepstack working well with HA, but I was wondering if it’s possible to have different detection areas for different targets on the same camera?
My use case is a driveway camera. I currently have the right 45% of the image excluded, as there is usually a car parked there, but there is a path in front that people walk down to reach my front door. So essentially, I would like 100% of the image scanned for people, but only the left 55% for cars.
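One hedged approach, assuming the deepstack_object component’s ROI options (roi_x_min/roi_x_max and friends) and that each platform entry creates its own entity: list the same camera under two entries, one scanning the full frame for people and one with the ROI clipped for cars. Option names, hosts, and entity names below are assumptions from the component’s README; verify them against your installed version:

```yaml
image_processing:
  # Full frame, people only
  - platform: deepstack_object
    ip_address: 192.168.1.x   # placeholder
    port: 5000
    targets:
      - target: person
    source:
      - entity_id: camera.driveway
        name: driveway_person
  # Left 55% of the frame, cars only
  - platform: deepstack_object
    ip_address: 192.168.1.x   # placeholder
    port: 5000
    targets:
      - target: car
    roi_x_max: 0.55
    source:
      - entity_id: camera.driveway
        name: driveway_car
```

The name: field keeps the two entities (and their saved images) distinct even though they share one camera.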
I installed HA in Docker on machine 1 and got Deepstack with the Deepstack UI working on machine 2.
I also installed the HACS component for deepstack_object with the configuration.yaml below:
image_processing:
  - platform: deepstack_object
    target: person
    confidence: 70
    source:
      # - entity_id: camera.amcrest_mediaprofile_channel1_mainstream
      - entity_id: camera.garage_cam_wyze
Now when I ‘call service’ for image processing and look at the state of my Deepstack entity, it is still marked as ‘unknown’; it does not recognize any face, and the directory for the image file is empty. On machine 2, in the Deepstack Docker log, I see a call made from machine 1 with the entry below:
Looks like I managed to work my issue out. By adding a name: field after the source camera, I am able to create two separate entities using the same camera, which output two differently named images.
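For anyone landing here later, a minimal sketch of that name: trick (the camera and names are placeholders, not from my actual config):

```yaml
# Each platform entry using the same camera gets its own name:,
# which yields independently named entities and saved images.
source:
  - entity_id: camera.driveway
    name: driveway_cars   # -> image_processing.driveway_cars
```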