Can anyone share how they are using events to count objects, like how many cars are in the garage? The count would go up, go down, or stay the same based on the changes detected when events run.
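To illustrate, roughly this kind of logic is what I'm after (just a rough Python sketch, not anything from the component; the DeepStack URL and the event hook are placeholders):

import requests

# Rough sketch: count cars in a snapshot each time an event fires and
# compare with the previous count. The URL below is a placeholder.
DEEPSTACK_URL = "http://localhost:5000/v1/vision/detection"
previous_count = None

def count_cars(image_path):
    with open(image_path, "rb") as f:
        result = requests.post(DEEPSTACK_URL, files={"image": f.read()}, timeout=30).json()
    return sum(1 for p in result.get("predictions", []) if p["label"] == "car")

def on_event(image_path):
    global previous_count
    current = count_cars(image_path)
    if previous_count is None or current == previous_count:
        change = "stayed the same"
    elif current > previous_count:
        change = "went up"
    else:
        change = "went down"
    print(f"{current} cars in the garage ({change})")
    previous_count = current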
Thanks
Yes.
You need a trigger, like a motion sensor.
Thanks @TobiasGJ, I've put in a trigger with a motion sensor. When motion is detected, image processing runs and saves the image. I've set up an automation to notify my Android phone with the latest JPG. The trigger type is state, the entity is image_processing.deepstack_object, and I've set From 0 To 1 for the state change, but this automation never triggers even when the state changes to 1. Would you know what I have done wrong here?
Hmm,
maybe you should post your automation here. I don't really understand what you mean.
Motion sensors are typically on/off, not 1/0.
Hello all, managed to get face detection installed and working using the default docker run command on GitHub, but was curious about adding Object detection to the same container. Would I issue the command below to accomplish this?
docker run -e VISION-DETECTION=True -e VISION-FACE=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack
hi Robin @robmarkcole,
I have been using the deepstack no-avx version along with your HA integration component for some time now, and recently migrated to the deepstack cpu latest version to improve the speed of the service, since the no-avx version was slow.
On this new cpu latest version from Docker I'm able to successfully run /vision/detection; however /vision/face, i.e. face detection, is not working.
Every time the request gets stuck and there is no sign of it in the Docker log either. After a long period the call ends with a timeout error; sometimes it doesn't even error out and just gives no response.
Below are the details.
Would you be able to provide any pointers on how this can be resolved? I've also posted the same query on the DeepStack community forum.
Thanks
I suggest making calls directly to the API via the command line and seeing what happens (doing a log trace on the container at the same time), e.g.
curl -X POST -F [email protected] 'http://192.168.1.26:5000/v1/vision/detection'
First prove that is working
Yes, /v1/vision/detection is working fine, which detects the objects. I have enabled both VISION-FACE and VISION-DETECTION with MODE=High in the Docker container.
I tried a simple Python script to find out whether it's related to Robin's component or an underlying problem with the container, and it turned out to be a DeepStack container problem.
import requests

# Post a test image straight to the DeepStack face endpoint and print the raw JSON response
image_data = open("family.jpg", "rb").read()
response = requests.post("http://192.168.1.165:32770/v1/vision/face", files={"image": image_data}).json()
print(response)
It never returns anything for a long time, while if I use the no-avx image of DeepStack, which is two years old without any recent optimisation, that at least returns the predictions in around 20-30 seconds.
I'm facing this issue with the cpu-latest version of the Docker image, along with other cpu tags like 2021.01 and beta-8.
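For anyone reproducing the test, the same script with a timeout added at least fails fast instead of hanging (the 60-second value is arbitrary):

import requests

# Same face-detection test as above, but with a timeout so a hung
# container errors out instead of blocking forever.
image_data = open("family.jpg", "rb").read()
try:
    response = requests.post(
        "http://192.168.1.165:32770/v1/vision/face",
        files={"image": image_data},
        timeout=60,
    ).json()
    print(response)
except requests.exceptions.Timeout:
    print("no answer from DeepStack within 60 seconds")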
Sorry, I misread your post. Does face work by itself (without object detection)?
Just on the version you are using:
the latest I see up on Docker is 2020.12?
No luck while running with only VISION-FACE.
And yes, sorry for the typo; below are the versions I've tried so far:
deepstack:cpu-x6-beta - face detection not working
deepstack:cpu-2020.12 - face detection not working
deepstack:noavx-3.4 - working with both face and object detection
deepstack:latest - face detection not working
Just a couple of quick queries on face training.
Firstly, if I use the tiny face crops created in "save_faces_folder", deepstack returns a 400 error stating that no face was detected. Can I ask what the recommended approach is for training? I'd like to be able to use the images from my camera to train deepstack over time as they come in, and was thinking I could use the faces folder for this. It seems I need to use the full image, but I'm not sure how that would work if there were multiple faces in the frame.
Secondly, is there any clarity on registering multiple face images? Rob, I note you replied to the following post on the deepstack forums, but there hasn't been a reply.
From what I can gather, the only way to register multiple faces is to register them all together, which is not ideal if you want to train deepstack over time as more images come through. You’d have to keep a separate database or directory of faces, and when you add a new one in, upload the full list again.
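For what it's worth, the workflow I'm picturing would look roughly like the untested sketch below; the per-person folder layout, the server address, and the image1/image2 field names are assumptions on my part, so check the DeepStack docs for your version:

import os
import requests

DEEPSTACK_REGISTER_URL = "http://localhost:5000/v1/vision/face/register"  # placeholder address
FACES_DIR = "/config/faces"  # hypothetical layout: one sub-folder of images per person

def register_person(name):
    # Re-upload every saved image for this person in a single register call.
    # Assumes the endpoint accepts multiple image fields (image1, image2, ...).
    folder = os.path.join(FACES_DIR, name)
    files = {}
    for i, filename in enumerate(sorted(os.listdir(folder)), start=1):
        with open(os.path.join(folder, filename), "rb") as f:
            files[f"image{i}"] = f.read()
    return requests.post(
        DEEPSTACK_REGISTER_URL, files=files, data={"userid": name}, timeout=60
    ).json()

print(register_person("some_person"))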
Thanks
Curious about this: I did something similar but get one message for each object detected. Is there any way to change this? A few too many notifications for me, lol.
Hey, could anybody help me out please?
I have installed DeepStack face detection through Docker running on Debian 10.
I am having issues running the teach face service. I have followed the example in the documentation on how to call the service, changing the name of the file of course, but nothing happens. I either get no error in the logs, or I have now started receiving the error below.
Has anyone had this error, or can anybody please help me?
Thanks
2021-02-09 16:56:22 ERROR (MainThread) [homeassistant.core] Error executing service: <ServiceCall image_processing.deepstack_teach_face (c:d2d780c173291febacf655ba4374cb51): name=Adele, file_path=/config/www/jack.jpg>
{
"name": "Adele",
"file_path": "/config/www/adele.jpeg"
}
The error references jack.jpg while your service data references adele.jpeg?
I just released a new version of this component.
The new release has some exciting features that are useful for people who are using deepstack for surveillance detection of objects (for example people) entering your property, when you want to view the object detections in the images as a slideshow.
The new version allows you to pause and browse (previous and next) a slideshow composed of the snapshots captured by the deepstack component.
I hope someone can help me. I have set up the Docker container in Portainer using Rob's The Hook Up video. Object detection works great, but when I change VISION-DETECTION=True to VISION-FACE=True it does not work. I have the HASS-Deepstack-face integration and the configuration in place. I think it is something with the container, but I don't know what I'm missing.
Hey all (amazing thread…),
I seem to be having trouble connecting HA to DeepStack.
I have DeepStack running on Windows (tried both the CPU version and the regular Windows version).
Though I get the "DeepStack 3.4" page when trying to access the AI from another computer via HTTP, and I can see login attempts from other computers on the network, I can't make HA talk to it.
I get this log (which I don't really understand) when I call the service:
Logger: homeassistant.helpers.entity
Source: custom_components/deepstack_object/image_processing.py:318
First occurred: 7:21:53 PM (1 occurrences)
Last logged: 7:21:53 PM
Update for image_processing.deepstack_object_unifi_g3 fails
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 278, in async_update_ha_state
    await self.async_device_update()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 474, in async_device_update
    raise exc
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 132, in async_update
    await self.async_process_image(image.content)
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 112, in async_process_image
    return await self.hass.async_add_executor_job(self.process_image, image)
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/deepstack_object/image_processing.py", line 318, in process_image
    self._image = Image.open(io.BytesIO(bytearray(image)))
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2958, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7f033b3c6db0>
I tried different computers (only Windows so far), different ports, config files, firewall off, etc.
Feels like I'm missing something basic.
Any pointers in the right direction would be great.
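A quick way to check whether the camera is actually handing HA a valid image (which is what the PIL error suggests is failing) is something like the snippet below; the snapshot URL is just a placeholder:

import io
import requests
from PIL import Image

SNAPSHOT_URL = "http://CAMERA_IP/snapshot.jpg"  # placeholder - use your camera's still-image URL

resp = requests.get(SNAPSHOT_URL, timeout=10)
print("status:", resp.status_code, "content-type:", resp.headers.get("Content-Type"))
try:
    img = Image.open(io.BytesIO(resp.content))
    print("valid image:", img.format, img.size)
except Exception as exc:
    # PIL raises UnidentifiedImageError when the bytes are not an image
    # (e.g. an empty body or an HTML error page from the camera)
    print("not a valid image:", exc)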
Hey all,
I need to clarify something. My logic says that if I can see this picture from another computer on the network,
it means that DeepStack is running and listening, and that the IP is 10.0.0.2 and the port is 5000.
Would that be a reasonable assumption to fill in the configuration.yaml file?
Thanks
That’s a correct assumption.
Hey, thanks for answering.
Well, in that case I have bigger problems…
When I call the service in Home Assistant, this is the log I get:
Logger: homeassistant.helpers.entity
Source: custom_components/deepstack_object/image_processing.py:318
First occurred: 8:29:23 PM (1 occurrences)
Last logged: 8:29:23 PM
Update for image_processing.deepstack_object_ipcam_dome fails
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 278, in async_update_ha_state
    await self.async_device_update()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 474, in async_device_update
    raise exc
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 132, in async_update
    await self.async_process_image(image.content)
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 112, in async_process_image
    return await self.hass.async_add_executor_job(self.process_image, image)
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/deepstack_object/image_processing.py", line 318, in process_image
    self._image = Image.open(io.BytesIO(bytearray(image)))
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2958, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7faae8d30270>
Any ideas?