I am currently trying to build a Docker container on a Raspberry Pi 3B but have not been successful. Would you mind sharing your Dockerfile?
Maybe this is a dumb question, but what if I want to use both Deepstack-object and Deepstack-face?
I have tried the following, which I hope is correct:
docker run -e VISION-DETECTION=True -e VISION-FACE=True -v localstorage:/datastore -p 5000:5000 deepquestai/deepstack
So finally I tried teaching the Deepstack engine with a face: I copied a photo to the Ubuntu server running Docker and Deepstack, then went back to Home Assistant and used the teach face service. When I click Call Service, nothing happens.
What should I expect? How can I check whether it actually worked?
Are you seeing anything in the Docker logs? For example:
[GIN] 2019/08/03 - 13:15:46 | 200 | 1.349301452s | 172.17.0.1 | POST /v1/vision/detection
[GIN] 2019/08/03 - 13:15:46 | 200 | 1.888751449s | 172.17.0.1 | POST /v1/vision/detection
[GIN] 2019/08/03 - 13:15:56 | 200 | 756.543856ms | 172.17.0.1 | POST /v1/vision/detection
[GIN] 2019/08/03 - 13:15:57 | 200 | 1.381159845s | 172.17.0.1 | POST /v1/vision/detection
[GIN] 2019/08/03 - 13:15:57 | 200 | 1.87328408s | 172.17.0.1 | POST /v1/vision/detection
How about the home assistant logs? Both should show activity if things are working properly.
@robmarkcole Great work on this! It was very straightforward to get object detection running.
I'm having issues getting the state as sensor data, mainly due to my lack of skill with templating.
I've tried the following, none of which return the count of person(s) detected:
person_count_driveway:
  friendly_name: 'Number of people in Driveway'
  value_template: '{{ states.driveway_person_detector.state | int }}'
  unit_of_measurement: 'bodycount'

person_count_driveway:
  friendly_name: 'Number of people in Driveway'
  value_template: '{{ states.driveway_person_detector.state }}'
  unit_of_measurement: 'bodycount'
Can anyone shed some light on how to achieve this?
I'd also love to get sensors for the other objects detected as well, but am not sure how to pull the JSON array properly.
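For what it's worth, Deepstack's detection endpoint returns JSON with a `predictions` array of labelled objects. Counting a given target can be sketched in Python; the sample response below is illustrative (made-up labels and confidences shaped like the documented output), not real server output:

```python
import json

# Illustrative response shaped like Deepstack's documented detection output
# (the labels and confidences here are made up for the example).
sample_response = json.loads("""
{
  "success": true,
  "predictions": [
    {"label": "person", "confidence": 0.91, "x_min": 10,  "y_min": 20, "x_max": 120, "y_max": 300},
    {"label": "dog",    "confidence": 0.74, "x_min": 200, "y_min": 40, "x_max": 320, "y_max": 280},
    {"label": "person", "confidence": 0.55, "x_min": 340, "y_min": 30, "x_max": 420, "y_max": 310}
  ]
}
""")

def count_target(response: dict, target: str, min_confidence: float = 0.5) -> int:
    """Count predictions matching a target label at or above a confidence threshold."""
    return sum(
        1
        for p in response.get("predictions", [])
        if p["label"] == target and p["confidence"] >= min_confidence
    )

print(count_target(sample_response, "person"))  # -> 2
```

The same label/confidence filtering is essentially what a template sensor has to do on the Home Assistant side.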
I don't see anything in the log that I can see in the Home Assistant web interface.
I do get a nice count of faces, and with object detection I also get the persons and all other 'known' objects.
So I know for sure there is a connection; I just don't know why it doesn't take my picture, because it doesn't recognize me after I run the teach command as described.
Is it possible to teach faces directly on the Ubuntu machine where I run Deepstack within Docker?
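(For anyone wondering the same: Deepstack's face registration endpoint can be called directly on the machine running the container, without going through Home Assistant. Below is a minimal stdlib-only Python sketch against the documented `/v1/vision/face/register` API; the host, port, user name, and file path are placeholders, and this is a sketch rather than a tested client.)

```python
import io
import urllib.request
import uuid

# Placeholder URL; matches the -p 5000:5000 port mapping used earlier in the thread.
DEEPSTACK_URL = "http://localhost:5000/v1/vision/face/register"

def build_multipart(userid: str, image_bytes: bytes, filename: str = "face.jpg"):
    """Build a multipart/form-data body with the 'userid' field and 'image' file
    that Deepstack's face register endpoint expects."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    # userid form field
    body.write(f"--{boundary}\r\n".encode())
    body.write(b'Content-Disposition: form-data; name="userid"\r\n\r\n')
    body.write(userid.encode() + b"\r\n")
    # image file field
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        (
            f'Content-Disposition: form-data; name="image"; filename="{filename}"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n"
        ).encode()
    )
    body.write(image_bytes + b"\r\n")
    body.write(f"--{boundary}--\r\n".encode())
    return body.getvalue(), f"multipart/form-data; boundary={boundary}"

def register_face(userid: str, image_path: str, url: str = DEEPSTACK_URL) -> bytes:
    """POST a face image to a running Deepstack server for training."""
    with open(image_path, "rb") as f:
        data, content_type = build_multipart(userid, f.read())
    req = urllib.request.Request(url, data=data, headers={"Content-Type": content_type})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Calling `register_face('rob', '/path/to/rob.jpg')` on the Docker host should return a JSON success body if the server is up with `VISION-FACE=True`.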
@bline79
This should work
person_count_driveway:
  friendly_name: 'Number of people in Driveway'
  value_template: "{{ states('image_processing.driveway_person_detector') }}"
  unit_of_measurement: 'bodycount'
It is a quirk of Home Assistant that a unit_of_measurement is required for a sensor to be plottable as a graph.
@Yoinkz it doesn't matter which machine you train from. If you are training via the teach service, then the most likely problem is that Home Assistant cannot access the image files; check they are in a directory you have whitelisted.
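For reference, the whitelist lives in `configuration.yaml` on the Home Assistant machine; the directory path below is just an example (in later HA releases the key was renamed `allowlist_external_dirs`):

```yaml
homeassistant:
  whitelist_external_dirs:
    - /config/www/faces   # example path; must contain the training images
```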
I'm not having much success here trying to get this as a usable sensor value.
Is it possible to somehow use the object detected as the unit_of_measurement? i.e. person, dog, car etc.? In addition to not being able to pull in the state info for the target, I can't seem to create a sensor for the 'all predictions' objects. It would be great to keep metrics of the objects detected for future automations.
You could break out each of the metrics using a template sensor, with the caveat that only state changes are ever recorded by Home Assistant. Really this is not a fully fledged camera monitoring solution.
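A sketch of that approach, with caveats: the attribute holding per-object counts depends on the component version, so check the entity's attributes in the developer tools first. The `summary` attribute name and the entity id below are assumptions, not confirmed from this thread:

```yaml
sensor:
  - platform: template
    sensors:
      driveway_car_count:
        friendly_name: 'Cars in Driveway'
        unit_of_measurement: 'cars'
        # state_attr returns None if the attribute is missing, hence the default
        value_template: >-
          {{ (state_attr('image_processing.driveway_person_detector', 'summary')
              or {}).get('car', 0) }}
```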
It's strange you don't have any sensor data for your template sensor. What does the main image_processing.deepstack sensor show? The actual units of unit_of_measurement don't have any meaning to HA.
@robmarkcole Hey Rob, that could be the case. I'm just not sure I understand what exactly I should whitelist.
My Home Assistant runs on a NUC and my Docker runs on another machine running Ubuntu.
When I want to teach the DeepStack AI with a photo, should I then place the photo on the Hassio machine or the Deepstack machine, and which directory should I then whitelist?
EDIT: I whitelisted the directory on my Hass installation where I have saved the photo, restarted HA, and then tried the teach command again.
I still didn't get any response and I couldn't find it in the log, but when I tested it by processing the picture, it immediately recognized the photo and provided me with the correct name!
Thanks! This is just awesome!!!
Here's the full state data:
I'm okay with it just recording state changes; that information would be great to see over time. I'm just struggling with how to get this data usable in sensor form, and not sure where I'm going wrong here.
This is a great component @robmarkcole. Thanks for your work. I am seeing a strange issue. I have done the following troubleshooting:
- Set up coral REST server on my rpi4 with coral USB accelerator
- Checked the server locally by running curl with sample image. Processing successful.
- Tested from my home assistant server (non-localhost). Processing successful.
- Added an RTSP generic camera like so:

  camera:
    - platform: generic
      name: "driveway"
      still_image_url: http://192.168.1.252/snap.jpeg
- Added image processing:

  image_processing:
    - platform: deepstack_object
      ip_address: 192.168.1.131
      port: 5000
      scan_interval: 20000
      save_file_folder: /config/www/deepstack_person_images
      target: person
      confidence: 50
      source:
        - entity_id: camera.driveway
          name: driveway_detector
- Restarted HA
- I can now see my snapshots in HA. about 1 fps.
- Searched the thread for a similar issue (couldn't find any).
However, even though everything appears to be working the driveway sensor remains in a detection = unknown state and no images are hitting the REST server as far as I can tell from debug. Not sure where else to look to see why the image processor is not sending the snap.jpeg images from my camera to the REST server.
Any pointers would be greatly appreciated.
This addon is not set up to analyze RTSP (streaming) camera data. You need to analyze snapshots.
Thanks, that worked! I didn't know the interval was in seconds; I assumed milliseconds.
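That mix-up is worth quantifying: `scan_interval: 20000` read as seconds is over five and a half hours between scans, versus the 20 seconds presumably intended:

```python
# Home Assistant interprets scan_interval in seconds
configured = 20000            # value from the config above
hours = configured / 3600     # as seconds: roughly 5.6 hours between scans
intended = configured / 1000  # if it had been milliseconds: 20 seconds

print(round(hours, 1), intended)
```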
I was even able to get rid of my RTSP snapshot streams. The component took input from my 'platform: uvc' Unifi cameras and analyzed them no problem. Turns out I didn't need to save RTSP snapshots.
Could anyone perhaps share some of their automations for notifying on either iOS or Android?
Currently I am trying to create an automation where, when a motion detector is triggered, the image processing fires for both:
- alias: '[Backyard] Android Notification - Motion w. picture'
  trigger:
    platform: state
    entity_id: binary_sensor.motion_sensor_xxxxxxxxxxxxxxxxxxxxxxxxx
    to: 'on'
  condition:
    - condition: state
      entity_id: alarm_control_panel.house
      state: 'armed_away'
  action:
    - delay: 1
    - service: image_processing.scan
      entity_id: image_processing.backyard_face
    - service: image_processing.scan
      entity_id: image_processing.backyard_object
    - delay: 2
    - service_template: "{% if (states('image_processing.backyard_object') | int) > 0 %} notify.android {% endif %}"
      data_template:
        message: "(Known / Unknown) person detected at the front door"
        image: 'http://xxx.xxx.xxx.xxx:8123/local/deepstack_person_images/backyard/deepstack_latest_person.jpg?{{ now().second }}'
But, as you can see, I call both deepstack_face and deepstack_object. My idea was to send the notification if deepstack_object recognizes a person in the picture, but I would also like the face, if it is a known one, to be part of the message.
Does someone have an idea of how to accomplish that?
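One way to sketch that combined message (assuming the face entity exposes recognized names in a `matched_faces`-style attribute; check the entity's attributes in the developer tools, since the attribute name varies by component version):

```yaml
data_template:
  message: >-
    {% set faces = state_attr('image_processing.backyard_face', 'matched_faces') %}
    {% if faces %}
      Known person ({{ faces | join(', ') }}) detected at the front door
    {% else %}
      Unknown person detected at the front door
    {% endif %}
```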
My own is:

sensor:
  - platform: template
    sensors:
      room_persons:
        friendly_name: "People in room"
        unit_of_measurement: 'Persons'
        value_template: "{{ states.image_processing.room_main.state }}"
Very exciting announcement on the Deepstack forum by @OlafenwaMoses; this is really a game changer.
@robmarkcole Great work on this and your other custom components! I got the email from Deepstack this morning announcing the unlimited instances, and it sparked my memory to finally ask: would this component work to detect which car(s) are in a driveway? I have a black sedan and my wife has a red SUV. Could this component be used with the camera feed above my garage to detect which of the two vehicles are in the driveway? If not this component, either of your other ones? Thanks!
@cmille34 you can detect the presence of a car with the standard object detection model. To identify whether it's a red or a black car, you would need to create a custom model; please see:
Thanks! Would you recommend the Deepstack or Machinebox component for this type of use?