Face and person detection with Deepstack - local and free!

@blackgold9 can you create an issue on the repo with all the information in it, thanks

Hi Guys,

I'm trying to automate face detection.
From what I'm seeing, there are 2 main available solutions:

  • integrated Dlib (using FaceLib, on-premise)
image_processing:
  - platform: dlib_face_identify
    source:
      - entity_id: camera.camera_parking
    faces:
      Thomas: /config/www/images/Thomas.jpg
      Michel: /config/www/images/Michel.png
  • deepstack (online, with an API key)

Good news: I succeeded in getting my face recognized without any issue with Dlib; it works well. At detection time I can access the information through the UI ("faces > name > thomas, total_faces=1"). However, even after spending a few days on it, I'm not able to access these attributes through templates, nor to get a history (like detected faces by date and time).
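For example, I expected templates along these lines to return the attributes (the entity id is my guess at what the dlib platform creates from camera.camera_parking; check Developer Tools > States for the real one):

{# entity id below is guessed; verify it in Developer Tools > States #}
{{ state_attr('image_processing.dlib_face_camera_parking', 'faces') }}
{{ state_attr('image_processing.dlib_face_camera_parking', 'total_faces') }}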


Can you please guide me? What is the best solution? Did you manage to access these JSON attributes? Why did you all move to Deepstack? Is it not possible to get this working with Dlib?

@robmarkcole it looks like you are dedicated to image recognition too; I've seen a lot of your posts and your VM. Did you give up on Dlib?

Thanks in advance.
Take care.

Thomas.

Deepstack is a local solution. Cannot comment on dlib, other than to say that when I looked at it, installation was a faff.

With Deepstack, are you able to:

  • teach faces
  • detect faces, get cropped images on the UI or by notification

I think I will share all my configs on my wiki; I lost time just finding examples and implementing them.

Furthermore, having seen your comments, I'm wondering if it will work on my Raspberry Pi 4 or only on my Mac VM. I've seen your comment that it was not possible.

Apart from your VM, do you have working examples or a wiki for:

  • configuration.yaml
  • automation.yaml
  • Lovelace UI

Thanks again for helping.
Thomas.

The answer to these questions is yes, as documented in the readme here. There are a few example automations on this thread too; it will take some searching.
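For example, teaching a face is just a service call; a sketch along the lines of the readme, with a placeholder name and image path:

# Call from Developer Tools > Services (name and file_path are placeholders)
service: image_processing.deepstack_teach_face
data:
  name: Thomas
  file_path: /config/www/images/Thomas.jpg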


Just curious: how does the RPi + Neural Compute Stick 2 perform? Does it feel sluggish? Maybe hard to compare, but say versus a similarly priced Celeron/Pentium-class dedicated mini PC or laptop?

Having some trouble with this setup; here's hoping someone can see what I'm missing.
I can't get any image of what was captured.
I really just copied my config from the readme for object detection and added my camera.

- platform: deepstack_object
  ip_address: localhost
  port: 5000
  save_file_folder: /config/snapshots/
  save_timestamped_file: True
  # roi_x_min: 0.35
  roi_x_max: 0.8
  # roi_y_min: 0.4
  roi_y_max: 0.8
  targets:
    - person
    - car
    - truck
  source:
    - entity_id: camera.driveway

I have done the whitelist.

whitelist_external_dirs:
    - /config

and if I go to Developer Tools > Services I can manually run the scan and get a result:

image_processing.deepstack_object_driveway    state: 0
ROI person count: 0
ALL person count: 0
ROI car count: 0
ALL car count: 0
ROI truck count: 0
ALL truck count: 1
summary:
  truck: 1

objects: 
- bounding_box:
    height: 0.41
    width: 0.373
    y_min: 0.296
    x_min: 0.625
    y_max: 0.706
    x_max: 0.998
  box_area: 0.153
  centroid:
    x: 0.811
    'y': 0.501
  name: truck
  confidence: 97.279

I also tried running the event listener, listening for deepstack.file_saved, and got nothing.

Anyone have a clue what I'm missing?

Thanks

So I have multiple security cams set up tracking motion. When motion is detected, it calls image_processing.scan. I have 3 targets: person, dog, car. When deepstack.object_detected fires, I call a notification to my phone. I am no longer seeing what object it detected in the notification. For example, when I trigger my garage camera, I get 3 notifications because I have 3 cars in the garage. The notification is not showing what was detected; it is only showing the section with the confidence. It once did. I looked through all the breaking changes; I fear I missed something.
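For reference, my automation is shaped roughly like this (the notify service name here is a placeholder, and the event data keys are my assumption from what the notifications used to show):

automation:
  - alias: Notify on detected object
    trigger:
      - platform: event
        event_type: deepstack.object_detected
    action:
      - service: notify.mobile_app_my_phone   # placeholder notify service
        data_template:
          message: '{{ trigger.event.data.name }} detected with confidence {{ trigger.event.data.confidence }}'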


@danbutter I am having a similar issue.

This is where I think we are both getting hung up:

" Note that by default the component will not automatically scan images, but requires you to call the image_processing.scan service e.g. using an automation triggered by motion."

I am trying to figure out how to have my foscam cameras trigger motion and have the image processing scan an image.
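In other words, something shaped roughly like this (the motion sensor entity is a placeholder for whatever the Foscam exposes):

automation:
  - alias: Scan driveway on motion
    trigger:
      - platform: state
        entity_id: binary_sensor.driveway_motion   # placeholder motion sensor
        to: 'on'
    action:
      - service: image_processing.scan
        entity_id: image_processing.deepstack_object_driveway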

I have been triggering the scan manually and get a result (see my last post… truck with confidence of 97), but I don't get an image saved.
To trigger it manually, try going to Developer Tools and follow this screenshot:

[screenshot: Developer Tools > Services]

Yes, I was able to trigger manually.

I am currently trying to have it trigger automatically based on motion from Blue Iris.

@danbutter you have nothing inside the ROI, so nothing is saved: your truck's centroid is at x = 0.811, just outside roi_x_max: 0.8, so it doesn't count toward the ROI totals. This is probably the same for some of the other people here

You can have Blue Iris send an MQTT message to HA every time it detects motion
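Roughly: point a Blue Iris alert action at an MQTT topic and build a sensor from it; a sketch (the topic and payloads are whatever you configure in Blue Iris):

binary_sensor:
  - platform: mqtt
    name: Garage motion
    state_topic: 'blueiris/garage/motion'   # placeholder topic
    payload_on: 'ON'
    payload_off: 'OFF'
    device_class: motion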

I have read through a lot of posts and couldn't see an answer, but surely someone has this set up. I was trying to set up a binary motion sensor that would visualise that a target object has been detected. This is easy enough with a template binary sensor for when image_processing.object_detection is >1; however, if the last image processed has an object in it, the sensor will stay on. How do I get around this, or am I approaching it the wrong way?
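One workaround I am considering is a timed pulse: trigger on the deepstack.object_detected event and reset an input_boolean after a short delay (entity names are placeholders, and the event data key is my assumption):

automation:
  - alias: Person detected pulse
    trigger:
      - platform: event
        event_type: deepstack.object_detected
        event_data:
          name: person   # assumed event data key
    action:
      - service: input_boolean.turn_on
        entity_id: input_boolean.person_detected   # define this input_boolean first
      - delay: '00:00:30'
      - service: input_boolean.turn_off
        entity_id: input_boolean.person_detected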

I was able to get Deepstack running on an RPi3 (Buster) with an NCS2. Does it work with Buster?
My POST request gets stuck with no response back and no hits on the Pi3 console. If I stop the service on the Pi3, the program errors out. I don't know where to see the debug log. With the same code I'm able to get a response from another Windows box.

There might be an issue with Buster; I had the same experience recently

@vijaykbhatia thank you!

I got a Blue Iris motion sensor set up using elad's custom component for Blue Iris; it is in HACS.

I am now triggering an image scan on motion, and it works pretty well!

I would try setting the reset/trigger re-arm really low to have it keep triggering during motion

Okay, thanks for the reply.
I'll comment that ROI part out and try again when I get home.
I just copied the config straight from the readme and didn't really realize what it was doing.

I have tried using Blue Iris like this, but ended up using an HA integration to let the camera itself detect motion. Less work for Blue Iris to do.
For example, I have some Dahua cams that work with the amcrest integration, and I use that info to trigger an image processing scan. This leaves Blue Iris to just record, and image processing will still happen even if your Blue Iris server is down for maintenance.
Just a thought!
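For the Dahua cams, the relevant bit looks roughly like this (host and credentials are placeholders), with the resulting motion binary sensor then triggering the image_processing.scan automation:

amcrest:
  - host: 192.168.1.50           # placeholder IP
    username: admin              # placeholder credentials
    password: !secret amcrest_password
    binary_sensors:
      - motion_detected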

I've been looking at this for several hours now, and this is my current setup:
2 Amcrest cameras are loaded into motionEye (seems to work).
1 Doorbird camera feed is loaded into motionEye (seems to work).

All 3 cameras are loaded from motionEye's stream into HA:

- platform: mjpeg
  mjpeg_url: http://IP:MOTIONEYE_RESTREAM_PORT
  still_image_url: http://IP:8123/api/hassio_ingress/API_TOKEN/picture/NR/current/
  name: CAMERA_NAME

All 3 motioneye streams seem to work in HA

homeassistant/amd64-hassio-supervisor: 222
homeassistant/qemux86-64-homeassistant: 0.109.6 (installed on Ubuntu, which is fully up-to-date)
hassioaddons/motioneye: 0.8.0
Deepstack installation via HACS is v3.1

However it's not working for me and I don't know why :-/
Nothing ever reaches deepstack.
When calling scan I get the following error:

2020-05-18 21:29:53 DEBUG (MainThread) [homeassistant.core] Bus:Handling <Event call_service[L]: domain=image_processing, service=scan, service_data=entity_id=image_processing.objects_in_CAMERA_NAME>
2020-05-18 21:29:53 ERROR (MainThread) [homeassistant.helpers.entity] Update for image_processing.objects_in_CAMERA_NAME fails
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 279, in async_update_ha_state
    await self.async_device_update()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 470, in async_device_update
    await self.async_update()
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 132, in async_update
    await self.async_process_image(image.content)
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 112, in async_process_image
    return await self.hass.async_add_job(self.process_image, image)
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/deepstack_object/image_processing.py", line 239, in process_image
    io.BytesIO(bytearray(image))
  File "/usr/local/lib/python3.7/site-packages/PIL/Image.py", line 2896, in open
    "cannot identify image file %r" % (filename if filename else fp)
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7fe9d5a17950>