Face and person detection with Deepstack - local and free!

Just released v0.5 of the deepstack face integration, adding the ability to save files (the same functionality as in the deepstack object integration). Check it out :slight_smile:


Hi,

I run Home Assistant (Hassio / HassOS) on a Pi4 4GB and have just purchased a Coral USB stick. I currently use DOODS (with the DOODS add-on) but really want to use Deepstack or Frigate. Ideally, I would like to run my image_processing on the same device as HA, with my Coral stick to help with speed of detection etc.


As I understand it, there is no Hassio/HassOS integration for Frigate. I tried to install a workaround with Portainer, but as I have never used Linux/Docker it was too difficult.

I would really like to use your integration on my current RPi4 with the benefits of the Coral stick. Is this possible with very limited coding/docker/linux expertise?

I noticed there was a coral-pi-rest-server, but I think that uses TensorFlow Lite, which wouldn't give me any advantage over DOODS, which uses the same or something similar.

Sorry for replying to you directly; I saw your post way up in the thread, but it's pretty confusing for a NOOB because there are SO MANY GitHub repos/threads of yours/others that talk about very similar things.

Thanks

I am no longer working with the Coral stick; it is simply not required unless you are doing high FPS. Furthermore, it creates an extra barrier to using Docker owing to the USB interfacing. I am not a Docker expert, but hopefully one comes along with an interest in getting my tflite server running as an add-on; no additional hardware required!

Hi, not sure if you meant me here. Not a clue where to start sorry, was just hoping you could just replace the sampled image with a transparent PNG so only the boxes are there. No worries if it’s a big job.

@Holdestmade please create a feature request on the repo, I am intrigued by this suggestion

Thanks for the great updates on the plugins. Both are working great together for me.


@Gio76 I just deployed a custom classification model, you can see it here. Will do some more work on the server and do a write up soon

That’s amazing @robmarkcole! Thank you very much. Looking forward to being able to use custom models in the object detection!

Just released v0.6 of the deepstack face integration, adding bounding boxes (optional):

Building the automations I have in node-red would require a lot more patience and coding/scripting skills than I have.

Basically I have a simple tail on an FTP log that:

  1. waits for the camera to log in,
  2. starts grabbing screenshots every 0.5 seconds or so; after 5 screenshots it only grabs one every few seconds (because when motion begins, whatever you want to find in the images is most likely to appear in the first few seconds of motion),
  3. while it is still grabbing screenshots, runs tensorflow/deepstack/doods (I'm still testing which is the most reliable) until it finds something I'm interested in (say, a person),
  4. sends the processed image to me through Telegram, stops the image-grabbing loop, and sets a flow variable recording that something was found,
  5. once the camera uploads a motion video, sends it to me through Telegram if something of interest was found, and
  6. deletes the jpgs and mp4s so they don't take up space (if nothing was found, they are deleted earlier).

I’m using /run to save the temporary images and videos so as to not cause unnecessary wear on my SSD, because a lot of writing happens (whenever any of my cameras detects motion).

There’s also some other stuff I didn’t bother explaining here, because otherwise this post would be even longer.
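The grab-and-back-off timing (steps 2 and 6 above) can be sketched in Python. The function names and defaults below are my own illustration, not taken from the actual Node-RED flow:

```python
from pathlib import Path

def grab_interval(shot_number: int, fast_interval: float = 0.5,
                  slow_interval: float = 3.0, fast_shots: int = 5) -> float:
    """Delay before the next screenshot: rapid-fire right after motion
    starts, then back off to one every few seconds."""
    return fast_interval if shot_number < fast_shots else slow_interval

def cleanup(folder: Path) -> int:
    """Delete the temporary jpgs and mp4s (step 6); returns the count."""
    removed = 0
    for pattern in ("*.jpg", "*.mp4"):
        for f in folder.glob(pattern):
            f.unlink()
            removed += 1
    return removed
```

A driver loop would call `grab_interval(n)` after each snapshot `n` and sleep that long before the next grab, calling `cleanup()` once the detection loop ends.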



So I just benchmarked my tflite server running on an RPi4 against deepstack on a Mac Pro, and the results are surprising. Processing 15 images, the RPi4 is significantly faster. This is owing to the optimisation of tflite models, but note the accuracy will not be as good as deepstack on a Mac.

| Platform | Speed (sec) | Predictions |
|---|---|---|
| Mac Pro with deepstack | 51.9 | 91 |
| RPi4 with tflite-server | 9.33 | 159 |
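A benchmark like this can be reproduced with only the Python standard library against Deepstack's `/v1/vision/detection` endpoint. The multipart helper below is a hand-rolled sketch (not code from any of the projects discussed), and the URL is an assumption you would adjust:

```python
import json
import time
import urllib.request
import uuid

DETECT_URL = "http://localhost:5000/v1/vision/detection"  # adjust host/port

def count_predictions(response_json: dict) -> int:
    """Number of detections in a Deepstack-style JSON response."""
    return len(response_json.get("predictions", []))

def post_image(url: str, image_bytes: bytes) -> dict:
    """POST an image as multipart/form-data field 'image' (stdlib only)."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="image"; filename="image.jpg"\r\n'
        "Content-Type: image/jpeg\r\n\r\n"
    ).encode() + image_bytes + f"\r\n--{boundary}--\r\n".encode()
    req = urllib.request.Request(
        url, data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def benchmark(image_paths):
    """Time a batch of detections; return (seconds, total predictions)."""
    total = 0
    start = time.perf_counter()
    for path in image_paths:
        with open(path, "rb") as f:
            total += count_predictions(post_image(DETECT_URL, f.read()))
    return time.perf_counter() - start, total
```

Running `benchmark()` over the same 15 images against each server gives directly comparable speed and prediction counts.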

My GPU is finally here and I will be doing some installation and setup to switch from dlib to deepstack. Looking forward to testing this! I just wish it was not a container installation, as I think that will make simultaneous GPU passthrough to both HA and deepstack impossible.

When deepstack is open-sourced, you will be able to run it however you like.


May I ask what else you would use the GPU for in HA?
I’m planning on using the GPU for deepstack someday
just wondering what else it could be used for.

Not specifically for HA, but I am, for example, looking at the camera components and having them send a livestream instead of snapshots. The main application remains object and facial recognition, but I am looking to do so on multiple streams simultaneously.
Further down the road, I may be looking at deep learning for home automation.

Is there some sort of compression happening after adding the bounding boxes? The percentages are very difficult to read. Any way to make this more legible?

It turns out to be difficult to create a function which correctly annotates with text of an appropriate size, given the wide variety of shapes and sizes that images come in. Think I will remove that feature. Suggest you use the deepstack-ui to check what that thing is.
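For anyone who wants to keep the labels, one rough approach to the sizing problem is to scale the font size with the image dimensions. This helper is a sketch of my own, not part of the integration; the `fraction` and `minimum` defaults are guesses you would tune:

```python
def label_font_size(image_width: int, image_height: int,
                    fraction: float = 0.03, minimum: int = 12) -> int:
    """Font size proportional to the smaller image dimension, clamped to
    a readable minimum so labels stay legible on small images."""
    return max(minimum, int(min(image_width, image_height) * fraction))
```

With Pillow, the result could then be passed to `ImageFont.truetype(font_path, label_font_size(w, h))` before drawing the box labels.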

Ok, thanks for the quick response

@robmarkcole I’ve been working on this for a couple of days and seem to have hit a wall. The basic problem I am currently trying to solve: no .jpg ever gets written to the directory by deepstack.

  • Running deepstack in a Docker container on Ubuntu 20.04.
  • Running deepstack-ui in a Docker container.
  • HASS also runs in a Docker container.
  • Installed HASS-Deepstack-object.
  • Used your sample test curl command; Deepstack returns the proper information.
  • Passing a .jpg through the deepstack-ui works fine.

docker-compose.yaml:
    deepstack:
      container_name: deepstack
      restart: unless-stopped
      image: deepquestai/deepstack:noavx
      ports:
        - 5000:5000
      environment:
        - VISION-DETECTION=True
        - VISION-FACE=True
        - API-KEY="sampleapikey"
      volumes:
        - /srv/docker/deepstack:/datastore

    deepstack_ui:
      container_name: deepstack_ui
      restart: unless-stopped
      image: robmarkcole/deepstack-ui:latest
      environment:
        - DEEPSTACK_IP=x.x.x.x
        - DEEPSTACK_PORT=5000
        - DEEPSTACK_API_KEY='sampleapikey'
        - DEEPSTACK_TIMEOUT=20
      ports:
        - 8501:8501

HASS Configurations:
    whitelist_external_dirs:
      - /config/www

Within a ‘deepstack.yaml’ file:
    image_processing:
      - platform: deepstack_object
        ip_address: x.x.x.x
        port: 5000
        api_key: sampleapikey
        save_file_folder: /config/www/deepstack_person_images/frontyard/
        save_timestamped_file: True
        scan_interval: 10
        confidence: 50
        targets:
          - person
        source:
          - entity_id: camera.front_yard
            name: person_detector_front_yard

I have an automation:

    - alias: image processing
      description: ''
      trigger:
        - entity_id: binary_sensor.motion_front_yard
          from: 'off'
          platform: state
          to: 'on'
      condition: []
      action:
        - data: {}
          entity_id: image_processing.person_detector_front_yard
          service: image_processing.scan

The automation triggers fine, but I would then expect a .jpg to be added to the folder. No image ever gets added to the folder.

What else am I missing?

An image is only saved if there is a valid detection.
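In other words, the save condition behaves roughly like the sketch below. This is a simplified illustration of the logic, not the integration's actual code; note that Deepstack reports confidence as 0–1 while the integration's `confidence` option is a percentage:

```python
def should_save(predictions, targets=("person",), confidence=50) -> bool:
    """Save the frame only if at least one prediction matches a target
    label at or above the confidence threshold (given in percent)."""
    return any(
        p["label"] in targets and p["confidence"] * 100 >= confidence
        for p in predictions
    )
```

With the config above, a scan whose predictions contain no `person` at >= 50% confidence writes nothing to `save_file_folder`, which would explain an empty directory even though the automation fires.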