Face and person detection with Deepstack - local and free!

I created an app for exploring the config parameters of the deepstack object integration, and created a thread here

Hey… could you please export your Node-RED code for me? I only do mine in Node-RED as well…

For starters: thanks a lot for this great integration, it's fun and easy to use. You obviously spent a lot of time developing it - to share that effort for free is really kind of you :slight_smile:

I have a question: currently I'm using your tooling to detect whether neighborhood cats came into the house (we keep the door open, and they do that all the time to eat our cat's food, bastards). When a cat is detected, I get a telegram notification.

For the next phase, I want to use the images of the various cats to create a new ML model, so I can recognize my cat from the others - that way I only get notifications for other cats.

Is there a way I can store the images in which deepstack detected a cat (or dog, as deepstack keeps calling my cat a dog) as-is? I'd like to keep the single 'deepstack_object_local_file_latest.jpg' setup so I can send that through telegram.

At any rate, thanks for your time, and best of luck :slight_smile:

@SamKr you want save_timestamped_file: True
Recognising YOUR cat is a classification problem, and you would want to deploy a custom model (possible in deepstack when they open source it)
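
For example, a minimal deepstack_object configuration with that option enabled could look like this (the IP address, folder and camera entity here are placeholders):

image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.120
    port: 5000
    save_file_folder: /config/www/snapshots/
    save_timestamped_file: True
    targets:
      - cat
    source:
      - entity_id: camera.front_door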

Almost, I'd like those images, but without the red rectangle (because it'd mess with the model).

I'm planning to use Microsoft's ML.NET model builder to get some experience with that as well :slight_smile:

Hey @robmarkcole,

I'm still using your "tflite_server" integration as a custom component and it is working fine so far. However, I want to save all detected images in my save_file_folder with a timestamp; currently it only saves the latest detected picture. Could you please help me get this onto this component?

My configuration.yaml is as below

image_processing:
  - platform: tflite_server
    ip_address: 192.168.1.120
    port: 5000
    scan_interval: 15
    save_file_folder: /config/www/
    source:
      - entity_id: camera.esp_camera
        name: AI_Cam
"""
Component that will perform object detection via tensorflow-lite-rest-server
"""
import datetime
import io
import json
import logging
import os
from datetime import timedelta
from typing import List, Tuple

import requests
from PIL import Image, ImageDraw
from homeassistant.util.pil import draw_box

import deepstack.core as ds
import homeassistant.helpers.config_validation as cv
import homeassistant.util.dt as dt_util
import voluptuous as vol
from homeassistant.components.image_processing import (
    ATTR_CONFIDENCE,
    CONF_ENTITY_ID,
    CONF_NAME,
    CONF_SOURCE,
    DOMAIN,
    PLATFORM_SCHEMA,
    ImageProcessingEntity,
)
from homeassistant.const import (
    ATTR_ENTITY_ID,
    ATTR_NAME,
    CONF_IP_ADDRESS,
    CONF_PORT,
    HTTP_BAD_REQUEST,
    HTTP_OK,
    HTTP_UNAUTHORIZED,
)
from homeassistant.core import split_entity_id

_LOGGER = logging.getLogger(__name__)

CONF_SAVE_FILE_FOLDER = "save_file_folder"
CONF_TARGET = "target"
DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S"
DEFAULT_PORT = 5000
DEFAULT_TARGET = "person"
RED = (255, 0, 0)
SCAN_INTERVAL = timedelta(days=365)  # NEVER SCAN.


PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend(
    {
        vol.Required(CONF_IP_ADDRESS): cv.string,
        vol.Optional(CONF_PORT, default=DEFAULT_PORT): cv.port,
        vol.Optional(CONF_TARGET, default=DEFAULT_TARGET): cv.string,
        vol.Optional(CONF_SAVE_FILE_FOLDER): cv.isdir,
    }
)


def get_target(predictions: List, target: str):
    """
    Return only the info for the targets.
    """
    targets = []
    for result in predictions:
        if result["name"] == target:
            targets.append(result)
    return targets


def setup_platform(hass, config, add_devices, discovery_info=None):
    """Set up the classifier."""
    save_file_folder = config.get(CONF_SAVE_FILE_FOLDER)
    if save_file_folder:
        save_file_folder = os.path.join(save_file_folder, "")  # If no trailing / add it

    entities = []
    for camera in config[CONF_SOURCE]:
        object_entity = ObjectDetectEntity(
            config.get(CONF_IP_ADDRESS),
            config.get(CONF_PORT),
            config.get(CONF_TARGET),
            config.get(ATTR_CONFIDENCE),
            save_file_folder,
            camera.get(CONF_ENTITY_ID),
            camera.get(CONF_NAME),
        )
        entities.append(object_entity)
    add_devices(entities)


class ObjectDetectEntity(ImageProcessingEntity):
    """Perform a face classification."""

    def __init__(
        self,
        ip_address,
        port,
        target,
        confidence,
        save_file_folder,
        camera_entity,
        name=None,
    ):
        """Init with the API key and model id."""
        super().__init__()
        self._object_detection_url = f"http://{ip_address}:{port}/v1/object/detection"
        self._target = target
        self._confidence = confidence
        self._camera = camera_entity
        if name:
            self._name = name
        else:
            camera_name = split_entity_id(camera_entity)[1]
            self._name = "tflite_{}".format(camera_name)
        self._state = None
        self._targets = []
        self._last_detection = None

        if save_file_folder:
            self._save_file_folder = save_file_folder

    def process_image(self, image):
        """Process an image."""
        self._image_width, self._image_height = Image.open(
            io.BytesIO(bytearray(image))
        ).size
        self._state = None
        self._targets = []

        payload = {"image": image}
        response = requests.post(self._object_detection_url, files=payload)
        if response.status_code != HTTP_OK:
            return

        predictions = response.json()
        self._targets = get_target(predictions["objects"], self._target)
        self._state = len(self._targets)
        if self._state > 0:
            # Record when the target was last seen; exposed via device_state_attributes.
            self._last_detection = dt_util.now().strftime(DATETIME_FORMAT)
        if hasattr(self, "_save_file_folder") and self._state > 0:
            self.save_image(image, self._targets, self._target, self._save_file_folder)

    @property
    def camera_entity(self):
        """Return camera entity id from process pictures."""
        return self._camera

    @property
    def state(self):
        """Return the state of the entity."""
        return self._state

    @property
    def name(self):
        """Return the name of the sensor."""
        return self._name

    @property
    def unit_of_measurement(self):
        """Return the unit of measurement."""
        target = self._target
        if self._state is not None and self._state > 1:
            target += "s"
        return target

    @property
    def device_state_attributes(self):
        """Return device specific state attributes."""
        attr = {}
        if self._targets:
            attr["targets"] = [result["score"] for result in self._targets]
        if self._last_detection:
            attr["last_{}_detection".format(self._target)] = self._last_detection
        return attr

    def save_image(self, image, predictions, target, directory):
        """Save a timestamped image with bounding boxes around targets."""
        img = Image.open(io.BytesIO(bytearray(image))).convert("RGB")
        draw = ImageDraw.Draw(img)

        for prediction in predictions:
            prediction_confidence = ds.format_confidence(prediction["score"])
            if (
                prediction["name"] == target
                and prediction_confidence >= self._confidence
            ):
                draw_box(
                    draw,
                    prediction['box'],
                    self._image_width,
                    self._image_height,
                    text=str(prediction_confidence),
                    color=RED,
                )

        latest_save_path = directory + "{}_latest_{}.jpg".format(self._name, target)
        img.save(latest_save_path)
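
For reference, a rough (untested) sketch of how a timestamped copy could also be written at the end of the save_image() method above, reusing the DATETIME_FORMAT constant the component already defines:

        # Sketch: in addition to the "latest" file, also keep a timestamped copy
        # so every detection is preserved rather than overwritten.
        timestamp = dt_util.now().strftime(DATETIME_FORMAT)
        timestamped_save_path = directory + "{}_{}_{}.jpg".format(
            self._name, target, timestamp
        )
        img.save(timestamped_save_path)
        _LOGGER.info("Saved timestamped image to %s", timestamped_save_path)

Note that DATETIME_FORMAT contains spaces and colons, which is fine on Linux filesystems, but you may prefer a filename-safe format.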

@SamKr I released v3.4 just for you :slight_smile:, let me know how you get on.

@Hitesh_Singh I think I archived that integration, please use deepstack-object

Hey @robmarkcole, thanks for all the hard work with these two great custom components. I managed to get both Object and Face detection running smoothly and absolutely love it. My question to you is, can we install/teach new models?

I specifically have an issue with snakes and would like to have my driveway CCTV cam alert me if any snakes come in from under the gate; is that something that is possible to integrate with the current Deepstack setup?

Looking forward to hearing from you and thanks once again for all the effort with this integration!

Working on it…

@robmarkcole I'm using a Raspberry Pi 4 4GB model, and deepstack-object has performance issues on it, which is why I was asking for help with the tflite_server integration, as that runs fine on a Raspberry Pi. :roll_eyes:

Oh wow, thanks @robmarkcole! You nailed it, works like a charm :slight_smile: Very happy with this!

@Hitesh_Singh use https://github.com/robmarkcole/tensorflow-lite-rest-server - it runs on the RPi, and I just added face detection to it too! It works with the deepstack integrations.

Thanks for this, it's working great. Is it possible to have an option to export another image containing only the frames as a transparent PNG (same size as the source frame) so I can overlay it onto another image?

Can you give some example Python code?
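
In case it helps illustrate the idea, here is a rough Pillow sketch (not part of the integration) that draws only the detection boxes on a fully transparent canvas, assuming the boxes come in the same relative (y_min, x_min, y_max, x_max) format the component above passes to draw_box():

from PIL import Image, ImageDraw

def save_boxes_as_transparent_png(boxes, width, height, path):
    """Draw only bounding boxes on a transparent canvas the size of the source frame."""
    overlay = Image.new("RGBA", (width, height), (0, 0, 0, 0))  # fully transparent background
    draw = ImageDraw.Draw(overlay)
    for y_min, x_min, y_max, x_max in boxes:
        # Convert relative (0-1) coordinates to pixels.
        left, right = x_min * width, x_max * width
        top, bottom = y_min * height, y_max * height
        draw.rectangle([(left, top), (right, bottom)], outline=(255, 0, 0, 255), width=3)
    overlay.save(path, "PNG")  # PNG preserves the alpha channel

The resulting PNG can then be layered over any image of the same dimensions.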

Can you show me what you did to resolve this? I am having the same error.

Thanks

Thanks much @robmarkcole

I'm not using it, and curl is giving me correct results; however, in my HA I'm still not getting the counts.

My configuration.yaml is as below; what am I doing wrong?

image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.120
    port: 5000
    save_file_folder: /config/www/
    save_timestamped_file: True
    source:
      - entity_id: camera.esp_camera
        name: AI_Cam_V1

I see here you have mentioned: "This API can be used as a drop in replacement for deepstack object detection and deepstack face detection (configuring detect_only: True) in Home Assistant."

Is there a detect_only: True setting that I need to change somewhere? Could you please help?

Is there a Docker version of the tensorflow-lite-rest-server? Thank you in advance for your answers.

[{"id":"f47ff4fa.790eb8","type":"tab","label":"Flow 2","disabled":false,"info":""},{"id":"d6db80b9.75fda","type":"server-state-changed","z":"f47ff4fa.790eb8","name":"cat motion","server":"7dbb919d.bdcf9","version":1,"exposeToHomeAssistant":false,"haConfig":[{"property":"name","value":""},{"property":"icon","value":""}],"entityidfilter":"binary_sensor.cat_pir_alarm","entityidfiltertype":"exact","outputinitially":false,"state_type":"str","haltifstate":"on","halt_if_type":"str","halt_if_compare":"is","outputs":2,"output_only_on_state_change":true,"x":80,"y":140,"wires":[["5f32488e.7205e8"],[]]},{"id":"ca1f37d1.ac86a8","type":"delay","z":"f47ff4fa.790eb8","name":"","pauseType":"delay","timeout":"1","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"second","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":false,"x":500,"y":140,"wires":[["d3c5ef9e.2ee75"]]},{"id":"d3c5ef9e.2ee75","type":"api-call-service","z":"f47ff4fa.790eb8","name":"Deepstack Cat Scan","server":"7dbb919d.bdcf9","version":1,"debugenabled":false,"service_domain":"image_processing","service":"scan","entityId":"image_processing.deepstack_object_deepstack_cat","data":"","dataType":"json","mergecontext":"","output_location":"","output_location_type":"none","mustacheAltTags":false,"x":280,"y":240,"wires":[["18c4fd29.4d73a3"]]},{"id":"5f32488e.7205e8","type":"api-call-service","z":"f47ff4fa.790eb8","name":"Grab cat Image","server":"7dbb919d.bdcf9","version":1,"debugenabled":false,"service_domain":"shell_command","service":"cat_snapshot","entityId":"","data":"","dataType":"json","mergecontext":"","output_location":"","output_location_type":"none","mustacheAltTags":false,"x":300,"y":140,"wires":[["ca1f37d1.ac86a8"]]},{"id":"18c4fd29.4d73a3","type":"delay","z":"f47ff4fa.790eb8","name":"","pauseType":"delay","timeout":"4","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"second","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":false,"x":480,"y":240,"wires":[["86efdf82.aba3d"]]},{"id":"86efdf82.aba3d","type":"api-current-state","z":"f47ff4fa.790eb8","name":"Cat Check","server":"7dbb919d.bdcf9","version":1,"outputs":2,"halt_if":"0","halt_if_type":"num","halt_if_compare":"gt","override_topic":false,"entity_id":"image_processing.deepstack_object_deepstack_cat","state_type":"str","state_location":"","override_payload":"none","entity_location":"","override_data":"none","blockInputOverrides":false,"x":710,"y":240,"wires":[["a4ade475.6865e8"],[]]},{"id":"a4ade475.6865e8","type":"api-call-service","z":"f47ff4fa.790eb8","name":"Telegram cat photo","server":"7dbb919d.bdcf9","version":1,"debugenabled":false,"service_domain":"shell_command","service":"cat_msg","entityId":"","data":"","dataType":"json","mergecontext":"","output_location":"","output_location_type":"none","mustacheAltTags":false,"x":970,"y":240,"wires":[[]]},{"id":"7dbb919d.bdcf9","type":"server","z":"","name":"Home Assistant","legacy":false,"addon":false,"rejectUnauthorizedCerts":true,"ha_boolean":"y|yes|true|on|home|open","connectionDelay":true,"cacheJson":false}]
camera:
  - platform: local_file
    file_path: /config/www/deepstack_cat.jpg
    name: deepstack_cat
  - platform: local_file
    file_path: /config/www/snapshots/deepstack_object_deepstack_cat_latest.jpg
    name: deepstack_cat_latest

image_processing:
  - platform: deepstack_object
    ip_address: localhost
    port: 5000
#    api_key: mysecretkey
    save_file_folder: /config/www/snapshots/
    save_timestamped_file: false
    # roi_x_min: 0.35
    roi_x_max: 0.8
    #roi_y_min: 0.4
    roi_y_max: 0.8
    targets:
      - cat
    source:
      - entity_id: camera.deepstack_cat

shell_command:
  cat_snapshot: '(curl -s -X GET http://<ip of my hikvision picture stream>/ISAPI/Streaming/channels/2/picture > deepstack_cat.jpg && mv deepstack_cat.jpg www )'
  cat_msg:  '(curl -s -X POST "https://api.telegram.org/bot<your telegram bot token>/sendPhoto" -F chat_id=<cat chat id> -F photo="@/config/www/snapshots/deepstack_object_deepstack_cat_latest.jpg" -F caption="Cat Detected")'

Posting this in case it helps anybody - that is my full setup.

@Hitesh_Singh detect_only: True is a required parameter for the deepstack face integration when using tensorflow-lite.
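
For anyone finding this later, a minimal sketch of where that option goes, assuming the face platform is configured as deepstack_face (the IP address and camera entity are placeholders):

image_processing:
  - platform: deepstack_face
    ip_address: 192.168.1.120
    port: 5000
    detect_only: True
    source:
      - entity_id: camera.esp_camera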

@alpat59 no Docker yet, but that is a nice idea - should be possible.