Local realtime person detection for RTSP cameras

Yep, I had a chat with a Google engineer as well, but he is also telling me to ask VMware…

The error you pasted above seems to happen for empty/unused VM PCI slots in my case.
Could you double-check with the lspci command which PCI ID your Coral is installed at?
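For reference, something like this should show it (treat the 1ac1:089a vendor/device ID as an assumption taken from Google’s Edge TPU docs; the slot number will differ per machine):

lspci -nn | grep 1ac1
03:00.0 System peripheral [0880]: Global Unichip Corp. Device [1ac1:089a]

That 03:00.0 address is what should line up with the apex lines in dmesg.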

Mine fails with this in dmesg:
[5.589867] apex 0000:03:00.0: Page table init timed out
[5.590320] apex 0000:03:00.0: MSI-X table init timed out
[5.603491] apex: probe of 0000:03:00.0 failed with error -110

Actually, now that you mention it, I had this same error too. The IDs didn’t match for me either (0000:0b:00.0 vs 0000:00:15.5).

Here’s my full dmesg:

https://pastebin.com/84jYmAay

Doesn’t look like there’s much interest from Google in fixing the issue, and this is beyond my expertise, so we’re going to have to wait until more people get their hands on this module. Also, VM passthrough for a device like this shouldn’t be such a crazy idea…

I’m sure VMware is just going to refer me back to Google :smile:

This is my config:

web_port: 5000

mqtt:
  host: <redacted>
  topic_prefix: frigate
  user: <redacted>
  password: <redacted>

objects:
  track:
    - person
    - car
    - cat
    - dog
  filters:
    person:
      min_area: 1000
      max_area: 100000
      threshold: 0.5
    car:
      max_area: 20000
      threshold: 0.5          


cameras:
  # Cat camera
  cat_cam:
    ffmpeg:
      input: <redacted>
      global_args: 
        - -hide_banner
        - -loglevel
        - panic       
      hwaccel_args:
        - -hwaccel
        - vaapi
        - -hwaccel_device
        - /dev/dri/renderD128
        - -hwaccel_output_format
        - yuv420p      
      input_args: 
        - -avoid_negative_ts
        - make_zero
        - -fflags
        - nobuffer
        - -flags
        - low_delay
        - -strict
        - experimental
        - -fflags
        - +genpts+discardcorrupt
        - -vsync
        - drop
        - -rtsp_transport
        - udp
        - -stimeout
        - '10000000'
        - -use_wallclock_as_timestamps
        - '1'
      output_args:
        - -vf
        - mpdecimate
        - -f
        - rawvideo
        - -pix_fmt
        - rgb24
 
    take_frame: 5  # source is 30fps, so this processes every 5th frame (~6fps)
 
    objects:
      track:
        - person
        - cat
        - dog
      filters:
        person:
          min_area: 5000
          max_area: 1000000
          threshold: 0.5

    regions:
      - size: 600
        x_offset: 130
        y_offset: 120
        objects:
          person:
            min_area: 5000
            max_area: 1000000
            threshold: 0.5
      - size: 400
        x_offset: 730
        y_offset: 225
        objects:
          person:
            min_area: 5000
            max_area: 1000000
            threshold: 0.5

How much load would it be to make a rest sensor from the debug endpoint?

Would be nice to see the Coral FPS and inference speed directly in HA…

Very little. The debug endpoint is efficient since it is just JSON data. Viewing the camera feed generates load because it creates a JPEG image every second.

OK cool, because I did it and it is quite cool.


That looks nice. I can’t seem to get the right format for the rest sensor. I’ve got:

  - platform: rest
    resource: http://x.x.x.x:5000/debug/stats
    name: frigate
    value_template: '{{ value_json['coral']['fps'] }}'

This returns an error when performing configuration validation.

Care to share your sensor setup?

Thanks

I see the Coral Python libraries now support Windows and Mac. Any chance this could be used to run on Docker for Windows?

That way I might be able to use my old laptop with USB 3 (the drivers only work in Windows).

Sure. I used the bottom-most example on the rest sensor page – the one that uses a rest sensor and 3 template sensors for 3 bedrooms. The benefit is it only does a single rest call to the debug endpoint and then uses the template sensors to extract the data.

I have 3 cameras and wanted to see the FPS for each of those, as well as some Coral stats.

Here’s how my /debug/stats looks:

{
  "cars": {
    "camera_fps": 0.7666666666666667,
    "dynamic_regions_per_sec": 0.7,
    "finished_frame_queue": 0,
    "frame_queue": 0,
    "refined_frame_queue": 0,
    "regions_in_process": {},
    "resize_queue": 0,
    "skipped_regions_per_sec": 0.0
  },
  "cat_cam": {
    "camera_fps": 0.7333333333333333,
    "dynamic_regions_per_sec": 0.0,
    "finished_frame_queue": 0,
    "frame_queue": 0,
    "refined_frame_queue": 0,
    "regions_in_process": {},
    "resize_queue": 0,
    "skipped_regions_per_sec": 0.0
  },
  "coral": {
    "fps": 6.5,
    "inference_speed": 33.05057938500792,
    "queue_length": 0
  },
  "front": {
    "camera_fps": 1.8166666666666667,
    "dynamic_regions_per_sec": 0.0,
    "finished_frame_queue": 0,
    "frame_queue": 0,
    "refined_frame_queue": 0,
    "regions_in_process": {},
    "resize_queue": 0,
    "skipped_regions_per_sec": 0.0
  }
}

And here’s the sensor config (in sensors.yaml in my setup):

  - platform: rest
    name: Frigate Debug
    resource: http://<ip>:5000/debug/stats
    json_attributes:
      - cars
      - cat_cam
      - front
      - coral
    value_template: 'OK'  
  - platform: template
    sensors:
      frigate_cars_fps: 
        value_template: '{{ states.sensor.frigate_debug.attributes["cars"]["camera_fps"] }}'
        unit_of_measurement: 'FPS'
      frigate_front_fps: 
        value_template: '{{ states.sensor.frigate_debug.attributes["front"]["camera_fps"] }}'
        unit_of_measurement: 'FPS'
      frigate_cat_cam_fps: 
        value_template: '{{ states.sensor.frigate_debug.attributes["cat_cam"]["camera_fps"] }}'
        unit_of_measurement: 'FPS'
      frigate_coral_fps: 
        value_template: '{{ states.sensor.frigate_debug.attributes["coral"]["fps"] }}'
        unit_of_measurement: 'FPS'
      frigate_coral_inference:
        value_template: '{{ states.sensor.frigate_debug.attributes["coral"]["inference_speed"] }}' 
        unit_of_measurement: 'ms'   
      frigate_coral_queue_length:
        value_template: '{{ states.sensor.frigate_debug.attributes["coral"]["queue_length"] }}' 
        unit_of_measurement: 'frames'   
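
By the way, about the validation error in your snippet: the nested single quotes are the likely culprit, since YAML terminates the outer single-quoted string at the first inner quote. A version of your single sensor that should validate, with double quotes on the outside:

  - platform: rest
    resource: http://x.x.x.x:5000/debug/stats
    name: frigate
    value_template: "{{ value_json['coral']['fps'] }}"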

Great, I’ve changed that to work for me. Always great to see examples you can copy, thanks.

It probably won’t help in Docker for Windows because the container will still be Linux. These seem to be for running Python directly in Windows or macOS.

How do you enable the debug page? I can’t find it anywhere on the GitHub repo. I have the latest installed, and if I go to ipaddress:5000/debug/stats I get nothing… everything is working though.

Just got the 0.4.0 beta running on my rpi4 and it works beautifully, thanks Blake :slight_smile: Four low-res streams @ 6fps with an average CPU load of 25%, compared to 70%+ before.

For anyone interested, here are the tweaks I made to the Dockerfile. This was a bit of trial and error, so there is probably a more elegant way.

FROM debian:stretch-slim
LABEL maintainer "[email protected]"

ENV DEBIAN_FRONTEND=noninteractive
# Install packages for apt repo
RUN apt -qq update && apt -qq install --no-install-recommends -y \
    apt-transport-https ca-certificates \
    gnupg wget \
    ffmpeg \
    python3 \
    python3-pip \
    python3-dev \
    python3-numpy \
    # python-prctl
    build-essential libcap-dev \
############################################### Added Git & Removed i965-va-driver
    git \
    # pillow-simd
    # zlib1g-dev libjpeg-dev \
    # VAAPI drivers for Intel hardware accel
    vainfo \
    && echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" > /etc/apt/sources.list.d/coral-edgetpu.list \
############################################### Added Jessie repo, needed for libjasper-dev
    && echo 'deb https://deb.debian.org/debian/ jessie main' > /etc/apt/sources.list.d/jessie.list \
    && wget -q -O - https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
    && apt -qq update \
    && echo "libedgetpu1-max libedgetpu/accepted-eula boolean true" | debconf-set-selections \
    && apt -qq install --no-install-recommends -y \
    libedgetpu1-max \
    python3-edgetpu \
############################################### Added the five lib* below, for opencv
    libjasper-dev \
    libilmbase-dev \
    libopenexr-dev \
    libgtk-3-dev \
    libatlas-base-dev \
    && rm -rf /var/lib/apt/lists/* \
    && (apt-get autoremove -y; apt-get autoclean -y)

# needs to be installed before others
RUN pip3 install -U wheel setuptools

############################################### Added piwheels for opencv-python-headless
RUN echo '[global]' >> /etc/pip.conf
RUN echo 'index-url = https://www.piwheels.org/simple' >> /etc/pip.conf


RUN pip3 install -U \
    opencv-python-headless \
############################################### Removed python-prctl
    Flask \
    paho-mqtt \
    PyYAML \
    matplotlib \
    scipy

############################################### Added, build python-prctl
RUN git clone https://github.com/seveas/python-prctl \
 && cd python-prctl \
 && python3 setup.py build \
 && python3 setup.py install

# symlink the model and labels
RUN wget -q https://github.com/google-coral/edgetpu/raw/master/test_data/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite -O mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite --trust-server-names
RUN wget -q https://dl.google.com/coral/canned_models/coco_labels.txt -O coco_labels.txt --trust-server-names
RUN ln -s mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite /frozen_inference_graph.pb
RUN ln -s /coco_labels.txt /label_map.pbtext

WORKDIR /opt/frigate/
ADD frigate frigate/
COPY detect_objects.py .
COPY benchmark.py .

CMD ["python3", "-u", "detect_objects.py"]
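
In case it helps, I just build it on the Pi itself from the repo root with this Dockerfile swapped in (the image tag is arbitrary):

docker build -t frigate .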

I just got this set up today as a replacement for ZoneMinder and it is wonderful!

I have one camera that outputs MJPEG instead of H.264. Is there a way for me to use this camera? I have tried setting it up with the default options and, as I expected, it doesn’t work…

It should work. You just need to remove some of the input parameters related to RTSP. I don’t know which ones specifically.

Thanks for sharing! I’m going to give it a go because with my current setup, using the standard camera integration I’m regularly getting hit by this:

@blakeblackshear - FYI, a few nights ago car thieves visited my front yard and frigate worked like a charm! I was calling the police seconds after they approached my car. Unfortunately they escaped before the police arrived; sometimes I wish I was living in the US… :wink:
Thanks for developing it and sharing with the community! :smile:


Thank you for the confirmation that it should be possible. I’ll work on finding the right combination of commands and post my findings when I do.
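
My first guess, comparing against the default input args above: -rtsp_transport and -stimeout (the RTSP socket timeout) look like the RTSP-only options, so an untested sketch for an MJPEG-over-HTTP camera would keep everything else (the input URL is a placeholder):

    ffmpeg:
      input: http://<camera-ip>/video.mjpg
      input_args:
        - -avoid_negative_ts
        - make_zero
        - -fflags
        - nobuffer
        - -flags
        - low_delay
        - -strict
        - experimental
        - -fflags
        - +genpts+discardcorrupt
        - -vsync
        - drop
        - -use_wallclock_as_timestamps
        - '1'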

The US isn’t necessarily better in some places, i.e. my house.

Quick question @blakeblackshear: how feasible would it be for a future version to enable the use of multiple models with the Coral?

I ask because a use case that would be beneficial for me would be to use object/person detection on several of my cameras, but person AND face detection specifically at my doorbell camera. I’d love to be able to pass a best_face image to a facial recognition Docker container and have Home Assistant send me an alert. I already do facial recognition with the doorbell camera now, but since the Coral has been so fast and reliable at detecting objects/people, it might work better to detect a face as someone approaches the door rather than relying on an external trigger (like a motion sensor or doorbell button) to take a snapshot and pass it to my facial recognition container.

Maybe there’s an easy way to combine TensorFlow models that I don’t know about? Having “person” and “face” in one model would be great and probably wouldn’t require any changes to frigate.

Would love any thoughts you or anyone else has!

That is already on my mental roadmap. Combined with object tracking, I can find the best face image associated with that person. The Coral can support multiple models, but the switching cost is high, so I will need to be smart about when to run face detection, and ultimately face recognition down the line.
