It probably won’t help with Docker for Windows because the container will still be Linux. These seem to be for running Python directly on Windows or macOS.
How do you enable the debug page? I can’t find it anywhere on the GitHub repo. I have the latest version installed, but if I go to ipaddress:5000/debug/stats I get nothing… everything is working, though.
Just got the 0.4.0 beta running on my rpi4 and it’s running beautifully, thanks Blake! Four low-res streams @ 6fps with an average CPU load of 25%, compared to 70%+ before.
For anyone interested, here are the tweaks I made to the Dockerfile. This was a bit of trial and error, so there is probably a more elegant way.
FROM debian:stretch-slim
LABEL maintainer "[email protected]"
ENV DEBIAN_FRONTEND=noninteractive
# Install packages for apt repo
RUN apt -qq update && apt -qq install --no-install-recommends -y \
    apt-transport-https ca-certificates \
    gnupg wget \
    ffmpeg \
    python3 \
    python3-pip \
    python3-dev \
    python3-numpy \
    # python-prctl
    build-essential libcap-dev \
    ############################################### Added Git & Removed i965-va-driver
    git \
    # pillow-simd
    # zlib1g-dev libjpeg-dev \
    # VAAPI drivers for Intel hardware accel
    vainfo \
    && echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" > /etc/apt/sources.list.d/coral-edgetpu.list \
    ############################################### Added Jessie repo, needed for libjasper-dev
    && echo 'deb https://deb.debian.org/debian/ jessie main' > /etc/apt/sources.list.d/jessie.list \
    && wget -q -O - https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
    && apt -qq update \
    && echo "libedgetpu1-max libedgetpu/accepted-eula boolean true" | debconf-set-selections \
    && apt -qq install --no-install-recommends -y \
    libedgetpu1-max \
    python3-edgetpu \
    ############################################### Added the five lib* below, for opencv
    libjasper-dev \
    libilmbase-dev \
    libopenexr-dev \
    libgtk-3-dev \
    libatlas-base-dev \
    && rm -rf /var/lib/apt/lists/* \
    && (apt-get autoremove -y; apt-get autoclean -y)
# needs to be installed before others
RUN pip3 install -U wheel setuptools
############################################### Added piwheels for opencv-python-headless
RUN echo '[global]' >> /etc/pip.conf
RUN echo 'index-url = https://www.piwheels.org/simple' >> /etc/pip.conf
RUN pip3 install -U \
    opencv-python-headless \
    ############################################### Removed python-prctl
    Flask \
    paho-mqtt \
    PyYAML \
    matplotlib \
    scipy
############################################### Added, build python-prctl
RUN git clone https://github.com/seveas/python-prctl \
    && cd python-prctl \
    && python3 setup.py build \
    && python3 setup.py install
# download the model and labels, then symlink them to the paths frigate expects
RUN wget -q https://github.com/google-coral/edgetpu/raw/master/test_data/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite -O mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite --trust-server-names
RUN wget -q https://dl.google.com/coral/canned_models/coco_labels.txt -O coco_labels.txt --trust-server-names
RUN ln -s mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite /frozen_inference_graph.pb
RUN ln -s /coco_labels.txt /label_map.pbtext
WORKDIR /opt/frigate/
ADD frigate frigate/
COPY detect_objects.py .
COPY benchmark.py .
CMD ["python3", "-u", "detect_objects.py"]
I just got this set up today as a replacement for Zoneminder and it is wonderful!
I have one camera that outputs MJPEG instead of h.264. Is there a way for me to use this camera? I have tried setting it up with the default options and, as I expected, it doesn’t work…
It should work. You just need to remove some of the input parameters related to rtsp. I don’t know which ones specifically.
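If your version of the beta supports overriding the per-camera ffmpeg arguments (check the example config for your tag), something along these lines might work. The camera name and URL are placeholders, and the input_args list is a guess at the defaults with the rtsp-only flags (-rtsp_transport, -stimeout) removed:
cameras:
  mjpeg_cam:                               # placeholder camera name
    ffmpeg:
      input: http://camera_ip/video.mjpg   # placeholder MJPEG URL
      input_args:                          # assumed defaults minus the rtsp-only flags
        - -avoid_negative_ts
        - make_zero
        - -fflags
        - nobuffer
        - -use_wallclock_as_timestamps
        - '1'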
Thanks for sharing! I’m going to give it a go because with my current setup, using the standard camera integration I’m regularly getting hit by this:
@blakeblackshear - FYI, a few nights ago car thieves visited my front yard and frigate worked like a charm! I was calling the police seconds after they approached my car. Unfortunately they escaped before the police arrived; sometimes I wish I was living in the US…
Thanks for developing it and sharing with the community!
Thank you for the confirmation that it should be possible. I’ll work on finding the right combination of commands and post my findings when I do.
The US isn’t necessarily better in some places, e.g. my house.
Quick question @blakeblackshear: how feasible would it be for a future version to enable the use of multiple models with the Coral?
I ask because a use case that would be beneficial for me would be to use object/person detection on several of my cameras, but person AND face detection specifically at my doorbell camera. I’d love to be able to pass a best_face
image to a facial recognition Docker container and have Home Assistant send me an alert. I already do facial recognition with the doorbell camera now, but since the Coral has been so fast and reliable at detecting objects/people, it might work better to detect a face as someone is approaching the door rather than relying on an external trigger (like a motion sensor or doorbell button) to take a snapshot and pass it to my facial recognition container.
Maybe there’s an easy way to combine tensorflow models that I don’t know about? To have “person” and “face” in one model would be great and probably wouldn’t require any changes to frigate.
Would love any thoughts you or anyone else has!
That is already on my mental road map. Combined with object tracking, I can find the best face image associated with that person. The Coral can support multiple models, but the switching cost is high. I will need to be smart about when to use face detection and ultimately face recognition down the line.
First time trying frigate in a container using Portainer. Hoping someone can help me with a beta version issue.
“blakeblackshear/frigate:latest” worked without any issue.
“blakeblackshear/frigate:dev” just shows the following two lines:
ffprobe -v panic -show_error -show_streams -of json "rtsp://xxx:xxx@ip:554//h264Preview_01_sub"
On connect called
"blakeblackshear/frigate:0.4.0-beta" shows following
ffprobe -v panic -show_error -show_streams -of json “rtsp://xxx:xxx@ip:554//h264Preview_01_sub”
{'streams': [{'index': 0, 'codec_name': 'h264', 'codec_long_name': 'H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10', 'profile': 'High', 'codec_type': 'video', 'codec_time_base': '0/2', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'width': 640, 'height': 480, 'coded_width': 640, 'coded_height': 480, 'has_b_frames': 0, 'sample_aspect_ratio': '0:1', 'display_aspect_ratio': '0:1', 'pix_fmt': 'yuv420p', 'level': 51, 'chroma_location': 'left', 'field_order': 'progressive', 'refs': 1, 'is_avc': 'false', 'nal_length_size': '0', 'r_frame_rate': '4/1', 'avg_frame_rate': '0/0', 'time_base': '1/90000', 'start_pts': 20970, 'start_time': '0.233000', 'bits_per_raw_sample': '8', 'disposition': {'default': 0, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0}}, {'index': 1, 'codec_name': 'aac', 'codec_long_name': 'AAC (Advanced Audio Coding)', 'profile': 'LC', 'codec_type': 'audio', 'codec_time_base': '1/16000', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'sample_fmt': 'fltp', 'sample_rate': '16000', 'channels': 1, 'channel_layout': 'mono', 'bits_per_sample': 0, 'r_frame_rate': '0/0', 'avg_frame_rate': '0/0', 'time_base': '1/16000', 'start_pts': 0, 'start_time': '0.000000', 'disposition': {'default': 0, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0}}]}
Traceback (most recent call last):
  File "detect_objects.py", line 155, in <module>
    main()
  File "detect_objects.py", line 87, in main
    fps_tracker
  File "/opt/frigate/frigate/object_detection.py", line 20, in __init__
    self.engine = DetectionEngine(PATH_TO_CKPT)
  File "/usr/lib/python3/dist-packages/edgetpu/detection/engine.py", line 73, in __init__
    super().__init__(model_path)
  File "/usr/lib/python3/dist-packages/edgetpu/basic/basic_engine.py", line 92, in __init__
    self._engine = BasicEnginePythonWrapper.CreateFromFile(model_path)
RuntimeError: No Edge TPU device detected!
On connect called
ffprobe -v panic -show_error -show_streams -of json "rtsp://xxx:xxx@ip:554//h264Preview_01_sub"
I am using the example config from here and only replaced the input stream URL.
Are you sure that you don’t have multiple frigate containers running?
No, I made sure of that. I did find a workaround for the dev image - I have to access the container’s console and run “python3 -u detect_objects.py”, and then it works. If I make config changes and reboot, it again gets stuck at “On connect called.”, but running detect_objects.py from the console resolves the issue.
The beta version still shows “RuntimeError: No Edge TPU device detected!” though.
Which Coral device do you have?
I have the USB Accelerator. Rebooting the host laptop and creating a new container seems to fix the issue, including for the dev image, but I still have the problem after restarting the container. There must be something specific to my environment I need to figure out.
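One thing that might explain restart flakiness: the USB Accelerator enumerates under one USB ID when first plugged in and re-enumerates under a different ID once the Edge TPU runtime loads its firmware, so anything that maps a single device node can lose the device across replugs or container restarts. Mapping the whole bus in a privileged container is more robust; a sketch along these lines (service name and config path are placeholders):
# illustrative compose snippet - map the whole USB bus, not one device node
version: "2"
services:
  frigate:
    image: blakeblackshear/frigate:0.4.0-beta
    privileged: true
    volumes:
      - /dev/bus/usb:/dev/bus/usb   # whole bus survives the re-enumeration
      - ./config:/config:ro         # placeholder config path
    ports:
      - "5000:5000"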
The object tracking seems to work really well - thanks @blakeblackshear for all your work!! I am noticing that my HA binary sensor stays in the “Detected” state long after there is no longer a car or person in the camera view.
It should take about 10 seconds to clear. If someone steps out of the frame and back in, they will keep the same object id.
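For anyone wiring these sensors into notifications, a minimal Home Assistant automation sketch might look like this (the entity id and notify service are placeholders for whatever your own config produces):
# hypothetical automations.yaml entry - entity/service names are placeholders
- alias: Person detected on back camera
  trigger:
    - platform: state
      entity_id: binary_sensor.camera_person   # from the MQTT binary sensor config below
      to: 'on'
  action:
    - service: notify.mobile_app_my_phone      # placeholder notify service
      data:
        message: "Person detected on the back camera"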
@blakeblackshear I have 2 cameras. HA is getting the expected MQTT info from the first camera, “back”, and triggering the notification, but the second camera, “garage”, doesn’t seem to be publishing MQTT “off”. Maybe I don’t have my configuration set up properly?
web_port: 5000
mqtt:
  host: mqtt_ip
  topic_prefix: frigate
  # client_id: frigate # Optional -- set to override default client id of 'frigate' if running multiple instances
  user: user # Optional -- Uncomment for use
  password: pass # Optional -- Uncomment for use
objects:
  track:
    - person
    - car
    - truck
  filters:
    person:
      min_area: 5000
      max_area: 100000
      threshold: 0.5
cameras:
  back:
    ffmpeg:
      input: rtsp://user:[email protected]:554//h264Preview_01_sub
    take_frame: 1
    objects:
      track:
        - person
      filters:
        person:
          min_area: 5000
          max_area: 100000
          threshold: 0.5
    regions:
      - size: 300
        x_offset: 340
        y_offset: 0
      - size: 320
        x_offset: 320
        y_offset: 180
      - size: 320
        x_offset: 0
        y_offset: 180
  garage:
    ffmpeg:
      input: rtsp://user:[email protected]:554//h264Preview_01_sub
    take_frame: 1
    objects:
      track:
        - person
      filters:
        person:
          min_area: 5000
          max_area: 100000
          threshold: 0.5
    regions:
      - size: 350
        x_offset: 0
        y_offset: 0
      - size: 320
        x_offset: 250
        y_offset: 100
      - size: 300
        x_offset: 0
        y_offset: 180
binary_sensor:
  # frigate binary sensors:
  - name: Camera Person
    platform: mqtt
    state_topic: "frigate/back/person"
    device_class: motion
    availability_topic: "frigate/available"
  - name: Camera Car garage
    platform: mqtt
    state_topic: "frigate/garage/car"
    device_class: motion
    availability_topic: "frigate/available"
  - name: Camera Person garage
    platform: mqtt
    state_topic: "frigate/garage/person"
    device_class: motion
    availability_topic: "frigate/available"
Which version are you running?
I am running the beta version pulled from docker hub - blakeblackshear/frigate:0.4.0-beta
I have seen that happen on older versions, but not lately. The next version will probably fix it. I should be releasing a new beta in the next few weeks.