Local realtime person detection for RTSP cameras

Not sure. It hasn’t shipped yet. I ordered from element14.

So, so far so good. I had it up and running on one 1080p camera without a mask for 24 hours, which is longer than any previous version ever lasted… At first I thought it wasn’t working because I didn’t see the “Invalid CE Couloumb Code” errors I had become used to seeing. But after I waited a few minutes, the stream did come up.

It did restart the stream at least once that I noticed, but that didn’t seem to cause problems. I’m now running it on 4 1080p cameras with masks (8 zones total). It’s going well so far. I’ll report back in a couple of days if it’s still going. Thanks for the patch!

Hi Blake

Thanks so much for creating and sharing this amazing project. I have a very basic ‘person detector’ script running on a Pi using a combination of camera motion detection and TensorFlow, but I get way too many false alarms and missed events due to the Pi’s constraints, so I cannot wait to replace it with this!

My Coral stick has not arrived yet so I’ve configured v0.0.1 to test in the meantime. Works perfectly.

One question, if I may. I’m looking at the mqtt branch and see it uses the binary_sensor to drive Telegram notifications. I do not use Home Assistant, so this is not something I am familiar with.

I’ve integrated it into openHAB and I’m raising notifications when the motion topic changes from Off to On, but this of course potentially causes a large number of notifications in a short amount of time.

tl;dr
I’m curious how you control the frequency of notifications: whether this is something handled in HASS (i.e. something I’ll need to put some logic around within openHAB), or something that will be configurable within frigate itself.

Many thanks!

When I was using v0.1, I had a sensor set up based on the person detection confidence, so that when it went over a threshold (I think with that version I used a threshold of around 18, if I recall correctly) I triggered notifications based on that. IMO motion detection is pretty worthless for sending notifications, as it gets tripped all the time by trees, shadows, animals, etc…

Because it is summing all the person scores over the past 2 seconds or so. I really need to make that adjustable to accommodate different framerates. You would have to rate limit in HASS or openHAB somehow if you are getting too many notifications from it turning on and off.

I just put a condition in the automation that sends the notification so it doesn’t run more than once every 5 minutes per camera… It did the trick… I don’t know how to do it in openHAB though.
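
Since this comes up a lot, here’s a minimal sketch of that kind of automation in Home Assistant YAML. The entity IDs and the notify service name are placeholders rather than anything from my actual setup, so adjust to taste:

# Sketch: notify when the frigate binary_sensor for a camera turns on,
# but no more than once every 5 minutes for that camera.
# binary_sensor.back_yard_person and notify.telegram are placeholder names.
- alias: Person notify back yard
  trigger:
    - platform: state
      entity_id: binary_sensor.back_yard_person
      to: 'on'
  condition:
    # skip if this automation already fired in the last 300 seconds
    - condition: template
      value_template: >
        {{ state_attr('automation.person_notify_back_yard', 'last_triggered') is none
           or (now() - state_attr('automation.person_notify_back_yard', 'last_triggered')).total_seconds() > 300 }}
  action:
    - service: notify.telegram
      data:
        message: 'Person detected on the back yard camera'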

Thanks
I’ll try the frame rate limit option once my Coral arrives so I can upgrade, and see how it goes.

So, just to make it official, I’ve been up and running for 2 full days with all 4 cameras and masks, so it looks like your verification of the ffmpeg termination paid off. It’s really running strong. Thanks for all your help!

EDIT: just in the interest of full disclosure, it does give me some ‘queue full’ messages for a few seconds a few times every hour or so and then they stop, and it needs to restart ffmpeg a couple of times a day, but really nothing you’d notice in day-to-day use.

My rtsp url is: rtsp://192.168.1.32:554/user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream

Have I configured it correctly in the config file:

rtsp:
      user: viewer
      host: 192.168.1.32
      port: 554
      # values that begin with a "$" will be replaced with environment variable
      password: $RTSP_PASSWORD
      path: /user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream

Have you opened the RTSP stream successfully in VLC to test that it’s working properly? What kind of camera are you using? The standard format for RTSP URLs is usually more like:

rtsp://username:password@host:port/path

Your /user=admin_password segment looks suspicious to me. Try opening that URL in VLC and see if you get the video feed.

The camera is an ESCAM QD530. The RTSP stream works within VLC as well as when using the stream and camera integrations within Home Assistant.

By the way, if you are interested, here’s what I get when I turn on hardware acceleration on my Synology DS918+ with an Intel Celeron J3455 (most recent log item on top). The container exits after this:

2019-07-24 21:55:43	stderr	TypeError: list indices must be integers or slices, not str
2019-07-24 21:55:43	stderr	    self.rtsp_url = get_rtsp_url(self.config['rtsp'])
2019-07-24 21:55:43	stderr	  File "/opt/frigate/frigate/video.py", line 122, in __init__
2019-07-24 21:55:43	stderr	    cameras[name] = Camera(name, config, prepped_frame_queue, client, MQTT_TOPIC_PREFIX)
2019-07-24 21:55:43	stderr	  File "detect_objects.py", line 53, in main
2019-07-24 21:55:43	stderr	    main()
2019-07-24 21:55:43	stderr	  File "detect_objects.py", line 99, in <module>
2019-07-24 21:55:41	stderr	Traceback (most recent call last):
2019-07-24 21:55:41	stdout	On connect called 

This is the config I used:

  ffmpeg_hwaccel_args:
    - -hwaccel
    - vaapi
    - -hwaccel_device
    - /dev/dri/renderD128
    - -hwaccel_output_format
    - yuv420p

If that URL works, I’d try leaving out the RTSP user and password settings as it appears your camera doesn’t specify them in the usual way.
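
In other words, something roughly like this, with the credentials left baked into the camera’s nonstandard path (just a sketch; if the config loader insists on the user and password keys being present, you may need to leave them as empty strings):

rtsp:
  host: 192.168.1.32
  port: 554
  # credentials stay embedded in the path, since this camera
  # doesn't accept them in the usual rtsp://user:pass@host form
  path: /user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream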

Just an update from earlier:

I’ve been running the frigate:0.2.0-beta Docker container for 9 days on an ODROID-H2 compute device with really no issues. Here’s a current snapshot of the Docker container from netdata:

It’s working so well, in fact, that I completely disabled all motion detection inside Blue Iris and instead have Node-RED trigger the camera(s) in Blue Iris via an MQTT message. As of right now, Node-RED will pick up on Blue Iris triggering the camera (via another MQTT message from Blue Iris) and run the built-in TensorFlow image processing in Home Assistant. But I’ll probably do away with that (redundant) part as well as time goes on.

Here’s what the Node-RED flow currently looks like:


I’m struggling with this version (Hikvision and Pi 4): tons of ‘queue full’ messages. I had to cut my regions in half, and eventually it bombs out with endless ‘queue full’ messages. Glad to see this worked well for you though!

Are you on the most recent version of the ffmpeg subprocess branch? That’s where he fixed the Hikvision issue I was having. I don’t think he made these fixes in the 0.2 beta or the main latest branches.

If any of you are running your workloads in Kubernetes, I created a Helm chart for frigate, and it should also be accessible over on the Helm Hub.

In my particular case, the Coral USB device is plugged into one of the Kubernetes nodes, and that node is labeled with tpu: google-coral. The chart is configured to schedule the pod only to nodes matching that label. The specific configuration is here for anyone interested.
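
The node-pinning piece boils down to a label plus a nodeSelector. Roughly, after labeling the node that has the Coral stick (kubectl label node <node-name> tpu=google-coral), the chart’s values contain something like this (the exact key layout may differ, so treat it as a sketch and check the chart’s values.yaml):

# values.yaml sketch: schedule the frigate pod only onto nodes
# carrying the tpu=google-coral label
nodeSelector:
  tpu: google-coral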


Hi

I’ve received my Coral stick now and have got the mqtt_camera branch running on my laptop with 4 cameras perfectly.

However, I’m having problems building the image on my RPi 4 :confused:

The ffmpeg-subprocess and mqtt_camera branches fail at:

Step 6/24 : RUN apt-get -qq update && apt-get -qq install --no-install-recommends -y  python3  ffmpeg  build-essential  cmake  unzip  pkg-config  libjpeg-dev  libpng-dev  libtiff-dev  libavcodec-dev  libavformat-dev  libswscale-dev  libv4l-dev  libxvidcore-dev  libx264-dev  libgtk-3-dev  libatlas-base-dev  gfortran  python3-dev  libusb-1.0-0  python3-pip  python3-pil  python3-numpy  libc++1  libc++abi1  libunwind8  libgcc1  libva-drm2 libva2 i965-va-driver vainfo  && rm -rf /var/lib/apt/lists/*
 ---> Running in b8841ce8dd4c
E: Package 'i965-va-driver' has no installation candidate

v0.2.0-beta fails at:

Step 11/20 : RUN tar xzf edgetpu_api.tar.gz   && cd python-tflite-source   && cp -p libedgetpu/libedgetpu_x86_64.so /lib/x86_64-linux-gnu/libedgetpu.so   && cp edgetpu/swig/compiled_so/_edgetpu_cpp_wrapper_x86_64.so edgetpu/swig/_edgetpu_cpp_wrapper.so   && cp edgetpu/swig/compiled_so/edgetpu_cpp_wrapper.py edgetpu/swig/   && python3 setup.py develop --user
 ---> Running in f1e74438cdc5
cp: cannot create regular file '/lib/x86_64-linux-gnu/libedgetpu.so': No such file or directory
The command '/bin/sh -c tar xzf edgetpu_api.tar.gz   && cd python-tflite-source   && cp -p libedgetpu/libedgetpu_x86_64.so /lib/x86_64-linux-gnu/libedgetpu.so   && cp edgetpu/swig/compiled_so/_edgetpu_cpp_wrapper_x86_64.so edgetpu/swig/_edgetpu_cpp_wrapper.so   && cp edgetpu/swig/compiled_so/edgetpu_cpp_wrapper.py edgetpu/swig/   && python3 setup.py develop --user' returned a non-zero code: 1

Docker version:

Client: Docker Engine - Community
 Version:           19.03.0
 API version:       1.40
 Go version:        go1.12.5
 Git commit:        aeac949
 Built:             Wed Jul 17 18:25:36 2019
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.5
  Git commit:       aeac949
  Built:            Wed Jul 17 18:19:37 2019
  OS/Arch:          linux/arm
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

I could very well be making an obvious noob error somewhere…

I am getting this now too on the mqtt_camera and ffmpeg branches. I didn’t get it at all yesterday; I’m trying to figure out what has changed.

For me it starts as soon as the container starts up - endless ‘queue full’ messages and a maxed-out CPU. If I stop and start the container enough times I can eventually get it to start up OK, but it might take about 10 attempts. I still get the occasional ‘queue full’ message, but not hundreds per second.

I have four Reolink cameras, one 350px region on each and no masks. I am using a 640x360 stream at 8 fps, with take_frame set to 5.
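
For reference, that maps onto the camera config roughly like this. The field names are from memory of the example config, and the Reolink IP and path are placeholders, so double-check against the example config in the repo before copying:

cameras:
  driveway:
    rtsp:
      user: admin
      host: 192.168.1.20         # placeholder
      port: 554
      password: $RTSP_PASSWORD
      path: /h264Preview_01_sub  # placeholder 640x360 substream at 8 fps
    # only analyze every 5th frame from the stream
    take_frame: 5
    regions:
      - size: 350
        x_offset: 0
        y_offset: 0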

Bear with me on the exact commands here (since I did this a week or two ago), but on step 11, where your Docker build is failing:

Step 11/20 : RUN tar xzf edgetpu_api.tar.gz   && cd python-tflite-source   && cp -p libedgetpu/libedgetpu_x86_64.so /lib/x86_64-linux-gnu/libedgetpu.so   && cp edgetpu/swig/compiled_so/_edgetpu_cpp_wrapper_x86_64.so edgetpu/swig/_edgetpu_cpp_wrapper.so   && cp edgetpu/swig/compiled_so/edgetpu_cpp_wrapper.py edgetpu/swig/   && python3 setup.py develop --user
 ---> Running in f1e74438cdc5

Change the Dockerfile and put this in here instead:

RUN tar xzf edgetpu_api.tar.gz \
  && cd python-tflite-source \
  && cp -p libedgetpu/libedgetpu_arm32.so /lib/arm-linux-gnueabihf/libedgetpu.so \
  && cp edgetpu/swig/compiled_so/_edgetpu_cpp_wrapper_arm32.so edgetpu/swig/_edgetpu_cpp_wrapper.so \
  && cp edgetpu/swig/compiled_so/edgetpu_cpp_wrapper.py edgetpu/swig/ \
  && python3 setup.py develop --user

There may have been one more step I did to make it happy, but hey at least start there!