Local realtime person detection for RTSP cameras

Looking forward to the update as well! What do you reckon is the ETA?

Can you add the ability to set the user and group IDs (PUID and PGID) that the Docker container runs as, via environment variables? Similar to how all the linuxserver.io containers have it set up. It makes permissions for the /config files much more manageable.
In those containers, all you do is:

environment:
  - PUID=101
  - PGID=102
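
Purely to illustrate the request (Frigate does not read these variables today, and the service name, image tag, and volume path below are just placeholders), the compose file would end up looking something like:

services:
  frigate:
    image: blakeblackshear/frigate:0.5.0-rc1
    environment:
      - PUID=101   # hypothetical: UID the container process would run as
      - PGID=102   # hypothetical: GID the container process would run as
    volumes:
      - ./config:/config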

@blakeblackshear

Wow! This project looks awesome. Will this run on the Jetson Nano in the future too?

Please create an issue on GitHub.


I don't know much about the Jetson, but if it runs TensorFlow Lite, then it should be possible.


New release candidate!

This release was a major overhaul to use multiple processes and improve performance.

Breaking Changes:

  • You must pass the shm-size parameter to your container via the command line or docker-compose. Check the updated README (see the compose sketch after this list).
  • New required fps parameter for each camera. See the example config.
  • The debug endpoint changed. See the updated README for the Home Assistant sensor configuration.
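
For reference, a minimal docker-compose sketch covering the shm-size change could look like the following (the service name, volume path, and the 512 MB size are only the values used elsewhere in this thread, not requirements):

services:
  frigate:
    image: blakeblackshear/frigate:0.5.0-rc1
    shm_size: "512mb"          # compose equivalent of --shm-size=512m on the docker run command line
    volumes:
      - ./config:/config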

Changes:

  • Lightweight motion detection incorporated to minimize unnecessary Coral use
  • Regions are no longer necessary
  • Use of a Coral is now optional
  • Separate process per camera

Docker image is available with docker pull blakeblackshear/frigate:0.5.0-rc1


I'm seeing the following in the Docker log for Frigate. The camera is called 'driveway'.
Is this the same issue as mr-onion's?

17:34:48 Last frame for driveway is more than 30 seconds old… stdout
17:34:48 Waiting for process to exit gracefully… stdout
17:34:48 Process for driveway is not alive. Starting again… stdout
17:34:48 Camera_process started for driveway: 1765 stdout
17:34:48 Starting process for driveway: 1765

It happens about every minute.

I'm using the default values from your examples:

--shm-size=512m

fps: 5

Is this due to not having a Coral device, or something else?

I'm on 0.5.0-rc1.

Can you try running the container with the docker run command in the README and passing python3.7 -u benchmark.py at the end? That will benchmark your detection speeds with your CPU.
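
If you are using docker-compose instead of a plain docker run, a rough equivalent (assuming the image uses a default command rather than an entrypoint, so command simply replaces the normal detect script) would be to temporarily override the service command:

services:
  frigate:
    image: blakeblackshear/frigate:0.5.0-rc1
    shm_size: "512mb"
    volumes:
      - ./config:/config
    # temporarily run the benchmark instead of the normal startup command
    command: python3.7 -u benchmark.py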

Here are the graphs I set up for monitoring.


I added the Python part to the end and the logs showed the following before it shut down:

18:18:02 No EdgeTPU detected. Falling back to CPU. stdout
18:19:24 Average inference time: 826.52ms

I need to do some more testing with CPU-only detection, I think. My guess is that Frigate is skipping frames to try and maintain 5 FPS, but your CPU can only handle ~1 FPS. It is probably skipping frames for so long that Frigate thinks the process has died and restarts the capture process.

Also try adding logging for ffmpeg by setting your global params: https://github.com/blakeblackshear/frigate/blob/cd057370e155ff420abb1e75a02b259f9e30482d/config/config.example.yml#L16-L20

Use info instead of panic


I lowered the fps setting to 1 and I'm still seeing those messages.

If I set it to info instead of panic, aren't I simply masking the errors?

It's the opposite. The default hides almost all ffmpeg output. Info will be more verbose.


I also had the same error @surge919 mentioned in version 0.5.0-rc1. It freezes completely once a person walks into the camera frame, and thus no person ever gets detected. The previous version worked fine with the Coral USB stick.

It would be really helpful to have an easier way to tell if the Coral stick is actually being used or not. Perhaps a small static debug webpage that can show you some statistics? Would be a helpful debug tool for future bug reports as well.

I'm going to revert to the latest stable version for now, as the release candidate currently doesn't work for me.

Do you have anything in your logs? What do you think is missing from the /debug/stats endpoint?
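
In the meantime, one lightweight way to keep an eye on whether the Coral is being used is to poll the /debug/stats endpoint from Home Assistant with a REST sensor. A rough sketch, where the host, port, and attribute names are placeholders to replace with the actual keys listed in the README:

sensor:
  - platform: rest
    name: frigate_debug
    resource: http://frigate-host:5000/debug/stats   # replace with your Frigate host and port
    value_template: "OK"                             # keep the state simple; the interesting data goes into attributes
    json_attributes:
      - coral            # placeholder attribute names; use the keys the endpoint actually returns
      - detection_fps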

I am having the following error…

 Could not find codec parameters for stream 1 (Video: h264, none): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
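
Per the error message, I guess one thing to try would be passing larger values for those options through the ffmpeg input args in config.yml. A rough sketch only, since overriding input_args probably replaces Frigate's defaults and the full list should be copied from config.example.yml first:

ffmpeg:
  input_args:
    - -analyzeduration
    - '10000000'   # 10 seconds, in microseconds
    - -probesize
    - '10000000'   # roughly 10 MB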

I am currently on v0.5.0-rc1. The RTSP stream works fine with v0.4 and v0.3.

Thanks

I don't have a Coral device yet so I get this in the logs

No EdgeTPU detected. Falling back to CPU.

Also, I turned on the extra debugging @blakeblackshear mentioned and there is a bunch of extra info. You may want to enable it and check what's in there.

Update config.yml with the following (the global_args list sits under the ffmpeg section):

ffmpeg:
  global_args:
    - -loglevel
    - info

@blakeblackshear Now that I've enabled the extra logging, is there anything I can send you?

You should see some errors if ffmpeg is failing to process the video feed. Hard to know exactly what to look for.

The previous stable version had a way to see the image recognition in real time, with confidence scores and all. Are you supposed to see that in this new version as well, or is that now gone?

Is there still a way to define which areas of the camera image are used for person detection?

You can see detections in the video feed in this version too. There is no longer any way to define regions. Motion detection is used to determine where to look for objects.
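
For reference, the feed with boxes drawn on it is served by Frigate's built-in web server and can also be pulled into Home Assistant as an MJPEG camera. A sketch where the host is a placeholder, the camera name is the 'driveway' example from this thread, and the URL path is an assumption to verify against the README:

camera:
  - platform: mjpeg
    name: Driveway (Frigate)
    mjpeg_url: http://frigate-host:5000/driveway   # path assumed to be /<camera_name>; check the README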