Local realtime person detection for RTSP cameras

Ok. I pushed a new version. If ffprobe fails, it tries to use OpenCV. As a last resort, you can now specify the width and height in the config for each camera and it will skip the auto-detection altogether.

That did it, thank you!

By the way, I did need to bump the time the healthcheck probes wait before they start checking, because the ffprobe step seems to take ~10s per camera to time out. So in my case with 5 cameras, it's trying for about 50 seconds before it starts serving and processing.
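In case it helps anyone else, here is roughly what that looks like in a docker-compose healthcheck; the service name, port, and endpoint here are placeholders from my setup, not anything Frigate-specific:

services:
  frigate:
    # ... image, volumes, etc.
    healthcheck:
      # Placeholder endpoint; point this at whatever your probe already checks
      test: ["CMD", "curl", "-f", "http://localhost:5000/"]
      interval: 30s
      timeout: 10s
      # Give the container time to finish probing: ~10s per camera, 5 cameras ≈ 50s
      start_period: 60s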

If you specify the resolution in your config for the cameras, it will skip that step altogether.
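Something like this, for example (the resolution values here are for a hypothetical 1080p stream, and the URL is a placeholder):

cameras:
  front:
    ffmpeg:
      input: rtsp://camera-ip/stream  # placeholder URL
    # Setting the resolution explicitly skips the ffprobe/OpenCV auto-detection
    width: 1920
    height: 1080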

I'm on the second 4.0 beta, from the 17th.

Since the 4.0 beta, with this config, I am getting person matches smaller than my min size.

objects:
  track:
    - person
    - car
    - truck
  filters:
    person:
      min_area: 3500
      max_area: 100000
      threshold: 0.5

cameras:
  front:
    ffmpeg:
    take_frame: 1
    regions:
      - size: 460
        x_offset: 60
        y_offset: 220

Most recently I had a person match with an area of 1827.

Did you see the person in the mjpeg stream or in the snapshot?

It was in the snapshot.

I haven't been able to reproduce that with a similar config, and I don't see anything obvious. Does it always happen? Try increasing min_area to a much higher value and walk through the frame, to test whether it filters you out. There could be an edge case somewhere that makes it possible for that to happen occasionally.
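For example, something like this (50000 is just an arbitrarily large value):

objects:
  filters:
    person:
      # With min_area this high, a person walking through the frame
      # should always be filtered out if the filter is being applied
      min_area: 50000

If you still get person matches with that in place, the filter is definitely not being applied.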

@hasshoolio + @blakeblackshear

This sounds exactly like the issue I was having earlier last week.

I had to add the objects config both globally and per camera in order for the min_area filtering to work. Granted, the per-camera objects config should override the global one, but it still wouldn't work unless I had it declared (with all filters) in both spots.

So the original config posted should work if it looks like this:

objects:
  track:
    - person
    - car
    - truck
  filters:
    person:
      min_area: 3500
      max_area: 100000
      threshold: 0.5

cameras:
  front:
    ffmpeg:
    take_frame: 1
    objects:
      track:
        - person
        - car
        - truck
      filters:
        person:
          min_area: 3500
          max_area: 100000
          threshold: 0.5
    regions:
      - size: 460
        x_offset: 60
        y_offset: 220

It should work, yep!
However, I bought one and I can't get it to work yet :laughing:
But I suspect my hardware (I bought a PCIe extension card to put in my server and I suspect it's not playing nice…)
I will try with another one soon…

Wow, I don't think there are words to describe how perfect this work is. :heart_eyes: Thank you so much, @blakeblackshear!

Guys, is there a way to use ESPHome cameras as a direct source, or do I have to stream it somewhere and convert it to RTSP?

What camera are you using with ESPHome? If the camera can send the raw frame data, then frigate won't even need to decode. I would love to see if we can get away without encoding and decoding the video feed. That's a waste of computing power.

Started working with Frigate oh so very long ago (Dec 18th :wink: ) and ran into major distractions: holidays, the HA system SSD filled up (it was only 128GB), and Rhasspy came along. :open_mouth: Then the downstairs router died. :face_with_symbols_over_mouth: Just one darned thing after another, and over the holidays no less. With Rhasspy working fine, the router replaced, and a 500GB SSD installed, it was finally time to revisit Frigate.
I think it took about five or ten minutes to get set up, another few minutes to switch to a lower-resolution substream, and now it's running great. This is a testament to all the great work from Mr. Blackshear. So very happy to donate a few more coffees to the cause. :star_struck:


I'm not sure about passing raw data. Actually, I don't think it's possible for now (there wasn't any reason for it, I suppose), but it probably could be after some coding. Not my cup of tea. :nerd_face: (I'm more into Arduino and have had no time to investigate ESPHome/Tasmota/WLED/etc. much. :smiley:)

They probably work out of the box by pointing frigate at the mjpeg stream. I ordered a few to see if I can send raw image data.
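If it does expose an mjpeg stream, a camera entry along these lines might be all it takes (the URL is a guess; I haven't checked what path ESPHome actually serves):

cameras:
  esp_cam:
    ffmpeg:
      # Hypothetical ESPHome mjpeg stream URL; adjust host/port/path for your device
      input: http://192.168.1.50:8080/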


Have you ever put any thought into making Frigate a plug-in for Shinobi to do object detection?

I haven't, but I can look into it.

Hey, thanks for adding objects!! I'm currently using 2 Corals, but is there a way to utilize an RTX 2080 for this?

Look and see if TensorFlow Lite can run on that GPU. If so, it should be possible.

I've an issue and don't know where to start to resolve it!

I've a camera which I've used since version 0.2.0 and it has worked great; however, it isn't working with the latest version.

Frigate starts, gets an image, but shows an FPS of 0 and restarts the stream as no image is detected. It looks like, for some reason, the stream stops being processed after a small number of frames.

I've downgraded to 0.2.0 and put the config back, and it works perfectly again. Where should I be looking, and what tweaks should I be trying?

Thanks

Ah, I've made progress by changing the ffmpeg options back to what they were for 0.2.0, essentially removing +discardcorrupt and -vsync drop.

So far seems to be working well.
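In case it helps anyone else, the change amounts to overriding the input args for the camera. A rough sketch, assuming per-camera input_args overrides are supported (the flag list here is illustrative, not the exact defaults):

cameras:
  front:
    ffmpeg:
      input: rtsp://camera-ip/stream  # placeholder URL
      # Override the default input args, leaving out +discardcorrupt
      # and -vsync drop, which broke this camera
      input_args:
        - -rtsp_transport
        - tcp
        - -fflags
        - nobuffer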

As an aside, is there a recommended way of getting the 'best' ffmpeg options for a camera?