Local realtime person detection for RTSP cameras

Awesome, thanks!

I had the same delay when passing the device through in Proxmox via USB ID. I passed the entire USB card through to the VM and now I’m down to ~12ms.

Hi,
My HA is running on a NUC (supervised).
Can someone recommend which cameras to look for? I need to buy 2 outdoor cameras and I would like to be sure they will be compatible with HA first.
Thanks

Hi,

I have two camera streams set up with the following settings. I find that it only detects objects in the top half of the image and hardly ever picks out anything from the bottom half.

I was under the impression that the new 0.5.1 release would check for objects anywhere on the screen. Can you still set up regions or restrict detection to a specific area?

front_left:
    ffmpeg:
      input: rtsp://192.168.2.2:554/Streaming/Channels/202

    take_frame: 3
    fps: 15
    snapshots:
      show_timestamp: false

    objects:
      track:
        - person
        - car
      filters:
        person:
          min_area: 3000
          max_area: 100000
          threshold: 0.5
front_right:
    ffmpeg:
      input: rtsp://192.168.2.1:554/Streaming/Channels/102

    take_frame: 3
    fps: 15
    snapshots:
      show_timestamp: false

    objects:
      track:
        - person
      filters:
        person:
          min_area: 5000
          max_area: 100000
          threshold: 0.5

Hi Dan,

thanks, that fixed it!!

You can use a mask to limit where objects are detected.
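
A rough sketch of what that looks like in 0.5.x (the filename here is made up, and I’m going from memory on the exact semantics, so check the README for your version): the camera points at a mask image, and detections in the black areas are ignored.

cameras:
  front_left:
    ffmpeg:
      input: rtsp://192.168.2.2:554/Streaming/Channels/202
    # Same resolution as the camera frame; black regions of the
    # mask are ignored for detection.
    mask: front_left-mask.bmp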

Hi there!

@blakeblackshear was GPU support merged or not? I can’t find any reference to CUDA/NVIDIA in https://github.com/blakeblackshear/frigate

It wasn’t. The PR was against an old branch. I commented on the closed PR asking to reopen against master.

@blakeblackshear okay.
The other question is: how do I use an HTTP stream input?

Stream #0:0[0xd3]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuvj420p(pc, progressive), 1280x720, 15 tbr, 90k tbn, 180k tbc

Can you guess the ffmpeg options for this input? :slight_smile:

Never mind, it works with -vf mpdecimate.
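
For anyone else hitting this: those flags go under the ffmpeg args overrides in the config. A sketch only; I haven’t double-checked the default output args for 0.5.x, so compare against config.example.yml before copying:

ffmpeg:
  output_args:
    # mpdecimate is a video filter, so it goes with the output options;
    # frigate still needs raw rgb24 frames on stdout
    - -vf
    - mpdecimate
    - -f
    - rawvideo
    - -pix_fmt
    - rgb24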

I have Hikvisions, they work fine.

@freshcoast

Part of the guide for that issue was using docker-compose, which I don’t think is possible with HassOS, so the Portainer method doesn’t work… I don’t think :thinking:

Portainer is just an alternative frontend to docker, analogous to docker-compose. You would just configure Portainer to use the same configuration.
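
For example, the docker run command from the README maps to roughly this compose file, and you’d fill in the same fields (image, volumes, ports) in Portainer’s container form. The paths and image tag here are assumptions from memory, so check the README for your version:

version: "3"
services:
  frigate:
    container_name: frigate
    image: blakeblackshear/frigate:stable
    restart: unless-stopped
    privileged: true              # simplest way to expose the Coral over USB
    volumes:
      - /dev/bus/usb:/dev/bus/usb
      - /path/to/your/config:/config:ro
    ports:
      - "5000:5000"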

@blakeblackshear thanks for your hard work in making this a reality. I’ve been trying to get this up and running in Synology Docker. I tried various settings but keep getting this error on boot up.

I can confirm that using VLC I am able to open the RTSP stream. Not sure where I can go from here :frowning: Any tips would be awesome.

The current error is as follows, with the KeyError: 'rtsp'

On connect called
Traceback (most recent call last):
  File "detect_objects.py", line 90, in <module>
    main()
  File "detect_objects.py", line 44, in main
    cameras[name] = Camera(name, config, prepped_frame_queue, client, MQTT_TOPIC_PREFIX)
  File "/opt/frigate/frigate/video.py", line 117, in __init__
    self.rtsp_url = get_rtsp_url(self.config['rtsp'])
KeyError: 'rtsp'

My config is as follows:

web_port: 6000

mqtt:
  host: mqtthostname
  topic_prefix: homeassistant
  user: 'user'
  password: 'password'

objects:
  track:
    - person
    - car
    - truck
  filters:
    person:
      min_area: 5000
      max_area: 100000
      threshold: 0.5

cameras:
  back:
    ffmpeg:

    input: rtsp://user:password@ipofcamera:554/ch1/0
#    regions:
#      - size: 720
#        x_offset: 0
#        y_offset: 0
#      - size: 12800
#        x_offset: 0
#        y_offset: 0


    height: 1280
    width: 720

    take_frame: 1

    fps: 10

    snapshots:
      show_timestamp: True

Is this release (0.5.1) better with Wyze cams?

Looks like you’re using a very old image, v0.1.1 (??), which would be more than a year old. You need to upgrade.
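
Assuming you’re on the stock image from Docker Hub, upgrading is just pulling a newer tag and recreating the container:

docker pull blakeblackshear/frigate:stable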

Thanks, I updated the image to the stable build, still no joy for now.

On connect called
Traceback (most recent call last):
  File "detect_objects.py", line 348, in <module>
    main()
  File "detect_objects.py", line 175, in main
    ffmpeg_input = get_ffmpeg_input(ffmpeg['input'])
TypeError: 'NoneType' object is not subscriptable

input needs to be indented one level under ffmpeg
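
Based on the config you posted, that section should read:

cameras:
  back:
    ffmpeg:
      input: rtsp://user:password@ipofcamera:554/ch1/0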

Thanks, after fixing it and including the ffmpeg flags for RTSP, I can see the image being processed :smiley:

It’s 5am here, lol. Will do some more testing during the day :slight_smile:

After spending hours reading all the replies (while my Coral was shipping), I started this on CPU only and it works great (if a bit slow). The only issue is with one camera, but that’s Foscam cameras for you. Great work, @blakeblackshear!

I’ve now plugged the Coral USB in, but am getting the below error:

Starting process for kitchen: 33
 * Serving Flask app "detect_objects" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
No EdgeTPU detected. Falling back to CPU.

Previously the bottom line would appear first; now it hangs for about 5 seconds and then reports the Coral isn’t detected. I’ve tested this directly with TensorFlow and the Coral is detected.

Anyone able to help, or point me in the right direction to look at the logs to see what is happening? I can’t see anyone else having the same issue!

Thanks

Are you certain nothing else is using the Coral at the same time? When you tested directly with TensorFlow, was it in the container?
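
One more thing worth checking (my own suggestion, not something from the Frigate docs): confirm the device is visible where frigate runs, e.g. with lsusb on the host and, if usbutils is available in the image, inside the container too:

lsusb
# Before first use the Coral enumerates as ID 1a6e:089a Global Unichip Corp.;
# once the Edge TPU runtime has initialized it, it re-enumerates as
# ID 18d1:9302 Google Inc.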