Local realtime person detection for RTSP cameras

My apologies, Blake. It seems the options were not saved. Now I get a different error (which I will pursue on my own, at least for a little while):
[rtsp @ 0x5573e24ab9c0] Could not find codec parameters for stream 0 (Video: none, none): unknown codec
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #0, rtsp, from 'rtsp://192.168.0.13:8554/unicast':
Metadata:
title : LIVE555 Streaming Media v2017.10.28
comment : LIVE555 Streaming Media v2017.10.28
Duration: 00:00:20.00, start: 1604263938.917900, bitrate: N/A
Stream #0:0: Video: none, none, 90k tbr, 90k tbn, 90k tbc
Stream mapping:
Stream #0:0 -> #0:0 (? (?) -> rawvideo (native))
Decoder (codec none) not found for input stream #0:0
birdcam: ffmpeg sent a broken frame. something is wrong.
birdcam: ffmpeg process is not running. exiting capture thread...
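
For reference, the two options the log suggests map directly onto Frigate's ffmpeg input_args, since -analyzeduration and -probesize are ordinary ffmpeg flags. A sketch only, with arbitrary values, and it may not help here given the log shows no codec detected at all (note that overriding input_args replaces the defaults):

ffmpeg:
  input_args:
    - -analyzeduration
    - '10000000'
    - -probesize
    - '10000000'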

Thanks for the prior messages and help.

I am now using https://pypi.org/project/simple-photo-gallery/ to periodically generate a nice gallery of videos, plus a script by pbruckner to keep just a set of the latest files.

I’m getting overlapping video clips that are almost duplicates of each other. It seems to happen when there is motion coming in and out of view, or when multiple people walk into view. I’m assuming “motion detected” would have to go false before another clip is recorded, but maybe I have some corner case here.

Any thoughts or need any data from me?

That’s expected. A clip is recorded for each detected object. Multiple people will result in multiple clips.

Hi,

Thank you for the suggestion. My cams were behind an NVR and I thought I was not able to get the substream, but in reality I could.
Now everything runs smoothly. BUT the substream doesn’t have the right aspect ratio (4:3, where the main stream is 16:9).
How do I resize the substream in the options? Do you have an example?

Kind regards,
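
For the resize question above, one approach is to rescale in ffmpeg itself, since Frigate generally expects the configured width/height to match what the stream actually delivers. A hedged sketch only: the camera name, URL, and 640x360 target are placeholders, -s is the standard ffmpeg output size flag, and per-camera output_args overrides are described in the default config comments later in this thread:

cameras:
  front:
    ffmpeg:
      input: rtsp://user:password@CAMERA_IP:554/substream
      output_args:
        - -f
        - rawvideo
        - -pix_fmt
        - rgb24
        - -s
        - 640x360
    width: 640
    height: 360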

I haven’t tried it in the E Key slot yet; I had surgery on my arm, so it could be a while.

Did you install the Gasket DKMS module? Do you see /dev/apex_0? I don’t know how to install that module on HassOS, but it must be possible. I don’t run it on HassOS or even with Docker at all; I use a VM.

Did you follow the installation procedure?

You need to have the Linux kernel module loaded and /dev/apex_X present.
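
On a plain Debian/Ubuntu host, that roughly follows Coral's documented M.2/PCIe setup. A sketch, not verified on this particular machine:

echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install gasket-dkms libedgetpu1-std
sudo reboot
# after the reboot, the device node should be there:
ls -l /dev/apex_0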

Kind regards,

Is there any way to train a model on an image such as the USPS mail truck, to alert when the mail arrives?


@rpress
Hope the recovery goes well.

@rpress @bdherouville
I was initially under the impression the Docker container contained all the drivers.
But I was reading that link late last night and realised I don’t have an apex_0 listed.

I will try and install the driver at the container level first, as it will be a little difficult to install at the host level I think.

I fear that you will need to install the driver on the host at least, because without it you won’t be able to present /dev/apex_0 to the Docker container.
If the host does not have access to /dev/apex_0, it won’t be able to give Docker access to it.
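
For illustration, once the host driver works, passing the device through might look something like this (the image tag and volume paths are assumptions, not a verified command):

docker run -d \
  --name frigate \
  --device /dev/apex_0:/dev/apex_0 \
  -v /path/to/config:/config \
  blakeblackshear/frigate:stable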

Kind regards,

I am having issues getting Frigate running on my system. I’ve tried both the HA add-on in Supervisor and installing it as a standalone Docker container.

When I run it in Docker, the container starts but exits after only a second. There is nothing in the log, but I do see “Stopped with exit code 132”. My Googling about exit code 132 hasn’t returned anything fruitful.

When I try to run the HA addon through Supervisor I get the error:
Fontconfig error: Cannot load default config file
Traceback (most recent call last):
  File "detect_objects.py", line 441, in <module>
    main()
  File "detect_objects.py", line 202, in main
    ffmpeg_output_args = ["-r", str(config.get('fps'))] + ffmpeg_output_args
TypeError: can only concatenate list (not "NoneType") to list
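
If it helps narrow things down: that line can only fail this way when the right-hand ffmpeg_output_args is None rather than a list. Illustrative only, not Frigate's actual config-merging code:

ffmpeg_output_args = None                      # whatever the config merge produced
args = ["-r", str(5)] + ffmpeg_output_args     # TypeError: can only concatenate list (not "NoneType") to list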

My RTSP URL does work successfully in VLC.

Any insight would be greatly appreciated.

web_port: 5000

tensorflow_device: usb

mqtt:
  host: MQTT_IP
  topic_prefix: frigate
  user: user
  password: password

################
# Global configuration for saving clips
################
save_clips:
  ###########
  # Maximum length of time to retain video during long events.
  # If an object is being tracked for longer than this amount of time, the cache  
  # will begin to expire and the resulting clip will be the last x seconds of the event.
  ###########
  max_seconds: 300

#################
# Default ffmpeg args. Optional and can be overwritten per camera.
# Should work with most RTSP cameras that send h264 video
# Built from the properties below with:
# "ffmpeg" + global_args + input_args + "-i" + input + output_args
#################
# ffmpeg:
#   global_args:
#     - -hide_banner
#     - -loglevel
#     - panic
#   hwaccel_args: []
#   input_args:
#     - -avoid_negative_ts
#     - make_zero
#     - -fflags
#     - nobuffer
#     - -flags
#     - low_delay
#     - -strict
#     - experimental
#     - -fflags
#     - +genpts+discardcorrupt
#     - -vsync
#     - drop
#     - -rtsp_transport
#     - tcp
#     - -stimeout
#     - '5000000'
#     - -use_wallclock_as_timestamps
#     - '1'
#   output_args:
#     - -f
#     - rawvideo
#     - -pix_fmt
#     - rgb24


objects:
  track:
    - person
    - car
    - truck
  filters:
    person:
      min_area: 5000
      max_area: 100000
      min_score: 0.5
      threshold: 0.85


cameras:
  porch:
    ffmpeg:
      input: rtsp://user:password@CAMERA_IP:554/cam/realmonitor?channel=1&subtype=1

    height: 480
    width: 640
    fps: 5
    take_frame: 1
    best_image_timeout: 60

    save_clips:
      enabled: true
      #########
      # Number of seconds before the event to include in the clips
      #########
      pre_capture: 30
      #########
      # Objects to save clips for. Defaults to all tracked object types.
      #########
      objects:
        - person      

    ################
    # Configuration for the snapshots in the debug view and mqtt
    ################
    snapshots:
      show_timestamp: True
      draw_zones: False

    objects:
      track:
        - person
      filters:
        person:
          min_area: 5000
          max_area: 100000
          min_score: 0.5
          threshold: 0.5

Can you double check that this is the config you used with the addon when you got the error message? I know that error message, but your config doesn’t have the problem that usually causes it.

I am using HA Core in a Docker container. I have mapped the clips directory successfully to the media dir in HA. How can I see the files using the media browser? Is there a card or something that I need to set up so it shows in Lovelace? Can you share how you add that media window? Thanks

What version of HA are you using? The media browser link is in the left panel.

Thanks, I found it a few hours ago. I feel like an idiot now 🙂 It's too obvious, but somehow it's not in any HA documentation.

So far I’m impressed by Frigate: simple to use and really powerful. But I’m stuck on a problem using the hardware accelerator. I’m on a NUC i3 10th gen, so according to the docs I need to set the LIBVA_DRIVER_NAME env variable, but I’m using the hassio add-on. How do I do this? Or should I switch to Portainer and handle it myself?
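
(For anyone on plain Docker rather than the add-on: LIBVA_DRIVER_NAME is a standard libva environment variable, so it can be set on the container directly. A sketch only; iHD is the newer Intel media driver that recent iGPUs generally want, but treat the value and paths as assumptions:)

docker run -d \
  --name frigate \
  -e LIBVA_DRIVER_NAME=iHD \
  --device /dev/dri/renderD128 \
  -v /path/to/config:/config \
  blakeblackshear/frigate:stable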

I might be able to find a way to auto-detect it in a future version. Honestly, I didn’t see that big of an improvement in CPU usage with that driver. It will require more research to see if it can be improved.

Nah, it’s not about getting better performance; the other driver doesn’t seem to be supported by this CPU, as I’m hitting this error at startup:

[AVHWDeviceContext @ 0x558072c0ff00] libva: /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so init failed
[AVHWDeviceContext @ 0x558072c0ff00] Failed to initialise VAAPI connection: -1 (unknown libva error).
Device creation failed: -5.
[h264 @ 0x558072bc7600] No device available for decoder: device type vaapi needed for codec h264.

I meant performance over not using hardware acceleration at all, not in comparison to the other driver. It only saved me ~20% over no hardware acceleration.


@blakeblackshear, really thankful for this add-on. Already ordered a Coral!
Same motivations as yours, with Nest-like capabilities. I’ve also trained a raccoon detection model in TensorFlow on IP camera night-vision footage, and it works. I’ll need to convert it to TensorFlow Lite to integrate it here.
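
In case the conversion step is useful, a minimal sketch of post-training quantization for the Edge TPU (paths, input size, and calibration data are placeholders; the result still needs a pass through the separate edgetpu_compiler tool):

import tensorflow as tf

# Hypothetical export path for the raccoon model.
converter = tf.lite.TFLiteConverter.from_saved_model("raccoon_model/saved_model")

# Full-integer quantization is required for the Edge TPU; random tensors
# stand in for real calibration frames here.
def representative_dataset():
    for _ in range(100):
        yield [tf.random.uniform([1, 300, 300, 3], 0, 1, dtype=tf.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("raccoon_model.tflite", "wb") as f:
    f.write(converter.convert())

# then: edgetpu_compiler raccoon_model.tflite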

Got the add-on up and running with 1 CPU detector and 1 IP camera, and the debug stream works great. I see the detections in MQTT.
