Local realtime person detection for RTSP cameras

What do you see in your mjpeg feed?

When I go to http://ip:5000/voordeur I see the camera image. Is that what you mean?

That's what I mean. You should see frigate detecting objects there in real time. If not, please provide all the information requested in the bug report template: https://github.com/blakeblackshear/frigate/issues/new?assignees=&labels=&template=bug_report.md&title=

Hi guys
Before I go down this rabbit hole, could I get some help?

I own coax Hikvision cameras hooked up to an NVR unit, and I'm able to view the cameras in Home Assistant.
I also run Home Assistant OS on an RPi 4 (4 GB).

Could I add the Frigate add-on to my RPi 4 and have it monitor one of my cameras?

Has anyone else seen this before?
[Screenshot, 2020-10-28: disk usage filling up over time]

This is the second time I've seen it happen in the two weeks or so that I've been running Frigate. I'm running it in Docker on an RPi4 with 32-bit Raspberry Pi OS. So strange. Same thing each time: a linear fill of the disk until it shows full. I haven't been able to find what is actually taking up this space yet. Just thought I'd check in to see if others have experienced it too.

The last time this happened (last week) I ended up having to wipe the drive and start over.

Do you have save_clips enabled? Do you have any objects in view that are constantly being tracked?

I do have save_clips enabled, but I'm not sure if something is constantly being tracked - that would make sense. I've only got it targeting people, but maybe it's locking onto something it thinks is a person. The problem is I can't even get into the thing once the drive fills up, not through VNC or SSH.

I'll check the history of the entities. Maybe one would show the constant detection?

Edit: nothing stands out in the REST sensor history.

How much logging is the container generating? Docker logs could be the issue too.
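If it does turn out to be the container logs, here is a minimal sketch of capping their growth, assuming you run Frigate via docker-compose with the default json-file log driver (the service name and image tag are just placeholders for whatever you already run):

```yaml
# Minimal sketch: cap Docker's json-file logs for the frigate container
# so they cannot fill the disk. Service name and image tag are placeholders.
services:
  frigate:
    image: blakeblackshear/frigate:stable
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate once a single log file reaches 10 MB
        max-file: "3"     # keep at most three rotated files
```

The same limits can be set on a plain `docker run` with `--log-opt max-size=10m --log-opt max-file=3`.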

Hi,

I am using Frigate with five Dahua RTSP cameras. The resolution is 1920x1080.
I tried to scale down the frames to keep performance acceptable, without success. I use a Coral PCIe card.
Is there a working configuration for scaling down the frames?

Kind regards,

I configure the substream on the Dahua cameras directly. Scaling down the image within ffmpeg inside of Frigate will have a limited impact on resource utilization. Can you not set up a substream?
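For reference, a rough sketch of what a substream camera entry can look like, assuming a typical Dahua RTSP substream URL (subtype=1) and the 0.7-era config layout; the credentials, IP, and resolution here are placeholders and must match what the substream actually outputs:

```yaml
cameras:
  front:
    ffmpeg:
      # subtype=1 selects the Dahua substream; credentials and IP are placeholders
      input: rtsp://user:password@192.168.1.10:554/cam/realmonitor?channel=1&subtype=1
    # must match the actual substream resolution and frame rate
    width: 704
    height: 480
    fps: 5
```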

@mr-onion, are you running on an RPi4 or an RPi3? The newer images are working for me.

Hi all,

Has anyone yet succeeded in taking video clips of detected events and sending them in notifications? If so, could you share your config?

I made it work with Node-RED, but it stopped working after the recent Home Assistant update. I am using the "events/end" MQTT topic to trigger a Telegram notification. The topic gives you the clip name, and you can use it to reference the actual file:

```
{
    "caption": "Here is the video",
    "file": "/media/usbDrive/frigate/Clips/garage-{{payload.id}}.mp4"
}
```
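For anyone doing this with a plain Home Assistant automation instead of Node-RED, here is a rough sketch of the same idea. I am assuming the end-of-event messages are published per camera (e.g. frigate/garage/events/end) with an id field in the payload, and the clip path follows the post above; both may differ with your Frigate version and mounts:

```yaml
automation:
  - alias: Send Frigate clip to Telegram
    trigger:
      - platform: mqtt
        # assumed per-camera end-of-event topic; adjust to whatever your broker shows
        topic: frigate/garage/events/end
    action:
      - service: telegram_bot.send_video
        data:
          caption: Here is the video
          # clip naming follows the post above: <camera>-<event id>.mp4
          file: "/media/usbDrive/frigate/Clips/garage-{{ trigger.payload_json.id }}.mp4"
```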

After the HA update, I am now seeing the following error:

Log Details (ERROR)

Logger: homeassistant.components.telegram_bot
Source: components/telegram_bot/__init__.py:683
Integration: Telegram bot (documentation, issues)
First occurred: October 28, 2020, 4:07:01 PM (31 occurrences)
Last logged: 7:57:48 AM

  • Can't send file with kwargs: {'caption': 'Here is the video', 'file': '/media/usbDrive/frigate/Clips/garage-1603981834.145364-fca8dw.mp4'}
  • Can't send file with kwargs: {'caption': 'Here is the video', 'file': '/media/usbDrive/frigate/Clips/back-1603981843.217943-xtvl1h.mp4'}

Hi,

Could someone please help me understand the following error, or what I'm doing wrong?

Fontconfig error: Cannot load default config file
ffprobe -v panic -show_error -show_streams -of json "rtsp://username:Password@ip:port/ISAPI/Streaming/channels/301/picture"
Starting detection process: 16
{'error': {'code': -1094995529, 'string': 'Invalid data found when processing input'}}
Traceback (most recent call last):
  File "detect_objects.py", line 441, in <module>
    main()
  File "detect_objects.py", line 235, in main
    frame_shape = get_frame_shape(ffmpeg_input)
  File "/opt/frigate/frigate/video.py", line 40, in get_frame_shape
    video_info = [s for s in info['streams'] if s['codec_type'] == 'video'][0]
KeyError: 'streams'

Try running that ffprobe command from another machine. It is not able to connect to your camera from that URL.

Hi Blake, RPi4, 32-bit OS.
Hmmm

Did you increase the memory available to the GPU in raspi-config?

How many seconds did you set for the save_clips pre_capture?

I'm only sending snapshots now but would like to implement sending videos too, I'm just worried that it might overwhelm my Telegram with too many videos.

Could you share your experience with sending videos on the "events/end" event? Is it useful? Any issues?

I am only using a 5 sec pre-capture. The 10-15 sec clips are pretty small in size too. I am using a zone to trigger any Telegram alerts, to prevent notifications for objects I don't care about - like people and cars on the street - but I still want a notification if anyone crosses the zone. I am also limiting notifications for each MQTT topic (i.e. only 1 notification every minute).

For my purposes it's working pretty well, but if you have high traffic, I imagine you may get multiple clips and it might be annoying. I am still testing, and I may just use Home Assistant's media browser to view the clips (which you can also cast to Chromecast now) and only send a snapshot via Telegram.
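In case it is useful, the one-notification-per-minute throttle mentioned above can be done in a Home Assistant automation with a template condition like this (the automation entity name is made up for the example):

```yaml
condition:
  - condition: template
    # only fire again if this automation has not been triggered in the last 60 seconds
    value_template: >
      {{ state_attr('automation.frigate_garage_telegram', 'last_triggered') is none
         or (now() - state_attr('automation.frigate_garage_telegram', 'last_triggered')).total_seconds() > 60 }}
```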

Hi,

Any idea how to avoid a false positive like this? It's just a piece of cloth on the floor… lol

[snapshot of the false positive detection]