What do you see in your mjpeg feed?
That's what I mean. You should see frigate detecting objects there in real time. If not, please provide all the information requested in the bug report template: https://github.com/blakeblackshear/frigate/issues/new?assignees=&labels=&template=bug_report.md&title=
Hi guys
Before I go down this rabbit hole, could I get some help?
I own coax Hikvision cameras hooked up to an NVR unit, and I'm able to view the cameras in Home Assistant.
I also run Home Assistant OS on an RPi 4 (4 GB).
Could I add the frigate add-on to my rpi 4 and have it monitor one of my cameras?
Anyone else see this before:
This is the second time I've seen it happen in the two weeks or so that I've been running frigate. I'm running it in Docker on an RPi 4 with 32-bit Raspberry Pi OS. So strange. Same thing each time: a linear fill of the disk until it shows full. I haven't been able to find what is actually taking up the space yet. Just thought I'd check in to see if others have experienced it too.
The last time this happened (last week) I ended up having to wipe the drive and start over.
Do you have save_clips enabled? Do you have any objects in view that are constantly being tracked?
I do have save_clips enabled, but I'm not sure if something is constantly being tracked; that would make sense. I've only got it targeting people, but maybe it's locking onto something it thinks is a person. The problem is I can't even get into the thing once the drive fills up, not through VNC or SSH.
I'll check the history of the entities. Maybe one would show the constant detection?
Edit: nothing stands out in the sensor history.
How much logging is the container generating? Docker logs could be the issue too.
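If container logs turn out to be the culprit, Docker's default `json-file` log driver can be capped so it rotates instead of growing without bound. A hedged sketch as a docker-compose fragment (the service name and image tag are placeholders for your setup):

```yaml
services:
  frigate:
    image: blakeblackshear/frigate:stable   # placeholder; use your image
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate once a log file reaches 10 MB
        max-file: "3"     # keep at most 3 rotated files
```

The same limits can be passed to `docker run` with `--log-opt max-size=10m --log-opt max-file=3`.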
Hi,
I am using frigate with five dahua rtsp cameras. The resolution is 1920x1080.
I tried rescaling the frames to maintain good performance, without success. I use a Coral PCIe card.
Is there a working configuration for scaling down the frames?
Kind regards,
I configure the substream on the Dahua cameras directly. Scaling down the image within ffmpeg inside of frigate will have a limited impact on resource utilization. Can you not set up a substream?
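For reference, Dahua cameras typically expose their substream via `subtype=1` in the RTSP path, so pointing frigate at the substream is just a URL change. A hedged sketch (credentials, IP, and camera name are placeholders, and the exact config layout depends on your frigate version):

```yaml
cameras:
  front:
    ffmpeg:
      # subtype=0 is the full-resolution main stream; subtype=1 selects
      # the lower-resolution substream configured in the camera's UI
      input: rtsp://user:password@192.168.1.10:554/cam/realmonitor?channel=1&subtype=1
```

Configure the substream's resolution and frame rate in the camera's own web interface; that offloads the scaling to the camera instead of ffmpeg.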
Hi all,
Has anyone yet succeeded in taking video clips of detected events and sending them in notifications? If so, could you share your config?
I made it work with Node-RED, but it stopped working after the recent Home Assistant update. I am using the "events/end" MQTT topic to trigger a Telegram notification. The topic gives you the clip name, and you can use it to reference the actual file:
```json
{
  "caption": "Here is the video",
  "file": "/media/usbDrive/frigate/Clips/garage-{{payload.id}}.mp4"
}
```
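A hedged sketch of doing the same thing with a native Home Assistant automation instead of Node-RED. The topic follows the "events/end" topic described above (the exact topic prefix depends on your frigate/MQTT setup), and the clip path mirrors the example config; all names here are assumptions to adapt:

```yaml
automation:
  - alias: "Send frigate clip to Telegram"
    trigger:
      - platform: mqtt
        topic: frigate/garage/events/end   # adjust to your camera/topic layout
    action:
      - service: telegram_bot.send_video
        data_template:
          caption: "Here is the video"
          file: "/media/usbDrive/frigate/Clips/garage-{{ trigger.payload_json.id }}.mp4"
```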
After the HA update, I am now seeing the following error:
```
Log Details (ERROR)
Logger: homeassistant.components.telegram_bot
Source: components/telegram_bot/__init__.py:683
Integration: Telegram bot (documentation, issues)
First occurred: October 28, 2020, 4:07:01 PM (31 occurrences)
Last logged: 7:57:48 AM
- Can't send file with kwargs: {'caption': 'Here is the video', 'file': '/media/usbDrive/frigate/Clips/garage-1603981834.145364-fca8dw.mp4'}
- Can't send file with kwargs: {'caption': 'Here is the video', 'file': '/media/usbDrive/frigate/Clips/back-1603981843.217943-xtvl1h.mp4'}
```
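For what it's worth (an assumption, not a confirmed diagnosis): Home Assistant only lets integrations send files from directories it has been explicitly allowed to read, and a 2020 release renamed `whitelist_external_dirs` to `allowlist_external_dirs`, which broke some existing setups. A configuration.yaml sketch:

```yaml
# Assumption: the Telegram "Can't send file" error is caused by the clip
# directory not being in HA's allowed directories after the update.
homeassistant:
  allowlist_external_dirs:
    - /media/usbDrive/frigate/Clips
```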
Hi,
could someone please help me understand the following error, or tell me what I'm doing wrong:
```
Fontconfig error: Cannot load default config file
ffprobe -v panic -show_error -show_streams -of json "rtsp://username:Password@ip:port/ISAPI/Streaming/channels/301/picture"
Starting detection process: 16
{'error': {'code': -1094995529, 'string': 'Invalid data found when processing input'}}
Traceback (most recent call last):
  File "detect_objects.py", line 441, in <module>
    main()
  File "detect_objects.py", line 235, in main
    frame_shape = get_frame_shape(ffmpeg_input)
  File "/opt/frigate/frigate/video.py", line 40, in get_frame_shape
    video_info = [s for s in info['streams'] if s['codec_type'] == 'video'][0]
KeyError: 'streams'
```
Try running that ffprobe command from another machine. It is not able to connect to your camera from that URL.
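For context on the traceback: the `KeyError: 'streams'` happens because ffprobe returned only an error object, with no `streams` key, when it failed to open the stream. A small defensive sketch (a hypothetical helper, not frigate's actual code) that surfaces the ffprobe error instead of crashing later:

```python
import json
import subprocess

def parse_frame_shape(info):
    """Extract (height, width, channels) from ffprobe's JSON output."""
    if "streams" not in info:
        # ffprobe could not open the stream; report its error instead of
        # failing later with KeyError: 'streams'
        raise RuntimeError(f"ffprobe failed: {info.get('error')}")
    video = [s for s in info["streams"] if s["codec_type"] == "video"][0]
    return (video["height"], video["width"], 3)

def get_frame_shape(rtsp_url):
    """Run ffprobe with the same flags as in the log and parse the result."""
    cmd = ["ffprobe", "-v", "panic", "-show_error", "-show_streams",
           "-of", "json", rtsp_url]
    out = subprocess.run(cmd, capture_output=True).stdout
    return parse_frame_shape(json.loads(out))
```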
Hi Blake, RPi4, 32-bit OS.
Hmmm
Did you increase the memory available to the GPU in raspi-config?
How many seconds did you set for the save_clips pre_capture?
I'm only sending snapshots now, but I'd like to implement sending videos too; I'm just worried it might overwhelm my Telegram with too many videos.
Could you share your experience sending videos with the "events/end" event? Is it useful? Any issues?
I am only using a 5-second pre-capture. The 10-15 second clips are pretty small in size too. I am using a zone to trigger Telegram alerts, to prevent notifications for objects I don't care about (like people and cars on the street) while still getting a notification if anyone crosses the zone. I am also limiting notifications for each MQTT topic (i.e. only one notification every minute).
For my purposes it's working pretty well, but if you have high traffic, I imagine you may get multiple clips and it may be annoying. I am still testing, and I may just use Home Assistant's media browser to view the clips (which you can now also cast to Chromecast) and only send snapshots via Telegram.
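The "one notification per minute per topic" throttle described above can be sketched as a small helper (hypothetical code, not the author's actual Node-RED flow; topic names are examples):

```python
import time

class TopicThrottle:
    """Allow at most one notification per topic within a cooldown window."""

    def __init__(self, cooldown_seconds=60):
        self.cooldown = cooldown_seconds
        self.last_sent = {}  # topic -> timestamp of last allowed message

    def allow(self, topic, now=None):
        """Return True if a notification for this topic may be sent now."""
        now = time.monotonic() if now is None else now
        last = self.last_sent.get(topic)
        if last is not None and now - last < self.cooldown:
            return False  # still inside the cooldown window: drop it
        self.last_sent[topic] = now
        return True
```

Each incoming MQTT message would be checked with `allow(topic)` before the Telegram call; messages that arrive inside the cooldown are simply dropped.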
Hi,
any idea how to avoid false positives like this? It's just a piece of cloth on the floor... lol