If you can’t manually fill in this URL and have it load from any device, you need to troubleshoot that before you test the notifications.
https://*NABUCASAURL*.ui.nabu.casa/api/frigate/notifications/{{payload["after"]["id"]}}/snapshot.jpg
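One quick sanity check is to build the URL from a known event id and fetch it with curl. This is a sketch: the subdomain `EXAMPLE` and the event id are placeholders, not real values — use your own Nabu Casa subdomain and an `after.id` taken from a real `frigate/events` MQTT payload.

```shell
# Placeholders: replace EXAMPLE with your Nabu Casa subdomain and use a
# real event id (the after.id field from a frigate/events MQTT message).
EVENT_ID="1668552861.123456-abcdef"   # hypothetical event id
URL="https://EXAMPLE.ui.nabu.casa/api/frigate/notifications/${EVENT_ID}/snapshot.jpg"
echo "$URL"
# -f makes curl exit non-zero on an HTTP error, so a 404/401 is obvious:
# curl -f -o snapshot.jpg "$URL"
```

If the curl fails with an auth error the problem is remote access, not the notification blueprint.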
I think that’s my issue. I looked at the media section, scrolled through the folders in Frigate, and noticed that the image URL was slightly different from what was suggested. I’ll have to grab the URL and post it later to see why it would be different.
Hi,
I’m running Frigate as an add-on. I have a NUC with a 10th-gen Intel i5 processor and a TPU, with Home Assistant Supervised on Debian 11.
I currently have one Annke C500 camera outputting a main stream in 1080p (variable bitrate, 15 fps, i-frame interval 15, H.264). I also have a sub-stream at 640x480, 8 fps, H.264.
I sometimes get weird artifacts in my recordings, similar to the attached image. Sometimes the image flickers and an object — for example, a person — jumps back and forth between a couple of frames; imagine the person flickering between the two positions in the image.
Also sometimes the video freezes.
Any ideas why this happens? Is it related to Frigate or should I start searching somewhere else?
Maybe this is proof that we are living in simulation
I got it working somehow; it just started sending me the notifications with images.
Now my next issue. Still using the Frigate blueprint, the notification comes with a “see snapshot” action, which does bring up the image of the detected motion. I also see “show clip”, but when I select it I get the error message “nothing found”. Looking in the media section under Frigate > Clips, there is nothing in that folder. Snapshots and recordings are found, and recordings are broken up into hour-long segments.
I have attached my config for validation. I would like a clip of the recording covering a few seconds before and after the detected motion.
mqtt:
  host: {localhost_IP}
  user: {MQTT_username}
  password: {MQTT_password}

timestamp_style:
  format: "%m/%d/%Y %H:%M:%S"
  color:
    red: 255
    green: 255
    blue: 255
  thickness: 2
  effect: shadow

cameras:
  front_camera:
    ffmpeg:
      hwaccel_args:
        - -hwaccel
        - qsv
        - -qsv_device
        - /dev/dri/renderD128
      inputs:
        - path: rtsp://admin:[email protected]:554/h264Preview_01_sub
          roles:
            - detect
        - path: rtsp://admin:[email protected]:554/h264Preview_01_main
          roles:
            - record
            - rtmp
    detect:
      width: 2560
      height: 1920
      fps: 5
    objects:
      track:
        - person
        - car
        - cat
        - dog
        - bear
      filters:
        person:
          threshold: 0.6
        car:
          mask:
            - 0,0,1024,0,875,74,660,215,435,361,349,413,0,727
          threshold: 0.6
    snapshots:
      enabled: true
      timestamp: true
      bounding_box: true
      required_zones:
        - driveway
        - front_walkway
      crop: true
      height: 500
      retain:
        default: 3
    zones:
      driveway:
        coordinates: 2504,1920,2560,145,104,748,107,1920
        objects:
          - car
          - person
          - cat
          - dog
          - bear
      front_walkway:
        coordinates: 1449,0,1157,403,1629,316,1786,0
        objects:
          - person
          - cat
          - dog
          - bear
    record:
      enabled: true
      retain:
        days: 0
      events:
        retain:
          default: 5
        mode: active_objects
        required_zones:
          - driveway
          - front_walkway
  living_room_camera:
    ffmpeg:
      hwaccel_args:
        - -hwaccel
        - qsv
        - -qsv_device
        - /dev/dri/renderD128
      inputs:
        - path: rtsp://admin:[email protected]/h264Preview_01_sub
          roles:
            - detect
        - path: rtsp://admin:[email protected]:554/h264Preview_01_main
          roles:
            - record
            - rtmp
    detect:
      width: 640
      height: 480
      fps: 5
    objects:
      track:
        - person
        - cat
        - dog
    snapshots:
      enabled: true
      timestamp: true
      bounding_box: true
      required_zones:
        - livingroom
      crop: true
      height: 500
      retain:
        default: 3
    zones:
      livingroom:
        coordinates: 0,480,0,149,640,106,640,480
        objects:
          - person
          - cat
          - dog
    record:
      enabled: true
      retain:
        days: 0
      events:
        retain:
          default: 5
        mode: active_objects
        required_zones:
          - livingroom
        pre_capture: 5
        post_capture: 15
  front_door:
    ffmpeg:
      hwaccel_args:
        - -hwaccel
        - qsv
        - -qsv_device
        - /dev/dri/renderD128
      inputs:
        - path: rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=01&authbasic=64
          roles:
            - detect
        - path: rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=0&authbasic=64
          roles:
            - record
            - rtmp
    detect:
      width: 720
      height: 576
      fps: 5
    motion:
      mask:
        - 0,0,720,25,720,298,0,292
    objects:
      track:
        - person
        - cat
        - dog
        - car
        - bear
      filters:
        person:
          threshold: 0.6
        car:
          mask:
            - 720,318,720,333,509,349,282,353,165,361,0,351,0,330
          threshold: 0.6
    snapshots:
      enabled: true
      timestamp: true
      bounding_box: true
      required_zones:
        - street_car
        - sidewalk_person
        - frontyard
      crop: true
      height: 500
      retain:
        default: 3
    zones:
      street_car:
        coordinates: 720,298,720,347,0,375,0,323
        objects:
          - car
      sidewalk_person:
        coordinates: 0,337,720,322,720,351,0,365
        objects:
          - person
      frontyard:
        coordinates: 720,576,0,576,0,403,162,379,720,361
        objects:
          - person
          - cat
          - dog
          - bear
    record:
      enabled: true
      retain:
        days: 0
      events:
        retain:
          default: 5
        mode: active_objects
        required_zones:
          - street_car
          - sidewalk_person
          - frontyard

detectors:
  cpu1:
    type: cpu
  cpu2:
    type: cpu
  cpu3:
    type: cpu
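For reference, the clip length around an event is controlled by `pre_capture` and `post_capture` under `record.events` (you already have them on the living room camera, but not on the others). A minimal sketch — the numbers are just examples, and the schema is assumed from the 0.9/0.10-style config shown above:

```yaml
record:
  enabled: true
  events:
    pre_capture: 5    # seconds kept before the event starts
    post_capture: 10  # seconds kept after the event ends
    retain:
      default: 5
```

Note that event recordings only exist if the `record` role's stream is actually being written, so it's worth checking the Frigate logs for ffmpeg errors on the record stream as well.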
What kind of hardware would be required to host a Frigate instance capable of handling a very large collection of cameras (think 100+)? Or would it be better to have a server VM per Frigate instance / per residence (8-10 cameras)?
There is a company here in South Africa which sells a commercial box that performs similarly to Frigate, but they charge an obscene amount of money. I want to work with security providers here to supply this server for free, so the people who need security the most (people who can’t afford these expensive solutions) can have access to this tech.
You want a single system to do over 100 cameras and want to provide this for free?
You might be able to do this computationally, but this is likely going to be a very expensive system.
I would say that your idea of VMs for each unit would be a good way to go design wise if you choose a traditional server appliance. As for the specs of the computer…lots of RAM, lots of cores, multiple video cards to decode that many streams, AI cards to help with the inference.
I haven’t kept up with Frigate closely enough to know if they ever got inference working on the GPU like DeepStack and others… last I knew, a GPU only helped with video decode.
On the other hand…
I wonder what synology surveillance station would cost with 100 licenses and how many (NAS devices) you would need to do 100 cameras. Likely cheaper and better supported in the long run.
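As a rough back-of-the-envelope on the detection side — all numbers below are assumptions for illustration (5 fps detect per camera, ~10 ms per inference on one Coral, i.e. ~100 inferences/second), not benchmarks:

```shell
# Back-of-the-envelope detector sizing; every number here is an assumption.
cameras=100
detect_fps=5        # detect-stream fps per camera
coral_per_sec=100   # assumed ~10 ms per inference on one Coral

required=$((cameras * detect_fps))                            # worst-case inferences/sec
corals=$(( (required + coral_per_sec - 1) / coral_per_sec ))  # ceiling division
echo "~${required} inferences/sec worst case => ~${corals} Corals, plus GPUs for decode"
```

In practice Frigate only runs the detector on regions with motion, so the real load is well below the worst case — but video decode for 100 streams is the part that really drives the hardware bill.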
From time to time my system gets into a state where I get repeated notifications of the same person detection to my Telegram account, the same detection every minute. For now I have found that two things stop these recurring notifications:
Any idea what is causing this recurring event?
I think it happens when the event is in an in-progress status, and because of that the notification is triggered multiple times. It happened to me as well, but I don’t know how to avoid it or how to change the automation.
Sometimes the siren I have in a Frigate automation triggers out of nowhere. When I check what triggered it, I see a bounding box labeled “person” with nobody inside it. It keeps triggering until I restart Frigate. I’ve gotten used to it and just restart Frigate from time to time…
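One pattern that helps with repeat notifications: Frigate publishes to the `frigate/events` MQTT topic with a `type` field of `new`, `update`, or `end`, and an automation that fires on every message will re-notify on each `update` while the event is in progress. A sketch of a condition that only passes on the first message — this assumes you trigger on the MQTT topic directly rather than through the blueprint:

```yaml
trigger:
  - platform: mqtt
    topic: frigate/events
condition:
  # Only notify once per event, skipping the periodic "update" messages
  - condition: template
    value_template: "{{ trigger.payload_json['type'] == 'new' }}"
```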
Do you know if this was added? I’m keen to order a Coral, but I’m on DSM 7 (running Home Assistant Supervised on VMM and the Frigate add-on).
I use this branch on my DS220+ to install Frigate in Docker.
My HA instance is on a separate NUC, and the add-on pulls in the remote Frigate data.
Sweet. It didn’t take me long to set up Frigate, and I could set it up faster again now. If the Coral doesn’t work on my 1019+ with DSM 7, I’ll move Frigate to Docker and use this branch. Thanks for the info.
Yeah, as far as I know this hasn’t been added to the main add-on release. It probably could be at some point.
Is there a way to disable all cameras (other than stopping Frigate)?
I am using the privacy mask from the camera manufacturer:
cam9pm:
  command_on: 'curl --digest --globoff "http://LOGIN:PASSWORD@CAMIP/cgi-bin/configManager.cgi?action=setConfig&VideoWidget[0].Covers[0].EncodeBlend=true"'
  command_off: 'curl --digest --globoff "http://LOGIN:PASSWORD@CAMIP/cgi-bin/configManager.cgi?action=setConfig&VideoWidget[0].Covers[0].EncodeBlend=false"'
  command_state: 'curl -s --digest --globoff "http://LOGIN:PASSWORD@CAMIP/cgi-bin/configManager.cgi?action=getConfig&name=VideoWidget[0].Covers[0].EncodeBlend" | grep -q true'
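If the goal is just to stop detection rather than blank the video, Frigate (in versions new enough to have them) also exposes per-camera MQTT set topics. A sketch with mosquitto_pub — the broker host and the camera name `front_camera` are placeholders:

```shell
# Placeholders: BROKER is your MQTT host; front_camera is an example camera name
# that must match the camera key in your Frigate config.
CAM="front_camera"
TOPIC="frigate/${CAM}/detect/set"   # similar topics exist for snapshots and recordings
echo "$TOPIC"
# mosquitto_pub -h BROKER -u user -P pass -t "$TOPIC" -m "OFF"   # "ON" to re-enable
```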
Hi guys
Has anyone had any success with detecting cars? I would like to trigger a notification when one of our two cars leaves or arrives.
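One way to approach this — a sketch, assuming the Frigate HA integration exposes a per-camera car count sensor; the entity and notify service names here are guesses for illustration:

```yaml
trigger:
  # Fires whenever the number of detected cars changes (entity name is hypothetical)
  - platform: state
    entity_id: sensor.driveway_camera_car_count
action:
  - service: notify.mobile_app_phone
    data:
      message: "Cars in driveway: {{ trigger.from_state.state }} -> {{ trigger.to_state.state }}"
```

Note that Frigate only counts cars; telling apart *which* of the two cars arrived would need something extra, like device-tracker presence for each car.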
Hello,
I have my HA and frigate running on docker on a weak NUC PC that is dedicated for that purpose.
When I enable detection it clogs up all of the CPU and I would like to avoid that without buying a Coral.
I also have a pretty powerful desktop PC with Windows, and my question is: can I use the computing power of the desktop PC to do all of the AI work for my cameras?
Thanks
If you install frigate on the computer, sure.
I would also like to use the output of the AI for some automations in HA.
So I just want to map the heavy lifting to the PC… I hope that makes sense.
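If you go that route, one way is to run Frigate in Docker on the desktop and point the HA Frigate integration at its API. A minimal compose sketch — the image tag, paths, and ports are assumptions from the era of this thread, and a Windows host would need Docker Desktop/WSL2 (where hardware acceleration is its own can of worms):

```yaml
version: "3.9"
services:
  frigate:
    image: blakeblackshear/frigate:stable   # image name used at the time of this thread
    restart: unless-stopped
    volumes:
      - ./config.yml:/config/config.yml:ro  # your Frigate config
      - ./media:/media/frigate              # recordings and snapshots
    ports:
      - "5000:5000"                         # web UI / API the HA integration talks to
```

The HA side then just needs the integration configured with `http://<desktop-ip>:5000`, and all detection load moves off the NUC.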