Thanks for the pointers… I am getting the syntax wrong somewhere as getting this error:
Error parsing config: expected a dictionary for dictionary value @ data['cameras']['fri_side_door']['ffmpeg']['output_args']
Thanks
Look at the updated docs for how to specify output args. That example is based on an old version of frigate.
Ahhhh, this is the new format…
output_args:
  rtmp: -c:v libx264 -an -f flv
I add my values to this line and don’t need the other values. Do I replace what is there with the transpose option only?
Thanks for pointing me at this.
You can’t just replace the output args for the other roles; they need to be modified or added to. Rotating like this for anything other than detect will likely result in a substantial increase in CPU load. I don’t think you can rotate a video feed without re-encoding.
Thanks, I will take a look at this. I guess the ideal solution is to rotate within the camera software; unfortunately the cams I have don’t offer this as an option…
I’m not running Frigate on a pi, so have plenty of CPU to play with and am looking to replace Zoneminder, which is currently running on the same box… Even if things reencode I am hoping for a more efficient net result.
I will start with detect and see how I get on, will post results back to hopefully help anyone else and provide you feedback of my progress.
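For anyone following along, here is a minimal sketch of what rotating only the detect role might look like. This assumes the camera name from earlier in the thread and that Frigate’s default detect output args (-f rawvideo -pix_fmt yuv420p) still apply to your version:

```yaml
cameras:
  fri_side_door:
    ffmpeg:
      output_args:
        # transpose=1 rotates 90° clockwise; the detect width/height in the
        # camera config would also need swapping to match the rotated frame
        detect: -vf transpose=1 -f rawvideo -pix_fmt yuv420p
```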
Thanks again.
I can offer broad feedback on zm to frigate migration.
I was running 6 cameras in ZM with NVIDIA/CUDA OpenCV. All working well, CPU hovering at around 35% until there was object detection, when it would jump to 70%.
This is a ryzen 3900x with 64gb ram. bla, bla.
Whilst the ZM event notification server says it can use the Coral, I could never get it to work.
Migrated everything to Frigate. The Coral, of course, works out of the box.
CPU stays at around 15% always and inference is just a fraction of zm.
For face recognition I tried the DeepStack integration and got it working acceptably, but then came across double-take.
It works really well with Frigate and makes managing/training DeepStack very easy. Additionally, he just added self-hosted notifications via Gotify.
That’s encouraging. When I can rotate 2 of my cams, I will then be able to switch off zoneminder, and enjoy the tight integration between Frigate and Homeassistant. As you say there are then the further options to join to other clever integrations - doubletake etc…
I should get time tonight to play with the output_args and hopefully get this thing nailed!
Thanks again.
Hey everybody, I’m using Frigate for a few weeks now with three Reolink cameras on HomeAssistant OS (HaOS). I purchased a Google Coral accelerator with USB but am not able to get this to work.
I’ve seen in the release notes of HaOS 6 that PCIe Coral devices are now supported.
I upgraded to the most recent dev branch but still cannot get this to work.
Can someone verify that Coral USB is also supported?
Thank you very much in advance!
Edit:
I’m using this in my frigate.yml:
detectors:
  coral:
    type: edgetpu
    device: usb
Once Frigate starts, it throws this exception:
frigate.edgetpu INFO : No EdgeTPU detected.
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 152, in load_delegate
delegate = Delegate(library, options)
File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 111, in __init__
raise ValueError(capture.message)
ValueError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/frigate/frigate/edgetpu.py", line 124, in run_detector
object_detector = LocalObjectDetector(tf_device=tf_device, num_threads=num_threads)
File "/opt/frigate/frigate/edgetpu.py", line 63, in __init__
edge_tpu_delegate = load_delegate('libedgetpu.so.1.0', device_config)
File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 154, in load_delegate
raise ValueError('Failed to load delegate from {}\n{}'.format(
ValueError: Failed to load delegate from libedgetpu.so.1.0
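(For reference: when this "Failed to load delegate" error shows up outside HaOS, it is usually the container not seeing the USB device at all. A rough docker-compose sketch of the usual passthrough — not applicable to the HaOS add-on, which manages device access itself:)

```yaml
services:
  frigate:
    image: blakeblackshear/frigate:stable
    devices:
      - /dev/bus/usb:/dev/bus/usb   # expose the USB Coral to the container
```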
I added the following to my camera config for one camera:
output_args:
  record: -vf transpose=1 -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v libx264 -an
  clips: -vf transpose=1 -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v libx264 -an
  rtmp: -vf transpose=1 -c:v libx264 -an -f flv
My system details:
H/W path Device Class Description
==================================================
system Computer
/0 bus Motherboard
/0/0 memory 23GiB System memory
/0/1 processor Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
And my cpu usage for ffmpeg did this:
SHR S %CPU %MEM TIME+ COMMAND
58257 root 20 0 2528204 1.162g 16660 S 163.1 4.9 1:25.18 ffmpeg
My processor goes to over 150%!!
I will try dropping the transpose from some of the roles, perhaps just add it to clips may reduce things…
More info: it’s not the transpose arg that blasts the processor, it goes to 150% with the defaults:
output_args:
  record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v libx264 -an
  clips: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v libx264 -an
  rtmp: -c:v libx264 -an -f flv
Any suggestions?
Many thanks
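For comparison, stream-copy output args (roughly what Frigate defaults to for h264 cameras — exact defaults may differ by version) avoid re-encoding entirely, which is why CPU stays low as long as no filters are applied:

```yaml
output_args:
  # -c copy passes the camera's h264 stream through without re-encoding
  record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an
  clips: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an
  rtmp: -c copy -f flv
```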
Bad interpretation of the docs from me. Those are the settings for mjpeg cams.
Tried the following but get errors:
fri_side_door:
  ffmpeg:
    inputs:
      - path: rtsp://viewer:[email protected]:554/user=admin_password=redacted_channel=1_stream=0
        roles:
          - detect
          - rtmp
          - clips
    output_args:
      # record: -vf transpose=1 -f rawvideo -pix_fmt -yuv420p
      # clips: -vf transpose=1 -f -rawvideo -pix_fmt yuv420p
      rtmp: -vf transpose=1 -f -rawvideo -pix_fmt yuv420p
Here are the errors:
frigate | frigate.edgetpu INFO : TPU found
frigate | frigate.video INFO : fri_side_door: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate | frigate.video INFO : fri_side_door: ffmpeg process is not running. exiting capture thread...
frigate | ffmpeg.fri_side_door.detect ERROR : [NULL @ 0x5555c848b640] Unable to find a suitable output format for 'rtmp://127.0.0.1/live/fri_side_door'
frigate | ffmpeg.fri_side_door.detect ERROR : rtmp://127.0.0.1/live/fri_side_door: Invalid argument
frigate | frigate.video INFO : fri_side_door: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate | frigate.video INFO : fri_side_door: ffmpeg process is not running. exiting capture thread...
frigate | ffmpeg.fri_side_door.detect ERROR : [NULL @ 0x555c7db0f040] Unable to find a suitable output format for 'rtmp://127.0.0.1/live/fri_side_door'
frigate | ffmpeg.fri_side_door.detect ERROR : rtmp://127.0.0.1/live/fri_side_door: Invalid argument
frigate | frigate.video INFO : fri_side_door: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate | frigate.video INFO : fri_side_door: ffmpeg process is not running. exiting capture thread...
frigate | ffmpeg.fri_side_door.detect ERROR : [NULL @ 0x561fd01cf040] Unable to find a suitable output format for 'rtmp://127.0.0.1/live/fri_side_door'
frigate | ffmpeg.fri_side_door.detect ERROR : rtmp://127.0.0.1/live/fri_side_door: Invalid argument
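Two things stand out in that rtmp line: the stray `-` in `-f -rawvideo`, and rawvideo itself, which isn’t a container ffmpeg can send to an rtmp:// URL (hence “Unable to find a suitable output format”). A sketch of what should at least parse, at the cost of a re-encode for the filter:

```yaml
output_args:
  # rtmp wants an flv container; applying -vf forces re-encoding (libx264)
  rtmp: -vf transpose=1 -c:v libx264 -an -f flv
```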
Thanks
Stupid/noob question but…
How the heck do I save an MQTT image instead of using http://frigate…/snapshot.jpg ?
My Frigate instance is crashing several times a day. I now have 10 cams enabled, and pulling snapshots from 3-5 of them at the same time makes Frigate go scuba diving.
I’m using the downloader.download_file service to grab a snapshot from frigate, but as I understand the MQTT way is better.
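For what it’s worth, Home Assistant’s MQTT Camera integration can wrap a snapshot topic in a camera entity, which the camera.snapshot service can then write to disk without touching Frigate’s HTTP API. A sketch with hypothetical camera/object names — the topic shape (frigate/&lt;camera&gt;/&lt;object&gt;/snapshot) is an assumption to verify against your broker:

```yaml
# configuration.yaml
camera:
  - platform: mqtt
    name: front_door_last_person
    topic: frigate/front_door/person/snapshot   # hypothetical topic
```

The entity then holds the most recent JPEG Frigate published, so camera.snapshot with a filename target saves it locally.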
MQTT is not better. There is no way to connect that snapshot with an event id for notifications. Is this the database connection already opened issue?
I know that the mqtt image is not the ideal solution for notifications. But had some time over to mess with my automations.
And yes. Database conn issue.
Dear all,
I don’t know if this has already been discussed here, but I use a blueprint
to send notifications to my Android app. When trying to access the clips I sometimes get a 404 Not Found error for the clip. Anyone else facing this issue?
Perhaps the clip is not saved yet. I saw an addition coming in version 9.0, which adds clip_ready to the events endpoint. Then it will be possible to send the notification, when the clip is ready.
any chance that we also get hwaccel support for NVIDIA jetson?
just moved my HA to a Jetson due to performance issues. Totally fine and stable, but I would like to get hardware acceleration working:
cameras:
  cam_wz:
    ffmpeg:
      hwaccel_args:
        - h264_nvv4l2dec
h264_nvv4l2dec should be correct.
thanks
Sounds fair, but even when I wait a bit the 404 error persists. I will wait for the next release then… thanks for the info. Some clips do show fine.
This is a pain.
Check this link to verify the board is accessible
Coral
On another site I read comments about a faulty cable.
I had a similar problem with Zoneminder in Docker. I have a PCIe Coral, and the problem was permissions for /dev/apex_0; it needs full privileges.
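If it helps, the passthrough for a PCIe Coral in docker-compose looks roughly like this (it assumes the gasket/apex driver is installed on the host so /dev/apex_0 exists):

```yaml
services:
  frigate:
    devices:
      - /dev/apex_0:/dev/apex_0   # PCIe Coral device node
```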
Does the data for things like inference, motion, fps, last detected object, etc come from the Frigate integration or MQTT? I’ve got frigate running as a pod in my kubernetes cluster, and occasionally get into this state where I see inference, but a lot of the other values are “Unavailable”. Trying to figure out where it’s pulling from and where I can try to debug where it’s failing.